Dear Mr. Troyer,
I have another problem, this time with a graph that consists of 32 sites. fulldiag works perfectly with smaller graphs (8 and 18 sites) but not with the 32-site one. I tried it under Linux (command line and VisTrails) and under Windows (VisTrails). There is no error, but it cannot diagonalize the problem: the diagonalization ran for more than several hours and I had to interrupt the process. Note that I am using {N_total=1; Sz_total=1/2;}, and it is hard to believe that this can take hours and hours to diagonalize.
The lattice, model, and task files are attached (job_chb2x2pbc.txt, job_chb3x3pbc.txt, job_chb4x4pbc.txt, latt.xml, models.xml).
Please let me know what could be wrong. Is there perhaps a limitation on the lattice size?
Thank you in advance, Volodymyr Derzhko
It's a limitation because the codes are not optimized for very few particles: the program essentially builds up the full Hilbert space of fermions on 32 sites (2^64 states) and then picks out your few states. What particle numbers are you aiming for?
Matthias
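For illustration, a quick back-of-the-envelope check (plain Python, not ALPS code) of how large the unconstrained space is compared with the tiny sector actually requested:

```python
from math import comb

sites = 32

# 4 local states per site (empty, up, down, doubly occupied),
# so the unconstrained Hilbert space has 4^32 = 2^64 configurations
full_space = 4 ** sites
print("unconstrained configurations: %.2e" % full_space)   # ~1.8e19

# the sector N_total=1, Sz_total=1/2 is just one up electron on any of the sites
sector = comb(sites, 1)
print("states with N_total=1, Sz_total=1/2:", sector)      # 32
```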
Dear Mr. Troyer,
Thank you for your answer. I am going to set N_total to about half the number of lattice sites, which in this case is about 16. But I would also like to take a closer look at the one-particle problem and compare the ED results with analytical ones. Do you think there is a way to do that? A more powerful computer? Waiting longer?
Still, it is a bit strange, because for 8 and 18 sites the diagonalization took 6 and 30 seconds, respectively. I suppose that is less time than the program would need to build and diagonalize a matrix in a 2^16- or 2^36-dimensional space.
Thank you for help!
Volodymyr Derzhko
Dear Volodymyr,
The Hilbert space dimension for 8 up and 8 down particles on 32 sites is about 10^14. To store the dense matrix you would thus need on the order of 10^29 bytes of memory, or about 10^14 petabytes, roughly 14 orders of magnitude beyond the memory of the biggest supercomputer in the world. The scaling is exponential.
Matthias
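For reference, the arithmetic behind this estimate can be checked in a few lines of Python (my own sketch, assuming one 8-byte real number per dense matrix element):

```python
from math import comb

# 8 up and 8 down fermions on 32 sites: dimension = C(32,8)^2
d = comb(32, 8) ** 2
bytes_per_element = 8                      # double precision, real matrix
matrix_bytes = d ** 2 * bytes_per_element

print("Hilbert space dimension: %.2e" % d)                     # ~1.1e14
print("dense matrix storage:    %.2e bytes" % matrix_bytes)    # ~1e29
print("                       = %.2e petabytes" % (matrix_bytes / 1e15))
```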
Dear Mr. Troyer,
Thank you for your answer. I can agree with you when we are talking about 16 fermions on a 32-site lattice, but I am still confused about the one- (or two-) particle case. As I understood it, regardless of the particle number, the program builds a d x d matrix (where d is the dimension of the Hilbert space) determined by the number of sites. Am I right? Or is that true only for the special case N_total=1? What about N_total=2 or 3?
Moreover, when I did ED on another model and overestimated the computer's capacity, there was an error message (I cannot remember exactly what it said), whereas when I try to do ED on the 32-site graph with one particle on it, there are no errors at all. Why is there no message?
Thank you very much! Volodymyr Derzhko
No, it does not build the full matrix, but it does iterate over all the basis states of the unconstrained space (2^64 for 32 sites), looking for those that satisfy the criterion N_total=1, and that just takes time. The code is not optimized for few particles.
Matthias
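To make the enumerate-and-filter idea concrete, here is a small Python sketch (my own illustration, not the ALPS implementation), run on a tiny lattice so the loop actually finishes; for 32 sites the same loop would have 4^32 iterations, which is exactly where the hours go.

```python
from itertools import product

L = 4            # tiny lattice; for L = 32 this loop would run 4^32 times
N_total = 1
Sz_total = 0.5

# local occupations (n_up, n_down): empty, up, down, doubly occupied
local = [(0, 0), (1, 0), (0, 1), (1, 1)]

kept = []
for config in product(range(4), repeat=L):          # scan the full 4^L space
    n_up = sum(local[c][0] for c in config)
    n_dn = sum(local[c][1] for c in config)
    if n_up + n_dn == N_total and (n_up - n_dn) / 2.0 == Sz_total:
        kept.append(config)                         # keep states in the wanted sector

print("scanned %d configurations, kept %d" % (4 ** L, len(kept)))
```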
Dear Mr. Troyer,
I have another question, concerning extracting eigenvalues from the data files after the ED procedure.
I need a column of eigenvalues, as it appeared in the out.xml file in the previous version of ALPS. In the new version I know how to extract it from the temporary files when VisTrails is used, but I have no idea how to do it when running the program in a terminal. I get an 'h5' file which, after conversion to txt or xml, contains a table of eigenvalues rather than a column. Moreover, the numbers in that table have only a few significant digits (which is not the case in the temporary VisTrails files). Is it possible to increase the accuracy?
Thank you in advance, Volodymyr Derzhko
Dear Volodymyr Derzhko,
The HDF5 files have the same accuracy as the VisTrails temporary files. Just use the Python tools to extract the eigenvalues. How to do that can be seen in the tutorials that make plots of spectra; instead of plotting, you can simply print the energies or do anything else you need for your evaluation.
Matthias
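A minimal sketch along the lines of the spectra-plotting tutorials (the prefix 'parm' below is a placeholder for your own job file prefix, and it assumes the result files can be read with pyalps.loadSpectra as in those tutorials):

```python
import pyalps

# collect the *.out.h5 result files written by the run (adjust the prefix)
files = pyalps.getResultFiles(prefix='parm')

# each quantum-number sector comes back as a data set: the quantum numbers
# are in .props and the eigenvalues in .y, at full double precision
data = pyalps.loadSpectra(files)

for sim in data:
    for sector in sim:
        for E in sector.y:
            print('%.15g' % E)      # print as a single column, full precision
```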