Hello ALPS fellows,
I have encountered constraint-violating numerical errors in two-point correlation functions evaluated by ALPS on large eigenvectors (sparsediag in ALPS 2.1). The correlation functions in question are <n(i) n(j)> - <n(i)><n(j)> and <bdag(i) b(j)> in a Bose-Hubbard model.
I compared a large problem, an 18-site lattice (4.9 x 10^6 states with U(1) and translation symmetry), with a small problem, a 12-site lattice (~12300 states with the same symmetries). For a non-negligible subset of parameters, the large-problem results are inconsistent with the small-problem results, and the inconsistency is typically worst where most eigenvector entries are expected to vanish. The local variance <n(i)^2> - <n(i)>^2 can become large and negative, the densities <n(i)> may fail to sum to the conserved total boson number, and the Green's function <bdag(i) b(j)> can occasionally vanish identically. This does not happen everywhere, but it does over a large region of parameter space. The small-problem correlation functions agree with alternative calculations.
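For concreteness, the measurements are requested along the following lines. This is a sketch with placeholder couplings and fillings, not my exact input; the parameter names follow the standard ALPS 2.x conventions for the boson Hubbard model:

  LATTICE="chain lattice"
  MODEL="boson Hubbard"
  L=18
  Nmax=4
  CONSERVED_QUANTUMNUMBERS="N"
  N_total=9
  t=1
  U=8
  MEASURE_LOCAL[Density]="n"
  MEASURE_CORRELATIONS[Density Correlation]="n:n"
  MEASURE_CORRELATIONS[Green Function]="bdag:b"

The constraint I mean is that the measured set <n(i)> should sum to N_total.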
Questions:
1) Could this issue be caused by numerical precision problems when the eigenvector becomes large? If so, which flags can be passed to sparsediag to address it? (A small illustration of the cancellation I suspect follows these questions.)
2) A quick look at the sparsediag source shows that value_type is either double or complex<double>, and this seems changeable to long double. Are there other places you can think of that would need changes in order to build a long double (or complex<long double>) Hamiltonian?
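To illustrate the kind of failure I suspect in question 1: the difference form <n^2> - <n>^2 subtracts two nearly equal expectation values, so it can lose all significance to cancellation and even come out negative, while a shifted two-pass form stays non-negative. A small self-contained sketch (generic C++, not ALPS code; the magnitudes are exaggerated so the effect shows up in double):

  #include <cstdio>
  #include <vector>

  int main() {
      // Toy two-configuration "state": occupations n[k] with weights p[k].
      // The scale is deliberately exaggerated; the mechanism is the point.
      std::vector<double> p = {0.5, 0.5};
      std::vector<double> n = {1e8, 1e8 + 0.001};  // true variance = 2.5e-7

      double En = 0.0, En2 = 0.0;
      for (std::size_t k = 0; k < p.size(); ++k) {
          En  += p[k] * n[k];
          En2 += p[k] * n[k] * n[k];
      }
      // Naive difference form: two nearly equal O(1e16) numbers cancel,
      // leaving pure roundoff, which can have either sign.
      std::printf("naive    <n^2>-<n>^2 = % .3e\n", En2 - En * En);

      // Shifted (two-pass) form: manifestly non-negative.
      double var = 0.0;
      for (std::size_t k = 0; k < p.size(); ++k) {
          double d = n[k] - En;
          var += p[k] * d * d;
      }
      std::printf("two-pass <n^2>-<n>^2 = % .3e\n", var);
      return 0;
  }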
Thank you and regards, Alexandru Petrescu
Hi,
Can you send the input files for the cases where you observe the large errors? In my opinion, any large error should be due to convergence problems rather than numerical precision.
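Before changing precision, I would first check whether the reported eigenpairs have actually converged, e.g. via the residual norm ||H v - E v|| of each pair. A generic sketch of that check (plain C++ on a toy matrix, not the ALPS internals):

  #include <cmath>
  #include <cstdio>
  #include <vector>

  // Dense matrix-vector product; stands in for the sparse H*v of a real code.
  std::vector<double> matvec(const std::vector<std::vector<double>>& H,
                             const std::vector<double>& v) {
      std::vector<double> w(v.size(), 0.0);
      for (std::size_t i = 0; i < v.size(); ++i)
          for (std::size_t j = 0; j < v.size(); ++j)
              w[i] += H[i][j] * v[j];
      return w;
  }

  int main() {
      // Toy 2x2 "Hamiltonian" with exact eigenvalues -1 and +1.
      std::vector<std::vector<double>> H = {{0.0, 1.0}, {1.0, 0.0}};
      double E = -1.0;                        // candidate eigenvalue
      double s = std::sqrt(0.5);
      std::vector<double> v = {s, -s};        // candidate eigenvector

      std::vector<double> Hv = matvec(H, v);
      double r2 = 0.0;
      for (std::size_t i = 0; i < v.size(); ++i) {
          double d = Hv[i] - E * v[i];
          r2 += d * d;
      }
      // A residual far above machine precision (relative to the spectral
      // scale) means the eigenpair has not converged, and correlators
      // measured in such a state are unreliable.
      std::printf("residual ||Hv - Ev|| = %.3e\n", std::sqrt(r2));
      return 0;
  }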
Matthias