Hi everyone,

When I use sparsediag to diagonalize the Hamiltonian of the Hubbard model with the parameters

CONSERVED_QUANTUMNUMBERS="Nup,Ndown"
Nup_total=6
Ndown_total=6

and system size L=12 (no translation symmetry here), the run takes about 100 s, which is much slower than our own Davidson code (it takes only about 20 s).

If I add the measurements

MEASURE_ENERGY="True"
MEASURE_LOCAL[density]=n
MEASURE_LOCAL[double occupancy]=U*double_occupancy
MEASURE_CORRELATIONS[density matrix_up]="cdag_up:c_up"
MEASURE_CORRELATIONS[density matrix_down]="cdag_down:c_down"

the run takes about 900 s.

Does anyone know how to speed up this calculation, or at least avoid wasting time on the measurements?
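For reference, this is roughly how the run can be set up through pyalps; LATTICE, t, and U below are illustrative placeholders, not the values actually used in my calculation:

    import pyalps

    # One parameter set mirroring the values quoted above.
    # LATTICE, t and U are placeholders for the actual model.
    parms = [{
        'LATTICE'                  : 'open chain lattice',
        'MODEL'                    : 'fermion Hubbard',
        'L'                        : 12,
        't'                        : 1.0,
        'U'                        : 4.0,
        'CONSERVED_QUANTUMNUMBERS' : 'Nup,Ndown',
        'Nup_total'                : 6,
        'Ndown_total'              : 6,
        'MEASURE_ENERGY'           : True,
        'MEASURE_LOCAL[density]'   : 'n',
        'MEASURE_LOCAL[double occupancy]'           : 'U*double_occupancy',
        'MEASURE_CORRELATIONS[density matrix_up]'   : 'cdag_up:c_up',
        'MEASURE_CORRELATIONS[density matrix_down]' : 'cdag_down:c_down',
    }]

    # Write the XML job files and run the sparse-diagonalization application.
    input_file = pyalps.writeInputFiles('hubbard_ed', parms)
    pyalps.runApplication('sparsediag', input_file)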
Correlation measurements are not yet optimized, and neither, in fact, is the construction of the matrix. The diagonalization codes are written to be flexible, but they are not state of the art in terms of performance.
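One workaround in the meantime is to strip the MEASURE_CORRELATIONS entries from the main run and compute the correlations only in a separate, smaller job for the parameter points where you actually need them. A rough sketch in pyalps (parms is a list of parameter dictionaries such as the one sketched in your message; the file prefix is illustrative):

    import pyalps

    # Drop the unoptimized correlation measurements for the fast run;
    # keep them only in a dedicated job where they are really needed.
    fast_parms = [{k: v for k, v in p.items()
                   if not k.startswith('MEASURE_CORRELATIONS')}
                  for p in parms]

    input_file = pyalps.writeInputFiles('hubbard_ed_fast', fast_parms)
    pyalps.runApplication('sparsediag', input_file)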
Best regards
Matthias