Dear Synge, thank you for your attention.
So in my case, having a lattice with a 16-atom unit cell and 48 intra- and intercell bonds (coupling constants), does this mean I would need to multiply the estimated time for some simple reference model by 48?
The THERMALIZATION and SWEEPS parameters are chosen to meet the relative errors set for the observables being measured. If we increase the size of the system, say L=8,12,16,24,..., but leave SWEEPS the same, the relative error increases and the accuracy of the simulation drops, doesn't it?
I'm asking to clarify this issue because the cluster I use limits job running time to 48 hours.
And is it possible, in principle, to stop an ALPS calculation and restart it later as a new process? Or, in that case, would I need to analyse the QMC statistics manually?
Dear Oleh,
I think that the default values (SWEEPS = 65536, THERMALIZATION = SWEEPS/8) should be enough for a rough estimate of the critical temperature of simple models.
The CPU time of the looper code is asymptotically proportional to the number of bonds (more precisely, to the sum of the coupling constants), as well as to the inverse temperature. For example, a QMC calculation of the simple-cubic S=1/2 ferromagnetic Heisenberg model near the Curie point (T \sim 0.8) takes about
16 sec (L=8), 52 sec (L=12), 134 sec (L=16), 450 sec (L=24), 1490 sec (L=36)
on my x86 workstation.
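Under these scaling statements, the cost for a larger lattice or a richer unit cell can be extrapolated from a single reference timing. The sketch below is not ALPS code; it simply assumes t ∝ L^3 at fixed temperature (number of bonds grows as L^3 for a 3D lattice) and rescales by the bonds-per-cell ratio, using the L=8 timing above as the reference:

```python
# Rough extrapolation sketch (not ALPS code): assumes CPU time scales as
# t ∝ (number of bonds) ∝ L^3 at fixed temperature.
t_ref, L_ref = 16.0, 8  # 16 sec for L=8 (simple-cubic reference timing above)

def estimate_seconds(L, bond_factor=1.0):
    """Estimated looper CPU time in seconds.

    bond_factor rescales for a different unit cell, e.g. 48/3 = 16
    bonds-per-cell relative to the simple-cubic reference.
    """
    return t_ref * (L / L_ref) ** 3 * bond_factor

# A 16-atom, 48-bond unit cell at L=24: roughly 6.9e3 s, i.e. about 2 hours
print(estimate_seconds(24, bond_factor=48 / 3))
```

This is only an order-of-magnitude guide; prefactors depend on temperature, coupling strengths, and hardware.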
Best, Synge
On 2014/10/01, at 3:40, Menchyshyn Oleh oleh.menchyshyn@gmail.com wrote:
Dear Synge,
thank you for your hints. It works nicely. I think it would be useful to update the tutorial on that subject, to make it clearer that the parameter file is not the lattice-description file. I knew it was a two-dimensional model; I used it on purpose, just as an example.
Could someone experienced in QMC loop simulations please comment on my bigger problem?
Message: 10
Date: Mon, 29 Sep 2014 11:03:36 +0900
From: Synge Todo wistaria@comp-phys.org
To: comp-phys-alps-users@lists.phys.ethz.ch
Subject: Re: [ALPS-users] complexity estimation for the 3D ferromagnetic Heisenberg model
Dear Oleh,
It seems that your lattice "p_lat" is not defined in THREE dimensions, but in TWO dimensions. Are you really simulating a three-dimensional model?
Best, Synge
On 2014/09/29, at 5:09, Menchyshyn Oleh oleh.menchyshyn@gmail.com
wrote:
Dear ALPS community,
I am trying to obtain the critical temperature of the 3D ferromagnetic S=1/2 Heisenberg model on a cubic lattice whose unit cell contains 16 atoms. I have run QMC "loop" simulations for lattices with L = 8 (i.e. 8*8*8 unit cells), and also for L = 10 and L = 12.
But there was only a slight bend in the magnetisation curve as a sign of the phase transition, which must certainly occur for a ferromagnetic model. I know I should use the Binder cumulant and finite-size scaling to locate the phase-transition point correctly.
I have enlarged my lattice to L = 24, but the simulation runs very slowly. Since my resources are limited to just tens of cores, and I have a feeling I would need at least L = 32 (48?), I want to ask:
Based on your experience, of what order should the THERMALIZATION and SWEEPS parameters be?
How can I estimate the computational complexity of my problem and the time it would take?
Or maybe I am doing something wrong?
One more technical issue: the ALPS tutorial says that the correctness of a lattice definition can be checked with the "printgraph" tool. I used it with a couple of definition files, but all attempts resulted in:
Caught exception: parameter parse error at "<LATTICES> <LATTICEGRAPH
name="p"
For reference, here is the definition of a lattice that I know works well with the simulation tools but fails with "printgraph":
<LATTICES>
  <LATTICEGRAPH name="p_lat">
    <FINITELATTICE>
      <LATTICE dimension="2">
        <BASIS><VECTOR>1 0</VECTOR><VECTOR>0 1</VECTOR></BASIS>
      </LATTICE>
      <PARAMETER name="L"/>
      <PARAMETER name="M"/>
      <EXTENT dimension="1" size="L"/>
      <EXTENT dimension="2" size="M"/>
      <BOUNDARY type="periodic"/>
    </FINITELATTICE>
    <UNITCELL dimension="2" vertices="6">
      <VERTEX id="1"><COORDINATE> 0.6 0.2 </COORDINATE></VERTEX>
      <VERTEX id="2"><COORDINATE> 0.6 0.6 </COORDINATE></VERTEX>
      <VERTEX id="3"><COORDINATE> 0.2 0.6 </COORDINATE></VERTEX>
      <VERTEX id="4"><COORDINATE> 0.2 0.2 </COORDINATE></VERTEX>
      <VERTEX id="5"><COORDINATE> 0.8 0.4 </COORDINATE></VERTEX>
      <VERTEX id="6"><COORDINATE> 0.4 0.8 </COORDINATE></VERTEX>
      <EDGE type="2"><SOURCE vertex="1"/><TARGET vertex="2"/></EDGE>
      <EDGE type="2"><SOURCE vertex="2"/><TARGET vertex="3"/></EDGE>
      <EDGE type="2"><SOURCE vertex="3"/><TARGET vertex="4"/></EDGE>
      <EDGE type="2"><SOURCE vertex="1"/><TARGET vertex="4"/></EDGE>
      <EDGE type="3"><SOURCE vertex="1"/><TARGET vertex="5"/></EDGE>
      <EDGE type="3"><SOURCE vertex="2"/><TARGET vertex="6"/></EDGE>
      <EDGE type="1"><SOURCE vertex="2"/><TARGET vertex="5"/></EDGE>
      <EDGE type="1"><SOURCE vertex="3"/><TARGET vertex="6"/></EDGE>
      <EDGE type="3"><SOURCE vertex="3"/><TARGET vertex="5" offset="-1 0"/></EDGE>
      <EDGE type="1"><SOURCE vertex="4"/><TARGET vertex="5" offset="-1 0"/></EDGE>
      <EDGE type="3"><SOURCE vertex="4"/><TARGET vertex="6" offset="0 -1"/></EDGE>
      <EDGE type="1"><SOURCE vertex="1"/><TARGET vertex="6" offset="0 -1"/></EDGE>
    </UNITCELL>
  </LATTICEGRAPH>
</LATTICES>
Best regards, Oleh Menchyshyn
Dear Oleh,
On 2014/10/01, at 21:40, Menchyshyn Oleh oleh.menchyshyn@gmail.com wrote:
So in my case, having a lattice with a 16-atom unit cell and 48 intra- and intercell bonds (coupling constants), does this mean I would need to multiply the estimated time for some simple reference model by 48?
Since there are 3 bonds per unit cell in my case, 48/3 = 16 would be the correct factor to use.
The THERMALIZATION and SWEEPS parameters are chosen to meet the relative errors set for the observables being measured. If we increase the size of the system, say L=8,12,16,24,..., but leave SWEEPS the same, the relative error increases and the accuracy of the simulation drops, doesn't it?
The increase in the error bar will be very slow (or sometimes negligible) for the loop algorithm, since the autocorrelation time stays O(1) irrespective of the system size.
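For intuition, this can be sketched with the generic Monte Carlo error estimate err ∝ sqrt(2·tau_int / N_sweeps). This is textbook statistics, not loop-code internals; tau_int = O(1) is the loop-algorithm property referred to above, and sigma_rel below is a hypothetical single-measurement spread:

```python
# Generic MC error scaling sketch: relative error ~ sigma * sqrt(2*tau_int/N).
# With tau_int = O(1), as for the loop algorithm, the error does not grow with
# system size; halving it requires 4x the sweeps.
import math

def relative_error(n_sweeps, tau_int=1.0, sigma_rel=1.0):
    # sigma_rel: relative standard deviation of a single measurement (assumed)
    return sigma_rel * math.sqrt(2.0 * tau_int / n_sweeps)

print(relative_error(65536))      # default SWEEPS: ~0.55% of sigma_rel
print(relative_error(4 * 65536))  # 4x sweeps -> half the error
```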
I'm asking to clarify this issue because the cluster I use limits job running time to 48 hours.
And is it possible, in principle, to stop an ALPS calculation and restart it later as a new process? Or, in that case, would I need to analyse the QMC statistics manually?
You can use the command-line option "--time-limit" to specify the time after which the program will write final checkpoints and terminate. The simulation will continue from the checkpoints when the program is executed again.
Please read http://alps.comp-phys.org/mediawiki/index.php/ALPS_using_the_command_line#Co... for more details.
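A minimal sketch of fitting the checkpoint/restart cycle under a 48-hour queue cap. The parameter-file name "parm.in.xml", the one-hour margin, and the assumption that the limit is given in seconds are all illustrative guesses, not verified ALPS behaviour; check the linked documentation for the exact option syntax:

```python
# Sketch: choose a --time-limit value that leaves a safety margin under the
# cluster's 48-hour wall-clock cap so the final checkpoint can be written.
wall_limit_h = 48   # cluster job limit (hours)
margin_h = 1        # assumed margin for checkpointing and cleanup
time_limit_s = (wall_limit_h - margin_h) * 3600

# "parm.in.xml" is a hypothetical parameter-file name.
print(f"loop --time-limit {time_limit_s} parm.in.xml")
```

Re-submitting the same command after the job ends should then resume from the checkpoints, per the documentation referenced above.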
Best, Synge