Dear Giuseppe,
ok, thank you very much, and sorry for the name! I made an ugly mixture of your surname and the Monte Carlo method, which is actually my nightmare!
Ciao Rachele
2012/9/19 Giuseppe Carleo giuscarl@gmail.com
Dear Rachele,
actually you should reason in terms of the statistical error you want to achieve for a given observable, not in terms of the running time.
Say you want an error of 1.0e-4 on the total energy, and that with 2 processors this will be achieved after 30 minutes. Now, with 8 processors, the same statistical error will be achieved after about 15 minutes (in the ideal case). So, this means that you can run your simulation for only 15 minutes instead of 30 minutes, and get the same statistical error. That's how parallelisation can greatly help in this case.
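Giuseppe's point can be illustrated with a toy Monte Carlo estimate (an illustrative Python sketch, not ALPS code; estimating the mean of Uniform(0,1) stands in for the real observable): the statistical error depends only on the *total* number of samples, so spreading a fixed sample budget over more processors shortens the wall time per processor without changing the error.

```python
import math
import random

def mc_mean_error(n_samples, seed):
    """Estimate the mean of Uniform(0,1) from n_samples draws,
    returning (estimate, standard error of the mean)."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_samples)]
    mean = sum(xs) / n_samples
    var = sum((x - mean) ** 2 for x in xs) / (n_samples - 1)
    return mean, math.sqrt(var / n_samples)

total = 400_000  # fixed total sample budget
for procs in (2, 8):
    per_proc = total // procs  # work (and hence wall time) per processor
    # independent per-processor estimates, then their average
    results = [mc_mean_error(per_proc, seed=p) for p in range(procs)]
    combined = sum(m for m, _ in results) / procs
    # error of the averaged estimate: single-run error / sqrt(procs)
    err = results[0][1] / math.sqrt(procs)
    print(f"{procs} procs, {per_proc} samples each -> error ~ {err:.2e}")
```

Both settings give essentially the same error, while each processor draws 4 times fewer samples in the 8-processor case.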
Ciao,
Giuseppe (not Carlo)
Dear Evgeni and Carlo,
thank you very much for your answers.
Naively I thought that if I run a Monte Carlo simulation on several processors (let's say 4 processors), the time needed for the simulation to finish should be much less (let's say around 4 times less) than the time needed by the same simulation on 1 processor.
This means that if I want to simulate the Ising model on a cubic lattice of edge 8, the simulation will take ~200 seconds to finish on 1 processor, ~100 seconds on 2 processors, ~50 seconds on 4 processors, and so on...
I see instead, and you confirm my suspicion, that I can only improve the statistical error on the results but not the running time.
Thank you very much for your help. I wish you all the best, Rachele
2012/9/19 Giuseppe Carleo giuscarl@gmail.com
Dear Rachele,
if I understand your situation correctly, the fact that each simulation on 2, 4, 8, 12... processors takes exactly the same computational time means that the total number of Monte Carlo iterations on each processor is always the same. In other words, on each processor you make a certain number of iterations which is **independent** of the total number of processors.
The advantage of using more processors is that the statistical error you will have with, say, 8 processors should be about a factor of 2 smaller than the statistical error you have when running with 2 processors.
You might verify if this is the case.
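The check Giuseppe suggests can be sketched with a toy model (illustrative Python, not ALPS code; a uniform-mean estimate stands in for the real observable): with a fixed number of iterations per processor, the error of the combined estimate should shrink like 1/sqrt(P), i.e. about a factor of 2 going from 2 to 8 processors.

```python
import math
import random

def single_run_estimate(n_iter, seed):
    """One 'processor': estimate the mean of Uniform(0,1) from n_iter samples."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_iter)) / n_iter

n_iter = 5_000   # iterations per processor, independent of processor count
n_trials = 200   # repeat to measure the spread of the combined estimate

for procs in (2, 8):
    errors = []
    for t in range(n_trials):
        runs = [single_run_estimate(n_iter, seed=1000 * t + p) for p in range(procs)]
        errors.append(sum(runs) / procs - 0.5)  # deviation from the true mean 0.5
    rms = math.sqrt(sum(e * e for e in errors) / n_trials)
    print(f"{procs} processors -> rms error {rms:.2e}")
```

The 8-processor rms error comes out about half the 2-processor one, matching the 1/sqrt(P) scaling of independent runs.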
Regards,
Giuseppe
Dear Mr Troyer,
sorry to write again; it is only that I haven't heard anything from you and I'm not sure you received my mail. I have to run MC simulations for classical O(n) spin models in parallel.
I used the command
mpirun -np 4 spinmc --mpi --Tmin 300 --time-limit 2400 --write-xml Ising3DL32mpiprocs4.out.xml
to mean that I want to run it on 4 processors. The simulation goes fine and in the end I have these output files:
Ising3DL32mpiprocs4.out.xml Ising3DL32mpiprocs4.task1.out.run1.xml Ising3DL32mpiprocs4.task1.out.run2.xml Ising3DL32mpiprocs4.task1.out.run3.xml Ising3DL32mpiprocs4.task1.out.run4.xml
plus some other files. I am a bit surprised, since in every file Ising3DL32mpiprocs4.task1.out.run*.xml it looks like I have a separate simulation of the same model, and it does not seem that the whole simulation (the one generated by the file Ising3DL32mpiprocs4.in.xml) is being partitioned into 4 parts.
In fact the simulation takes exactly the same time if I run it on 2, 4, 8, 12 .... processors. I don't have any gain in time with the parallelization of the program.
Why? Am I missing any flags in the command above, or is the program really not written for this kind of parallelization?
Thank you very much for the help.
Waiting for your answer, all the best
Rachele Nerattini
I have to run very long simulations of classical O(n) models in three spatial dimensions. In particular, I study the Ising model, the XY model, the Heisenberg model and the O(4) model at the critical value of the temperature. To save time I wanted to run the simulations in parallel.
An example of the input file I used is:
============================
LATTICE="simple cubic lattice"
LATTICE_LIBRARY="lattices.xml"
T=4.511441614
J=1
THERMALIZATION=200000
SWEEPS=1000000
UPDATE="cluster"
MODEL="Ising"
{L=32;}
============================
And the command I used to run the simulation on 4 processors is
mpirun -np 4 spinmc --mpi --Tmin 300 --time-limit 2400 --write-xml Ising3DL32mpiprocs4.out.xml (1)
To run the same simulation in a serial manner I use the command
spinmc --Tmin 300 --time-limit 2400 --write-xml Ising3DL32serial.out.xml (2)
In both cases the simulation runs properly, BUT the gain, in terms of time, of the parallel run with respect to the serial run is almost zero. This is true independently of the number of processors that I used (2, 4, 8, 12).
Am I making some mistake in the command line (1)? Am I missing any flags or something like that? Or is the parallelization of the program done only at the compiler level (in the sense that the program spinmc can run on different processors, but as a matter of fact it is not really optimized in this sense...)?
Thank you for your attention and for your answer.
With best regards,
Rachele
2012/7/1 Rachele Nerattini r.nerattini@gmail.com
Thank you again for the quick and precise help.
2012/7/1 Rachele Nerattini r.nerattini@gmail.com
ok, I'll try to run it on CINECA and I'll tell you if it works.
2012/7/1 Matthias Troyer troyer@phys.ethz.ch
On 1 Jul 2012, at 15:25, Rachele Nerattini wrote:
Thank you all very much! Yes, I think this could help. I'll send your mail to the CINECA-help-desk so that they will tell me if this is all we need and how to concatenate jobs.
In any case, if I understand well, all I have to do is:
Nearly
0) use parameter2xml to generate the *.in.xml file;
1) run the first block, inserting the flag ' --time-limit #sec ' in the command line, which tells the program to stop after #sec seconds;
2) run the second block, again using the flag ' --time-limit #sec ' in the command line, and the .out.xml file as input file;
3) go on like this until the end of the simulations.
Yes
Is that ok? As for the walltime limit, I think it is 24 hours (24:00:00)...
24 hours will be 86400 seconds, but the code will need time to write the checkpoints, so specify about 1000 seconds less to be on the safe side.
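Put together, the chained runs could look like the following job-script sketch (hypothetical file names; the flags are the ones already used in this thread, and 85400 s leaves roughly 1000 s of the 24 h walltime for writing checkpoints):

```shell
#!/bin/bash
# prepare the input once: generates Ising3DL32.in.xml
parameter2xml Ising3DL32

# first block: start from the .in.xml file
mpirun -np 4 spinmc --mpi --Tmin 300 --time-limit 85400 --write-xml Ising3DL32.in.xml

# subsequent blocks: restart from the .out.xml checkpoint, so the
# simulation continues instead of starting from scratch
mpirun -np 4 spinmc --mpi --Tmin 300 --time-limit 85400 --write-xml Ising3DL32.out.xml
mpirun -np 4 spinmc --mpi --Tmin 300 --time-limit 85400 --write-xml Ising3DL32.out.xml
```

On a cluster, each block would of course go into its own batch submission, with the jobs made to run in cascade by the scheduler.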
Thank you again, I'll let you know if everything works. Bye for now, Rachele
2012/7/1 Fabien Alet alet@irsamc.ups-tlse.fr
> Dear Rachele,
>
> if I understood correctly, what you need to do is:
>
> 0) run your job locally for 10 seconds or so on the main server, so that the file Ising3DL10mpiprocs10.out.xml gets generated (e.g. something like spinmc --time-limit 10 --write-xml Ising3DL10mpiprocs10.in.xml; alternatively, just stop the job with CTRL-C)
>
> 1) add the time limit of 2.4 hours in the command line where you execute your observable, e.g.
> mpirun -np 10 spinmc --mpi --Tmin 100 --time-limit 9000 --write-xml Ising3DL10mpiprocs10.out.xml
> [actually you should give slightly less than 2.4 hours, so that the files have time to be written to disk; this is why I used 9000 seconds]
>
> Note that I used .out.xml, and not .in.xml, so that your script really continues your ongoing jobs and does not restart from scratch every time.
>
> I hope this can help,
> Best
> Fabien
>
> On 1 Jul 2012, at 14:40, Rachele Nerattini wrote:
>
> Dear Mr. Troyer,
>
> I read the page you suggested, but I still don't understand. It says how to restart a simulation that stopped because the computer shut down, or how to make it stop before its natural end in case you have achieved the required error, but I don't understand how to stop it in a controlled way.
>
> For instance, let's suppose I have to run a simulation which is 24 hours long. I want to divide the long run into 10 runs of 2.4 hours each. The output of the first run must be the input of the following one, and I want to do all the runs in cascade.
>
> What do I have to write in the input file to do that? Sorry, but I didn't find it in the page
>
> http://alps.comp-phys.org/mediawiki/index.php/Documentation:Running
>
> certainly because I'm not an expert in this kind of thing.
>
> I copy and paste below an example of an input file I wrote for CINECA and an example of the script I had to write to run a test job.
>
> Can you tell me what I have to change to make it stop after a certain number of seconds? Then I'll ask the CINECA assistance how to make all the blocks run in cascade.
>
> Thank you and all the best,
>
> Rachele Nerattini
>
> input file = Ising3DL10mpiprocs10
>
> LATTICE="simple cubic lattice"
> LATTICE_LIBRARY="lattices.xml"
> T=4.511441614
> J=1
> THERMALIZATION=200000
> SWEEPS=1000000
> UPDATE="cluster"
> MODEL="Ising"
> {L=10;}
>
> script = Ising3DL10mpiprocs10.sh
>
> #!/bin/bash
> #PBS -A name_of_the_project
> #PBS -l walltime=1:00:00
> #PBS -l select=1:ncpus=10:mpiprocs=10:mem=40GB
> #PBS -q parallel
> #PBS -o Ising3DL10mpiprocs10.out
> #PBS -e Ising3DL10mpiprocs10.err
> # put the executable in the PATH
> module load autoload alps
> # cd into the directory where you have the input and job.sh
> cd $PBS_O_WORKDIR
> # prepare the input
> parameter2xml Ising3DL10mpiprocs10
> # load openmpi to have mpirun in the PATH
> module load profile/advanced openmpi/1.4.4--intel--co-2011.6.233--binary
> mpirun -np 10 spinmc --mpi --Tmin 100 --write-xml Ising3DL10mpiprocs10.in.xml
>
> qsub Ising3DL10mpiprocs10.sh
>
> What do I have to add to have what I need?
>
> Thank you again for the help,
>
> I wish you all the best
>
> Rachele Nerattini
>
>
> 2012/7/1 Rachele Nerattini r.nerattini@gmail.com
>
>> Thank you very much for the help! I'll do it immediately.
>> All the best
>> Rachele Nerattini
>>
>>
>> 2012/7/1 Matthias Troyer troyer@phys.ethz.ch
>>
>>> It is explained here:
>>>
>>> http://alps.comp-phys.org/mediawiki/index.php/Documentation:Running
>>>
>>> Best regards
>>>
>>> Matthias Troyer
>>>
>>> On Jul 1, 2012, at 12:18 PM, Rachele Nerattini wrote:
>>>
>>> Dear Mr. Troyer,
>>>
>>> I'm using Monte Carlo simulations for classical O(n) spin models. To be more precise, I'm using the spinmc algorithm for the Ising, XY and Heisenberg models.
>>>
>>> Thank you for the help,
>>>
>>> all the best
>>>
>>> Rachele Nerattini
>>>
>>>
>>> 2012/6/28 Matthias Troyer troyer@phys.ethz.ch
>>>
>>>> Dear Rachele Nerattini,
>>>>
>>>> Are you using Monte Carlo simulations or one of the other codes?
>>>>
>>>> With best regards
>>>>
>>>> Matthias Troyer
>>>>
>>>> On 28 Jun 2012, at 16:59, Rachele Nerattini wrote:
>>>>
>>>> > Dear Mr. Troyer,
>>>> >
>>>> > I'm running some simulations using ALPS on the CINECA cluster in Bologna. I will have to run really long simulations which go well beyond the walltime limit of the machine.
>>>> >
>>>> > To do that I have to divide the whole run into several subsections and then run all the processes in cascade.
>>>> >
>>>> > Can you tell me how I can do that with ALPS? What are the stop/restart commands that I have to use?
>>>> >
>>>> > Thank you for your help,
>>>> >
>>>> > Rachele Nerattini