Dear Dominik,
The parapack scheduler supports parallel workers. You can run the exchange worker in parallel by using multiple threads or processes, provided that
1. the parameters are given by XML files instead of the standard input (running with standard input is for debugging or testing), and
2. multiple threads are assigned to each worker by using the -p (--threads-per-clone) option (default is -p 1).
Please try, e.g.,
$ parameter2xml exchange.ip
$ mpirun -np 4 ./exchange --mpi --threads-per-clone 4 exchange.ip.in.xml
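The number of MPI processes (-np) and the number of threads per clone (-p) can be chosen independently; for example (a minimal sketch, using the same exchange.ip.in.xml as above):

$ mpirun -np 4 ./exchange --mpi exchange.ip.in.xml                       # 4 processes, default -p 1
$ mpirun -np 1 ./exchange --mpi --threads-per-clone 4 exchange.ip.in.xml # 1 process, 4 threads per clone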
Best, Synge
From: Dominik Schildknecht <dominik.schildknecht@psi.ch>
Subject: [ALPS-users] execution time of example/parapack/exchange
Date: 16 May 2017 at 01:48:28 GMT-7
To: comp-phys-alps-users@lists.phys.ethz.ch
Reply-To: comp-phys-alps-users@lists.phys.ethz.ch
Dear all,
I was testing your example in example/parapack/exchange, with exchange.ip modified to run only the ising case, but with more sweeps, in order to measure the time spent.
I know that the parallel_exchange_worker gets called (visible via the line "EXMC: number of replicas on each process = 2 2", which is only printed by the parallel_exchange_worker). From this output I would assume that the different replicas get distributed over the cores.
The time spent in the simulation, however, stays the same irrespective of the number of processes I give to MPI, or even whether I use MPI at all.
I call the program in the following way:
mpirun -np 2 ./exchange --mpi < exchange.ip
or, respectively,
./exchange < exchange.ip
Am I missing an important argument in the input file or on the command line, without which the replicas are not executed in parallel?
Best, Dominik
--
Paul Scherrer Institut
Dominik Schildknecht
PhD Student
WHGA/129
CH-5232 Villigen-PSI
Phone: +41 56 310 55 68
Dear Synge,
Thanks for your fast reply. I tried your suggestions, but without success. I wrote a small script (attached, in case you also want to test it) that measures the execution time over several runs and does some statistics on it.
It calls the command with
mpirun -np N ./exchange --mpi --threads-per-clone N exchange.ip.in.xml
for N from 1 to 4. The execution time seems to increase slightly with the number of MPI processes, which I assume comes from the additional MPI overhead. However, I see no speed-up from the parallel execution of different replicas.
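In outline, the timing loop looks like this (a minimal sketch, not the attached script itself; it assumes exchange.ip.in.xml was generated with parameter2xml as above):

#!/bin/sh
# Illustrative timing loop (sketch): time the same run for N = 1..4.
# The attached script additionally repeats each run and does statistics on the results.
for N in 1 2 3 4; do
  echo "N = $N"
  /usr/bin/time -p mpirun -np $N ./exchange --mpi --threads-per-clone $N exchange.ip.in.xml > /dev/null
done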
Did I miss something else? Best, Dominik