Hello,
Sometimes the QMC worm code (used for the Bose-Hubbard model) terminates with the following error:
(first come the earlier successful simulations, then ...)
Created run 85 remote on Host ID: 84
Created run 86 remote on Host ID: 85
Created run 87 remote on Host ID: 86
Created run 88 remote on Host ID: 87
All processes have been assigned
Checking if Simulation 1 is finished: not yet, next check in 120 seconds ( 0% done).
q = -0
state1 = 0
state2 = 0
bond_type = 0
zero matrix element in remove_jump
application called MPI_Abort(MPI_COMM_WORLD, -2) - process 67
Could somebody advise me on what I can do?
Best regards, Kuba
We are aware of this problem and will fix it soon.
Matthias
Hi Kuba,
I finally found a reproducible case myself and have fixed it. The fix will be included in the ALPS 2.0 nightly snapshot, which should be available tomorrow.
Matthias