Hello,

 

I am running ALPS 2.3.0 on Windows 10 using conda 4.8.3, with python 2.7.15, boost 1.64.0, and hdf5 1.8.18.

 

When I attempt to run the loop simulation, it starts the scheduler and initializes:

 

ALPS/looper version 3.2b12-20100128 (2010/01/28)
  multi-cluster quantum Monte Carlo algorithms for spin systems
  available from http://wistaria.comp-phys.org/alps-looper/
  copyright (c) 1997-2010 by Synge Todo <wistaria@comp-phys.org>

using ALPS/parapack scheduler
  a Monte Carlo scheduler for multiple-level parallelization
  copyright (c) 1997-2016 by Synge Todo <wistaria@comp-phys.org>

based on the ALPS libraries version 2.3.0
  available from http://alps.comp-phys.org/
  copyright (c) 1994-2016 by the ALPS collaboration.
  Consult the web page for license details.
  For details see the publication:
  B. Bauer et al., J. Stat. Mech. (2011) P05001.

[2020-May-12 08:41:47]: starting scheduler on unnamed
  master input file  = [...]\parm2c.in.xml
  master output file = [...]\parm2c.out.xml
  termination file   = [disabled]
  total number of thread(s) = 1
  thread(s) per clone       = 1
  number of thread group(s) = 1
  auto evaluation = yes
  time limit = unlimited
  interval between checkpointing  = 3600 seconds
  interval between progress report = 600 seconds
  interval between vmusage report = infinity
  task range = all
  worker dump format = hdf5
  worker dump policy = running workers only
[2020-May-12 08:41:47]: task status: total number of tasks = 15
  new = 15, running = 0, continuing = 0, suspended = 0, finished = 0, completed = 0, skipped = 0
[2020-May-12 08:41:47]: starting 1 threadgroup(s)
[2020-May-12 08:41:47]: dispatching a new clone[1,1] on threadgroup[1]

 

but it stops there without any error or warning, leaving behind a parm2c.out.xml.lck file. Additionally, pyalps.runApplication returns the value -1073740777 (0xC0000417, STATUS_INVALID_CRUNTIME_PARAMETER).

 

For completeness, the parameters are defined as:

for t in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.25, 1.5, 1.75, 2.0]:
    parms.append(
        {
            'LATTICE': "chain lattice",
            'MODEL': "spin",
            'local_S': 0.5,
            'L': 60,
            'J': 1,
            'THERMALIZATION': 15000,
            'SWEEPS': 150000,
            'ALGORITHM': "loop",
            'T': t
        }
    )
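
The tasks are then written out and launched via pyalps in the usual tutorial fashion, roughly like this (a minimal sketch; the Tmin value is just the one used in the tutorial scripts and is not essential):

import pyalps

# write the task files using the same prefix that appears in the scheduler output above
input_file = pyalps.writeInputFiles('parm2c', parms)

# run the loop application on the generated master input file;
# this is the call that comes back with -1073740777
res = pyalps.runApplication('loop', input_file, Tmin=5)
print(res)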

 

I’ve also tried running this under Miniconda2 with python 2.7.16, boost 1.65.1, and hdf5 1.10.1, both with and without setting the environment variable HDF5_USE_FILE_LOCKING to FALSE, with no change in behaviour.
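
For reference, setting that variable from within Python looks roughly like this (a minimal sketch; setting it in the shell before launching Python should be equivalent):

import os

# disable HDF5 1.10+ file locking before any HDF5 file is opened
os.environ['HDF5_USE_FILE_LOCKING'] = 'FALSE'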

I don’t encounter this issue with the spinmc, dirloop_sse, worm, qwl, sparsediag, or dmrg applications.

However, when running dwa following the DWA-01 tutorial, the simulation initializes and starts on task 1 but eventually crashes without any warning or error and returns the same value (no .lck file is left behind, so this might be a different issue).

 

Has anyone else encountered this issue, or does anyone have ideas about what might be causing it?

Thanks in advance!