Hi Michael,
Thanks for checking my code; there must be something wrong at my end somewhere. I'll check the time and memory limits on our cluster.
I did have another question: is it possible to extract the wave function from the HDF5 files produced by TEBD? Can this be done using the pyalps.hdf5 module?
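Something like the sketch below is what I have in mind -- the file name and the dataset path are just guesses on my part, to show the kind of access I'm after:

  import pyalps.hdf5

  # open one of the TEBD output files read-only ('3to6Quench0.h5' is a
  # made-up name standing in for my actual output file)
  ar = pyalps.hdf5.archive('3to6Quench0.h5', 'r')
  print(ar.list_children('/'))  # inspect which groups the file really contains
  # if the state is stored at all, I would hope for something like:
  # psi = ar['/path/to/wavefunction']  # hypothetical path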
Thanks,
Joseph
On 19 Sep 2013, at 11:00, <comp-phys-alps-users-request@lists.phys.ethz.ch>
<comp-phys-alps-users-request@lists.phys.ethz.ch> wrote:
> Send Comp-phys-alps-users mailing list submissions to
> comp-phys-alps-users@lists.phys.ethz.ch
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.phys.ethz.ch/listinfo/comp-phys-alps-users
> or, via email, send a message with subject or body 'help' to
> comp-phys-alps-users-request@lists.phys.ethz.ch
>
> You can reach the person managing the list at
> comp-phys-alps-users-owner@lists.phys.ethz.ch
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Comp-phys-alps-users digest..."
>
>
> Today's Topics:
>
> 1. Re: Time propagation in TEBD stops prematurely (Michael Wall)
> 2. Re: Making the scheduler switch tasks, obtaining MPI rank
> (Peter Bröcker)
> 3. Re: Making the scheduler switch tasks, obtaining MPI rank
> (Matthias Troyer)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 18 Sep 2013 08:48:45 -0600
> From: Michael Wall <mwall.physics@gmail.com>
> To: comp-phys-alps-users@lists.phys.ethz.ch
> Subject: Re: [ALPS-users] Time propagation in TEBD stops prematurely
> Message-ID:
> <CA+DwVMrz2Y_1Kz7yDSyEqALjskd0iavzFM_9_KTMjPiNzkSzJA@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Joseph,
>
> Your code runs without stopping for me well past 5 time units. Is there a
> time limit for jobs on your cluster? You may also check any memory
> restrictions and make sure you are not going over them. Otherwise, I am
> not able to reproduce this problem.
>
> -Michael
>
>
> On Mon, Sep 16, 2013 at 4:10 AM, Joseph Prentice <
> Joseph.Prentice@physics.ox.ac.uk> wrote:
>
>> Hi Michael,
>>
>> Unfortunately, the fix you suggested, deleting one of my quenches, only
>> worked for a little bit (although I had to change it a little, by making
>> POW 1 instead of 0 - otherwise the quench did not occur at all). As I've
>> increased Chi up to around 800, I've encountered the same issue as before -
>> namely that the program just stops at around t=5 in the real time
>> propagation when it should continue to t=15, with no error message at all.
>> The parameters of my new Python script, incorporating your earlier
>> suggestion, are below. Is there anything else I can do? I'm running on a
>> Linux cluster. Any help would be brilliant.
>>
>> Thank you very much,
>> Joseph
>>
>> Python script parameters:
>> parms = [ {
>>           'L'                        : 128,
>>           'MODEL'                    : 'spin',
>>           'local_S'                  : 0.5,
>>           'CONSERVED_QUANTUMNUMBERS' : 'Sz',
>>           'Sz'                       : 0,
>>           'Jxy'                      : 1,
>>           'Jz'                       : 3.0,
>>           'ITP_CHIS'                 : [80, 80],
>>           'ITP_DTS'                  : [0.05, 0.025],
>>           'ITP_CONVS'                : [1E-10, 1E-11],
>>           'INITIAL_STATE'            : 'ground',
>>           'CHI_LIMIT'                : 800,
>>           'TRUNC_LIMIT'              : 1E-12,
>>           'NUM_THREADS'              : 1,
>>           'TAUS'                     : [15.0],
>>           'POWS'                     : [1.0],
>>           'GS'                       : ['Jz'],
>>           'GIS'                      : [6.0],
>>           'GFS'                      : [6.0],
>>           'NUMSTEPS'                 : [625],
>>           'STEPSFORSTORE'            : [10],
>>           'VERBOSE'                  : 'true'
>> } ]
>>
>> On 22 Aug 2013, at 11:00, <comp-phys-alps-users-request@lists.phys.ethz.ch>
>> <comp-phys-alps-users-request@lists.phys.ethz.ch> wrote:
>>
>>>
>>> Today's Topics:
>>>
>>> 1. Alps 2.1.2 r6963 hybridization expansion: No parameter
>>> available (backes@th.physik.uni-frankfurt.de)
>>> 2. Time propagation in TEBD stops prematurely (Joseph Prentice)
>>> 3. Re: Alps 2.1.2 r6963 hybridization expansion: No parameter
>>> available (Emanuel Gull)
>>> 4. Re: Alps 2.1.2 r6963 hybridization expansion: No parameter
>>> available (Hartmut Hafermann)
>>> 5. Re: Time propagation in TEBD stops prematurely (Michael Wall)
>>> 6. Re: Alps 2.1.2 r6963 hybridization expansion: No parameter
>>> available (backes@th.physik.uni-frankfurt.de)
>>>
>>>
>>> ----------------------------------------------------------------------
>>>
>>> Message: 1
>>> Date: Wed, 21 Aug 2013 13:51:25 +0200
>>> From: backes@th.physik.uni-frankfurt.de
>>> To: comp-phys-alps-users@lists.phys.ethz.ch
>>> Subject: [ALPS-users] Alps 2.1.2 r6963 hybridization expansion: No
>>> parameter available
>>> Message-ID:
>>> <70b90575915bdffa61ec46f7481c0ecb.squirrel@th.physik.uni-frankfurt.de>
>>> Content-Type: text/plain;charset=utf-8
>>>
>>> Dear all,
>>>
>>> I want to use the newest version of the ALPS Hybridization Expansion
>>> Impurity Solver from the nightly build 2.1.2-r6963.
>>> Compilation is successful, but when running the solver with a simple test
>>> case I get the following Error (I shortened the paths for readability):
>>>
>>>> hybridization parm.h5
>>>
>>> terminate called after throwing an instance of 'std::runtime_error'
>>> what(): No parameter available
>>> In [..]/alps/src/alps/ngs/detail/paramproxy.hpp on 78 in cast
>>> [..]applications/dmft/qmc/hybridization(_ZNK4alps6detail10paramproxy4castIiEET_v+0x13d) [0x42dbad]
>>> [..]applications/dmft/qmc/hybridization(_ZN13hybridizationC1ERKN4alps6paramsEi+0x10b8) [0x43f968]
>>> [..]applications/dmft/qmc/hybridization(main+0x103) [0x41dfe3]
>>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x7ffff415fcdd]
>>> [..]applications/dmft/qmc/hybridization() [0x41d409]
>>>
>>> I tested a previous version 2.1.1-r6670 with identical compilation process
>>> on the same input files and it works flawlessly.
>>>
>>> There seem to be some significant changes in this version: There is a new
>>> hybridization2 folder in applications/dmft/qmc/ and also the Documentation
>>> inside this directory vanished. Do I have to change something in my
>>> parm.h5 input file for this new version?
>>>
>>> Thanks for your help.
>>> Best wishes,
>>> Steffen
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 2
>>> Date: Wed, 21 Aug 2013 12:05:05 +0000
>>> From: Joseph Prentice <Joseph.Prentice@physics.ox.ac.uk>
>>> To: "comp-phys-alps-users@lists.phys.ethz.ch"
>>> <comp-phys-alps-users@lists.phys.ethz.ch>
>>> Subject: [ALPS-users] Time propagation in TEBD stops prematurely
>>> Message-ID:
>>> <747861816F6A5C47BE50FCA07F5B2B4A0E25BB@EXCHNG16.physics.ox.ac.uk>
>>> Content-Type: text/plain; charset="us-ascii"
>>>
>>> Hi all,
>>>
>>> I'm having a bit of trouble getting a quench using the XXZ model to run
>>> in TEBD. I'm running the job on a Linux cluster, quenching Delta
>>> instantaneously from 3 to 6, and the program stops earlier than it should.
>>> I turned on the Verbose option, and it seems as though the imaginary time
>>> propagation works fine, but the real time propagation stops prematurely,
>>> with no error message. Does anyone have any idea why this has happened? It
>>> seems very odd. I will attach my Python script in case there is something
>>> wrong with it, and the output I obtained.
>>>
>>> Thanks very much,
>>>
>>> Joseph Prentice
>>> -------------- next part --------------
>>> A non-text attachment was scrubbed...
>>> Name: alpspython.sh.o919929
>>> Type: application/octet-stream
>>> Size: 27402 bytes
>>> Desc: alpspython.sh.o919929
>>> URL: <https://lists.phys.ethz.ch/pipermail/comp-phys-alps-users/attachments/20130821/287d4d1c/attachment-0001.obj>
>>> -------------- next part --------------
>>> A non-text attachment was scrubbed...
>>> Name: 3to6Quench.py
>>> Type: text/x-python
>>> Size: 751 bytes
>>> Desc: 3to6Quench.py
>>> URL: <https://lists.phys.ethz.ch/pipermail/comp-phys-alps-users/attachments/20130821/287d4d1c/attachment-0001.py>
>>>
>>> ------------------------------
>>>
>>> Message: 3
>>> Date: Wed, 21 Aug 2013 09:25:08 -0400
>>> From: Emanuel Gull <emanuel.gull@gmail.com>
>>> To: "comp-phys-alps-users@lists.phys.ethz.ch"
>>> <comp-phys-alps-users@lists.phys.ethz.ch>
>>> Subject: Re: [ALPS-users] Alps 2.1.2 r6963 hybridization expansion: No
>>> parameter available
>>> Message-ID: <9EC1669A-BCAF-4D34-B5AE-24373BE94D65@gmail.com>
>>> Content-Type: text/plain; charset=us-ascii
>>>
>>> Hi Steffen,
>>>
>>> could you please send me the new and old parameter file? There have been
>>> quite a few changes since December and I'll need to figure out which of
>>> these caused the problem...
>>>
>>> Emanuel
>>>
>>> On Aug 21, 2013, at 7:51 AM, backes@th.physik.uni-frankfurt.de wrote:
>>>
>>>> Dear all,
>>>>
>>>> I want to use the newest version of the ALPS Hybridization Expansion
>>>> Impurity Solver from the nightly build 2.1.2-r6963.
>>>> Compilation is successful, but when running the solver with a simple test
>>>> case I get the following Error (I shortened the paths for readability):
>>>>
>>>>> hybridization parm.h5
>>>>
>>>> terminate called after throwing an instance of 'std::runtime_error'
>>>> what(): No parameter available
>>>> In [..]/alps/src/alps/ngs/detail/paramproxy.hpp on 78 in cast
>>>> [..]applications/dmft/qmc/hybridization(_ZNK4alps6detail10paramproxy4castIiEET_v+0x13d) [0x42dbad]
>>>> [..]applications/dmft/qmc/hybridization(_ZN13hybridizationC1ERKN4alps6paramsEi+0x10b8) [0x43f968]
>>>> [..]applications/dmft/qmc/hybridization(main+0x103) [0x41dfe3]
>>>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x7ffff415fcdd]
>>>> [..]applications/dmft/qmc/hybridization() [0x41d409]
>>>>
>>>> I tested a previous version 2.1.1-r6670 with identical compilation process
>>>> on the same input files and it works flawlessly.
>>>>
>>>> There seem to be some significant changes in this version: There is a new
>>>> hybridization2 folder in applications/dmft/qmc/ and also the Documentation
>>>> inside this directory vanished. Do I have to change something in my
>>>> parm.h5 input file for this new version?
>>>>
>>>> Thanks for your help.
>>>> Best wishes,
>>>> Steffen
>>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 4
>>> Date: Wed, 21 Aug 2013 15:36:31 +0200
>>> From: Hartmut Hafermann <hartmut.hafermann@cpht.polytechnique.fr>
>>> To: comp-phys-alps-users@lists.phys.ethz.ch
>>> Subject: Re: [ALPS-users] Alps 2.1.2 r6963 hybridization expansion: No
>>> parameter available
>>> Message-ID:
>>> <203DFAA5-FDC5-4B80-A265-54C3E4D60312@cpht.polytechnique.fr>
>>> Content-Type: text/plain; charset=us-ascii
>>>
>>> Dear Steffen,
>>>
>>> the revision r6963 is relatively old. The current version in the SVN is
>>> 6991. Could you please check if it works with the most recent SVN version?
>>> The hybridization tutorials work for me in the current version. Please
>>> check the latest documentation to see if the parameters in your input file
>>> are up to date. There have been some changes in the past. In case it
>>> doesn't work, could you please send the input file for which it fails?
>>>
>>> Neither revision 6963 nor 6670 has a hybridization2 folder in the SVN;
>>> that folder was removed a while ago. The snapshot seems to be outdated.
>>>
>>> Best regards,
>>> Hartmut
>>>
>>>
>>>
>>> On 21.08.2013, at 13:51, backes@th.physik.uni-frankfurt.de wrote:
>>>
>>>> Dear all,
>>>>
>>>> I want to use the newest version of the ALPS Hybridization Expansion
>>>> Impurity Solver from the nightly build 2.1.2-r6963.
>>>> Compilation is successful, but when running the solver with a simple test
>>>> case I get the following Error (I shortened the paths for readability):
>>>>
>>>>> hybridization parm.h5
>>>>
>>>> terminate called after throwing an instance of 'std::runtime_error'
>>>> what(): No parameter available
>>>> In [..]/alps/src/alps/ngs/detail/paramproxy.hpp on 78 in cast
>>>> [..]applications/dmft/qmc/hybridization(_ZNK4alps6detail10paramproxy4castIiEET_v+0x13d) [0x42dbad]
>>>> [..]applications/dmft/qmc/hybridization(_ZN13hybridizationC1ERKN4alps6paramsEi+0x10b8) [0x43f968]
>>>> [..]applications/dmft/qmc/hybridization(main+0x103) [0x41dfe3]
>>>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x7ffff415fcdd]
>>>> [..]applications/dmft/qmc/hybridization() [0x41d409]
>>>>
>>>> I tested a previous version 2.1.1-r6670 with identical compilation process
>>>> on the same input files and it works flawlessly.
>>>>
>>>> There seem to be some significant changes in this version: There is a new
>>>> hybridization2 folder in applications/dmft/qmc/ and also the Documentation
>>>> inside this directory vanished. Do I have to change something in my
>>>> parm.h5 input file for this new version?
>>>>
>>>> Thanks for your help.
>>>> Best wishes,
>>>> Steffen
>>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 5
>>> Date: Wed, 21 Aug 2013 08:43:02 -0600
>>> From: Michael Wall <mwall.physics@gmail.com>
>>> To: comp-phys-alps-users@lists.phys.ethz.ch
>>> Subject: Re: [ALPS-users] Time propagation in TEBD stops prematurely
>>> Message-ID:
>>> <CA+DwVMoCSRfXTWMys4NZdga6=GvqZCZqwbQPQJh+KuKM53m8hA@mail.gmail.com>
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>> Hi Joseph,
>>>
>>> Are you trying to quench instantaneously from 3 to 6 at t=0? If so, your
>>> code should be modified to
>>>
>>> 'TAUS' : [30.0],
>>> 'POWS' : [0.0],
>>> 'GS' : ['Jz'],
>>> 'GIS' : [6.0],
>>> 'GFS' : [6.0],
>>> 'NUMSTEPS' : [1250],
>>> 'STEPSFORSTORE' : [10]
>>>
>>> Note that you only need single-element arrays, as you only have one quench.
>>> As it stands you have two quenches. The first is for zero time, and leaves
>>> \Delta at 3. The second ramps \Delta from 6 to 6 linearly over a timescale
>>> of 30. The error may be due to propagating with zero time step for the
>>> first quench. Let me know if this is still not clear, or if this doesn't
>>> help.
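>>>
>>> Schematically, each quench i ramps its parameter like this (see the TEBD
>>> documentation for the precise form; this is a paraphrase):
>>>
>>> # g(t) = GIS[i] + (GFS[i] - GIS[i]) * (t / TAUS[i])**POWS[i]
>>> # over NUMSTEPS[i] steps of size TAUS[i]/NUMSTEPS[i]; since (t/tau)**0 == 1,
>>> # POWS[i] = 0 holds g(t) at GFS[i] for the whole quench, i.e. an
>>> # instantaneous quench to the final value.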
>>>
>>> -Michael
>>>
>>>
>>> On Wed, Aug 21, 2013 at 6:05 AM, Joseph Prentice <
>>> Joseph.Prentice@physics.ox.ac.uk> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I'm having a bit of trouble getting a quench using the XXZ model to run
>>>> in TEBD. I'm running the job on a Linux cluster, quenching Delta
>>>> instantaneously from 3 to 6, and the program stops earlier than it should.
>>>> I turned on the Verbose option, and it seems as though the imaginary time
>>>> propagation works fine, but the real time propagation stops prematurely,
>>>> with no error message. Does anyone have any idea why this has happened? It
>>>> seems very odd. I will attach my Python script in case there is something
>>>> wrong with it, and the output I obtained.
>>>>
>>>> Thanks very much,
>>>>
>>>> Joseph Prentice
>>>>
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: <https://lists.phys.ethz.ch/pipermail/comp-phys-alps-users/attachments/20130821/fd64135f/attachment-0001.html>
>>>
>>> ------------------------------
>>>
>>> Message: 6
>>> Date: Wed, 21 Aug 2013 18:45:36 +0200
>>> From: backes@th.physik.uni-frankfurt.de
>>> To: comp-phys-alps-users@lists.phys.ethz.ch
>>> Subject: Re: [ALPS-users] Alps 2.1.2 r6963 hybridization expansion: No
>>> parameter available
>>> Message-ID:
>>> <20df7c852a9b65546090e52c1ff8bbaf.squirrel@th.physik.uni-frankfurt.de>
>>> Content-Type: text/plain;charset=utf-8
>>>
>>> Hi Emanuel,
>>>
>>> thanks for the quick answer! Here is the parameter file I've been using
>>> for both versions of the code. alps-2.1.2-r6963 gives the error,
>>> alps-2.1.1-r6670 runs fine.
>>> But I will try the newest version from the SVN repositories and will
>>> report the results.
>>>
>>> SEED = 0
>>> THERMALIZATION = 1000
>>> SWEEPS = 10000000
>>> MAX_TIME = 60
>>> BETA = 40.0
>>> N_MEAS = 50
>>> N_HISTOGRAM_ORDERS = 50
>>> N_ORBITALS = 10
>>> U_MATRIX = "umatrix.dat"
>>> MU_VECTOR = "mu_vector.dat"
>>> DELTA = "delta_0.dat"
>>> N_TAU = 1000
>>> TEXT_OUTPUT = 1
>>> VERBOSE = 1
>>> OUTPUT_PERIOD = 100000
>>> MEASURE_freq = 0
>>> MEASURE_legendre = 1
>>> N_LEGENDRE = 50
>>> N_MATSUBARA = 500
>>>
>>> best wishes,
>>> Steffen
>>>
>>>> Hi Steffen,
>>>>
>>>> could you please send me the new and old parameter file? There have been
>>>> quite a few changes since December and I'll need to figure out which of
>>>> these caused the problem...
>>>>
>>>> Emanuel
>>>>
>>>> On Aug 21, 2013, at 7:51 AM, backes@th.physik.uni-frankfurt.de wrote:
>>>>
>>>>> Dear all,
>>>>>
>>>>> I want to use the newest version of the ALPS Hybridization Expansion
>>>>> Impurity Solver from the nightly build 2.1.2-r6963.
>>>>> Compilation is successful, but when running the solver with a simple test
>>>>> case I get the following Error (I shortened the paths for readability):
>>>>>
>>>>>> hybridization parm.h5
>>>>>
>>>>> terminate called after throwing an instance of 'std::runtime_error'
>>>>> what(): No parameter available
>>>>> In [..]/alps/src/alps/ngs/detail/paramproxy.hpp on 78 in cast
>>>>> [..]applications/dmft/qmc/hybridization(_ZNK4alps6detail10paramproxy4castIiEET_v+0x13d) [0x42dbad]
>>>>> [..]applications/dmft/qmc/hybridization(_ZN13hybridizationC1ERKN4alps6paramsEi+0x10b8) [0x43f968]
>>>>> [..]applications/dmft/qmc/hybridization(main+0x103) [0x41dfe3]
>>>>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x7ffff415fcdd]
>>>>> [..]applications/dmft/qmc/hybridization() [0x41d409]
>>>>>
>>>>> I tested a previous version 2.1.1-r6670 with identical compilation process
>>>>> on the same input files and it works flawlessly.
>>>>>
>>>>> There seem to be some significant changes in this version: There is a new
>>>>> hybridization2 folder in applications/dmft/qmc/ and also the Documentation
>>>>> inside this directory vanished. Do I have to change something in my
>>>>> parm.h5 input file for this new version?
>>>>>
>>>>> Thanks for your help.
>>>>> Best wishes,
>>>>> Steffen
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> End of Comp-phys-alps-users Digest, Vol 89, Issue 9
>>> ***************************************************
>>
>>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <https://lists.phys.ethz.ch/pipermail/comp-phys-alps-users/attachments/20130918/91ce8737/attachment-0001.html>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 18 Sep 2013 17:35:53 +0200
> From: Peter Bröcker <peter.broecker@uni-koeln.de>
> To: comp-phys-alps-users@lists.phys.ethz.ch
> Subject: Re: [ALPS-users] Making the scheduler switch tasks, obtaining
> MPI rank
> Message-ID: <51C6AFD1-5F6E-4941-8251-3C64B793675C@uni-koeln.de>
> Content-Type: text/plain; charset=us-ascii
>
> Hello,
>
>>>
>>> No, currently it is not possible.
>>> Could you tell me a bit more why you need such functionality?
>>
>
> when I run my simulations, I like to have a rough idea of where they are going. So before switching to the ALPS libraries for scheduling, I defined a threshold for how many sweeps were done on a given task before switching to a different one. That way, I could run a job for maybe 24h and see where the results were headed. This is especially useful when the cluster queue is loaded and it might take some time before a resubmitted job starts. I could set the number of sweeps to a very large number and see whether the result had converged or not. If it had, I would deactivate the task via an entry in its file. That way, jobs that require less work don't take up computing time from those that need more.
>
> I am guessing the same can be done in ALPS by defining a rather low number of sweeps and then changing the xml-files if needed, right?
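>
> Concretely, I mean something like this in the input script (a sketch only;
> SWEEPS is the knob I would keep low and raise on resubmission, and the other
> parameters are placeholders, not my actual model):
>
> import pyalps
> parms = [{'LATTICE': 'square lattice', 'MODEL': 'spin', 'L': 16,
>           'T': 0.5, 'THERMALIZATION': 10000, 'SWEEPS': 50000}]
> input_file = pyalps.writeInputFiles('parm_lowsweeps', parms)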
>
> Does the parapack scheduler have this functionality?
>
>> The easiest is just to split the simulation into several input files, each containing four instances, and then run them alternately for some fixed time.
>>
>> Matthias
>>
>
> That's of course true. When running alpspython on an input file, can I define the number of input files the job should be split into?
>
> Best, Peter
>
> ------------------------------
>
> Message: 3
> Date: Thu, 19 Sep 2013 08:26:39 +0200
> From: Matthias Troyer <troyer@phys.ethz.ch>
> To: comp-phys-alps-users@lists.phys.ethz.ch
> Subject: Re: [ALPS-users] Making the scheduler switch tasks, obtaining
> MPI rank
> Message-ID: <1D20AA66-5016-46D9-AE29-9AB1D241E51E@phys.ethz.ch>
> Content-Type: text/plain; charset=iso-8859-1
>
>
> On Sep 18, 2013, at 5:35 PM, Peter Bröcker <peter.broecker@uni-koeln.de> wrote:
>
>> Hello,
>>
>>>>
>>>> No, currently it is not possible.
>>>> Could you tell me a bit more why you need such functionality?
>>>
>>
>> when I run my simulations, I like to have a rough idea of where they are going. So before switching to the ALPS libraries for scheduling, I defined a threshold for how many sweeps were done on a given task before switching to a different one. That way, I could run a job for maybe 24h and see where the results were headed. This is especially useful when the cluster queue is loaded and it might take some time before a resubmitted job starts. I could set the number of sweeps to a very large number and see whether the result had converged or not. If it had, I would deactivate the task via an entry in its file. That way, jobs that require less work don't take up computing time from those that need more.
>>
>> I am guessing the same can be done in ALPS by defining a rather low number of sweeps and then changing the xml-files if needed, right?
>>
>> Does the parapack scheduler have this functionality?
>>
>>> The easiest is just to split the simulation into several input files, each containing four instances, and then run them alternately for some fixed time.
>>>
>>> Matthias
>>>
>>
>> That's of course true. When running alpspython on an input file, can I define the number of input files the job should be split into?
>
>
> No, but you can simply change the Python code and write multiple files, e.g.:
>
> input_file_a = pyalps.writeInputFiles('parm1a',parms[0:4])
> input_file_b = pyalps.writeInputFiles('parm1b',parms[4:8])
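>
> A sketch of how one could generalize this to an arbitrary number of files
> (untested, just to show the idea; chunk size 4 as above):
>
> chunk = 4
> for i, start in enumerate(range(0, len(parms), chunk)):
>     pyalps.writeInputFiles('parm1_%d' % i, parms[start:start + chunk])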
>
> Matthias
>
>
>
> End of Comp-phys-alps-users Digest, Vol 90, Issue 8
> ***************************************************