Thank you for your reply. I rechecked, and I did not have any problem getting the plots for the original tutorial mc-02, although there was a minor matplotlib issue.

I have also reproduced tutorial mc-03 and had no problem getting those plots.

So, is there any other possible solution?


Thanks,

Santu




On 3 August 2018 at 18:33, Synge Todo <wistaria@phys.s.u-tokyo.ac.jp> wrote:
Dear Santu,

It seems to me that the QMC calculation itself finishes normally, and that the problem is with
the matplotlib installation on your system. Have you succeeded in plotting the susceptibility
for the original tutorial2d on the spin ladder system?
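
If the run itself finished, one way to separate the simulation from the plotting step is to load the results in a small script of their own and write the figure to a file instead of opening a window. The following is only a sketch (the output name susceptibility.png is arbitrary), assuming the parm2d output files from your finished run are still in the working directory:

import matplotlib
matplotlib.use('Agg')   # non-interactive backend: render to file, no display required
import matplotlib.pyplot as plt
import pyalps
import pyalps.plot

# load the susceptibility measured in the finished run and collect it as a function of T
data = pyalps.loadMeasurements(pyalps.getResultFiles(prefix='parm2d'), 'Susceptibility')
susceptibility = pyalps.collectXY(data, x='T', y='Susceptibility')

plt.figure()
pyalps.plot.plot(susceptibility)
plt.xlabel('Temperature $T$')
plt.ylabel('Susceptibility $\chi$')
plt.savefig('susceptibility.png')   # write to file instead of plt.show()

If this produces the PNG, the problem is only in the interactive matplotlib setup and not in ALPS itself.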

Best,
Synge


> On 2018/08/03 at 16:57, S Baidya <santubaidya2009@gmail.com> wrote:
>
> Dear Synge and alps user,
>
>    This time I did a fresh calculation for the FM Kagome lattice with S=3/2 and J=-4.2 using tutorial2d.py, but still got no data as output.
>
>
> /usr/local/lib/python2.7/dist-packages/matplotlib/style/core.py:203: UserWarning: In /usr/share/matplotlib/stylelib/classic.mplstyle: text.dvipnghack is obsolete. Please remove it from your matplotlibrc and/or style files.
>   warnings.warn(message)
> loop parm2d.in.xml
> ALPS/looper version 3.2b12-20100128 (2010/01/28)
>   multi-cluster quantum Monte Carlo algorithms for spin systems
>   available from http://wistaria.comp-phys.org/alps-looper/
>   copyright (c) 1997-2010 by Synge Todo <wistaria@comp-phys.org>
>
> using ALPS/parapack scheduler
>   a Monte Carlo scheduler for multiple-level parallelization
>   copyright (c) 1997-2016 by Synge Todo <wistaria@comp-phys.org>
>
> based on the ALPS libraries version 2.3.0
>   available from http://alps.comp-phys.org/
>   copyright (c) 1994-2016 by the ALPS collaboration.
>   Consult the web page for license details.
>   For details see the publication:
>   B. Bauer et al., J. Stat. Mech. (2011) P05001.
>
> [2018-Aug-03 16:05:55]: starting scheduler on cces-System-Product-Name
>   master input file  = /home/cces/kagome-alps/mc-02/parm2d.in.xml
>   master output file = /home/cces/kagome-alps/mc-02/parm2d.out.xml
>   termination file   = [disabled]
>   total number of thread(s) = 1
>   thread(s) per clone       = 1
>   number of thread group(s) = 1
>   auto evaluation = yes
>   time limit = unlimited
>   interval between checkpointing  = 3600 seconds
>   interval between progress report = 600 seconds
>   interval between vmusage report = infinity
>   task range = all
>   worker dump format = hdf5
>   worker dump policy = running workers only
> [2018-Aug-03 16:05:55]: task status: total number of tasks = 15
>   new = 15, running = 0, continuing = 0, suspended = 0, finished = 0, completed = 0, skipped = 0
> [2018-Aug-03 16:05:55]: starting 1 threadgroup(s)
> [2018-Aug-03 16:05:55]: dispatching a new clone[1,1] on threadgroup[1]
> [2018-Aug-03 16:11:55]: checkpointing task files
> save task
> [2018-Aug-03 16:11:55]: task status: total number of tasks = 15
>   new = 14, running = 1, continuing = 0, suspended = 0, finished = 0, completed = 0, skipped = 0
> [2018-Aug-03 16:15:55]: progress report: clone[1,1] is running (50.9% done)
> [2018-Aug-03 16:25:54]: clone[1,1] finished on threadgroup[1]
> [2018-Aug-03 16:25:54]: dispatching a new clone[2,1] on threadgroup[1]
> [2018-Aug-03 16:34:18]: clone[2,1] finished on threadgroup[1]
> [2018-Aug-03 16:34:18]: dispatching a new clone[3,1] on threadgroup[1]
> [2018-Aug-03 16:38:29]: clone[3,1] finished on threadgroup[1]
> [2018-Aug-03 16:38:29]: dispatching a new clone[4,1] on threadgroup[1]
> [2018-Aug-03 16:41:19]: clone[4,1] finished on threadgroup[1]
> [2018-Aug-03 16:41:19]: dispatching a new clone[5,1] on threadgroup[1]
> [2018-Aug-03 16:43:27]: clone[5,1] finished on threadgroup[1]
> [2018-Aug-03 16:43:27]: dispatching a new clone[6,1] on threadgroup[1]
> [2018-Aug-03 16:45:09]: clone[6,1] finished on threadgroup[1]
> [2018-Aug-03 16:45:09]: dispatching a new clone[7,1] on threadgroup[1]
> [2018-Aug-03 16:46:33]: clone[7,1] finished on threadgroup[1]
> [2018-Aug-03 16:46:33]: dispatching a new clone[8,1] on threadgroup[1]
> [2018-Aug-03 16:47:46]: clone[8,1] finished on threadgroup[1]
> [2018-Aug-03 16:47:46]: dispatching a new clone[9,1] on threadgroup[1]
> [2018-Aug-03 16:48:50]: clone[9,1] finished on threadgroup[1]
> [2018-Aug-03 16:48:50]: dispatching a new clone[10,1] on threadgroup[1]
> [2018-Aug-03 16:49:47]: clone[10,1] finished on threadgroup[1]
> [2018-Aug-03 16:49:47]: dispatching a new clone[11,1] on threadgroup[1]
> [2018-Aug-03 16:50:40]: clone[11,1] finished on threadgroup[1]
> [2018-Aug-03 16:50:40]: dispatching a new clone[12,1] on threadgroup[1]
> [2018-Aug-03 16:51:22]: clone[12,1] finished on threadgroup[1]
> [2018-Aug-03 16:51:22]: dispatching a new clone[13,1] on threadgroup[1]
> [2018-Aug-03 16:51:58]: clone[13,1] finished on threadgroup[1]
> [2018-Aug-03 16:51:58]: dispatching a new clone[14,1] on threadgroup[1]
> [2018-Aug-03 16:52:30]: clone[14,1] finished on threadgroup[1]
> [2018-Aug-03 16:52:30]: dispatching a new clone[15,1] on threadgroup[1]
> [2018-Aug-03 16:52:58]: clone[15,1] finished on threadgroup[1]
> [2018-Aug-03 16:52:58]: all tasks have been finished
> [2018-Aug-03 16:52:58]: task status: total number of tasks = 15
>   new = 0, running = 0, continuing = 0, suspended = 0, finished = 0, completed = 15, skipped = 0
> [2018-Aug-03 16:52:58]: all threads halted
> [2018-Aug-03 16:52:58]: starting evaluation on cces-System-Product-Name
> evaluating parm2d.task1.out.xml
>   loading clones: 1
> evaluating parm2d.task2.out.xml
>   loading clones: 1
> evaluating parm2d.task3.out.xml
>   loading clones: 1
> evaluating parm2d.task4.out.xml
>   loading clones: 1
> evaluating parm2d.task5.out.xml
>   loading clones: 1
> evaluating parm2d.task6.out.xml
>   loading clones: 1
> evaluating parm2d.task7.out.xml
>   loading clones: 1
> evaluating parm2d.task8.out.xml
>   loading clones: 1
> evaluating parm2d.task9.out.xml
>   loading clones: 1
> evaluating parm2d.task10.out.xml
>   loading clones: 1
> evaluating parm2d.task11.out.xml
>   loading clones: 1
> evaluating parm2d.task12.out.xml
>   loading clones: 1
> evaluating parm2d.task13.out.xml
>   loading clones: 1
> evaluating parm2d.task14.out.xml
>   loading clones: 1
> evaluating parm2d.task15.out.xml
>   loading clones: 1
> [2018-Aug-03 16:52:58]: all tasks evaluated
> Cannot open file '/usr/share/matplotlib/images/matplotlib.svg', because: No such file or directory
> Cannot open file '/usr/share/matplotlib/images/matplotlib.svg', because: No such file or directory
>
>
> Can anyone please tell me whether any modification is needed for the two-dimensional Kagome spin lattice?
>
>
>
> Thanks,
>
> Santu
>
> On 3 August 2018 at 14:56, S Baidya <santubaidya2009@gmail.com> wrote:
> Dear Synge,
>
> I just removed parm2d and used tutorial2d.py as follows:
>
> import pyalps
> import matplotlib.pyplot as plt
> import pyalps.plot
>
> #prepare the input parameters
> parms = []
> for t in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.25, 1.5, 1.75, 2.0]:
>     parms.append(
>         {
>           'LATTICE'        : "Kagome lattice",
>           'MODEL'          : "spin",
>           'local_S'        : 1.5,
>           'T'              : t,
>           'J'             : -4.2 ,
>           'THERMALIZATION' : 5000,
>           'SWEEPS'         : 50000,
>           'L'              : 60,
>           'ALGORITHM'      : "loop"
>         }
>     )
>
> #write the input file and run the simulation
> input_file = pyalps.writeInputFiles('parm2d',parms)
> pyalps.runApplication('loop',input_file)
>
> #load the susceptibility and collect it as function of temperature T
> data = pyalps.loadMeasurements(pyalps.getResultFiles(prefix='parm2d'),'Susceptibility')
> susceptibility = pyalps.collectXY(data,x='T',y='Susceptibility')
>
> #make plot
> plt.figure()
> pyalps.plot.plot(susceptibility)
> plt.xlabel('Temperature $T/J$')
> plt.ylabel('Susceptibility $\chi J$')
> plt.ylim(0,0.22)
> plt.title('Quantum Heisenberg ladder')
> plt.show()
>
>
>
> Then I typed alpspython tutorial2d.py and it shows the same situation again:
>
>
> /usr/local/lib/python2.7/dist-packages/matplotlib/style/core.py:203: UserWarning: In /usr/share/matplotlib/stylelib/classic.mplstyle: text.dvipnghack is obsolete. Please remove it from your matplotlibrc and/or style files.
>   warnings.warn(message)
> loop parm2d.in.xml
> ALPS/looper version 3.2b12-20100128 (2010/01/28)
>   multi-cluster quantum Monte Carlo algorithms for spin systems
>   available from http://wistaria.comp-phys.org/alps-looper/
>   copyright (c) 1997-2010 by Synge Todo <wistaria@comp-phys.org>
>
> using ALPS/parapack scheduler
>   a Monte Carlo scheduler for multiple-level parallelization
>   copyright (c) 1997-2016 by Synge Todo <wistaria@comp-phys.org>
>
> based on the ALPS libraries version 2.3.0
>   available from http://alps.comp-phys.org/
>   copyright (c) 1994-2016 by the ALPS collaboration.
>   Consult the web page for license details.
>   For details see the publication:
>   B. Bauer et al., J. Stat. Mech. (2011) P05001.
>
> [2018-Aug-03 14:54:16]: starting scheduler on cces-System-Product-Name
>   master input file  = /home/cces/kagome-alps/mc-02/parm2d.in.xml
>   master output file = /home/cces/kagome-alps/mc-02/parm2d.out.xml
>   termination file   = [disabled]
>   total number of thread(s) = 1
>   thread(s) per clone       = 1
>   number of thread group(s) = 1
>   auto evaluation = yes
>   time limit = unlimited
>   interval between checkpointing  = 3600 seconds
>   interval between progress report = 600 seconds
>   interval between vmusage report = infinity
>   task range = all
>   worker dump format = hdf5
>   worker dump policy = running workers only
> [2018-Aug-03 14:54:16]: task status: total number of tasks = 15
>   new = 15, running = 0, continuing = 0, suspended = 0, finished = 0, completed = 0, skipped = 0
> [2018-Aug-03 14:54:16]: starting 1 threadgroup(s)
> [2018-Aug-03 14:54:16]: dispatching a new clone[1,1] on threadgroup[1]
>
>
>
> Thanks,
>
> Santu
>
>
> On 3 August 2018 at 14:51, Synge Todo <wistaria@phys.s.u-tokyo.ac.jp> wrote:
> Dear Santu,
>
> Strange.  Could you remove parm2d.* from your working directory and run tutorial2d.py again?
>
> BTW, a 2d lattice with L=60 is a bit large as a starting point. I would recommend beginning with a
> smaller value, say, L=8.
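>
> For a first test, a parameter set along the following lines (just a sketch; only the lattice size, temperature grid, and sweep counts are reduced relative to your script, and the exact values are not important) should finish within seconds:
>
> parms = []
> for t in [0.5, 1.0, 1.5, 2.0]:
>     parms.append(
>         {
>           'LATTICE'        : "Kagome lattice",
>           'MODEL'          : "spin",
>           'local_S'        : 1.5,
>           'T'              : t,
>           'J'              : -4.2,
>           'THERMALIZATION' : 1000,
>           'SWEEPS'         : 10000,
>           'L'              : 8,
>           'ALGORITHM'      : "loop"
>         }
>     )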
>
> Best,
> Synge
>
>
> > On 2018/08/03 at 14:39, S Baidya <santubaidya2009@gmail.com> wrote:
> >
> > Dear Synge,
> >
> >   Thank you for your reply. I had made a small mistake: the "J" value should be "-4.2" as the system is FM. Now that it is FM, it should work like the Ising model on any 2d lattice. But I am still getting the same error.
> >
> > Is there any other solution for dealing with this situation?
> >
> >
> > Thanking you,
> >
> > Santu
> >
> >
> >
> > On 3 August 2018 at 14:30, Synge Todo <wistaria@phys.s.u-tokyo.ac.jp> wrote:
> > Dear Santu Baidya,
> >
> > The antiferromagnetic (positive J) Heisenberg model on the kagome lattice has frustration.
> > This causes the so-called negative sign problem in quantum Monte Carlo methods.
> > Although the loop code can simulate such a system and produce some numbers, the error bars
> > become very large, especially at low temperatures.
> >
> > You can use exact diagonalization for frustrated spin models, but the accessible lattice size is
> > very limited in this case due to the spin size S=3/2.
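> >
> > If you want to try that route, a rough sketch of a full-diagonalization run is given below. It only illustrates the structure of the input; the input-file prefix parm_ed is arbitrary, and the evaluation step follows the ALPS ed tutorials, so please check those for the exact arguments.
> >
> > import pyalps
> >
> > parms = [{
> >   'LATTICE'                  : "Kagome lattice",
> >   'MODEL'                    : "spin",
> >   'local_S'                  : 1.5,
> >   'J'                        : -4.2,
> >   'L'                        : 2,     # even 2x2 cells = 12 spins give 4^12 states for S=3/2
> >   'CONSERVED_QUANTUMNUMBERS' : 'Sz'
> > }]
> >
> > input_file = pyalps.writeInputFiles('parm_ed', parms)
> > pyalps.runApplication('fulldiag', input_file)
> >
> > # thermodynamic quantities (including the susceptibility) as a function of T
> > data = pyalps.evaluateFulldiagVersusT(pyalps.getResultFiles(prefix='parm_ed'),
> >                                       DELTA_T=0.05, T_MIN=0.05, T_MAX=2.0)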
> >
> > Best,
> > Synge
> >
> >
> > > On 2018/08/03 at 14:10, S Baidya <santubaidya2009@gmail.com> wrote:
> > >
> > > Dear Alps users,
> > >
> > >   I am totally new to the ALPS code and quantum Monte Carlo. I have started to use the code for a few small purposes, like magnetic susceptibility vs. temperature for various spin lattice models.
> > >
> > > Currently I have used the ALPS code to calculate the susceptibility of an Ising-type S=3/2 Kagome lattice. I found that the Kagome lattice is defined in the lattice library. So in the tutorial mc-02-susceptibilities/ I changed tutorial2d.py by editing
> > >
> > > #prepare the input parameters
> > > parms = []
> > > for t in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.25, 1.5, 1.75, 2.0]:
> > >     parms.append(
> > >         {
> > >           'LATTICE'        : "Kagome lattice",
> > >           'MODEL'          : "spin",
> > >           'local_S'        : 1.5,
> > >           'T'              : t,
> > >           'J'             : 4.2 ,
> > >           'THERMALIZATION' : 5000,
> > >           'SWEEPS'         : 50000,
> > >           'L'              : 60,
> > >           'ALGORITHM'      : "loop"
> > >         }
> > >     )
> > >
> > > Then I changed the parm2d file in the same way, ran alpspython tutorial2d.py, and got a strange error:
> > >
> > >
> > > /usr/local/lib/python2.7/dist-packages/matplotlib/style/core.py:203: UserWarning: In /usr/share/matplotlib/stylelib/classic.mplstyle: text.dvipnghack is obsolete. Please remove it from your matplotlibrc and/or style files.
> > >   warnings.warn(message)
> > > loop parm2d.in.xml
> > > ALPS/looper version 3.2b12-20100128 (2010/01/28)
> > >   multi-cluster quantum Monte Carlo algorithms for spin systems
> > >   available from http://wistaria.comp-phys.org/alps-looper/
> > >   copyright (c) 1997-2010 by Synge Todo <wistaria@comp-phys.org>
> > >
> > > using ALPS/parapack scheduler
> > >   a Monte Carlo scheduler for multiple-level parallelization
> > >   copyright (c) 1997-2016 by Synge Todo <wistaria@comp-phys.org>
> > >
> > > based on the ALPS libraries version 2.3.0
> > >   available from http://alps.comp-phys.org/
> > >   copyright (c) 1994-2016 by the ALPS collaboration.
> > >   Consult the web page for license details.
> > >   For details see the publication:
> > >   B. Bauer et al., J. Stat. Mech. (2011) P05001.
> > >
> > > [2018-Aug-03 14:02:51]: starting scheduler on cces-System-Product-Name
> > >   master input file  = /home/cces/kagome-alps/mc-02/parm2d.in.xml
> > >   master output file = /home/cces/kagome-alps/mc-02/parm2d.out.xml
> > >   termination file   = [disabled]
> > >   total number of thread(s) = 1
> > >   thread(s) per clone       = 1
> > >   number of thread group(s) = 1
> > >   auto evaluation = yes
> > >   time limit = unlimited
> > >   interval between checkpointing  = 3600 seconds
> > >   interval between progress report = 600 seconds
> > >   interval between vmusage report = infinity
> > >   task range = all
> > >   worker dump format = hdf5
> > >   worker dump policy = running workers only
> > > [2018-Aug-03 14:02:51]: task status: total number of tasks = 15
> > >   new = 15, running = 0, continuing = 0, suspended = 0, finished = 0, completed = 0, skipped = 0
> > > [2018-Aug-03 14:02:51]: starting 1 threadgroup(s)
> > > [2018-Aug-03 14:02:51]: dispatching a new clone[1,1] on threadgroup[1]
> > > WARNING: model is classically frustrated
> > > WARNING: model has negative signs
> > >
> > >
> > >
> > > I do not understand the actual reason or how to solve this situation. I used this tutorial before and it worked for the spin ladder lattice.
> > >
> > >
> > > Can anyone please tell me what I should do to solve this situation?
> > >
> > >
> > > Thanking you,
> > >
> > > Santu Baidya
> > >


----
Comp-phys-alps-users Mailing List for the ALPS Project
http://alps.comp-phys.org/

List info: https://lists.phys.ethz.ch//listinfo/comp-phys-alps-users
Archive: https://lists.phys.ethz.ch//pipermail/comp-phys-alps-users

Unsubscribe by writing a mail to comp-phys-alps-users-leave@lists.phys.ethz.ch.