On Feb 14, 2012, at 1:35 PM, Rachele Nerattini wrote:
Dear Mr Troyer,
I'm using ALPS to run MC simulations of the classical Ising, XY, and Heisenberg models in three dimensions. I want to study how the critical values of some observables vary with L on a regular cubic lattice, so I run the simulations at T_c (the critical temperature).
I know that at the phase transition there are strong correlations in the system and it is difficult to obtain observable estimates with converged errors. For this reason I have the following questions:
- Errors on the observables are given with three different levels of 'accuracy': converged (white), check the convergence (yellow), not converged (red).
How does the program test whether convergence has been achieved? I see that it evaluates the errors by dividing the total number of steps into bins and then evaluating the errors bin by bin. I suppose it declares convergence when the errors at different bin sizes vary by less than some fixed tolerance... is that true? What test does it perform?
The best approach is to look at the details of the binning analysis, as explained in tutorial MC-01. If the errors converge as a function of the bin size we mark the result as converged; if they still increase slightly (which might just be a statistical fluctuation) we mark it as yellow; and if it is clearly not converged we mark it as red. Still, to be sure you should always look at it yourself.
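The idea behind the binning analysis is simple: repeatedly average adjacent pairs of measurements and recompute the naive error at each level; the error estimate grows with the bin size until the bins are effectively uncorrelated, where it plateaus. Below is a minimal sketch in C++. It is not the actual ALPS code: it uses a synthetic correlated AR(1) series in place of real MC data, and the 10% plateau tolerance is an arbitrary illustrative choice.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

// Naive standard error of the mean, treating the entries as uncorrelated.
double naive_error(const std::vector<double>& x) {
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();
    double var = 0.0;
    for (double v : x) var += (v - mean) * (v - mean);
    var /= x.size() - 1;
    return std::sqrt(var / x.size());
}

int main() {
    // Synthetic correlated time series (AR(1)) standing in for MC data.
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 1.0);
    std::vector<double> data(1 << 20);
    double x = 0.0;
    for (double& v : data) v = x = 0.9 * x + noise(rng);

    // One binning level = average adjacent pairs, then recompute the error.
    std::vector<double> errors;
    while (data.size() >= 128) {   // keep enough bins for a stable estimate
        errors.push_back(naive_error(data));
        std::vector<double> binned(data.size() / 2);
        for (std::size_t i = 0; i < binned.size(); ++i)
            binned[i] = 0.5 * (data[2 * i] + data[2 * i + 1]);
        data.swap(binned);
    }
    for (std::size_t k = 0; k < errors.size(); ++k)
        std::printf("level %2zu  error %g\n", k, errors[k]);

    // "Converged" if the error has plateaued at the largest bin sizes;
    // the 10% tolerance is an arbitrary choice for this illustration.
    std::size_t n = errors.size();
    bool converged =
        std::fabs(errors[n - 1] - errors[n - 2]) < 0.1 * errors[n - 1];
    std::printf("converged: %s\n", converged ? "yes" : "no");
}

Note that the error at the first level (bin size 1) is the naive, uncorrelated estimate; the plateau value exceeds it by roughly the factor sqrt(1+2*tau) discussed in the next answer.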
- I suppose Tau is the correlation time. Does the program take the value of Tau into account in the final evaluation of the errors? I mean, are those errors already multiplied by the square root of tau, as they must be in the presence of correlations, or must they still be corrected by the value of tau that is given?
Indeed, the error is multiplied by sqrt(1+2*tau). Well, actually, sqrt(1+2*tau), and thus tau, is determined from the ratio between the converged and the naive error. But, to answer your question, the error indeed includes autocorrelation effects.
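Spelled out: if Delta_naive is the error computed as if the samples were uncorrelated and Delta_conv is the converged error from the binning analysis, then Delta_conv = sqrt(1 + 2*tau) * Delta_naive, and hence tau = ((Delta_conv / Delta_naive)^2 - 1) / 2. So the reported error already contains the autocorrelation correction, and the quoted tau is derived from it rather than being a factor you still need to apply.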
- Since the correlations are strong at T_c, do you think it could be easier to reach convergence if we change the initial configuration of the MC run? For example, we could run a simulation for a while, take the last configuration it uses to evaluate the observables, and then use that configuration as the input configuration for a new run... can this be done in some way?
Yes, that could be done; however, it will not help much. The time needed to equilibrate is of the same order as the autocorrelation time, and if you cannot afford to equilibrate you will also not be able to obtain many uncorrelated measurements.
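For completeness, the generic pattern for reusing a final configuration, independent of ALPS (whose own checkpointing mechanism is not shown here), is simply to serialize the spin configuration at the end of one run and read it back at the start of the next. A rough sketch, assuming an Ising configuration stored as +/-1 integers:

#include <fstream>
#include <vector>

// Write the final spin configuration so a later run can start from it.
void save_configuration(const std::vector<int>& spins, const char* path) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(spins.data()),
              spins.size() * sizeof(int));
}

// Load a saved configuration; returns false if no checkpoint exists,
// in which case the caller should fall back to a fresh (random) start.
bool load_configuration(std::vector<int>& spins, const char* path) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;
    in.read(reinterpret_cast<char*>(spins.data()),
            spins.size() * sizeof(int));
    return bool(in);
}

As noted above, such a warm start only saves the initial equilibration sweeps; it does not reduce the autocorrelation between the measurements themselves.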
- Some months ago I wrote to you to ask whether it would be possible to run a classical MC simulation of an O(4) model. In the tutorials you say that it can be done simply by editing the spin_factory.C file. I tried, but it still doesn't work. I'm afraid I made some mistake in the installation of the ALPS library on my PC. Did you add this option in the new version 2.0?
What did not work? It should be very easy to add. Can you tell me what you tried to do and what the problem was?
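For reference, and without claiming this matches the actual spin_factory.C interface (the ALPS plumbing is omitted), the physics ingredient of an O(4) model is just a four-component unit-vector spin with dot-product interactions, updated by drawing uniformly from the 3-sphere, e.g. by normalizing four Gaussian variates:

#include <array>
#include <cmath>
#include <random>

// A classical O(4) spin: a unit vector on the 3-sphere.
struct O4Spin {
    std::array<double, 4> s{{1.0, 0.0, 0.0, 0.0}};

    // Draw a new direction uniformly on the 3-sphere by normalizing
    // four independent Gaussian variates.
    template <class RNG>
    void randomize(RNG& rng) {
        std::normal_distribution<double> g(0.0, 1.0);
        double norm2 = 0.0;
        for (double& c : s) { c = g(rng); norm2 += c * c; }
        double inv = 1.0 / std::sqrt(norm2);
        for (double& c : s) c *= inv;
    }
};

// Bond energy of the O(N) model: -J * (s1 . s2).
double bond_energy(const O4Spin& a, const O4Spin& b, double J = 1.0) {
    double dot = 0.0;
    for (int i = 0; i < 4; ++i) dot += a.s[i] * b.s[i];
    return -J * dot;
}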
I hope this helps
Matthias Troyer