2016-10-31 12:00 GMT+01:00 comp-phys-alps-users-request@lists.phys.ethz.ch :
Send Comp-phys-alps-users mailing list submissions to comp-phys-alps-users@lists.phys.ethz.ch
To subscribe or unsubscribe via the World Wide Web, visit https://lists.phys.ethz.ch/listinfo/comp-phys-alps-users or, via email, send a message with subject or body 'help' to comp-phys-alps-users-request@lists.phys.ethz.ch
You can reach the person managing the list at comp-phys-alps-users-owner@lists.phys.ethz.ch
When replying, please edit your Subject line so it is more specific than "Re: Contents of Comp-phys-alps-users digest..."
Today's Topics:
- MPI problem (Tadeusz Wasiutyński)
- Re: MPI problem (Michele Dolfi)
---------- Forwarded message ----------
From: "Tadeusz Wasiutyński" tadeusz.wasiutynski@gmail.com
To: comp-phys-alps-users@lists.phys.ethz.ch
Date: Sat, 29 Oct 2016 21:42:49 +0200
Subject: [ALPS-users] MPI problem

Hello, I ran into a problem on my CentOS 7 workstation with 24 threads. After running:
~/opt/bin/cmake -D Boost_ROOT_DIR:PATH=~/alps-2.2.b4-src-with-boost/boost/ -D MPI_C_LIBRARIES=/usr/lib64/openmpi/libmpi.so -D MPI_C_INCLUDE_PATH=/usr/lib64/openmpi/include/ -D MPI_CXX_INCLUDE_PATH=/usr/lib64/openmpi/include -D LPSolve_LIBRARY=/usr/lib64/liblpsolve55.so -D LPSolve_INCLUDE_DIR=/usr/include/lpsolve/ -D HDF5_LIBRARIES=~/opt/lib/libhdf5.so -D HDF5_INCLUDE_DIR=~/opt/include/ -D SZIP_LIBRARIES=/usr/lib64/lib/libsz.so -D SZIP_INCLUDE_DIRS=/usr/lib64/include/ ~/alps-2.2.b4-src-with-boost/alps/
I receive:
-- Compiler version: c++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
-- Build type: Release
-- Python interpreter /usr/bin/python
-- Python interpreter ok : version 2.
-- PYTHON_INCLUDE_DIRS = /usr/include/python2.7
-- PYTHON_NUMPY_INCLUDE_DIR = /usr/lib64/python2.7/site-packages/numpy/core/include
-- PYTHON_SITE_PKG = /usr/lib/python2.7/site-packages
-- PYTHON_LIBRARY = /usr/lib64/python2.7/config/libpython2.7.so
-- PYTHON_EXTRA_LIBS = -lpthread -ldl -lutil
-- PYTHON_LINK_FOR_SHARED = -Xlinker -export-dynamic
-- ALPS version: 2.2.b4
-- Looking for Boost Source
-- Found Boost Source: /home/twasiutynsk/alps-2.2.b4-src-with-boost/boost
-- Boost Version: 1_58_0
-- Adding Boost dir: /home/twasiutynsk/alps-2.2.b4-src-with-boost/boost
-- MPI compiler was /usr/lib64/openmpi/bin/mpicxx
-- Falling back to CMake provied LAPACK/BLAS detection.
-- A library with BLAS API found.
-- A library with BLAS API found.
-- A library with LAPACK API found.
-- SQLite Library: not found
-- Could NOT find SZIP (missing: SZIP_LIBRARIES SZIP_INCLUDE_DIRS)
-- HDF5 without THREADSAFE mode. ALPS will ensure thread safety by HDF5 running sequentially.
-- Python interpreter /usr/bin/python
-- Python interpreter ok : version 2.7.5
-- PYTHON_INCLUDE_DIRS = /usr/include/python2.7
-- PYTHON_NUMPY_INCLUDE_DIR = /usr/lib64/python2.7/site-packages/numpy/core/include
-- PYTHON_SITE_PKG = /usr/lib/python2.7/site-packages
-- PYTHON_LIBRARY = /usr/lib64/python2.7/config/libpython2.7.so
-- PYTHON_EXTRA_LIBS = -lpthread -ldl -lutil
-- PYTHON_LINK_FOR_SHARED = -Xlinker -export-dynamic
-- Numpy include in /usr/lib64/python2.7/site-packages/numpy/core/include
-- ALPS XML dir is /opt/alps/lib/xml
-- HDF5 without THREADSAFE mode. ALPS will ensure thread safety by HDF5 running sequentially.
-- MPS: enabling NU1 symmetry.
-- HDF5 without THREADSAFE mode. ALPS will ensure thread safety by HDF5 running sequentially.
-- tebd will not be built
-- HDF5 without THREADSAFE mode. ALPS will ensure thread safety by HDF5 running sequentially.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/twasiutynsk/build
In ccmake I see MPI ON; make and the tests (all passed) go smoothly. In the tutorial runs, however, I see a mess with MPI=24, while everything goes fine with MPI=1. Is something wrong with my HDF5?
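A quick way to check what actually ends up in the HDF5 result files is to load one observable from every task and print the temperature of each dataset. This is only a minimal sketch: the 'parm7a' prefix and the '|Magnetization|' observable name are the ones used in tutorial7a and may need adjusting.

import pyalps

# collect every result file written for the tutorial run
result_files = pyalps.getResultFiles(prefix='parm7a')
print result_files

# load one observable from each task and list the temperature of each dataset,
# to see which temperatures actually made it into the HDF5 result files
data = pyalps.loadMeasurements(result_files, '|Magnetization|')
for task in data:
    for d in task:
        print d.props['T'], d.y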
On Docker Hub I found dolfim/alps, but I could not run it, probably because of some path problems. Has anyone done this successfully? Regards,
-- Tadeusz Wasiutyński
---------- Forwarded message ----------
From: Michele Dolfi dolfim@phys.ethz.ch
To: comp-phys-alps-users@lists.phys.ethz.ch
Date: Mon, 31 Oct 2016 11:11:51 +0100
Subject: Re: [ALPS-users] MPI problem

The configuration seems fine, and the HDF5 messages are not related to the MPI execution.
Can you clarify a bit what you mean by "see a mess"? How do you run the MPI application?
Michele
--
ETH Zurich
Michele Dolfi
Institute for Theoretical Physics
HIT G 32.4
Wolfgang-Pauli-Str. 27
8093 Zurich, Switzerland

dolfim@phys.ethz.ch
www.itp.phys.ethz.ch

+41 44 633 78 56 phone
+41 44 633 11 15 fax
Comp-phys-alps-users Mailing List for the ALPS Project http://alps.comp-phys.org/
List info: https://lists.phys.ethz.ch//listinfo/comp-phys-alps-users Archive: https://lists.phys.ethz.ch//pipermail/comp-phys-alps-users
Unsubscribe by writing a mail to comp-phys-alps-users-leave@lists.phys.ethz.ch.
When I run e.g. tutorial7a.py:

..........................
#pyalps.runApplication('spinmc',input_file,Tmin=5)
# use the following instead if you have MPI
pyalps.runApplication('spinmc',input_file,Tmin=5,MPI=24)
pyalps.evaluateSpinMC(pyalps.getResultFiles(prefix='parm7a'))
.................................

the result is different for MPI=1, 2, 8, 12, 24. I figured out that some temperatures are missing in the result files. For the temperatures that come out right, all tasks have only ...out.run1, while the missing ones have more (run1, run2 and so on). In the end, while the magnetization is collected from all temperatures, the susceptibility, specific heat and Binder cumulant are collected only from those that have only run1. Only MPI=1 gives the full result. BTW, the problem does not seem to exist for a linear chain (Ising, Heisenberg, spinmc, loop).

Tadeusz
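To pin down which temperatures are affected, one can group the clone files by task and see which tasks produced more than one run. This is only a minimal sketch, assuming the usual <prefix>.task<N>.out.run<M> file naming of the ALPS scheduler and the 'parm7a' prefix from the tutorial:

import glob, re
from collections import defaultdict

# group the clone files by task number and collect their run indices
runs_per_task = defaultdict(list)
for f in glob.glob('parm7a.task*.out.run*'):
    m = re.search(r'task(\d+)\.out\.run(\d+)', f)
    if m:
        runs_per_task[int(m.group(1))].append(int(m.group(2)))

# tasks listing more than run 1 are the ones whose temperatures go missing
for task in sorted(runs_per_task):
    print 'task', task, '-> runs', sorted(runs_per_task[task])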