Hello All,
I have the following questions related to running ALPS on a cluster.
1) *Compiling static executables.* Is it possible to compile the ALPS executables statically, such that they can be run on a cluster without the need to compile all of ALPS? If not, any hint as to which shared libraries should be copied along with the executable?
2) *MPI error.* Is there any way to run fulldiag or sparsediag with MPI? mpirun -np 2 fulldiag *.xml halts with the following error message:
It seems that [at least] one of the processes that was started with mpirun did not invoke MPI_INIT before quitting (it is possible that more than one process did not invoke MPI_INIT -- mpirun was only notified of the first one, which was on node n0).
mpirun can *only* be used with MPI programs (i.e., programs that invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program to run non-MPI programs over the lambooted nodes.
Thanks for your help, Alex
On Feb 27, 2013, at 2:48 PM, Alex Petrescu tpetresc@gmail.com wrote:
Hello All,
I have the following questions related to running ALPS on a cluster.
- Compiling static executables. Is it possible to compile the ALPS executables statically, such that they can be run on a cluster without the need to compile all of ALPS?
If not, any hint as to which shared libraries should be copied along with the executable?
Yes, that is possible. Just turn dynamic linking off when you configure the ALPS build.
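For example, with the usual CMake-based build something along these lines should work (a sketch only; the exact option names can differ between ALPS versions):

cmake -DBUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=$HOME/alps ../alps
make && make install

If you keep the shared build instead, running "ldd fulldiag" on the build machine lists the libraries the executable depends on (Boost, HDF5 and the ALPS libraries), which are the ones to copy to the cluster or point LD_LIBRARY_PATH at.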
- MPI error. Is there any way to run fulldiag or sparsediag with MPI?
mpirun -np 2 fulldiag *.xml halts with the following error message:
It seems that [at least] one of the processes that was started with mpirun did not invoke MPI_INIT before quitting (it is possible that more than one process did not invoke MPI_INIT -- mpirun was only notified of the first one, which was on node n0).
mpirun can *only* be used with MPI programs (i.e., programs that invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program to run non-MPI programs over the lambooted nodes.
You need to specify the --mpi command line option to run ALPS codes with MPI.
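For example, the command from your message would become (assuming the executables were built with MPI support):

mpirun -np 2 fulldiag --mpi *.xml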
Matthias Troyer