Thanks for the reply. What about for DMRG?
Justin
-----Original Message-----
From: comp-phys-alps-users-bounces@phys.ethz.ch on behalf of Matthias Troyer
Sent: Sun 3/30/2008 7:31 AM
To: comp-phys-alps-users@phys.ethz.ch
Subject: Re: [ALPS-users] HDF and some other questions
Just disable HDF: we had some limited HDF support in a former version, but not in the current ones. pthreads are not used in the current ALPS applications; that configure option is for other programs also based on ALPS but not distributed freely.
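For what it's worth, autoconf-based configure scripts conventionally accept a --without-PACKAGE flag for any optional --with-PACKAGE dependency. Assuming the ALPS configure script follows that convention (the exact flag name here is a guess; check ./configure --help for the real option), disabling HDF might look like:

```shell
# Reconfigure without HDF support, keeping MPI enabled.
# --without-hdf is an assumed autoconf-style flag name, not verified
# against this ALPS release -- consult ./configure --help first.
./configure --without-hdf --with-mpi
make
make install
```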
Which cluster will work best depends on the problems and algorithms you want to run. For Monte Carlo codes, any of the clusters will be fine.
Matthias
On 27 Mar 2008, at 00:03, Justin David Peel wrote:
Hi,
I have been using ALPS on my home computer under cygwin, but I recently installed it on a system that is connected to some computer clusters at my university. I got ALPS to install except for HDF. I tried the HDF library already on the server and also installed one myself from the HDF website, but neither one worked. When I ran ./configure, it said it couldn't find H5File() or something like that in one of the libraries. How important is it to have HDF? Any suggestions on how to get ALPS to install with HDF?
My second question is about parallel processing and ALPS. I used the --with-pthread flag when I installed, and it is installed with MPI support, but how well is ALPS really set up for parallel processing? I ask because I saw that the --with-pthread flag was described as experimental.
Also, I have several different clusters available to me. Any suggestions as to which one would work best? Here are their characteristics:
# Tunnel Arch (48 nodes, 96 procs)
- "data mining cluster" for jobs requiring large memory.
- 1.4 and 1.8 GHz Opteron processors
- 4 Gbytes memory per node
- Gigabit Ethernet interconnect
# Marching Men (164 nodes, 328 procs)
- "cycle farm" for serial and smaller parallel jobs which do not require a high-speed interconnect.
- 1.4 and 1.8 GHz Opteron processors
- 2 Gbytes memory per node
- Gigabit Ethernet interconnect
# Delicate Arch (256 nodes, 512 procs)
- "parallel cluster" for highly parallel jobs requiring a high-speed interconnect.
- 1.4 GHz Opteron processors
- 2 Gbytes memory per node
- Both Myrinet and Gigabit Ethernet interconnects
# Sanddune Arch (156 nodes, 312 procs, 624 cores)
- "parallel cluster" for highly parallel jobs requiring a high-speed interconnect.
- 2.4 GHz dual-core Opteron processors
- 8 Gbytes memory per node (2 Gbytes per processor core)
- Both Infiniband and Gigabit Ethernet interconnects
Thank you for the help,
Justin Peel