MPI Libraries for parallel jobs on zcluster
There are several MPI libraries available on zcluster, as described below.
Note:
If you plan to run jobs on any queue except the Infiniband (IB) queue, e.g. rcc-30d or rcc-mc-30d, where all nodes are interconnected with 1 Gb Ethernet, you can use either OpenMPI or MPICH2 (you cannot use MVAPICH2). If you plan to run jobs on the Infiniband queue (rcc-ib-30d), you can use OpenMPI (with the same compilation as for non-IB jobs) or MVAPICH2. Do not use MPICH2 on the IB queue: it will make your job use Ethernet even when it runs on the IB nodes.
Sample scripts for using different MPI libraries on different queues are provided at Running Jobs on zcluster.
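As a rough illustration, an SGE submission script for an MPI job on the rcc-30d queue might look like the sketch below. This is only a sketch: the parallel environment name (mpich) and the core count are assumptions, and the authoritative sample scripts are on the Running Jobs on zcluster page.
#!/bin/bash
# Sketch only: -pe name is an assumption; see Running Jobs on zcluster
#$ -cwd
#$ -q rcc-30d
#$ -pe mpich 4
mpirun -np $NSLOTS ./program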
MPICH
MPICH 1.2.7 binaries (mpif77, mpif90, mpicc, mpicxx, mpirun, etc.) use the PGI compilers and are installed in /usr/local/pgi/linux86-64/2012/mpi/mpich/bin/, a directory that is NOT in all users’ default path. MPICH is outdated and is not tightly integrated with the queueing system, so we do not recommend using it. However, if your application requires MPICH (and does not work with MPICH2), you can invoke these commands with their full paths, e.g. /usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpicc, /usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpirun, etc.
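For example, to compile and run a C MPI code with MPICH 1.2.7 (a sketch only; the file name program.c and the 4-process run are illustrations, and additional mpirun options may be needed outside an SGE job):
/usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpicc -o program program.c
/usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpirun -np 4 ./program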
For information on how to run MPICH jobs, please refer to Running Jobs on zcluster.
MPICH2
MPICH2 is a Message Passing Interface (MPI) library that implements both the MPI-1 and MPI-2 standards. For more information on MPICH2, please see http://www.mcs.anl.gov/research/projects/mpich2. Because MPICH is no longer under active development, we recommend using MPICH2 whenever possible. The following versions of MPICH2 are installed:
- MPICH2 version 1.4.1p1, using GNU 4.1.2: installed in /usr/local/mpich2/1.4.1p1/gcc412/bin
- MPICH2 version 1.4.1p1, using GNU 4.4.4: installed in /usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin
- MPICH2 version 1.4.1p1, using GNU 4.4.7: installed in /usr/local/mpich2/1.4.1p1/gcc447/bin
- MPICH2 version 1.4.1p1, using GNU 4.5.3: installed in /usr/local/mpich2/1.4.1p1/gcc_4.5.3/bin
- MPICH2 version 1.4.1p1, using PGI 11.8: installed in /usr/local/mpich2/1.4.1p1/pgi_11.8/bin
- MPICH2 version 1.4.1p1, using PGI 12.3: installed in /usr/local/mpich2/1.4.1p1/pgi123/bin
- MPICH2 version 1.4.1p1, using Intel 13.0: installed in /usr/local/mpich2/1.4.1p1/intel130/bin
- MPICH2 version 3.0.4, using GNU 4.1.2: installed in /usr/local/mpich2/3.0.4/gcc412/bin
- MPICH2 version 3.0.4, using GNU 4.4.7: installed in /usr/local/mpich2/3.0.4/gcc447/bin
- MPICH2 version 3.0.4, using GNU 4.5.3: installed in /usr/local/mpich2/3.0.4/gcc453/bin
- MPICH2 version 3.0.4, using PGI 12.10: installed in /usr/local/mpich2/3.0.4/pgi1210/bin
- MPICH2 version 3.0.4, using Intel 14.0: installed in /usr/local/mpich2/3.0.4/intel140/bin
The default version of MPICH2, which is on users’ default path, is version 1.4.1p1 built with PGI 12.3. The other directories are NOT in users’ default path. To use those installations of MPICH2 you need to use the full path to the executables, e.g. /usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin/mpicc, /usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin/mpif90, etc.
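To check which installation a given MPI command resolves to, you can ask the shell; mpicc -show (an MPICH2 option) also prints the underlying compiler invocation:
which mpicc
mpicc -show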
Example using the default version of MPICH2
To compile a Fortran90 MPI code:
mpif90 -o program program.f90
An executable named program will be created. You can add other PGI compiler flags (e.g. optimization flags) if appropriate, as in the sketch below.
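For example, with a common PGI optimization flag (shown here only as an illustration):
mpif90 -fast -o program program.f90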
To run the executable on, for example, 4 processors, type
mpirun -np 4 -f host.list ./program
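Here host.list is a plain text file listing the hosts to run on, one per line. The node names below are placeholders, not real zcluster host names:
node001
node002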
Example using MPICH2 3.0.4 with GNU 4.4.7 compilers
To compile a C MPI code:
/usr/local/mpich2/3.0.4/gcc447/bin/mpicc -o program program.c
An executable named program will be created. You can add other GNU compiler flags (e.g. optimization flags) if appropriate. Use the corresponding mpirun to run the binary, i.e. /usr/local/mpich2/3.0.4/gcc447/bin/mpirun, as shown below.
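For example, to run the resulting binary on 4 processors, mirroring the default-version example above:
/usr/local/mpich2/3.0.4/gcc447/bin/mpirun -np 4 -f host.list ./program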
NOTE:
- Because the working directory is not in users’ default path, you must precede the binary name with ./ when running it.
- SGE batch jobs that use MPICH2 do not need the -machinefile (or -f) option to mpirun.
- When running jobs compiled with non-default MPICH2 libraries, you might have to add the corresponding library path to your LD_LIBRARY_PATH variable.
For example, to use MPICH2 3.0.4 with GNU 4.5.3 compilers, you need to add /usr/local/mpich2/3.0.4/gcc453/lib and also /usr/local/gcc/4.5.3/lib64 to your LD_LIBRARY_PATH, which can be done with
For bash:
export LD_LIBRARY_PATH=/usr/local/gcc/4.5.3/lib64:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=/usr/local/mpich2/3.0.4/gcc453/lib:${LD_LIBRARY_PATH}
For csh/tcsh:
setenv LD_LIBRARY_PATH /usr/local/gcc/4.5.3/lib64:${LD_LIBRARY_PATH}
setenv LD_LIBRARY_PATH /usr/local/mpich2/3.0.4/gcc453/lib:${LD_LIBRARY_PATH}
And use /usr/local/mpich2/3.0.4/gcc453/bin/mpirun to run the binary.
For more information on how to run executables linked to MPICH2, please refer to Running Jobs on zcluster.
OpenMPI
OpenMPI is an open source Message Passing Interface (MPI) library that implements the MPI-2 standard. For more information on OpenMPI, please see http://www.open-mpi.org. The following versions of OpenMPI are available:
- OpenMPI version 1.4.4, using GNU 4.1.2: installed in /usr/local/openmpi/1.4.4/gcc412/bin
- OpenMPI version 1.4.4, using GNU 4.4.4: installed in /usr/local/openmpi/1.4.4/gcc444/bin
- OpenMPI version 1.4.4, using GNU 4.5.3: installed in /usr/local/openmpi/1.4.4/gcc453/bin
- OpenMPI version 1.4.4, using PGI 11.8: installed in /usr/local/openmpi/1.4.4/pgi118/bin
- OpenMPI version 1.5.5, using GNU 4.4.7: installed in /usr/local/openmpi/1.5.5/gcc447/bin
- OpenMPI version 1.6.2, using GNU 4.1.2: installed in /usr/local/openmpi/1.6.2/gcc412/bin
- OpenMPI version 1.6.2, using GNU 4.4.4: installed in /usr/local/openmpi/1.6.2/gcc444/bin
- OpenMPI version 1.6.2, using GNU 4.5.3: installed in /usr/local/openmpi/1.6.2/gcc453/bin
- OpenMPI version 1.6.2, using GNU 4.7.1: installed in /usr/local/openmpi/1.6.2/gcc471/bin
- OpenMPI version 1.6.2, using PGI 12.8: installed in /usr/local/openmpi/1.6.2/pgi128/bin
- OpenMPI version 1.6.2, using Intel 13.0: installed in /usr/local/openmpi/1.6.2/intel130/bin
These directories are NOT in users’ default path. To use these installations of OpenMPI you need to use the full path to the executables, e.g. /usr/local/openmpi/1.4.4/pgi118/bin/mpicc, /usr/local/openmpi/1.4.4/pgi118/bin/mpif90, etc. Note that when using OpenMPI/GNU 4.7.1, you will need to add /usr/local/gcc/4.7.1/lib64 to the LD_LIBRARY_PATH. For information on how to run executables linked to OpenMPI, please refer to Running Jobs on zcluster.
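For example, to compile and run a C MPI code with OpenMPI 1.6.2 and GNU 4.7.1 (bash syntax; program.c and the 4-process run are just illustrations):
export LD_LIBRARY_PATH=/usr/local/gcc/4.7.1/lib64:${LD_LIBRARY_PATH}
/usr/local/openmpi/1.6.2/gcc471/bin/mpicc -o program program.c
/usr/local/openmpi/1.6.2/gcc471/bin/mpirun -np 4 ./program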
MVAPICH2
MVAPICH2 is an MPI-2 implementation for Infiniband (conforming to the MPI 2.2 standard) that includes all MPI-1 features. It is based on MPICH2 and MVICH. For more information on MVAPICH2, please see http://mvapich.cse.ohio-state.edu/. The following versions of MVAPICH2 are available:
- MVAPICH2 version 1.7 (includes mpich2 1.4.1p1) using GNU 4.4.4: installed in /usr/local/mvapich2/1.7/gcc444/bin
- MVAPICH2 version 1.7 (includes mpich2 1.4.1p1) using PGI 12.3: installed in /usr/local/mvapich2/1.7/pgi123/bin
- MVAPICH2 version 1.8 (includes mpich2 1.4.1p1) using GNU 4.4.4: installed in /usr/local/mvapich2/1.8/gcc444/bin
- MVAPICH2 version 1.8 (includes mpich2 1.4.1p1) using PGI 12.4: installed in /usr/local/mvapich2/1.8/pgi124/bin
- MVAPICH2 version 1.8.1 (includes mpich2 1.4.1p1) using PGI 13.2: installed in /usr/local/mvapich2/1.8.1/pgi132/bin
- MVAPICH2 version 1.8.1 (includes mpich2 1.4.1p1) using Intel 13.0: installed in /usr/local/mvapich2/1.8.1/intel130/bin
- MVAPICH2 version 2.0 (includes mpich-3.x.x) using GNU 4.4.7: installed in /usr/local/mvapich2/2.0/gcc447/bin
- MVAPICH2 version 2.0.1 (includes mpich-3.1.2) using Intel 14.0: installed in /usr/local/mvapich2/2.0.1/intel140/bin
Before running applications linked with MVAPICH2 2.0 or 2.0.1, you need to set MV2_SMP_USE_CMA=0. This can be done with one of the following commands:
For bash/sh:
export MV2_SMP_USE_CMA=0
For csh/tcsh:
setenv MV2_SMP_USE_CMA 0
These directories are NOT in users’ default path. To use these installations of MVAPICH2 you need to use the full path to the executables.
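For example, to compile and run a C MPI code with MVAPICH2 2.0.1 and Intel 14.0 (bash syntax; program.c and the 4-process run are just illustrations):
export MV2_SMP_USE_CMA=0
/usr/local/mvapich2/2.0.1/intel140/bin/mpicc -o program program.c
/usr/local/mvapich2/2.0.1/intel140/bin/mpirun -np 4 ./program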
NOTE: We recommend MVAPICH2 for applications with a lot of inter-node communication, which can benefit from the high-speed, low-latency Infiniband interconnect.
For information on how to run executables linked to MVAPICH2, please refer to Running Jobs on zcluster.