MPI

From Research Computing Center Wiki

[[#top|Back to Top]]


===MPI Libraries for parallel jobs on zcluster===

There are several MPI libraries available on zcluster, as described below.

'''Note''':
<pre class="gcomment">
If you plan to run jobs on any queue except the Infiniband (IB) queue (e.g. rcc-30d or rcc-mc-30d, where all nodes are interconnected with 1 Gbit Ethernet), then you can use either OpenMPI or MPICH2 (you cannot use MVAPICH2). If you plan to run jobs on the Infiniband queue (rcc-ib-30d), then you can use OpenMPI (same compilation as for non-IB jobs) or MVAPICH2, but you should not use MPICH2, because MPICH2 will cause your job to communicate over Ethernet even when it runs on the IB nodes.
</pre>

Sample scripts for using different MPI libraries on different queues are provided at [[Running Jobs on zcluster]].


====MPICH====

MPICH 1.2.7 binaries (mpif77, mpif90, mpicc, mpicxx, mpirun, etc.) use the PGI compilers and are installed in /usr/local/pgi/linux86-64/2012/mpi/mpich/bin/, a directory that is NOT in users' default path. MPICH is outdated and is not tightly integrated with the queueing system, so we do not recommend its use. However, if your application requires MPICH (and does not work with MPICH2), you can invoke these commands by their full path, e.g. /usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpicc, /usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpirun, etc.

For information on how to run MPICH jobs, please refer to [[Running Jobs on zcluster]].


====MPICH2====

MPICH2 is a Message Passing Interface (MPI) library that implements both the MPI-1 and MPI-2 standards. For more information on MPICH2, please see http://www.mcs.anl.gov/research/projects/mpich2. Because MPICH is no longer under active development, we recommend using MPICH2 whenever possible. The following versions of MPICH2 are installed:

* MPICH2 version 1.4.1p1, using GNU 4.1.2: installed in /usr/local/mpich2/1.4.1p1/gcc412/bin
* MPICH2 version 1.4.1p1, using GNU 4.4.4: installed in /usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin
* MPICH2 version 1.4.1p1, using GNU 4.4.7: installed in /usr/local/mpich2/1.4.1p1/gcc447/bin
* MPICH2 version 1.4.1p1, using GNU 4.5.3: installed in /usr/local/mpich2/1.4.1p1/gcc_4.5.3/bin
* MPICH2 version 1.4.1p1, using PGI 11.8: installed in /usr/local/mpich2/1.4.1p1/pgi_11.8/bin
* MPICH2 version 1.4.1p1, using PGI 12.3: installed in /usr/local/mpich2/1.4.1p1/pgi123/bin
* MPICH2 version 1.4.1p1, using Intel 13.0: installed in /usr/local/mpich2/1.4.1p1/intel130/bin
* MPICH2 version 3.0.4, using GNU 4.1.2: installed in /usr/local/mpich2/3.0.4/gcc412/bin
* MPICH2 version 3.0.4, using GNU 4.4.7: installed in /usr/local/mpich2/3.0.4/gcc447/bin
* MPICH2 version 3.0.4, using GNU 4.5.3: installed in /usr/local/mpich2/3.0.4/gcc453/bin
* MPICH2 version 3.0.4, using PGI 12.10: installed in /usr/local/mpich2/3.0.4/pgi1210/bin
* MPICH2 version 3.0.4, using Intel 14.0: installed in /usr/local/mpich2/3.0.4/intel140/bin
* MPICH2 version 3.2, using GNU 5.3.0: installed in /usr/local/mpich2/3.2/gcc530/bin

The default version of MPICH2, which is in users' default path, is version 1.4.1p1 built with PGI 12.3. The other directories are NOT in users' default path; to use those installations of MPICH2 you need to call the executables by their full path, e.g. /usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin/mpicc, /usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin/mpif90, etc.

'''Example using the default version of MPICH2'''

To compile a Fortran90 MPI code:
<pre class="gcommand">
mpif90 -o program program.f90
</pre>

An executable program will be created. You can add other PGI compiler flags (e.g. optimization flags) if appropriate.

To run the executable program using, for example, 4 processors, type

<pre class="gcommand">
mpirun -np 4 -f host.list ./program
</pre>


'''Example using MPICH2 3.0.4 with GNU 4.4.7 compilers'''

To compile a C MPI code:

<pre class="gcommand">
/usr/local/mpich2/3.0.4/gcc447/bin/mpicc -o program program.c
</pre>

An executable program will be created. You can add other GNU compiler flags (e.g. optimization flags) if appropriate. Use the corresponding mpirun to run the binary, i.e. use /usr/local/mpich2/3.0.4/gcc447/bin/mpirun to run it.

'''NOTE:'''

*Because the working directory is not in users' default path, it is necessary to precede the binary name with ./ when running it.

*SGE batch jobs that use MPICH2 do not need the mpirun -machinefile (or -f) option.

*When running jobs compiled with non-default MPICH2 libraries, you might have to add the corresponding library path to your LD_LIBRARY_PATH variable. For example, to use MPICH2 3.0.4 with GNU 4.5.3 compilers, you need to add /usr/local/mpich2/3.0.4/gcc453/lib and /usr/local/gcc/4.5.3/lib64 to your LD_LIBRARY_PATH, which can be done with

For bash:
<pre class="gcommand">
export LD_LIBRARY_PATH=/usr/local/gcc/4.5.3/lib64:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=/usr/local/mpich2/3.0.4/gcc453/lib:${LD_LIBRARY_PATH}
</pre>


For csh/tcsh:
<pre class="gcommand">
setenv LD_LIBRARY_PATH /usr/local/gcc/4.5.3/lib64:${LD_LIBRARY_PATH}
setenv LD_LIBRARY_PATH /usr/local/mpich2/3.0.4/gcc453/lib:${LD_LIBRARY_PATH}
</pre>

And use /usr/local/mpich2/3.0.4/gcc453/bin/mpirun to run the binary.

As another example, to use MPICH2 3.2 with GNU 5.3.0 compilers, you need to add /usr/local/gcc/5.3.0/lib64 to your LD_LIBRARY_PATH, which can be done with

For bash:
<pre class="gcommand">
export LD_LIBRARY_PATH=/usr/local/gcc/5.3.0/lib64:${LD_LIBRARY_PATH}
</pre>

For csh/tcsh:
<pre class="gcommand">
setenv LD_LIBRARY_PATH /usr/local/gcc/5.3.0/lib64:${LD_LIBRARY_PATH}
</pre>

And use /usr/local/mpich2/3.2/gcc530/bin/mpirun to run the binary.

For more information on how to run executables linked to MPICH2, please refer to [[Running Jobs on zcluster]].
 
====OpenMPI====
 
OpenMPI is an open source Message Passing Interface (MPI) library that implements the MPI-2 standard. For more information on OpenMPI, please see http://www.open-mpi.org. The following versions of OpenMPI are available:
 
* OpenMPI version 1.4.4, using GNU 4.1.2: installed in /usr/local/openmpi/1.4.4/gcc412/bin
* OpenMPI version 1.4.4, using GNU 4.4.4: installed in /usr/local/openmpi/1.4.4/gcc444/bin
* OpenMPI version 1.4.4, using GNU 4.5.3: installed in /usr/local/openmpi/1.4.4/gcc453/bin
* OpenMPI version 1.4.4, using PGI 11.8: installed in /usr/local/openmpi/1.4.4/pgi118/bin
* OpenMPI version 1.5.5, using GNU 4.4.7: installed in /usr/local/openmpi/1.5.5/gcc447/bin
* OpenMPI version 1.6.2, using GNU 4.1.2: installed in /usr/local/openmpi/1.6.2/gcc412/bin
* OpenMPI version 1.6.2, using GNU 4.4.4: installed in /usr/local/openmpi/1.6.2/gcc444/bin
* OpenMPI version 1.6.2, using GNU 4.5.3: installed in /usr/local/openmpi/1.6.2/gcc453/bin
* OpenMPI version 1.6.2, using GNU 4.7.1: installed in /usr/local/openmpi/1.6.2/gcc471/bin
* OpenMPI version 1.6.2, using PGI 12.8: installed in /usr/local/openmpi/1.6.2/pgi128/bin
* OpenMPI version 1.6.2, using Intel 13.0: installed in /usr/local/openmpi/1.6.2/intel130/bin
* OpenMPI version 1.6.3, using GNU 4.4.6: installed in /usr/local/openmpi/1.6.3/gcc446/bin
* OpenMPI version 1.6.5, using GNU 4.4.7: installed in /usr/local/openmpi/1.6.5/gcc447/bin
 
These directories are NOT in users' default path. To use these installations of OpenMPI you need to call the executables by their full path, e.g. /usr/local/openmpi/1.4.4/pgi118/bin/mpicc, /usr/local/openmpi/1.4.4/pgi118/bin/mpif90, etc. Note that when using the OpenMPI build with GNU 4.7.1, you will also need to add /usr/local/gcc/4.7.1/lib64 to your LD_LIBRARY_PATH. For information on how to run executables linked to OpenMPI, please refer to [[Running Jobs on zcluster]].
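A minimal sketch of compiling a C MPI code with the OpenMPI 1.6.2/GNU 4.7.1 installation and making its runtime libraries visible (bash syntax; the file name program.c is just a placeholder) could look like this:
<pre class="gcommand">
export LD_LIBRARY_PATH=/usr/local/gcc/4.7.1/lib64:${LD_LIBRARY_PATH}
/usr/local/openmpi/1.6.2/gcc471/bin/mpicc -o program program.c
</pre>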
 
====MVAPICH2====
 
MVAPICH2 is an MPI-2 implementation (conforming to the MPI 2.2 standard) for Infiniband that includes all MPI-1 features. It is based on MPICH2 and MVICH. For information on MVAPICH2, please see http://mvapich.cse.ohio-state.edu/. The following versions of MVAPICH2 are available:
 
* MVAPICH2 version 1.7 (includes mpich2 1.4.1p1) using GNU 4.4.4: installed in /usr/local/mvapich2/1.7/gcc444/bin
* MVAPICH2 version 1.7 (includes mpich2 1.4.1p1) using PGI 12.3: installed in /usr/local/mvapich2/1.7/pgi123/bin
* MVAPICH2 version 1.8 (includes mpich2 1.4.1p1) using GNU 4.4.4: installed in /usr/local/mvapich2/1.8/gcc444/bin
* MVAPICH2 version 1.8 (includes mpich2 1.4.1p1) using PGI 12.4: installed in /usr/local/mvapich2/1.8/pgi124/bin
* MVAPICH2 version 1.8.1 (includes mpich2 1.4.1p1) using PGI 13.2: installed in /usr/local/mvapich2/1.8.1/pgi132/bin
* MVAPICH2 version 1.8.1 (includes mpich2 1.4.1p1) using Intel 13.0: installed in /usr/local/mvapich2/1.8.1/intel130/bin
* MVAPICH2 version 2.0 (includes mpich-3.x.x ) using GNU 4.4.7: installed in /usr/local/mvapich2/2.0/gcc447/bin
* MVAPICH2 version 2.0.1 (includes mpich-3.1.2) using Intel 14.0: installed in /usr/local/mvapich2/2.0.1/intel140/bin
 
These directories are NOT in users' default path. To use these installations of MVAPICH2 you need to call the executables by their full path.

Before running applications linked with MVAPICH2 2.0 and 2.0.1, you need to set MV2_SMP_USE_CMA=0. This can be done with the following command:

For bash/sh:
<pre class="gcommand">
export MV2_SMP_USE_CMA=0
</pre>

For csh/tcsh:
<pre class="gcommand">
setenv MV2_SMP_USE_CMA 0
</pre>

'''NOTE''': We recommend using MVAPICH2 for applications that have a lot of inter-node communication, which can benefit from the high-speed, low-latency Infiniband interconnect.


For information on how to run executables linked to MVAPICH2, please refer to [[Running Jobs on zcluster]].

[[#top|Back to Top]]



===MPI Libraries for parallel jobs on Sapelo===

All compute nodes on Sapelo have a Qlogic Infiniband (IB) interconnect. Various IB-enabled MPI libraries are available, and users can set the environment variables for the MPI library of their choice by loading the corresponding module file. For more information on Environment Modules, please see the [[Lmod]] page.

The following MPI libraries are available:

====MVAPICH2====

* MVAPICH2 2.0.0, using GNU 4.4.7 compilers, available in module mvapich2/2.0.0/gcc/4.4.7. To use it, load the module with
<pre class="gcommand">
module load mvapich2/2.0.0/gcc/4.4.7
</pre>

* MVAPICH2 2.0.0, using PGI 14.9 compilers, available in module mvapich2/2.0.0/pgi/14.9. To use it, load the module with
<pre class="gcommand">
module load mvapich2/2.0.0/pgi/14.9
</pre>

* MVAPICH2 2.1, using PGI 14.10 compilers, available in module mvapich2/2.1/pgi/14.10. To use it, load the module with
<pre class="gcommand">
module load mvapich2/2.1/pgi/14.10
</pre>

* MVAPICH2 2.1, using Intel 14.0 compilers, available in module mvapich2/2.1/intel/14.0. To use it, load the module with
<pre class="gcommand">
module load mvapich2/2.1/intel/14.0
</pre>

* MVAPICH2 2.1, using GNU 4.4.7 compilers, available in module mvapich2/2.1/gcc/4.4.7. To use it, load the module with
<pre class="gcommand">
module load mvapich2/2.1/gcc/4.4.7
</pre>

Once the appropriate module is loaded, you can compile code with '''mpicc''', '''mpic++''', '''mpif90''', etc., and you can run applications that were linked to the MPI libraries loaded by the module.
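For example, a minimal sketch of compiling a Fortran90 MPI code with one of these MVAPICH2 modules (the file name program.f90 is just a placeholder) is:
<pre class="gcommand">
module load mvapich2/2.1/gcc/4.4.7
mpif90 -o program program.f90
</pre>
Other compiler flags (e.g. optimization flags) can be added as appropriate.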

====OpenMPI====

* OpenMPI 1.8.3, using GNU 4.4.7 compilers, available in module openmpi/1.8.3/gcc/4.4.7. To use it, load the module with
<pre class="gcommand">
module load openmpi/1.8.3/gcc/4.4.7
</pre>

* OpenMPI 1.8.3, using PGI 14.9 compilers, available in module openmpi/1.8.3/pgi/14.9. To use it, load the module with
<pre class="gcommand">
module load openmpi/1.8.3/pgi/14.9
</pre>

* OpenMPI 1.8.3, using Intel 15.0.2 compilers, available in module openmpi/1.8.3/intel/15.0.2. To use it, load the module with
<pre class="gcommand">
module load openmpi/1.8.3/intel/15.0.2
</pre>

* OpenMPI 1.8.3, using Intel 14.0.0 compilers, available in module openmpi/1.8.3/intel/14.0. To use it, load the module with
<pre class="gcommand">
module load openmpi/1.8.3/intel/14.0
</pre>

* OpenMPI 1.8.3, using GNU 5.3.0 compilers, available in module openmpi/1.8.3/gcc/5.3.0. To use it, load the module with
<pre class="gcommand">
module load openmpi/1.8.3/gcc/5.3.0
</pre>

* OpenMPI 1.8.3, using GNU 4.7.4 compilers, available in module openmpi/1.8.3/gcc/4.7.4. To use it, load the module with
<pre class="gcommand">
module load openmpi/1.8.3/gcc/4.7.4
</pre>

Once the appropriate module is loaded, you can compile code with '''mpicc''', '''mpic++''', '''mpif90''', etc., and you can run applications that were linked to the MPI libraries loaded by the module.
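Similarly, a minimal sketch of compiling and test-running a C MPI code with one of these OpenMPI modules (program.c and the process count are placeholders; production runs should be launched through the queueing system) is:
<pre class="gcommand">
module load openmpi/1.8.3/gcc/4.4.7
mpicc -o program program.c
mpirun -np 4 ./program
</pre>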



[[#top|Back to Top]]


===MPI Libraries for parallel jobs on Sapelo2===

All compute nodes on Sapelo2 have Infiniband (IB) interconnect. Various IB-enabled MPI libraries are available, and users can set the environment variables for the MPI library of their choice by loading the corresponding module file. For more information on Environment Modules, please see the [[Lmod]] page.

The following MPI libraries are available:

====MVAPICH2====

You can find all MVAPICH2 modules available on Sapelo2 by running the following command on a Sapelo2 node:
<pre class="gcommand">
module spider mvapich2
</pre>

These are some of the modules available:

* MVAPICH2 2.2, using GNU 5.4.0 compilers, available in module MVAPICH2/2.2-GCC-5.4.0-2.26. To use it, load the module with
<pre class="gcommand">
module load MVAPICH2/2.2-GCC-5.4.0-2.26
</pre>

* MVAPICH2 2.2, using GNU 6.4.0 compilers, available in module MVAPICH2/2.2-GCC-6.4.0-2.28. To use it, load the module with
<pre class="gcommand">
module load MVAPICH2/2.2-GCC-6.4.0-2.28
</pre>

* MVAPICH2 2.2, using Intel 2013_sp1.0.080 compilers, available in module MVAPICH2/2.2-iccifort-2013_sp1.0.080. To use it, load the module with
<pre class="gcommand">
module load MVAPICH2/2.2-iccifort-2013_sp1.0.080
</pre>

* MVAPICH2 2.2, using Intel 2015.2.164 compilers, available in module MVAPICH2/2.2-iccifort-2015.2.164-GCC-4.8.5. To use it, load the module with
<pre class="gcommand">
module load MVAPICH2/2.2-iccifort-2015.2.164-GCC-4.8.5
</pre>

* MVAPICH2 2.2, using Intel 2018.1.163 compilers, available in module MVAPICH2/2.2-iccifort-2018.1.163-GCC-6.4.0-2.28. To use it, load the module with
<pre class="gcommand">
module load MVAPICH2/2.2-iccifort-2018.1.163-GCC-6.4.0-2.28
</pre>

Once the appropriate module is loaded, you can compile code with '''mpicc''', '''mpic++''', '''mpif90''', etc., and you can run applications that were linked to the MPI libraries loaded by the module.
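For example, a minimal sketch of building a Fortran90 MPI code against one of these MVAPICH2 modules (program.f90 is just a placeholder file name) is:
<pre class="gcommand">
module load MVAPICH2/2.2-GCC-6.4.0-2.28
mpif90 -o program program.f90
</pre>
The same module should typically also be loaded in the job submission script, so that the matching MPI runtime is available when the program runs.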

====OpenMPI====

You can find all OpenMPI modules available on Sapelo2 by running the following command on a Sapelo2 node:
<pre class="gcommand">
module spider openmpi
</pre>

These are some of the modules available:

* OpenMPI 3.0.0, using GNU 7.2.0 compilers, available in module OpenMPI/3.0.0-GCC-7.2.0-2.29. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/3.0.0-GCC-7.2.0-2.29
</pre>

* OpenMPI 2.1.2, using GNU 6.4.0 compilers, available in module OpenMPI/2.1.2-GCC-6.4.0-2.28. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/2.1.2-GCC-6.4.0-2.28
</pre>

* OpenMPI 1.10.3, using GNU 5.4.0 compilers, available in module OpenMPI/1.10.3-GCC-5.4.0-2.26. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/1.10.3-GCC-5.4.0-2.26
</pre>

* OpenMPI 1.10.3, using GNU 4.4.7 compilers, available in module OpenMPI/1.10.3-GCC-4.4.7. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/1.10.3-GCC-4.4.7
</pre>

* OpenMPI 3.0.0, using Intel 2018.1.163 compilers, available in module OpenMPI/3.0.0-iccifort-2018.1.163-GCC-6.4.0-2.28. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/3.0.0-iccifort-2018.1.163-GCC-6.4.0-2.28
</pre>

* OpenMPI 2.1.2, using Intel 2018.1.163 compilers, available in module OpenMPI/2.1.2-iccifort-2018.1.163-GCC-6.4.0-2.28. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/2.1.2-iccifort-2018.1.163-GCC-6.4.0-2.28
</pre>

* OpenMPI 1.10.7, using Intel 2018.1.163 compilers, available in module OpenMPI/1.10.7-iccifort-2018.1.163-GCC-6.4.0-2.28. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/1.10.7-iccifort-2018.1.163-GCC-6.4.0-2.28
</pre>

* OpenMPI 1.10.7, using Intel 2015.2.164 compilers, available in module OpenMPI/1.10.7-iccifort-2015.2.164-GCC-4.8.5. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/1.10.7-iccifort-2015.2.164-GCC-4.8.5
</pre>

* OpenMPI 1.8.4, using Intel 2015.2.164 compilers, available in module OpenMPI/1.8.4-iccifort-2015.2.164-GCC-4.8.5. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/1.8.4-iccifort-2015.2.164-GCC-4.8.5
</pre>

* OpenMPI 1.8.4, using Intel 2013_sp1.0.080 compilers, available in module OpenMPI/1.8.4-iccifort-2013_sp1.0.080. To use it, load the module with
<pre class="gcommand">
module load OpenMPI/1.8.4-iccifort-2013_sp1.0.080
</pre>

Once the appropriate module is loaded, you can compile code with '''mpicc''', '''mpic++''', '''mpif90''', etc., and you can run applications that were linked to the MPI libraries loaded by the module.
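For example, a minimal sketch of building a C MPI code against one of these OpenMPI modules (program.c is just a placeholder file name) is:
<pre class="gcommand">
module load OpenMPI/3.0.0-GCC-7.2.0-2.29
mpicc -o program program.c
</pre>
The resulting binary is then typically started with the mpirun launcher provided by the same module.

[[#top|Back to Top]]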