MPI


MPI Libraries for parallel jobs on Sapelo2

All compute nodes on Sapelo2 are connected by an EDR Infiniband (IB) network (100 Gbps). Various IB-enabled MPI libraries are available, and users can set the environment variables for their chosen MPI library by loading the corresponding module file.

For more information on Environment Modules, please see the Lmod page.

The following MPI libraries are available:

OpenMPI

You can find all OpenMPI modules available on Sapelo2 by running the following command on a Sapelo2 node:

module spider OpenMPI

The module names have the format OpenMPI/Version-CompilerToolchain-ToolchainVersion.

For example, these are some of the modules available:

[shtsai@ss-sub1 ~]$ module spider OpenMPI

-----------------------------------------------------------------------------------------------------------------------------------------------
  OpenMPI:
-----------------------------------------------------------------------------------------------------------------------------------------------
    Description:
      The Open MPI Project is an open source MPI-3 implementation.

     Versions:
        OpenMPI/3.1.4-GCC-8.3.0
        OpenMPI/3.1.4-iccifort-2019.5.281
        OpenMPI/4.0.5-GCC-10.2.0
        OpenMPI/4.1.1-GCC-10.3.0
        OpenMPI/4.1.1-GCC-11.2.0
        OpenMPI/4.1.4-GCC-11.3.0
        OpenMPI/4.1.4-GCC-12.2.0
        OpenMPI/4.1.4-intel-compilers-2022.1.0
        OpenMPI/4.1.5-GCC-12.3.0
        OpenMPI/4.1.6-GCC-13.2.0

------------------------------------------------------------------------------------------------------------------------------------------------
  For detailed information about a specific "OpenMPI" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider OpenMPI/4.1.6-GCC-13.2.0
-----------------------------------------------------------------------------------------------------------------------------------------------

Once the appropriate module is loaded, you can compile code with the compiler wrappers mpicc, mpiCC, mpicxx, mpic++, mpifort, mpif90, mpif77, etc., and run applications that were linked to the MPI libraries provided by the module.
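
For example, here is a minimal sketch of loading one of the OpenMPI modules listed above and compiling a small C program. The module version is just one of those shown by module spider, and the file name hello_mpi.c is a placeholder for your own source file:

# Load one of the OpenMPI modules listed by "module spider OpenMPI"
module load OpenMPI/4.1.6-GCC-13.2.0

# Compile with the MPI wrapper; mpifort/mpif90 work the same way for Fortran sources
mpicc -O2 -o hello_mpi hello_mpi.c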

Intel MPI

You can find all Intel MPI modules available on Sapelo2 by running the following command on a Sapelo2 node:

module spider impi

The module names have the format impi/Version-CompilerToolchain-ToolchainVersion. For example, these are some of the modules available:

[shtsai@ss-sub1 ~]$ module spider impi

------------------------------------------------------------------------------------------------------------------------------------
  impi:
------------------------------------------------------------------------------------------------------------------------------------
    Description:
      Intel MPI Library, compatible with MPICH ABI

     Versions:
        impi/2021.4.0-intel-compilers-2021.4.0
        impi/2021.6.0-intel-compilers-2022.1.0
        impi/2021.9.0-intel-compilers-2023.1.0
     Other possible modules matches:
        iimpi

------------------------------------------------------------------------------------------------------------------------------------
  To find other possible module matches execute:

      $ module -r spider '.*impi.*'

------------------------------------------------------------------------------------------------------------------------------------
  For detailed information about a specific "impi" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider impi/2021.9.0-intel-compilers-2023.1.0
------------------------------------------------------------------------------------------------------------------------------------

Once the appropriate module is loaded, you can compile code with the compiler wrappers mpicc, mpiicc, mpicxx, mpiicpc, mpiifort, mpif90, mpif77, etc., and run applications that were linked to the MPI libraries provided by the module.
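
As an illustrative sketch, the module version below is one of those listed above, hello_mpi.c is a placeholder source file, and the matching Intel compilers are assumed to be loaded as dependencies of the impi module:

# Load one of the Intel MPI modules listed by "module spider impi"
module load impi/2021.9.0-intel-compilers-2023.1.0

# Compile with the Intel MPI wrapper; mpiifort works the same way for Fortran sources
mpiicc -O2 -o hello_mpi hello_mpi.c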

MPI commands and how to launch MPI programs

MPI library | Version                           | Base toolchain | Toolchain   | Fortran  | C      | C++     | How to launch with Slurm
OpenMPI     | 3.1.4-GCC-8.3.0                   | GCCcore-8.3.0  | foss/2019b  | mpif90   | mpicc  | mpicxx  | srun --mpi=pmi2
OpenMPI     | 3.1.4-iccifort-2019.5.281         | GCCcore-8.3.0  | iomkl/2019b | mpif90   | mpicc  | mpicxx  | srun --mpi=pmi2
OpenMPI     | 4.0.5-GCC-10.2.0                  | GCCcore-10.2.0 | foss/2020b  | mpif90   | mpicc  | mpicxx  | srun --mpi=pmi2
OpenMPI     | 4.1.1-GCC-11.2.0                  | GCCcore-11.2.0 | foss/2021b  | mpif90   | mpicc  | mpicxx  | srun
OpenMPI     | 4.1.4-GCC-11.3.0                  | GCCcore-11.3.0 | foss/2022a  | mpif90   | mpicc  | mpicxx  | srun
OpenMPI     | 4.1.4-intel-compilers-2022.1.0    | GCCcore-11.3.0 | iomkl/2022a | mpif90   | mpicc  | mpicxx  | srun
OpenMPI     | 4.1.4-GCC-12.2.0                  | GCCcore-12.2.0 | foss/2022b  | mpif90   | mpicc  | mpicxx  | srun
OpenMPI     | 4.1.5-GCC-12.3.0                  | GCCcore-12.3.0 | foss/2023a  | mpif90   | mpicc  | mpicxx  | srun
OpenMPI     | 4.1.6-GCC-13.2.0                  | GCCcore-13.2.0 | foss/2023b  | mpif90   | mpicc  | mpicxx  | srun
Intel MPI   | 2021.4.0-intel-compilers-2021.4.0 | GCCcore-11.2.0 | intel/2021b | mpiifort | mpiicc | mpiicpc | srun
Intel MPI   | 2021.6.0-intel-compilers-2022.1.0 | GCCcore-11.3.0 | intel/2022a | mpiifort | mpiicc | mpiicpc | srun
Intel MPI   | 2021.9.0-intel-compilers-2023.1.0 | GCCcore-12.3.0 | intel/2023a | mpiifort | mpiicc | mpiicpc | srun

Note

If your MPI job receives any of the following or similar errors:

  • PMIX ERROR: OUT-OF-RESOURCE in file base/bfrop_base_unpack.c at line 750
  • PMIX ERROR: UNPACK-PAST-END in file base/bfrop_base_unpack.c at line 750
  • PMIX ERROR: UNPACK-INADEQUATE-SPACE in file base/gds_base_fns.c at line 138
  • UNPACK-PMIX-VALUE: UNSUPPORTED TYPE 126

then please use srun --mpi=pmi2 to start the MPI application.
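
Putting this together, below is a sketch of a Slurm batch script that launches an MPI program with srun. The partition name, resource requests, module version, and executable name are placeholders to adjust for your own job; add --mpi=pmi2 only for the toolchains flagged in the table above or if you encounter the PMIX errors listed here.

#!/bin/bash
#SBATCH --job-name=mpitest          # placeholder job name
#SBATCH --partition=batch           # placeholder partition name
#SBATCH --ntasks=48                 # total number of MPI ranks
#SBATCH --time=02:00:00
#SBATCH --mem-per-cpu=2G

# Load the same MPI module the application was built with
module load OpenMPI/4.1.6-GCC-13.2.0

# srun starts one MPI rank per Slurm task; use "srun --mpi=pmi2 ./myprogram" for the older toolchains
srun ./myprogram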


Back to Top

MPI Libraries for parallel jobs on the teaching cluster

All compute nodes on Sapelo2 are connected by an EDR Infiniband (IB) network (100 Gbps). Various IB-enabled MPI libraries are available, and users can set the environment variables for their chosen MPI library by loading the corresponding module file.

For more information on Environment Modules, please see the Lmod page.

The following MPI libraries are available:

OpenMPI

You can find all OpenMPI modules available on the teaching cluster by running the following command:

module spider OpenMPI

The module names have the format OpenMPI/Version-CompilerToolchain-ToolchainVersion.

For example, these are some of the modules available:

zhuofei@teach-sub1 ~$ ml spider OpenMPI

------------------------------------------------------------------------------------------------------------------------------------
  OpenMPI:
------------------------------------------------------------------------------------------------------------------------------------
    Description:
      The Open MPI Project is an open source MPI-3 implementation.

     Versions:
        OpenMPI/3.1.4-GCC-8.3.0
        OpenMPI/4.1.1-GCC-11.2.0
        OpenMPI/4.1.2-GCC-11.2.0
        OpenMPI/4.1.4-GCC-11.3.0
        OpenMPI/4.1.4-GCC-12.2.0

--------------------------------------------------------------------------------------------------------------------------------------
  For detailed information about a specific "OpenMPI" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider OpenMPI/4.1.4-GCC-12.2.0
------------------------------------------------------------------------------------------------------------------------------------

Once the appropriate module is loaded, you can compile code with the compiler wrappers mpicc, mpiCC, mpicxx, mpic++, mpifort, mpif90, mpif77, etc., and run applications that were linked to the MPI libraries provided by the module.
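
For example, a brief sketch using one of the modules listed above; the source file name and the 4-task launch are illustrative and assume you are inside a Slurm job allocation:

module load OpenMPI/4.1.4-GCC-12.2.0
mpicc -O2 -o hello_mpi hello_mpi.c     # placeholder source file
srun --ntasks=4 ./hello_mpi            # run with 4 MPI ranks inside a Slurm allocation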

Intel MPI

You can find all Intel MPI modules available on the teaching cluster by running the following command:

module spider impi

The module names have the format impi/Version-CompilerToolchain-ToolchainVersion. For example, these are some of the modules available:

zhuofei@teach-sub1 ~$ ml spider impi

------------------------------------------------------------------------------------------------------------------------------------
  impi: impi/2018.5.288-iccifort-2019.5.281
------------------------------------------------------------------------------------------------------------------------------------
    Description:
      Intel MPI Library, compatible with MPICH ABI

     Versions:
        impi/2021.4.0-intel-compilers-2021.4.0
        impi/2021.6.0-intel-compilers-2022.1.0
        impi/2021.9.0-intel-compilers-2023.1.0
     Other possible modules matches:
        iimpi

------------------------------------------------------------------------------------------------------------------------------------
  To find other possible module matches execute:

      $ module -r spider '.*impi.*'

------------------------------------------------------------------------------------------------------------------------------------
  For detailed information about a specific "impi" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider impi/2021.9.0-intel-compilers-2023.1.0
------------------------------------------------------------------------------------------------------------------------------------

Once the appropriate module is loaded, you can compile code with the compiler wrappers mpicc, mpiicc, mpicxx, mpiicpc, mpiifort, mpif90, mpif77, etc., and run applications that were linked to the MPI libraries provided by the module.
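
Usage parallels the Sapelo2 Intel MPI example above; a minimal sketch with a placeholder Fortran source file:

module load impi/2021.9.0-intel-compilers-2023.1.0
mpiifort -O2 -o hello_mpi hello_mpi.f90   # placeholder source file
srun ./hello_mpi                          # inside a Slurm batch job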

Back to Top