MPI
Latest revision as of 12:22, 5 August 2024
MPI Libraries for parallel jobs on Sapelo2
All compute nodes on Sapelo2 have Infiniband (IB) interconnect via EDR Infiniband network (100Gbps). Various IB-enabled MPI libraries are available and users can set the environment variables for the MPI library of choice by loading the corresponding module file.
For more information on Environment Modules, please see the Lmod page.
The following MPI libraries are available:
OpenMPI
You can find all OpenMPI modules available on Sapelo2 by running the following command on a Sapelo2 node:
module spider OpenMPI
The module names have the format OpenMPI/Version-CompilerToolchain-ToolchainVersion.
For example, these are some of the modules available:
[shtsai@ss-sub1 ~]$ module spider OpenMPI

----------------------------------------------------------------------
  OpenMPI:
----------------------------------------------------------------------
    Description:
      The Open MPI Project is an open source MPI-3 implementation.

     Versions:
        OpenMPI/3.1.4-GCC-8.3.0
        OpenMPI/3.1.4-iccifort-2019.5.281
        OpenMPI/4.0.5-GCC-10.2.0
        OpenMPI/4.1.1-GCC-10.3.0
        OpenMPI/4.1.1-GCC-11.2.0
        OpenMPI/4.1.4-GCC-11.3.0
        OpenMPI/4.1.4-GCC-12.2.0
        OpenMPI/4.1.4-intel-compilers-2022.1.0
        OpenMPI/4.1.5-GCC-12.3.0
        OpenMPI/4.1.6-GCC-13.2.0

----------------------------------------------------------------------
  For detailed information about a specific "OpenMPI" package (including
  how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by
  other modules.
  For example:

     $ module spider OpenMPI/4.1.6-GCC-13.2.0
----------------------------------------------------------------------
Once the appropriate module is loaded, you can compile code with mpicc, mpiCC, mpicxx, mpic++, mpifort, mpif90, mpif77, etc., and you can run applications that were linked to the MPI libraries loaded by the module.
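As a minimal sketch, assuming a source file of your own named hello_mpi.c (a placeholder) and one of the module versions listed above, a compile-and-run session might look like:

```shell
# Load one of the OpenMPI modules listed above (version is an example)
module load OpenMPI/4.1.6-GCC-13.2.0

# Compile your own MPI source file (hello_mpi.c is a placeholder name)
mpicc -O2 -o hello_mpi hello_mpi.c

# Launch within a Slurm job; see the launch table below for which
# OpenMPI versions need "srun --mpi=pmi2" instead of plain srun
srun ./hello_mpi
```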
Intel MPI
You can find all Intel MPI modules available on Sapelo2 by running the following command on a Sapelo2 node:
module spider impi
The module names have the format impi/Version-CompilerToolchain-ToolchainVersion. For example, these are some of the modules available:
[shtsai@ss-sub1 ~]$ module spider impi

----------------------------------------------------------------------
  impi:
----------------------------------------------------------------------
    Description:
      Intel MPI Library, compatible with MPICH ABI

     Versions:
        impi/2021.4.0-intel-compilers-2021.4.0
        impi/2021.6.0-intel-compilers-2022.1.0
        impi/2021.9.0-intel-compilers-2023.1.0
     Other possible modules matches:
        iimpi

----------------------------------------------------------------------
  To find other possible module matches execute:

      $ module -r spider '.*impi.*'
----------------------------------------------------------------------
  For detailed information about a specific "impi" package (including
  how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by
  other modules.
  For example:

     $ module spider impi/2021.9.0-intel-compilers-2023.1.0
----------------------------------------------------------------------
Once the appropriate module is loaded, you can compile code with mpicc, mpiicc, mpicxx, mpiicpc, mpiifort, mpif90, mpif77, etc., and you can run applications that were linked to the MPI libraries loaded by the module.
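The same sketch with the Intel wrappers (hello_mpi.c is again a placeholder source file; the module version is one example from the list above):

```shell
# Load one of the impi modules listed above (version is an example)
module load impi/2021.9.0-intel-compilers-2023.1.0

# Compile with the Intel MPI compiler wrappers
mpiicc -O2 -o hello_mpi hello_mpi.c

# Launch within a Slurm job allocation
srun ./hello_mpi
```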
MPI commands and how to launch MPI programs
| MPI library | Version | Base toolchain | Toolchain | Fortran | C | C++ | How to launch with Slurm |
|---|---|---|---|---|---|---|---|
| OpenMPI | 3.1.4-GCC-8.3.0 | GCCcore-8.3.0 | foss/2019b | mpif90 | mpicc | mpicxx | srun --mpi=pmi2 |
| OpenMPI | 3.1.4-iccifort-2019.5.281 | GCCcore-8.3.0 | iomkl/2019b | mpif90 | mpicc | mpicxx | srun --mpi=pmi2 |
| OpenMPI | 4.0.5-GCC-10.2.0 | GCCcore-10.2.0 | foss/2020b | mpif90 | mpicc | mpicxx | srun --mpi=pmi2 |
| OpenMPI | 4.1.1-GCC-11.2.0 | GCCcore-11.2.0 | foss/2021b | mpif90 | mpicc | mpicxx | srun |
| OpenMPI | 4.1.4-GCC-11.3.0 | GCCcore-11.3.0 | foss/2022a | mpif90 | mpicc | mpicxx | srun |
| OpenMPI | 4.1.4-intel-compilers-2022.1.0 | GCCcore-11.3.0 | iomkl/2022a | mpif90 | mpicc | mpicxx | srun |
| OpenMPI | 4.1.4-GCC-12.2.0 | GCCcore-12.2.0 | foss/2022b | mpif90 | mpicc | mpicxx | srun |
| OpenMPI | 4.1.5-GCC-12.3.0 | GCCcore-12.3.0 | foss/2023a | mpif90 | mpicc | mpicxx | srun |
| OpenMPI | 4.1.6-GCC-13.2.0 | GCCcore-13.2.0 | foss/2023b | mpif90 | mpicc | mpicxx | srun |
| Intel MPI | 2021.4.0-intel-compilers-2021.4.0 | GCCcore-11.2.0 | intel/2021b | mpiifort | mpiicc | mpiicpc | srun |
| Intel MPI | 2021.6.0-intel-compilers-2022.1.0 | GCCcore-11.3.0 | intel/2022a | mpiifort | mpiicc | mpiicpc | srun |
| Intel MPI | 2021.9.0-intel-compilers-2023.1.0 | GCCcore-12.3.0 | intel/2023a | mpiifort | mpiicc | mpiicpc | srun |
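Putting the table entries into context, here is a hedged sketch of a Slurm submission script; the partition name, task counts, and program name are placeholders, not site defaults:

```shell
#!/bin/bash
#SBATCH --job-name=mpitest        # placeholder job name
#SBATCH --partition=batch         # assumption: replace with your partition
#SBATCH --ntasks=32               # total number of MPI ranks
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00

# Example module from the table above; for this version the table
# lists plain srun (no --mpi flag) as the launcher
module load OpenMPI/4.1.6-GCC-13.2.0

srun ./myprog                     # myprog is your MPI executable
```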
Note
If your MPI job receives any of the following or similar errors:
- PMIX ERROR: OUT-OF-RESOURCE in file base/bfrop_base_unpack.c at line 750
- PMIX ERROR: UNPACK-PAST-END in file base/bfrop_base_unpack.c at line 750
- PMIX ERROR: UNPACK-INADEQUATE-SPACE in file base/gds_base_fns.c at line 138
- UNPACK-PMIX-VALUE: UNSUPPORTED TYPE 126
then please use srun --mpi=pmi2 to start the MPI application.
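In a job script, the workaround amounts to changing only the launch line (myprog is a placeholder executable name):

```shell
# Instead of:  srun ./myprog
srun --mpi=pmi2 ./myprog
```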
MPI Libraries for parallel jobs on the teaching cluster
All compute nodes on the teaching cluster have Infiniband (IB) interconnect. Various IB-enabled MPI libraries are available and users can set the environment variables for the MPI library of choice by loading the corresponding module file.
For more information on Environment Modules, please see the Lmod page.
The following MPI libraries are available:
OpenMPI
You can find all OpenMPI modules available on the teaching cluster by running the following command:
module spider OpenMPI
The module names have the format OpenMPI/Version-CompilerToolchain-ToolchainVersion.
For example, these are some of the modules available:
zhuofei@teach-sub1 ~$ ml spider OpenMPI

----------------------------------------------------------------------
  OpenMPI:
----------------------------------------------------------------------
    Description:
      The Open MPI Project is an open source MPI-3 implementation.

     Versions:
        OpenMPI/3.1.4-GCC-8.3.0
        OpenMPI/4.1.1-GCC-11.2.0
        OpenMPI/4.1.2-GCC-11.2.0
        OpenMPI/4.1.4-GCC-11.3.0
        OpenMPI/4.1.4-GCC-12.2.0

----------------------------------------------------------------------
  For detailed information about a specific "OpenMPI" package (including
  how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by
  other modules.
  For example:

     $ module spider OpenMPI/4.1.4-GCC-12.2.0
----------------------------------------------------------------------
Once the appropriate module is loaded, you can compile code with mpicc, mpiCC, mpicxx, mpic++, mpifort, mpif90, mpif77, etc., and you can run applications that were linked to the MPI libraries loaded by the module.
Intel MPI
You can find all Intel MPI modules available on the teaching cluster by running the following command:
module spider impi
The module names have the format impi/Version-CompilerToolchain-ToolchainVersion. For example, these are some of the modules available:
zhuofei@teach-sub1 ~$ ml spider impi

----------------------------------------------------------------------
  impi: impi/2018.5.288-iccifort-2019.5.281
----------------------------------------------------------------------
    Description:
      Intel MPI Library, compatible with MPICH ABI

     Versions:
        impi/2021.4.0-intel-compilers-2021.4.0
        impi/2021.6.0-intel-compilers-2022.1.0
        impi/2021.9.0-intel-compilers-2023.1.0
     Other possible modules matches:
        iimpi

----------------------------------------------------------------------
  To find other possible module matches execute:

      $ module -r spider '.*impi.*'
----------------------------------------------------------------------
  For detailed information about a specific "impi" package (including
  how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by
  other modules.
  For example:

     $ module spider impi/2021.9.0-intel-compilers-2023.1.0
----------------------------------------------------------------------
Once the appropriate module is loaded, you can compile code with mpicc, mpiicc, mpicxx, mpiicpc, mpiifort, mpif90, mpif77, etc., and you can run applications that were linked to the MPI libraries loaded by the module.