GPU Computing on Sapelo2

Hardware

For a description of the specifications of the Graphics Processing Unit (GPU) devices, please see GPU Hardware.

The following table summarizes the GPU devices available on Sapelo2:

Number of nodes | CPU cores per node | Host memory per node | CPU processor | GPU model | GPU devices per node | Device memory | GPU compute capability | Minimum CUDA version | Partition name | Notes
10 | 128 | 1TB | Intel Sapphire Rapids | H100 | 4 | 80GB | 9.0 | 11.8 | gpu_p, gpu_30d_p | Request --gres=gpu:H100 (see example below)
14 | 64 | 1TB | AMD Milan | A100 | 4 | 80GB | 8.0 | 11.0 | gpu_p, gpu_30d_p | Request --gres=gpu:A100 (see example below)
11 | 128 | 745GB | AMD Genoa | L4 | 4 | 24GB | 8.9 | 11.8 | gpu_p, gpu_30d_p | Request --gres=gpu:L4 (see example below)
2 | 32 | 192GB | Intel Skylake | P100 | 1 | 16GB | 6.0 | 8.0 | gpu_p, gpu_30d_p | Request --gres=gpu:P100 (see example below)
1 | 64 | 1TB | AMD Milan | A100 | 4 | 80GB | 8.0 | 11.0 | buyin partition | Available on batch for all users for up to 4 hours (see example below)
2 | 64 | 745GB | AMD Genoa | L4 | 4 | 24GB | 8.9 | 11.8 | buyin partition |
2 | 28 | 192GB | Intel Skylake | V100 | 1 | 16GB | 7.0 | 9.0 | buyin partition |
2 | 32 | 192GB | Intel Skylake | V100 | 1 | 16GB | 7.0 | 9.0 | buyin partition |
2 | 32 | 384GB | Intel Skylake | V100 | 1 | 32GB | 7.0 | 9.0 | buyin partition |
2 | 64 | 128GB | AMD Naples | V100 | 2 | 32GB | 7.0 | 9.0 | buyin partition |
1 | 64 | 128GB | AMD Naples | V100 | 1 | 32GB | 7.0 | 9.0 | buyin partition |
4 | 64 | 128GB | AMD Rome | V100S | 1 | 32GB | 7.0 | 9.0 | buyin partition |

Example job header for a node in the gpu_p partition (replace H100 with A100, L4, or P100 as appropriate):

#SBATCH --partition=gpu_p

#SBATCH --gres=gpu:H100:1

#SBATCH --time=7-00:00:00

Example job header for a buyin GPU node accessed via the batch partition (up to 4 hours), requesting one of the available device types (A100, L4, V100, or V100S):

#SBATCH --partition=batch

#SBATCH --gres=gpu:A100:1

#SBATCH --time=4:00:00
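
Below is a minimal sketch of a complete GPU job submission script, assuming an A100 device on the gpu_p partition; the job name, resource amounts, and CUDA module version are illustrative and should be adjusted to your workload:

#!/bin/bash
#SBATCH --job-name=gpu-test          # illustrative job name
#SBATCH --partition=gpu_p            # GPU partition
#SBATCH --gres=gpu:A100:1            # request one A100 device
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00

cd $SLURM_SUBMIT_DIR

# Load a CUDA toolkit module (version shown is illustrative; see the CUDA page for installed versions)
ml CUDA/12.1.1

# Report the GPU device(s) allocated to the job
nvidia-smi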

Software

Sapelo2 has several tools for GPU programming and many CUDA-enabled applications. For example:

1. NVIDIA CUDA toolkit

Several versions of the CUDA toolkit are available. Please see our CUDA page.
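
As a sketch of a typical workflow (the module version, source file name, and target architecture below are illustrative), a CUDA toolkit module can be loaded and a CUDA source file compiled with nvcc:

# Load a CUDA toolkit module (run 'ml spider CUDA' to list installed versions)
ml CUDA/12.1.1

# Compile a CUDA C/C++ source file; -arch selects the target compute capability
# (e.g. sm_80 for A100, sm_90 for H100)
nvcc -O2 -arch=sm_80 -o vector_add vector_add.cu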


2. cuDNN

The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.

To see all modules of cuDNN installed on Sapelo2, please use the command

ml spider cuDNN
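
As an illustrative sketch (the module names and versions below are assumptions; check the spider output for what is actually installed), a cuDNN module is typically loaded alongside a matching CUDA toolkit module:

# Load a CUDA toolkit and a matching cuDNN module (names/versions are illustrative)
ml CUDA/12.1.1
ml cuDNN/8.9.2.26-CUDA-12.1.1

# Verify which modules are loaded
ml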


3. NCCL

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.

To see all modules of NCCL installed on Sapelo2, please use the command

ml spider NCCL


4. OpenACC

Using the NVIDIA HPC SDK compiler suite, provided by the NVHPC module on Sapelo2, programmers can accelerate applications on x64+accelerator platforms by adding OpenACC compiler directives to Fortran and C programs and then recompiling with appropriate compiler options. Please see https://developer.nvidia.com/hpc-sdk and http://www.pgroup.com/resources/accel.htm

OpenACC is also supported by recent versions of the GNU compilers installed on Sapelo2 (e.g. GNU 7.2.0). For more information on OpenACC support in the GNU compilers, please refer to https://gcc.gnu.org/wiki/OpenACC

For information on versions of compilers installed on Sapelo2, please see Code Compilation on Sapelo2.
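
As a sketch (the NVHPC module version and source file name are illustrative), an OpenACC program can be compiled with the NVIDIA HPC SDK compilers or with GCC:

# Load the NVIDIA HPC SDK compilers (run 'ml spider NVHPC' to list installed versions)
ml NVHPC/23.7

# Compile a C program containing OpenACC directives for GPU offload;
# -Minfo=accel reports which loops were accelerated
nvc -acc -Minfo=accel -o saxpy saxpy.c

# With the GNU compilers, OpenACC directives are enabled with -fopenacc
# (GPU offload requires a GCC build with NVPTX offloading support)
gcc -fopenacc -o saxpy saxpy.c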


5. CUDA-enabled applications

CUDA-enabled applications typically have a version suffix in the module name to indicate the version of CUDA that they were built with. Some examples of applications built with CUDA-12.1.1 that support up to compute capability 9.0 include:

  • GROMACS/2023.3-foss-2023a-CUDA-12.1.1-PLUMED-2.9.0
  • GROMACS/2023.4-foss-2023a-CUDA-12.1.1
  • PyTorch/2.1.2-foss-2023a-CUDA-12.1.1
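
For example, to find one of the CUDA-enabled PyTorch modules listed above and see how to load it (the spider output indicates any modules that must be loaded first):

# List all installed versions of an application
ml spider PyTorch

# Show the details for a specific CUDA-enabled version, including how to load it
ml spider PyTorch/2.1.2-foss-2023a-CUDA-12.1.1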

Running Jobs

For information on how to run GPU jobs on Sapelo2, please refer to Running Jobs on Sapelo2.