From Research Computing Center Wiki
Revision as of 15:40, 31 August 2024


GPU Computing on Sapelo2

Hardware

For a description of the Graphics Processing Unit (GPU) device specifications, please see GPU Hardware.

The following table summarizes the GPU devices available on Sapelo2:

{| class="wikitable"
|-
! scope="col" | Number of nodes
! scope="col" | CPU cores per node
! scope="col" | Host memory per node
! scope="col" | CPU processor
! scope="col" | GPU model
! scope="col" | GPU devices per node
! scope="col" | Device memory
! scope="col" | GPU compute capability
! scope="col" | CUDA version
! scope="col" | Partition Name
! scope="col" | Notes
|-
| 14 || 64 || 1TB || AMD Milan || A100 || 4 || 80GB || 8.0 || >=11.0 || gpu_p, gpu_30d_p || Need to request --gres=gpu:A100, e.g.,
<nowiki>#</nowiki>SBATCH --partition=gpu_p

<nowiki>#</nowiki>SBATCH --gres=gpu:A100:1

<nowiki>#</nowiki>SBATCH --time=7-00:00:00
|-
| 2 || 32 || 192GB || Intel Skylake || P100 || 1 || 16GB || 6.0 || >=8.0 || gpu_p, gpu_30d_p || Need to request --gres=gpu:P100, e.g.,
<nowiki>#</nowiki>SBATCH --partition=gpu_p

<nowiki>#</nowiki>SBATCH --gres=gpu:P100:1

<nowiki>#</nowiki>SBATCH --time=7-00:00:00
|-
| 1 || 64 || 1TB || AMD Milan || A100 || 4 || 80GB || 8.0 || >=11.0 || buyin partition || rowspan="7" | Available on '''batch''' for all users up to '''4 hours''', e.g.,
<nowiki>#</nowiki>SBATCH --partition=batch

<nowiki>#</nowiki>SBATCH --gres=gpu:A100:1 or <nowiki>#</nowiki>SBATCH --gres=gpu:V100:1 or <nowiki>#</nowiki>SBATCH --gres=gpu:V100S:1

<nowiki>#</nowiki>SBATCH --time=4:00:00
|-
| 2 || 28 || 192GB || Intel Skylake || V100 || 1 || 16GB || 7.0 || || buyin partition
|-
| 2 || 32 || 192GB || Intel Skylake || V100 || 1 || 16GB || 7.0 || || buyin partition
|-
| 2 || 32 || 384GB || Intel Skylake || V100 || 1 || 32GB || 7.0 || || buyin partition
|-
| 2 || 64 || 128GB || AMD Naples || V100 || 2 || 32GB || 7.0 || || buyin partition
|-
| 1 || 64 || 128GB || AMD Naples || V100 || 1 || 32GB || 7.0 || || buyin partition
|-
| 4 || 64 || 128GB || AMD Rome || V100S || 1 || 32GB || 7.0 || || buyin partition
|}
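Putting the table entries together, a complete submission script requesting one A100 on gpu_p might look like the following sketch (the job name, CPU/memory amounts, module name, and executable are placeholders; adjust them to your own job):

```bash
#!/bin/bash
#SBATCH --job-name=gputest        # placeholder job name
#SBATCH --partition=gpu_p         # GPU partition from the table above
#SBATCH --gres=gpu:A100:1         # request one A100 device
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4         # illustrative CPU count
#SBATCH --mem=32gb                # illustrative memory request
#SBATCH --time=7-00:00:00         # gpu_p allows up to 7 days

cd $SLURM_SUBMIT_DIR
module load CUDA                  # module name/version may differ; check 'module avail'
./myprogram                       # placeholder executable
```

Submit with sbatch as usual; the same pattern applies to the batch partition, with --partition=batch, one of the A100/V100/V100S gres strings, and --time=4:00:00 or less.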

Software

Sapelo2 provides the following tools for GPU programming:

1. NVIDIA CUDA toolkit

Several versions of the CUDA toolkit are available. Please see our CUDA page.
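As a sketch, once a CUDA toolkit module is loaded, compilation follows the usual nvcc pattern (the module version string and source file name below are hypothetical; run module avail to see what is actually installed):

```bash
module load CUDA                      # pick a specific version from 'module avail'
nvcc -O2 -o vector_add vector_add.cu  # vector_add.cu is a placeholder source file
```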

2. PGI/CUDA compilers

The PGI compilers available on Sapelo2 support GPU acceleration, including CUDA Fortran.

For more information on the GPU support of the PGI compilers, please visit the PGI website: http://www.pgroup.com/resources/cudafortran.htm

For information on versions of PGI compilers installed on Sapelo2, please see Code Compilation on Sapelo2.

3. OpenACC

Using the NVIDIA HPC SDK compiler suite or the older PGI Accelerator compilers, programmers can accelerate applications on x64+accelerator platforms by adding OpenACC compiler directives to Fortran and C programs and recompiling with the appropriate compiler options. Please see https://developer.nvidia.com/hpc-sdk and http://www.pgroup.com/resources/accel.htm

OpenACC is also supported by the GNU compilers installed on Sapelo2, particularly recent versions (e.g., GNU 7.2.0). For more information on OpenACC support in the GNU compilers, please refer to https://gcc.gnu.org/wiki/OpenACC

For information on versions of GNU compilers installed on Sapelo2, please see Code Compilation on Sapelo2.

Running Jobs

For information on how to run GPU jobs on Sapelo2, please refer to Running Jobs on Sapelo2.