Systems




Sapelo2

Sapelo2 is a Linux cluster that runs the 64-bit CentOS 7.9 operating system and is managed using xCAT and Puppet. Several virtual login nodes are available, each with Intel Xeon Gold 6230 processors, 16 cores, and 32GB of RAM. The queueing system on Sapelo2 is Slurm.

For a subset of compute nodes, internodal communication, as well as communication between these nodes and the storage systems serving the home and scratch directories, is provided by a QDR InfiniBand network (40Gbps). For another (larger) subset of compute nodes, these communications are provided by an EDR InfiniBand network (100Gbps).
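
Jobs are submitted to Slurm from a login node using a batch script. The sketch below is a minimal illustration only; the partition name (batch), the resource values, and the program name (./myprogram) are assumptions for the example, so consult the Running Jobs on Sapelo2 page for the partitions and limits that actually apply.

  #!/bin/bash
  #SBATCH --job-name=testjob          # name shown in the queue
  #SBATCH --partition=batch           # partition (queue) name; an assumption, not necessarily the real one
  #SBATCH --ntasks=1                  # a single task
  #SBATCH --cpus-per-task=4           # cores for that task
  #SBATCH --mem=8G                    # total memory for the job
  #SBATCH --time=02:00:00             # walltime limit (HH:MM:SS)

  cd $SLURM_SUBMIT_DIR                # run from the directory the job was submitted from
  ./myprogram                         # placeholder for the actual application

The script is submitted with sbatch (e.g. sbatch sub.sh), and pending or running jobs can be checked with squeue -u $USER.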


The cluster currently comprises the following resources; a sketch of how resources from these node classes might be requested through Slurm follows the list:

Regular nodes

  • 72 compute nodes with AMD EPYC (Milan 3rd gen) processors (128 cores and 512GB of RAM per node)
  • 4 compute nodes with AMD EPYC (Milan 3rd gen) processors (64 cores and 256GB of RAM per node)
  • 2 compute nodes with AMD EPYC (Milan 3rd gen) processors (64 cores and 128GB of RAM per node)
  • 123 compute nodes with AMD EPYC (Rome 2nd gen) processors (64 cores and 128GB of RAM per node)
  • 64 compute nodes with AMD EPYC (Naples 1st gen) processors (32 cores and 128GB of RAM per node)
  • 42 compute nodes with Intel Xeon Skylake processors (32 cores and 192GB of RAM per node)
  • 34 compute nodes with Intel Xeon Broadwell processors (28 cores and 64GB of RAM per node)


High memory nodes (2TB/node)

  • 2 compute nodes with AMD EPYC processors (32 cores and 2TB of RAM per node)


High memory nodes (1TB/node)

  • 2 compute nodes with AMD EPYC (Milan 3rd gen) processors (128 cores and 1TB of RAM per node)
  • 4 compute nodes with AMD EPYC (Naples 1st gen) processors (64 cores and 1TB of RAM per node)
  • 5 compute nodes with Intel Xeon Broadwell processors (28 cores and 1TB of RAM per node)


High memory nodes (512GB/node)

  • 18 compute nodes with AMD EPYC (Naples 1st gen) processors (32 cores and 512GB of RAM per node)


GPU nodes

  • 1 compute node with AMD EPYC (Milan 3rd gen) processors (64 cores and 1TB of RAM) and 4x NVIDIA A100 GPU cards.
  • 4 compute nodes with Intel Xeon Skylake processors (32 cores and 187GB of RAM) and 1x NVIDIA P100 GPU card per node
  • 2 compute nodes with Intel Xeon processors (16 cores and 128GB of RAM) and 8x NVIDIA K40m GPU cards per node


Buy-in nodes

  • Various configurations
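
As a rough illustration of requesting resources from the node classes above, the command lines below use hypothetical partition names (highmem, gpu) and arbitrary resource values; the actual partition names, GRES names, and limits are documented on the Running Jobs on Sapelo2 page.

  # Request resources suited to a high-memory node (partition name "highmem" is an assumption)
  sbatch --partition=highmem --ntasks=1 --cpus-per-task=16 --mem=500G --time=12:00:00 job.sh

  # Request one GPU card (partition name "gpu" and the generic GRES "gpu:1" are assumptions)
  sbatch --partition=gpu --gres=gpu:1 --ntasks=1 --cpus-per-task=8 --mem=64G --time=04:00:00 job.sh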


Connecting to Sapelo2

Transferring Files

Disk Storage

Software on Sapelo2

Available Toolchains and Toolchain Compatibility

Code Compilation on Sapelo2

Running Jobs on Sapelo2

Monitoring Jobs on Sapelo2

Migrating from Torque to Slurm

Training material

To help users become familiar with Slurm and the test cluster environment, we have prepared some training videos, which are available from the GACRC's Kaltura channel at https://kaltura.uga.edu/channel/GACRC/176125031 (login with MyID and password is required). Training sessions and slides are available at https://wiki.gacrc.uga.edu/wiki/Training





Teaching cluster

The teaching cluster is a Linux cluster that runs 64-bit CentOS 7.8. The login node is a VM with 4 cores (Intel Xeon Gold 6230 processor) and 16GB of RAM. An Ethernet network (1Gbps) provides internodal communication among compute nodes, and between the compute nodes and the storage systems serving the home and work directories.

The cluster currently comprises the following resources:

  • 30 compute nodes with Intel Xeon X5650 2.67GHz processors (12 cores and 48GB of RAM per node)
  • 2 compute nodes with Intel Xeon L7555 1.87GHz processors (32 cores and 512GB of RAM per node)
  • 4 NVIDIA Tesla (Kepler) K20Xm GPU cards. These cards are installed on one host that has dual 6-core Intel Xeon CPUs and 48GB of RAM

The queueing system on the teaching cluster is Slurm.
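
For quick interactive work on the teaching cluster, a shell on a compute node can be requested through Slurm. The partition name and resource values below are assumptions for illustration; see the Running Jobs on the teaching cluster page for the actual ones.

  # Start an interactive shell on a compute node (partition name "batch" is an assumption)
  srun --partition=batch --ntasks=1 --cpus-per-task=2 --mem=4G --time=01:00:00 --pty bash

  # List partitions and node states, and check your own jobs
  sinfo
  squeue -u $USER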

Connecting to the teaching cluster

Transferring Files

Disk Storage

Software Installed on the teaching cluster

The list of installed applications is available on the Software page.
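
Installed applications are typically accessed through environment modules; assuming the Lmod module system is in use (and using "python" purely as an illustrative module name), a session might look like the sketch below.

  module avail            # list the modules visible in the current environment
  module spider python    # search for a package by name (Lmod-specific command)
  module load python      # load a module into the environment
  module list             # confirm which modules are currently loaded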

Code Compilation on the teaching cluster

Running Jobs on the teaching cluster

Monitoring Jobs on the teaching cluster