Systems
Sapelo
Sapelo is a Linux cluster that runs a 64-bit CentOS 6.5 operating system; its login nodes have Intel Xeon processors. A QDR InfiniBand network (40Gbps) provides inter-node communication among the compute nodes, and between the compute nodes and the storage systems serving the home and scratch directories.
The cluster currently comprises the following resources:
- 112 compute nodes with AMD Opteron processors (48 cores and 128GB of RAM per node)
- 4 compute nodes with AMD Opteron processors (48 cores and 256GB of RAM per node) (n16, n17, n18, n19)
- 6 compute nodes with AMD Opteron processors (48 cores and 512GB of RAM per node) (n20, n21, n22, n23, n24, n25)
- 1 compute node with AMD Opteron processors (48 cores and 1TB of RAM) (n26)
- 2 compute nodes with Intel Xeon processors (16 cores and 128GB of RAM per node) and 8 NVIDIA K40m GPU cards per node (n48, n49)
Connecting
Code Compilation on Sapelo
Running Jobs on Sapelo
Sapelo2
Sapelo2 is essentially a transformation of Sapelo: the compute layer, storage devices, and network fabrics (InfiniBand and Ethernet) remain physically in place, while a new management layer using Cobbler and Puppet is installed on new hardware. The operating system is 64-bit Linux (CentOS 7.1), with the kernel and other components updated beyond 7.1. Two physical login nodes are available, with Intel Xeon E5-2680 v3 (Haswell) processors (24 cores and 128GB of RAM per node).
The cluster currently comprises the following resources:
- 60 compute nodes with AMD Opteron processors (48 cores and 128GB of RAM per node)
- 30 compute nodes with Intel Xeon Broadwell processors (28 cores and 64GB of RAM per node)
- 1 compute node with Intel Xeon processors (28 cores and 1TB of RAM)
- 2 compute nodes with AMD Opteron processors (48 cores and 512GB of RAM per node)
- 2 compute nodes with Intel Xeon processors (16 cores and 128GB of RAM per node) and 7 NVIDIA K40m GPU cards per node
- 2 compute nodes with Intel Xeon processors (12 cores and 96GB of RAM per node) and 7 NVIDIA K20Xm GPU cards per node
- buy-in nodes
Notes
Your home directory and /lustre1 directory on Sapelo2 are the same as on Sapelo. Therefore, there is no need to transfer data between your Sapelo and Sapelo2 home directories and /lustre1 directories.
The queueing system on Sapelo2 is Torque/Moab.
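As an illustration, a minimal Torque submission script might look like the sketch below. The queue name (batch), job name, resource limits, script filename, and program name are placeholders, not site-specific values; see Running Jobs on Sapelo2 for the actual settings to use.

```bash
#!/bin/bash
#PBS -S /bin/bash           # shell used to run the job
#PBS -q batch               # queue name (placeholder)
#PBS -N testjob             # job name (placeholder)
#PBS -l nodes=1:ppn=1       # 1 node, 1 processor core
#PBS -l walltime=01:00:00   # maximum wall-clock time (hh:mm:ss)
#PBS -l mem=2gb             # memory requested

cd $PBS_O_WORKDIR           # run from the directory the job was submitted from

./myprog                    # placeholder for your executable
```

A script like this would be submitted with qsub (e.g. qsub sub.sh) and its status checked with qstat.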
Sapelo2 Frequently Asked Questions
Sapelo and Sapelo2 comparison
Connecting to Sapelo2
Transferring Files
Disk Storage
Software Installed on Sapelo2
Code Compilation on Sapelo2
Running Jobs on Sapelo2
Monitoring Jobs on Sapelo2
Teaching cluster
The teaching cluster is a Linux cluster that runs 64-bit CentOS 7.5. The physical login node has two 6-core Intel Xeon E5-2620 processors and 128GB of RAM, and it runs Red Hat EL 7.5.
The cluster currently comprises the following resources:
- 37 compute nodes with Intel Xeon X5650 2.67GHz processors (12 cores and 48GB of RAM per node)
- 2 compute nodes with Intel Xeon E5504 2.00GHz processors (8 cores and 48GB of RAM per node)
- 3 compute nodes with Intel Xeon E5504 2.00GHz processors (8 cores and 192GB of RAM per node)
- 2 compute nodes with AMD Opteron 6174 processors (48 cores and 128GB of RAM per node)
- 3 compute nodes with AMD Opteron 6128 HE 2.00GHz processors (32 cores and 64GB of RAM per node)
- 6 NVIDIA Tesla (Fermi) M2070 GPU cards (8 x 448 = 3584 GPU cores). These cards are installed on one host that has dual 6-core Intel Xeon CPUs and 48GB of RAM
The queueing system on the teaching cluster is Slurm.
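For comparison, a minimal Slurm submission script for the teaching cluster might look like the sketch below. The partition name (batch), job name, resource limits, script filename, and program name are placeholders rather than site-specific values.

```bash
#!/bin/bash
#SBATCH --job-name=testjob      # job name (placeholder)
#SBATCH --partition=batch       # partition (queue) name (placeholder)
#SBATCH --ntasks=1              # run a single task
#SBATCH --mem=2G                # memory requested
#SBATCH --time=01:00:00         # maximum wall-clock time (hh:mm:ss)

cd $SLURM_SUBMIT_DIR            # run from the directory the job was submitted from

./myprog                        # placeholder for your executable
```

Such a script is submitted with sbatch (e.g. sbatch sub.sh), and pending or running jobs can be listed with squeue.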