Systems

Sapelo2

Sapelo2 is a Linux cluster that runs a 64-bit CentOS 7.5 operating system and is managed using Foreman and Puppet. Two physical login nodes are available, each with Intel Xeon E5-2680 v3 (Haswell) processors, 24 cores, and 128GB of RAM.

For one subset of compute nodes, internodal communication, as well as communication between these nodes and the storage systems serving the home and scratch directories, is provided by a QDR InfiniBand network (40Gbps). For another subset of compute nodes, these communications are provided by an EDR InfiniBand network.


The cluster currently comprises the following resources:

  • 42 compute nodes with Intel Xeon Skylake processors (32 cores and 187GB of RAM per node)
  • 76 compute nodes with AMD Opteron processors (48 cores and 128GB of RAM per node)
  • 30 compute nodes with Intel Xeon Broadwell processors (28 cores and 64GB of RAM per node)
  • 4 compute nodes with AMD Opteron processors (48 cores and 256GB of RAM per node)
  • 4 compute nodes with Intel Xeon processors (28 cores and 1TB of RAM per node)
  • 1 compute node with AMD Opteron processors (48 cores and 1TB of RAM per node)
  • 8 compute nodes with AMD EPYC processors (32 cores and 495GB of RAM per node)
  • 4 compute nodes with AMD Opteron processors (48 cores and 512GB of RAM per node)
  • 1 compute node with Intel Xeon processors (32 cores and 512GB of RAM per node)
  • 4 compute nodes with Intel Xeon Skylake processors (32 cores and 187GB of RAM) and 1 NVIDIA P100 GPU card per node
  • 2 compute nodes with Intel Xeon processors (16 cores and 128GB of RAM) and 7 NVIDIA K40m GPU cards per node
  • 4 compute nodes with Intel Xeon processors (12 cores and 96GB of RAM) and 7 NVIDIA K20Xm GPU cards per node
  • buy-in nodes

Notes

Your home directory and /lustre1 directory on Sapelo2 are the same as on Sapelo. Therefore, there is no need to transfer data between your Sapelo and Sapelo2 home directories and /lustre1 directories.

The queueing system on Sapelo2 is Torque/Moab.
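
As a minimal sketch of how a job can be submitted to Torque/Moab (the queue name, resource requests, and program name below are placeholders, not site-confirmed values), a batch script could look like this:

  #!/bin/bash
  #PBS -N testjob               # job name (placeholder)
  #PBS -q batch                 # queue name (assumed; check the queue documentation for Sapelo2)
  #PBS -l nodes=1:ppn=4         # request 1 node with 4 cores
  #PBS -l walltime=01:00:00     # request 1 hour of wall-clock time
  #PBS -l mem=4gb               # request 4GB of memory

  cd $PBS_O_WORKDIR             # run in the directory the job was submitted from

  ./myprogram                   # placeholder for your own executable

Such a script is submitted with qsub (e.g. qsub sub.sh) and monitored with qstat; see the pages linked below for the cluster-specific details.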

Sapelo2 Frequently Asked Questions

Sapelo and Sapelo2 comparison

Connecting to Sapelo2

Transferring Files

Disk Storage

Software Installed on Sapelo2

Code Compilation on Sapelo2

Running Jobs on Sapelo2

Monitoring Jobs on Sapelo2


Teaching cluster

The teaching cluster is a Linux cluster that runs 64-bit CentOS 7.5. The physical login node has two 6-core Intel Xeon E5-2620 processors and 128GB of RAM, and it runs Red Hat EL 7.5. An Ethernet network (1Gbps) provides internodal communication among compute nodes, and between the compute nodes and the storage systems serving the home and work directories.

The cluster currently comprises the following resources:

  • 37 compute nodes with Intel Xeon X5650 2.67GHz processors (12 cores and 48GB of RAM per node)
  • 2 compute nodes with Intel Xeon E5504 2.00GHz processors (8 cores and 48GB of RAM per node)
  • 3 compute nodes with Intel Xeon E5504 2.00GHz processors (8 cores and 192GB of RAM per node)
  • 2 compute nodes with AMD Opteron 6174 processors (48 cores and 128GB of RAM per node)
  • 3 compute nodes with AMD Opteron 6128 HE 2.00GHz processors (32 cores and 64GB of RAM per node)
  • 6 NVIDIA Tesla (Fermi) M2070 GPU cards (6 x 448 = 2688 GPU cores). These cards are installed on one host that has dual 6-core Intel Xeon CPUs and 48GB of RAM.

The queueing system on the teaching cluster is Slurm.
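
As a minimal sketch of how a job can be submitted to Slurm (the partition name, resource requests, and program name below are placeholders, not site-confirmed values), a batch script could look like this:

  #!/bin/bash
  #SBATCH --job-name=testjob        # job name (placeholder)
  #SBATCH --partition=batch         # partition name (assumed; check the teaching cluster documentation)
  #SBATCH --ntasks=1                # run one task
  #SBATCH --cpus-per-task=4         # request 4 cores for that task
  #SBATCH --mem=4gb                 # request 4GB of memory
  #SBATCH --time=01:00:00           # request 1 hour of wall-clock time

  cd $SLURM_SUBMIT_DIR              # run in the directory the job was submitted from

  ./myprogram                       # placeholder for your own executable

Such a script is submitted with sbatch (e.g. sbatch sub.sh) and monitored with squeue; see the pages linked below for the cluster-specific details.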

Connecting to the teaching cluster

Transferring Files

Disk Storage

Software Installed on the teaching cluster

The list of installed applications is available on the Software page.

Code Compilation on the teaching cluster

Running Jobs on the teaching cluster

Monitoring Jobs on the teaching cluster