Systems

From Research Computing Center Wiki


Revision as of 10:30, 16 March 2015


Sapelo

Sapelo is a Linux cluster that runs a 64-bit CentOS 6.5 operating system; its login node has Intel Xeon processors. An InfiniBand network provides communication among the compute nodes, and between the compute nodes and the storage systems serving the home and scratch directories.

The cluster currently comprises the following resources:

  • 120 compute nodes with AMD Opteron processors (48 cores and 128GB of RAM per node)
  • two 48-core, 512GB RAM nodes with AMD Opteron processors (n24, n25)
  • one 32-core, 512GB RAM node with Intel Xeon processors (n27)
  • two 16-core, 128GB RAM nodes with Intel Xeon processors and 8 NVIDIA K40m GPU cards each (n48, n49)
  • one 12-core, 64GB RAM node with Intel Xeon processors and 7 NVIDIA K20Xm GPU cards (n42)
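As a quick sanity check, the per-node figures in the list above can be tallied into cluster-wide totals. The sketch below uses only the numbers stated in the list; the grouping into tuples is just a convenience for the tally, not an official inventory format:

```python
# Sapelo resources per the list above:
# (node_count, cores_per_node, ram_gb_per_node, gpu_cards_per_node)
nodes = [
    (120, 48, 128, 0),  # AMD Opteron compute nodes
    (2,   48, 512, 0),  # large-memory AMD Opteron nodes (n24, n25)
    (1,   32, 512, 0),  # large-memory Intel Xeon node (n27)
    (2,   16, 128, 8),  # Intel Xeon nodes, 8 NVIDIA K40m GPUs each (n48, n49)
    (1,   12, 64,  7),  # Intel Xeon node, 7 NVIDIA K20Xm GPUs (n42)
]

total_cores = sum(n * c for n, c, _, _ in nodes)
total_ram_gb = sum(n * r for n, _, r, _ in nodes)
total_gpus = sum(n * g for n, _, _, g in nodes)

print(total_cores, total_ram_gb, total_gpus)  # → 5932 17216 23
```

So the listed hardware works out to 5,932 CPU cores, roughly 17 TB of aggregate RAM, and 23 GPU cards across the cluster.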

Connecting

Transferring Files

Disk Storage

Code Compilation on Sapelo

Running Jobs on Sapelo



Zcluster

The Linux cluster comprises compute nodes with 4-, 6-, 8-, and 12-core processors from both Intel and AMD. Subsets of nodes have "large memory" (e.g., 128, 256, or 512 GB of RAM), while others have InfiniBand connectivity or GPU capabilities. Total CPU compute power is 25.9 Tflops.

The cluster currently comprises the following resources:

  • 230 compute nodes (2600 compute cores), 32 with InfiniBand connectivity.
  • Four 8-core, 192GB high-memory compute nodes
  • Ten 12-core, 256GB high-memory compute nodes
  • Two 32-core, 512GB high-memory compute nodes
  • Six 32-core, 64GB compute nodes
  • One NVIDIA Tesla S1070 with four GPU cards (4 x 240 = 960 GPU cores) for programs written to use this architecture.
  • One NVIDIA Tesla (Fermi) C2075 GPU processor (448 GPU cores)
  • Nine NVIDIA Tesla (Fermi) M2070 GPU cards (9 x 448 = 4032 GPU cores). These cards are installed on 2 hosts each of which has dual 6-core Intel Xeon CPUs and 48GB of RAM; there are 6 GPU cards on one host and 3 on the other.
  • 32 NVIDIA Tesla (Kepler) K20X GPU cards (32 x 2688 = 86016 GPU cores). These cards are installed on 4 hosts each of which has dual 6-core Intel Xeon CPUs and 96GB of RAM; there are 8 GPU cards per host.
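The GPU entries above each state a cards × cores-per-card product. As a worked check, this short sketch reproduces that arithmetic and sums the totals; the dictionary keys are informal labels for the tally, not official device identifiers:

```python
# zcluster GPU cards per the list above: (cards, CUDA cores per card)
gpus = {
    "Tesla S1070 (4 GPUs)": (4, 240),
    "Tesla C2075":          (1, 448),
    "Tesla M2070":          (9, 448),
    "Tesla K20X":           (32, 2688),
}

per_model = {name: n * cores for name, (n, cores) in gpus.items()}
total_gpu_cores = sum(per_model.values())

print(per_model)
print(total_gpu_cores)  # → 91456 (960 + 448 + 4032 + 86016)
```

The per-model products match the figures given in the list, for a combined total of 91,456 GPU cores on zcluster.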

Connecting

Transferring Files

Disk Storage

Code Compilation on zcluster

Running Jobs on zcluster