Job Submission partitions on Sapelo2

Batch partitions (queues) defined on the Sapelo2 cluster

There are different partitions defined on Sapelo2; the Slurm queueing system refers to queues as partitions. In the job submission script, or as job submission command-line arguments, users are required to specify the partition and the resources needed by the job (such as the number of cores, amount of memory, GPU cards, etc.), so that the job can be assigned to compute node(s) with enough available resources. Please note that Slurm will not accept a job at submission time if no resources match your request. Please refer to Migrating from Torque to Slurm for more information about the Slurm queueing system.
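
As a minimal sketch of such a script (the job name, module, program name, and resource values below are illustrative placeholders, not recommendations for any particular workload):

#!/bin/bash
#SBATCH --job-name=testjob            # illustrative job name
#SBATCH --partition=batch             # partition (queue) to run the job in
#SBATCH --ntasks=1                    # a single task (serial job)
#SBATCH --cpus-per-task=4             # CPU cores allocated to that task
#SBATCH --mem=16gb                    # memory per node
#SBATCH --time=02:00:00               # walltime limit (hh:mm:ss), within the partition's time limit

cd $SLURM_SUBMIT_DIR                  # start in the directory the job was submitted from
module load Foo/1.2.3                 # hypothetical module name; load whatever software the job needs
./myprog                              # placeholder for your application

The script would then be submitted with sbatch, e.g. sbatch sub.sh (where sub.sh is the script's file name).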

The following partitions are defined on the Sapelo2 cluster:

Partition Name | Time limit | Max jobs running | Max jobs able to be submitted | Notes
batch | 7 days | 250 | 10,000 | Regular nodes.
batch_30d | 30 days | 1 | 2 | Regular nodes. A given user can have up to one job running at a time here, plus one pending, or two pending and none running. A user's attempt to submit a third job into this partition will be rejected.
highmem_p | 7 days | 15 | 100 | For high memory jobs.
highmem_30d_p | 30 days | 1 | 2 | For high memory jobs. A given user can have up to one job running at a time here, plus one pending, or two pending and none running. A user's attempt to submit a third job into this partition will be rejected.
hugemem_p | 7 days | 4 | 4 | For jobs needing up to 2TB of memory.
hugemem_30d_p | 30 days | 4 | 4 | For jobs needing up to 2TB of memory.
gpu_p | 7 days | 18 | 20 | For GPU-enabled jobs.
gpu_30d_p | 30 days | 2 | 2 | For GPU-enabled jobs. A given user can have up to one job running at a time here, plus one pending, or two pending and none running. A user's attempt to submit a third job into this partition will be rejected.
inter_p | 2 days | 3 | 20 | Regular nodes, for interactive jobs.
name_p | variable | | | Partitions that target different groups' buy-in nodes. The name string is specific to each group.
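
The partition and other resources can also be given as options on the submission command line, where they take precedence over the corresponding #SBATCH directives in the script. As a sketch, assuming a script named sub.sh (the option values are illustrative, and the cluster documentation may recommend a specific wrapper for interactive sessions):

# submit sub.sh to the batch_30d partition with a 20-day walltime limit
sbatch --partition=batch_30d --time=20-00:00:00 sub.sh

# request an interactive shell on the inter_p partition (2-day limit, see the table above)
srun --partition=inter_p --ntasks=1 --cpus-per-task=1 --mem=4gb --time=04:00:00 --pty bash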


When defining the resources for your job, make sure you stay within the bounds of the resources available on the partition you are using. The table below outlines the resources available per type of node; for each partition, the largest memory and core counts listed are the most you can request on a single node in that partition (e.g., up to 500 GB of memory and 128 cores per node on the batch partition). Sample resource requests that fit within these limits are sketched after the table.

Partition Name | # of Nodes | Max Mem (GB)/Node | Max Cores/Node | Processor Type | GPU Cards/Node
batch, batch_30d | 72 | 500 | 128 | AMD EPYC Milan (3rd gen) | N/A
batch, batch_30d | 4 | 250 | 64 | AMD EPYC Milan (3rd gen) | N/A
batch, batch_30d | 2 | 120 | 64 | AMD EPYC Milan (3rd gen) | N/A
batch, batch_30d | 123 | 120 | 64 | AMD EPYC Rome (2nd gen) | N/A
batch, batch_30d | 64 | 120 | 32 | AMD EPYC Naples (1st gen) | N/A
batch, batch_30d | 42 | 180 | 32 | Intel Xeon Skylake | N/A
batch, batch_30d | 34 | 58 | 28 | Intel Xeon Broadwell | N/A
highmem_p, highmem_30d_p | 18 | 500 | 32 | AMD EPYC Naples (1st gen) | N/A
highmem_p, highmem_30d_p | 4 | 990 | 64 | AMD EPYC Naples (1st gen) | N/A
highmem_p, highmem_30d_p | 5 | 990 | 28 | Intel Xeon Broadwell | N/A
hugemem_p, hugemem_30d_p | 2 | 2000 | 32 | AMD EPYC Rome (2nd gen) | N/A
gpu_p, gpu_30d_p | 4 | 180 | 32 | Intel Xeon Skylake | 1 NVIDIA P100
gpu_p, gpu_30d_p | 2 | 120 | 16 | Intel Xeon | 8 NVIDIA K40m
gpu_p, gpu_30d_p | 1 | 1000 | 64 | AMD EPYC Milan (3rd gen) | 4 NVIDIA A100
name_p | variable | | | |
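
For example, a high-memory job could target highmem_p while staying within the limits of one of that partition's node types, and a GPU job could target gpu_p with a generic GPU request. The two header fragments below are sketches with illustrative values; in particular, the --gres line uses the generic Slurm form, and any GPU type-specific selection (e.g., P100 vs. A100 nodes) depends on how the cluster's GRES types are configured, so check the local documentation before relying on it.

# high-memory sketch: fits the 32-core, 500 GB AMD EPYC Naples nodes in highmem_p
#SBATCH --partition=highmem_p
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32            # at most 32 cores per node on those nodes
#SBATCH --mem=400gb                   # stays under the 500 GB/node maximum
#SBATCH --time=3-00:00:00             # 3 days, within the 7-day partition limit

# GPU sketch: fits the Intel Xeon Skylake nodes in gpu_p (1 NVIDIA P100 each)
#SBATCH --partition=gpu_p
#SBATCH --gres=gpu:1                  # one GPU card; type-specific selection is site-dependent
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8             # up to 32 cores per node on those nodes
#SBATCH --mem=60gb                    # up to 180 GB per node on those nodes
#SBATCH --time=1-00:00:00             # 1 day, within the 7-day partition limit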