Running Jobs on the teaching cluster
Using the Queueing System
The login node for the teaching cluster should be used for text editing and job submission. No jobs should be run directly on the login node. Processes that use too much CPU or RAM on the login node may be terminated by GACRC staff, or automatically, in order to keep the cluster running properly. Both interactive and batch jobs should be run through the Slurm queueing system.
Queues defined on the teaching cluster
Several queues are defined on the teaching cluster. The Slurm queueing system refers to queues as partitions. Users are required to specify, in the job submission script or as command-line arguments, the queue and the resources needed by the job (such as the number of cores, amount of memory, and GPU cards) so that it can be assigned to compute node(s) with enough available resources.
The table below summarizes the partitions (queues) defined and the compute nodes that they target:
Queue Name | Node Type | Number of Nodes | Description | Notes |
---|---|---|---|---|
batch | Intel | 28 | 12-core, 48GB RAM, Intel Xeon | Regular nodes. |
highmem | Intel | 2 | 32-core, 512GB RAM, Intel Xeon | For high memory jobs. |
gpu | GPU | 1 | 12-core, 48GB RAM, Intel Xeon, 4 NVIDIA K20Xm GPUs | For GPU-enabled jobs. |
interactive | Intel | 2 | 12-core, 48GB RAM, Intel Xeon | For interactive jobs. |
Note that the 48GB-RAM nodes in the table above can allocate a total of 41GB of memory to jobs.
You can check all partitions (queues) defined in the cluster with the command
sinfo
Job submission scripts
Users are required to specify the number of cores, the amount of memory, the queue name, and the maximum wallclock time needed by the job.
Header lines
Basic job submission script
At a minimum, the job submission script needs to have the following header lines:
#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=test
#SBATCH --ntasks=1
#SBATCH --time=2:00:00
#SBATCH --mem=2gb
Commands to run your application should be added after these header lines.
Header lines explained
- #!/bin/bash : used to specify using /bin/bash shell
- #SBATCH --partition=batch : used to specify the partition (queue) name, e.g. batch
- #SBATCH --job-name=test : used to specify the name of the job, e.g. test
- #SBATCH --ntasks=1 : used to specify the number of tasks (e.g. 1).
- #SBATCH --time=2:00:00 : used to specify the maximum allowed wall clock time for the job, in hh:mm:ss or d-hh:mm:ss format (e.g. 2 hours).
- #SBATCH --mem=2gb : used to specify the maximum memory allowed for the job (e.g. 2GB)
Below are some of the most commonly used queueing system options to configure the job.
Options to request resources for the job
- -t, --time=time
Set a limit on the total run time. Acceptable formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds"
- --mem=MB
Maximum memory per node the job will need in MegaBytes
- --mem-per-cpu=MB
Memory required per allocated CPU in MegaBytes
- -N, --nodes=num
Number of nodes required. Default is 1 node
- -n, --ntasks=num
Maximum number of tasks to be launched. Default is one task per node
- --ntasks-per-node=ntasks
Request that ntasks be invoked on each node
- -c, --cpus-per-task=ncpus
Request ncpus CPU cores per task. Without this option, one core is allocated per task
Please request resources for your job as accurately as possible: this allows your job to be dispatched at the earliest opportunity and helps the system allocate resources efficiently to start as many jobs as possible, benefiting all users.
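For illustration only (the numbers are placeholders to adjust to your own workload), the header lines below request 2 nodes with 4 tasks per node, one core per task, 500MB of memory per core, and a 4-hour time limit; they would be combined with the other header lines described above:

#SBATCH --nodes=2                 # Number of nodes
#SBATCH --ntasks-per-node=4       # Tasks to run on each node
#SBATCH --cpus-per-task=1         # CPU cores per task
#SBATCH --mem-per-cpu=500mb       # Memory per allocated core
#SBATCH --time=04:00:00           # Time limit hrs:min:sec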
Options to manage job notification and output
- -J, --job-name jobname
Give the job a name. The default is the filename of the job script. Within the job, $SLURM_JOB_NAME expands to the job name
- -o, --output=path/for/stdout
Send stdout to path/for/stdout. The default filename is slurm-${SLURM_JOB_ID}.out, e.g. slurm-12345.out, in the directory from which the job was submitted
- -e, --error=path/for/stderr
Send stderr to path/for/stderr.
- --mail-user=username@uga.edu
Send email notification to the address you specified when certain events occur.
- --mail-type=type
The value of type can be set to NONE, BEGIN, END, FAIL, ALL.
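For example (the job name and email address are placeholders), the following header lines name the job, write separate output and error logs that include the job id, and send an email when the job ends or fails:

#SBATCH --job-name=myanalysis            # Job name (placeholder)
#SBATCH --output=myanalysis.%j.out       # Standard output log (%j expands to the job id)
#SBATCH --error=myanalysis.%j.err        # Standard error log
#SBATCH --mail-type=END,FAIL             # Mail events
#SBATCH --mail-user=yourMYID@uga.edu     # Where to send mail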
Options to set Array Jobs
If you wish to run an application binary or script using e.g. different input files, then you might find it convenient to use an array job. To create an array job with e.g. 10 elements, use
#SBATCH -a 0-9
or
#SBATCH --array=0-9
The ID of each element in an array job is stored in the variable SLURM_ARRAY_TASK_ID. The variable SLURM_ARRAY_JOB_ID will be expanded into the jobid of the array job. Each array job element runs as an independent job, so multiple array elements can run concurrently, if resources are available.
Option to set job dependency
You can set job dependency with the option -d or --dependency=dependency-list. For example, if you want to specify that one job only starts after job with jobid 1234 finishes, you can add the following header line in the job submission script of the job:
#SBATCH --dependency=afterok:1234
Having this header line in the job submission script will ensure that the job is only dispatched to run after job 1234 has completed successfully.
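A common way to do this is to capture the job id of the first job at submission time and use it when submitting the second job. A minimal sketch, assuming the two job submission scripts are called first_step.sh and second_step.sh (hypothetical names):

# Submit the first job; the --parsable option makes sbatch print only the job id
jobid=$(sbatch --parsable first_step.sh)

# Submit the second job so that it starts only after the first completes successfully
sbatch --dependency=afterok:$jobid second_step.sh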
Other content of the script
Following the header lines, users can include commands to change to the working directory, to load the modules needed to run the application, and to invoke the application. For example, to use the directory from which the job is submitted as the working directory (where to find input files or binaries), add the line
cd $SLURM_SUBMIT_DIR
You can then load the needed modules. For example, if you are running an R program, then include the line
module load R/4.3.1-foss-2022a
Then invoke your application. For example, if you are running an R program called add.R which is in your job submission directory, use
R CMD BATCH add.R
Environment Variables exported by batch jobs
When a batch job is started, a number of variables are introduced into the job's environment that can be used by the batch script in making decisions, creating output files, and so forth. Some of these variables are listed in the following table:
Variable | Description |
---|---|
SLURM_ARRAY_JOB_ID | Job id of an array job |
SLURM_ARRAY_TASK_ID | Value of job array index for this job |
SLURM_CPUS_ON_NODE | Number of CPUs on the allocated node. |
SLURM_CPUS_PER_TASK | Number of CPUs requested per task. Only set if the --cpus-per-task option is specified. |
SLURM_JOB_ID | Unique Slurm job id |
SLURM_JOB_NAME | Name of the job, as specified by the user |
SLURM_JOB_CPUS_PER_NODE | Count of processors available to the job on this node. |
SLURM_JOB_NODELIST | List of nodes allocated to the job. |
SLURM_JOB_NUM_NODES | Total number of nodes in the job's resource allocation. |
SLURM_JOB_PARTITION | Name of the partition (i.e. queue) in which the job is running. |
SLURM_NTASKS | Same as -n, --ntasks |
SLURM_NTASKS_PER_NODE | Number of tasks requested per node. Only set if the --ntasks-per-node option is specified. |
SLURM_SUBMIT_DIR | The directory from which sbatch was invoked. |
SLURM_TASKS_PER_NODE | Number of tasks to be initiated on each node. |
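For example, lines such as the following could be added to a job submission script to record in the output log where and with how many resources the job ran (a sketch; any variable in the table can be used in the same way):

echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) running in partition $SLURM_JOB_PARTITION"
echo "Nodes allocated: $SLURM_JOB_NODELIST"
echo "Number of tasks: $SLURM_NTASKS"
echo "Submitted from: $SLURM_SUBMIT_DIR"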
Sample job submission scripts
Serial (single-processor) Job
Sample job submission script (sub.sh) to run an R program called add.R using a single core:
#!/bin/bash
#SBATCH --job-name=testserial         # Job name
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=yourMYID@uga.edu  # Where to send mail
#SBATCH --ntasks=1                    # Run on a single CPU
#SBATCH --mem=1gb                     # Job memory request
#SBATCH --time=02:00:00               # Time limit hrs:min:sec
#SBATCH --output=testserial.%j.out    # Standard output log
#SBATCH --error=testserial.%j.err     # Standard error log

cd $SLURM_SUBMIT_DIR

module load R/4.3.1-foss-2022a

R CMD BATCH add.R
In this sample script, the standard output and error of the job will be saved into files called testserial.%j.out and testserial.%j.err, where %j is automatically replaced by the job id.
MPI Job
Sample job submission script (sub.sh) to run an OpenMPI application. In this example the job requests 16 cores, specifies that these cores are to be divided equally across 2 nodes (8 cores per node), and runs a binary called mympi.exe:
#!/bin/bash
#SBATCH --job-name=mpitest            # Job name
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=yourMYID@uga.edu  # Where to send mail
#SBATCH --ntasks=16                   # Number of MPI ranks
#SBATCH --cpus-per-task=1             # Number of cores per MPI rank
#SBATCH --nodes=2                     # Number of nodes
#SBATCH --ntasks-per-node=8           # How many tasks on each node
#SBATCH --mem-per-cpu=600mb           # Memory per processor
#SBATCH --time=02:00:00               # Time limit hrs:min:sec
#SBATCH --output=mpitest.%j.out       # Standard output log
#SBATCH --error=mpitest.%j.err        # Standard error log

cd $SLURM_SUBMIT_DIR

module load OpenMPI/4.1.4-GCC-11.3.0

mpirun ./mympi.exe
OpenMP (Multi-Thread) Job
Sample job submission script (sub.sh) to run a program that uses OpenMP with 6 threads. Please set --ntasks=1 and set --cpus-per-task to the number of threads you wish to use. The name of the binary in this example is a.out.
#!/bin/bash
#SBATCH --job-name=mctest             # Job name
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=yourMYID@uga.edu  # Where to send mail
#SBATCH --ntasks=1                    # Run a single task
#SBATCH --cpus-per-task=6             # Number of CPU cores per task
#SBATCH --mem=4gb                     # Job memory request
#SBATCH --time=02:00:00               # Time limit hrs:min:sec
#SBATCH --output=mctest.%j.out        # Standard output log
#SBATCH --error=mctest.%j.err         # Standard error log

cd $SLURM_SUBMIT_DIR

export OMP_NUM_THREADS=6

module load foss/2022a                # load the appropriate module file, e.g. foss/2022a

time ./a.out
High Memory Job
Sample job submission script (sub.sh) to run a Velvet application that needs a large amount of memory (100GB in this example) and 4 threads:
#!/bin/bash
#SBATCH --job-name=highmemtest        # Job name
#SBATCH --partition=highmem           # Partition (queue) name
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=yourMYID@uga.edu  # Where to send mail
#SBATCH --ntasks=1                    # Run a single task
#SBATCH --cpus-per-task=4             # Number of CPU cores per task
#SBATCH --mem=100gb                   # Job memory request
#SBATCH --time=02:00:00               # Time limit hrs:min:sec
#SBATCH --output=highmemtest.%j.out   # Standard output log
#SBATCH --error=highmemtest.%j.err    # Standard error log

cd $SLURM_SUBMIT_DIR

export OMP_NUM_THREADS=4

module load Velvet

velvetg [options]
Hybrid MPI/OpenMP Job

Sample job submission script (sub.sh) to run a hybrid parallel job that uses 4 MPI processes with OpenMPI, each MPI process running 3 OpenMP threads:
#!/bin/bash
#SBATCH --job-name=hybridtest
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=yourMYID@uga.edu  # Where to send mail
#SBATCH --nodes=2                     # Number of nodes
#SBATCH --ntasks=4                    # Number of MPI ranks
#SBATCH --ntasks-per-node=2           # Number of MPI ranks per node
#SBATCH --cpus-per-task=3             # Number of OpenMP threads for each MPI process/rank
#SBATCH --mem-per-cpu=2000mb          # Per processor memory request
#SBATCH --time=2-00:00:00             # Walltime in hh:mm:ss or d-hh:mm:ss (2 days in the example)
#SBATCH --output=hybridtest.%j.out    # Standard output log
#SBATCH --error=hybridtest.%j.err     # Standard error log

cd $SLURM_SUBMIT_DIR

ml foss/2022a

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

mpirun ./myhybridprog.exe
Array job
Sample job submission script (sub.sh) to submit an array job with 10 elements. In this example, each array job element will run the a.out binary using an input file called input_0, input_1, ..., input_9.
#!/bin/bash
#SBATCH --job-name=arrayjobtest       # Job name
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --ntasks=1                    # Run a single task
#SBATCH --mem=1gb                     # Job Memory
#SBATCH --time=10:00:00               # Time limit hrs:min:sec
#SBATCH --output=array_%A-%a.out      # Standard output log
#SBATCH --error=array_%A-%a.err       # Standard error log
#SBATCH --array=0-9                   # Array range

cd $SLURM_SUBMIT_DIR

module load foss/2022a                # load any needed module files, e.g. foss/2022a

time ./a.out < input_$SLURM_ARRAY_TASK_ID
GPU/CUDA
Sample script to run Amber on a GPU node using one node, 2 CPU cores, and 1 GPU card:
#!/bin/bash
#SBATCH --job-name=amber              # Job name
#SBATCH --partition=gpu               # Partition (queue) name
#SBATCH --gres=gpu:1                  # Requests one GPU device
#SBATCH --ntasks=1                    # Run a single task
#SBATCH --cpus-per-task=2             # Number of CPU cores per task
#SBATCH --mem=40gb                    # Job memory request
#SBATCH --time=10:00:00               # Time limit hrs:min:sec
#SBATCH --output=amber.%j.out         # Standard output log
#SBATCH --error=amber.%j.err          # Standard error log
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=yourMYID@uga.edu  # Where to send mail

cd $SLURM_SUBMIT_DIR

ml Amber/22.0-foss-2021b-AmberTools-22.3-CUDA-11.4.1

srun $AMBERHOME/bin/pmemd.cuda -O -i ./prod.in -o prod.out -p ./dimerFBP_GOL.prmtop -c ./restart.rst -r prod.rst -x prod.mdcrd
How to submit a job to the batch queue
With the resource requirements specified in the job submission script (sub.sh), submit your job with
sbatch <scriptname>
For example
sbatch sub.sh
Once the job is submitted, the Job ID of the job (e.g. 12345) will be printed on the screen.
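For example, submitting sub.sh typically prints a confirmation line like the following (the job id shown is just an illustration):

$ sbatch sub.sh
Submitted batch job 12345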
Discovering if a partition (queue) is busy
The nodes allocated to each partition (queue) and their state can be viewed with the command
sinfo
Sample output of the sinfo command:
PARTITION    AVAIL  TIMELIMIT   NODES  STATE  NODELIST
highmem      up     7-00:00:00      2  idle   rb1-[1-2]
interactive  up     7-00:00:00      2  idle   rb1-[11-12]
fsr4601      up           1:00      8  idle   rb1-[3-10]
fsr8602      up          10:00      8  idle   rb1-[3-10]
batch        up     2-00:00:00      3  mix    rb1-3,rb1-[6-8]
batch        up     2-00:00:00      1  alloc  rb1-4
batch        up     2-00:00:00     36  idle   rb1-[5,9-10]
where some common values of STATE are:
- STATE=idle indicates that those nodes are completely free.
- STATE=mix indicates that some cores on those nodes are in use (and some are free).
- STATE=alloc indicates that all cores on those nodes are in use.
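To restrict the output to a single partition, the standard -p (or --partition) option of sinfo can be used, for example:

sinfo -p batch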
How to open an interactive session
An interactive session on a compute node can be started with the command
interact
This command will start an interactive session with one core on one of the interactive nodes, and allocate 2GB of memory for a maximum walltime of 12h.
The interact command is an alias for
srun --pty --cpus-per-task=1 --job-name=interact --ntasks=1 --nodes=1 --partition=interactive --time=12:00:00 --mem=2GB /bin/bash -l
The options that can be used with interact are displayed when this command is run with the -h or --help option:
[shtsai@teach1 ~]$ interact -h

Usage: interact [OPTIONS]

Description: Start an interactive job

  -c, --cpus-per-task    CPU cores per task (default: 1)
  -J, --job-name         Job name (default: interact)
  -n, --ntasks           Number of tasks (default: 1)
  -N, --nodes            Number of nodes (default: 1)
  -p, --partition        Partition for interactive job (default: inter_p)
  -q, --qos              Request a quality of service for the job.
  -t, --time             Maximum run time for interactive job (default: 12:00:00)
  -w, --nodelist         List of node name(s) on which your job should run
  --constraint           Job constraints
  --gres                 Generic consumable resources
  --mem                  Memory per node (default 2GB)
  --shell                Absolute path to the shell to be used in your interactive job (default: /bin/bash)
  --wckey                Wckey to be used with job
  --x11                  Start an interactive job with X Forwarding
  -h, --help             Display this help output
Examples:
To start an interactive session with 4 cores and 10GB of memory:
interact -c 4 --mem=10G
How to run an interactive job with Graphical User Interface capabilities
If you want to run an application as an interactive job and have its graphical user interface displayed on your local machine, you need to enable X-forwarding when you ssh into the login node. For information on how to do this, please see questions 5.4 and 5.5 in the Frequently Asked Questions page.
On the teaching cluster, X-forwarding does not work from any of the compute nodes, including the interactive nodes. Please feel free to run X windows applications directly on the teaching cluster login node.
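As a sketch, from a Linux or macOS terminal X-forwarding is usually enabled with the -X (or -Y) option of ssh; the hostname below is a placeholder, so use the teaching cluster login node address you normally connect to:

ssh -X yourMYID@teach.gacrc.uga.edu    # hostname shown for illustration only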
How to check on running or pending jobs
To list all running and pending jobs (by all users), use the command
squeue
or
squeue -l
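To list only your own jobs, the standard -u option of squeue can be used (replace yourMYID with your username):

squeue -u yourMYID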
For detailed information on how to monitor your jobs, please see Monitoring Jobs on the teaching cluster.
How to delete a running or pending job
To delete one of your running or pending jobs, use the command
scancel <jobid>
For example, to delete a job with Job ID 12345 use
scancel 12345
How to check resource utilization of a running or finished job
The following command can be used to show resource utilization by a running job or a job that has already completed:
sacct
This command can be used with many options. We also provide a preconfigured version of it, sacct-gacrc, which shows some quantities that are commonly of interest, including the amount of memory and the CPU time used by the job:
sacct-gacrc
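To inspect one specific job, the standard -j and --format options of sacct can also be used directly, for example (the job id and the list of fields are illustrative):

sacct -j 12345 --format=JobID,JobName,Partition,Elapsed,MaxRSS,State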
For detailed information on how to monitor your jobs, please see Monitoring Jobs on the teaching cluster.