Sample Scripts
Sample batch job submission scripts on Sapelo2
Regular Serial Job
Sample job submission script (sub.sh) to run an application that needs to use an AMD regular node in the batch queue:
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=1:AMD
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml Bowtie2/2.3.3-foss-2016b

bowtie2 -p 1 [options] > outputfile
If you can use either an Intel or an AMD regular node:
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=1
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml Bowtie2/2.3.3-foss-2016b

bowtie2 -p 1 [options] > outputfile
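Once the script is saved (e.g. as sub.sh), it can be submitted to the queueing system with qsub, run from the directory that contains the script:

qsub sub.sh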
Regular Threaded Parallel Job
Sample job submission script (sub.sh) to run an application that needs to use an AMD regular node in the batch queue:
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=4:AMD
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml Bowtie2/2.3.3-foss-2016b

bowtie2 -p 4 [options] > outputfile
If you can use either an Intel or an AMD regular node:
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=4
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml Bowtie2/2.3.3-foss-2016b

bowtie2 -p 4 [options] > outputfile
High Memory Job
Sample job submission script (sub.sh) to run an application that needs to use an Intel HIGHMEM node in the highmem_q queue:
#PBS -S /bin/bash
#PBS -q highmem_q
#PBS -N testjob
#PBS -l nodes=1:ppn=12:Intel
#PBS -l walltime=48:00:00
#PBS -l mem=400gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml Velvet

velvetg [options] > outputfile
If you can use either an Intel or an AMD HIGHMEM node:
#PBS -S /bin/bash
#PBS -q highmem_q
#PBS -N testjob
#PBS -l nodes=1:ppn=12
#PBS -l walltime=48:00:00
#PBS -l mem=400gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml Velvet

velvetg [options] > outputfile
OpenMPI
Sample job submission script (sub.sh) to run an OpenMPI application:
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=2:ppn=48:AMD
#PBS -l walltime=48:00:00
#PBS -l mem=2gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml OpenMPI/2.1.1-GCC-6.4.0-2.28

echo
echo "Job ID: $PBS_JOBID"
echo "Queue: $PBS_QUEUE"
echo "Cores: $PBS_NP"
echo "Nodes: $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo "mpirun: $(which mpirun)"
echo

mpirun ./a.out > outputfile
OpenMP
Sample job submission script (sub.sh) to run an OpenMP (threaded) application:
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=10
#PBS -l walltime=48:00:00
#PBS -l mem=30gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=10
export OMP_PROC_BIND=true

echo
echo "Job ID: $PBS_JOBID"
echo "Queue: $PBS_QUEUE"
echo "Cores: $PBS_NP"
echo "Nodes: $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo

time ./a.out > outputfile
GPU/CUDA
Sample job submission script (sub.sh) to run a GPU-enabled (e.g. CUDA) application:
#PBS -S /bin/bash
#PBS -q gpu_q
#PBS -N testjob
#PBS -l nodes=1:ppn=4:gpus=1
#PBS -l walltime=48:00:00
#PBS -l mem=2gb
#PBS -M username@uga.edu
#PBS -m abe

cd $PBS_O_WORKDIR

ml CUDA/9.0.176

echo
echo "Job ID: $PBS_JOBID"
echo "Queue: $PBS_QUEUE"
echo "Cores: $PBS_NP"
echo "Nodes: $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo

time ./a.out > outputfile
Note the additional gpus=1 option in the #PBS -l nodes=... line. Use this option to request the number of GPU cards for the job (e.g. to request 2 GPU cards, use gpus=2).
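For example, the resource request line of the script above, modified to ask for 2 GPU cards (adjust ppn and the other requests to your application's needs), would read:

#PBS -l nodes=1:ppn=4:gpus=2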
The GPU devices allocated to a job are listed in a file whose name is stored in the queueing system environment variable PBS_GPUFILE. You can print this file name by adding the following command to your job submission script:
echo $PBS_GPUFILE
To get a list of the device numbers of the GPUs allocated to your job, separated by a blank space, use the commands:
CUDADEV=$(cat $PBS_GPUFILE | rev | cut -d"u" -f1)
echo "List of devices allocated to this job:"
echo $CUDADEV
To remove the blank space between two device numbers in the CUDADEV variable above, use the commands:
CUDADEV=$(cat $PBS_GPUFILE | rev | cut -d"u" -f1)
GPULIST=$(echo $CUDADEV | sed 's/ //')
echo "List of devices allocated to this job (no blank spaces between devices):"
echo $GPULIST
Some GPU/CUDA applications require a list of the GPU devices as an argument. If the application expects a blank-space-separated device list, pass the $CUDADEV variable; if no blank space is allowed in the list, pass the $GPULIST variable instead.
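As an illustration only (my_cuda_app and its --devices flag are hypothetical placeholders, not an actual application on Sapelo2), the two variables could be passed on the command line like this:

./my_cuda_app --devices $CUDADEV > outputfile   # blank-space-separated device list
./my_cuda_app --devices $GPULIST > outputfile   # device list without blank spaces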
Hybrid MPI/OpenMP Job
Sample job submission script (sub.sh) to run a hybrid parallel job that uses 3 MPI processes with OpenMPI, each MPI process running 12 threads:
#PBS -S /bin/bash
#PBS -j oe
#PBS -q batch
#PBS -N testhybrid
#PBS -l nodes=3:ppn=12
#PBS -l mem=60g
#PBS -l walltime=4:00:00
#PBS -M username@uga.edu
#PBS -m abe

ml OpenMPI/2.1.1-GCC-6.4.0-2.28

echo
echo "Job ID: $PBS_JOBID"
echo "Queue: $PBS_QUEUE"
echo "Cores: $PBS_NP"
echo "Nodes: $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo "mpirun: $(which mpirun)"
echo

cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=12

perl /usr/local/bin/makehostlist.pl $PBS_NODEFILE $PBS_NUM_PPN $PBS_JOBID

mpirun -machinefile host.$PBS_JOBID.list ./a.out
Running an Array Job
Sample job submission script (sub.sh) to submit an array job with 10 elements. In this example, each array job element runs the a.out binary with one of the input files input_0, input_1, ..., input_9.
#PBS -S /bin/bash
#PBS -j oe
#PBS -q batch
#PBS -N myarrayjob
#PBS -l nodes=1:ppn=1
#PBS -l walltime=4:00:00
#PBS -t 0-9

cd $PBS_O_WORKDIR

time ./a.out < input_$PBS_ARRAYID
Running a Singularity Job
For example, to run Trinity using the Singularity image on a high memory (highmem_q) node:
#PBS -S /bin/bash
#PBS -N j_s_trinity
#PBS -q highmem_q
#PBS -l nodes=1:ppn=1
#PBS -l walltime=480:00:00
#PBS -l mem=100gb

cd $PBS_O_WORKDIR

singularity exec /usr/local/singularity-images/trinity-2.5.1--0.simg COMMAND OPTION
where COMMAND OPTION should be replaced by the specific command and its options, for example:
#PBS -S /bin/bash
#PBS -N j_s_trinity
#PBS -q highmem_q
#PBS -l nodes=1:ppn=16
#PBS -l walltime=480:00:00
#PBS -l mem=100gb

cd $PBS_O_WORKDIR

singularity exec /usr/local/singularity-images/trinity-2.5.1--0.simg Trinity --seqType <string> --max_memory 100G --CPU 8 --no_version_check 1>job.out 2>job.err
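The Singularity images referenced above live under /usr/local/singularity-images/ (the path used in these scripts). To check which Trinity image versions are available before writing the script, that directory can be listed, for example:

ls /usr/local/singularity-images/ | grep -i trinity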