==Sample batch job submission scripts on Sapelo2==
[[Sample batch job submission scripts on SapSlurm]]
 
====Regular serial job====
Sample job submission script (sub.sh) to run a serial (single-core) application on an AMD regular node in the batch queue:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=1:AMD
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml Bowtie2/2.3.3-foss-2016b
 
bowtie2 -p 1 [options] > outputfile
</pre>
 
If the application can run on either an Intel or an AMD regular node, omit the node type:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=1
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml Bowtie2/2.3.3-foss-2016b
 
bowtie2 -p 1 [options] > outputfile
</pre>
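To submit the job to the queueing system, assuming the script above was saved as sub.sh in the current directory, use qsub; qstat shows the status of your jobs:

<pre class="gscript">
qsub sub.sh

# list your jobs in the queue (replace username with your cluster username)
qstat -u username
</pre>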
 
====Regular threaded parallel job====
Sample job submission script (sub.sh) to run a multi-threaded application with 4 threads on an AMD regular node in the batch queue:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=4:AMD
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml Bowtie2/2.3.3-foss-2016b
 
bowtie2 -p 4 [options] > outputfile
</pre>
 
If the application can run on either an Intel or an AMD regular node, omit the node type:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=4
#PBS -l walltime=480:00:00
#PBS -l mem=10gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml Bowtie2/2.3.3-foss-2016b
 
bowtie2 -p 4 [options] > outputfile
</pre>
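The thread count passed to the application should match the ppn value requested in the header. As a sketch, the hard-coded value can be replaced with the $PBS_NP variable (also used in the examples below), which holds the number of cores allocated to the job:

<pre class="gscript">
# let the thread count follow the ppn request instead of hard-coding it
bowtie2 -p $PBS_NP [options] > outputfile
</pre>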
 
====High memory job====
 
Sample job submission script (sub.sh) to run an application that needs to use an Intel HIGHMEM node in the highmem_q queue:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q highmem_q
#PBS -N testjob
#PBS -l nodes=1:ppn=12:Intel
#PBS -l walltime=48:00:00
#PBS -l mem=400gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml Velvet
 
velvetg [options] > outputfile
</pre>
 
If the application can run on either an Intel or an AMD HIGHMEM node, omit the node type:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q highmem_q
#PBS -N testjob
#PBS -l nodes=1:ppn=12
#PBS -l walltime=48:00:00
#PBS -l mem=400gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml Velvet
 
velvetg [options] > outputfile
</pre>
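To check how much memory a running job is actually using, one option is to query the job with qstat -f; this is a sketch, assuming the Torque server reports the resources_used fields for running jobs:

<pre class="gscript">
# replace jobid with the job number returned by qsub
qstat -f jobid | grep resources_used
</pre>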
 
====OpenMPI====
 
Sample job submission script (sub.sh) to run an MPI application compiled with OpenMPI:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=2:ppn=48:AMD
#PBS -l walltime=48:00:00
#PBS -l pmem=2gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml OpenMPI/2.1.1-GCC-6.4.0-2.28
 
echo
echo "Job ID: $PBS_JOBID"
echo "Queue:  $PBS_QUEUE"
echo "Cores:  $PBS_NP"
echo "Nodes:  $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo "mpirun: $(which mpirun)"
echo
 
mpirun ./a.out > outputfile
 
</pre>
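When OpenMPI is built with Torque support, mpirun picks up the allocated hosts and core count automatically, as in the example above. They can also be given explicitly; a minimal sketch using the $PBS_NP and $PBS_NODEFILE variables printed above:

<pre class="gscript">
# explicit form: one MPI process per allocated core, hosts taken from the node file
mpirun -np $PBS_NP -machinefile $PBS_NODEFILE ./a.out > outputfile
</pre>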
 
====OpenMP====
 
Sample job submission script (sub.sh) to run an OpenMP (threaded) application:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testjob
#PBS -l nodes=1:ppn=10
#PBS -l walltime=48:00:00
#PBS -l mem=30gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
export OMP_NUM_THREADS=10
export OMP_PROC_BIND=true
 
echo
echo "Job ID: $PBS_JOBID"
echo "Queue:  $PBS_QUEUE"
echo "Cores:  $PBS_NP"
echo "Nodes:  $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo
 
time ./a.out > outputfile
</pre>
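To keep the thread count in sync with the ppn request, OMP_NUM_THREADS can be derived from the number of cores allocated by the queueing system rather than hard-coded. A minimal sketch, assuming $PBS_NP reflects the allocated core count as in the example above:

<pre class="gscript">
export OMP_NUM_THREADS=$PBS_NP
export OMP_PROC_BIND=true
</pre>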
 
====GPU/CUDA====
 
Sample job submission script (sub.sh) to run a GPU-enabled (e.g. CUDA) application:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -q gpu_q
#PBS -N testjob
#PBS -l nodes=1:ppn=4:gpus=1
#PBS -l walltime=48:00:00
#PBS -l mem=2gb
#PBS -M username@uga.edu
#PBS -m abe
 
cd $PBS_O_WORKDIR
 
ml CUDA/9.0.176
 
echo
echo "Job ID: $PBS_JOBID"
echo "Queue:  $PBS_QUEUE"
echo "Cores:  $PBS_NP"
echo "Nodes:  $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo
 
time ./a.out > outputfile
 
</pre>
 
'''Note:''' The additional '''gpus=1''' option in the nodes resource line requests the number of GPU cards for the job (e.g. to request 2 GPU cards, use gpus=2).
 
The GPU devices allocated to a job are listed in a file whose name is stored in the queueing system environment variable PBS_GPUFILE. To print this file name, add the following command to your job submission script:
<pre class="gscript">
echo $PBS_GPUFILE
</pre>
 
To get a list of the GPU device numbers allocated to your job, separated by blank spaces, use the commands:
<pre class="gscript">
# each line of $PBS_GPUFILE ends in "gpuN"; reversing the line and cutting at "u" extracts the (single-digit) device number
CUDADEV=$(cat $PBS_GPUFILE | rev | cut -d"u" -f1)
 
echo "List of devices allocated to this job:"
 
echo $CUDADEV
</pre>
 
To remove the blank spaces between the device numbers in the CUDADEV variable above, use the commands:
<pre class="gscript">
CUDADEV=$(cat $PBS_GPUFILE | rev | cut -d"u" -f1)
 
# strip all blank spaces from the device list
GPULIST=$(echo $CUDADEV | sed 's/ //g')
 
echo "List of devices allocated to this job (no blank spaces between devices):"
 
echo $GPULIST
</pre>
 
Some GPU/CUDA applications require that a list of the GPU devices be given as an argument. If the application expects a blank-space-separated device list, pass the $CUDADEV variable as the argument; if no blank space is allowed in the list, pass the $GPULIST variable instead.
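For applications that do not take a device list argument but honor the standard CUDA_VISIBLE_DEVICES environment variable, a comma-separated list can be built from the same value. This is a sketch; it assumes the device numbers in PBS_GPUFILE match the CUDA device ordering seen by the application:

<pre class="gscript">
CUDADEV=$(cat $PBS_GPUFILE | rev | cut -d"u" -f1)

# convert the blank-space-separated list to a comma-separated one
export CUDA_VISIBLE_DEVICES=$(echo $CUDADEV | tr ' ' ',')
</pre>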
 
====Hybrid MPI/shared-memory using OpenMPI====
 
Sample job submission script (sub.sh) to run a hybrid parallel job with OpenMPI, using 3 MPI processes (one per node), where each MPI process runs 12 OpenMP threads:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -j oe
#PBS -q batch
#PBS -N testhybrid
#PBS -l nodes=3:ppn=12
#PBS -l pmem=5gb
#PBS -l walltime=4:00:00
#PBS -M username@uga.edu
#PBS -m abe
 
ml OpenMPI/2.1.1-GCC-6.4.0-2.28
 
echo
echo "Job ID: $PBS_JOBID"
echo "Queue:  $PBS_QUEUE"
echo "Cores:  $PBS_NP"
echo "Nodes:  $(cat $PBS_NODEFILE | sort -u | tr '\n' ' ')"
echo "mpirun: $(which mpirun)"
echo
 
cd $PBS_O_WORKDIR
 
export OMP_NUM_THREADS=12
 
perl /usr/local/bin/makehostlist.pl $PBS_NODEFILE $PBS_NUM_PPN $PBS_JOBID
 
mpirun -machinefile  host.$PBS_JOBID.list ./a.out
 
</pre>
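Environment variables set in the job script are not necessarily propagated to MPI processes started on other nodes. With OpenMPI's mpirun they can be exported explicitly with the -x option; a sketch of the launch line above with the thread count exported:

<pre class="gscript">
mpirun -x OMP_NUM_THREADS -machinefile host.$PBS_JOBID.list ./a.out
</pre>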
 
====Running an array job====
 
Sample job submission script (sub.sh) to submit an array job with 10 elements. In this example, each array element runs the a.out binary with one of the input files input_0, input_1, ..., input_9, selected by the $PBS_ARRAYID environment variable.
<pre class="gscript">
#PBS -S /bin/bash
#PBS -j oe
#PBS -q batch
#PBS -N myarrayjob
#PBS -l nodes=1:ppn=1
#PBS -l mem=5gb
#PBS -l walltime=4:00:00
#PBS -t 0-9
 
cd $PBS_O_WORKDIR
 
time ./a.out < input_$PBS_ARRAYID
 
</pre>
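To keep the output of the array elements separate, each element can also write to a file named after its index. A minimal sketch using the same $PBS_ARRAYID variable:

<pre class="gscript">
# each array element writes its own output file, e.g. output_0, output_1, ..., output_9
time ./a.out < input_$PBS_ARRAYID > output_$PBS_ARRAYID
</pre>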
 
====Running a Singularity job====
 
For example, to run Trinity using its Singularity image on a high-memory (highmem_q) node:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -N j_s_trinity
#PBS -q highmem_q
#PBS -l nodes=1:ppn=1
#PBS -l walltime=480:00:00
#PBS -l mem=100gb
cd $PBS_O_WORKDIR
 
singularity exec /usr/local/singularity-images/trinity-2.5.1--0.simg COMMAND OPTION
</pre>
 
where COMMAND and OPTION should be replaced by the specific command and its options, for example:
 
<pre class="gscript">
#PBS -S /bin/bash
#PBS -N j_s_trinity
#PBS -q highmem_q
#PBS -l nodes=1:ppn=16
#PBS -l walltime=480:00:00
#PBS -l mem=100gb
cd $PBS_O_WORKDIR
 
singularity exec /usr/local/singularity-images/trinity-2.5.1--0.simg Trinity --seqType <string> --max_memory 100G --CPU 8 --no_version_check 1>job.out 2>job.err 
</pre>
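To verify what the container provides before submitting a job, the image can also be queried directly; this is a sketch and assumes Trinity accepts the --version flag:

<pre class="gscript">
# print the Trinity version bundled in the image
singularity exec /usr/local/singularity-images/trinity-2.5.1--0.simg Trinity --version
</pre>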
