OpenMP

From Research Computing Center Wiki

Latest revision as of 10:48, 6 September 2023


===Compiling OpenMP code on Sapelo2===

The compilers installed on Sapelo2 support shared memory applications using OpenMP. Here are the compiler options to include when using OpenMP:

{| class="wikitable"
! scope="col" | Compiler
! scope="col" | Commands
! scope="col" | Compile Option
|-
| PGI || pgcc, pgCC, pgfortran, etc. || -mp
|-
| Intel || icc, icpc, ifort || -openmp
|-
| GNU || gcc, g++, gfortran, etc. || -fopenmp
|}

For more information about the compilers on Sapelo2, please see Code Compilation on Sapelo2.

===Defining number of OpenMP threads===

The number of OpenMP threads can be specified using the environment variable '''OMP_NUM_THREADS'''. For example, to define 4 threads:

For bash/sh:

<pre class="gcommand">
export OMP_NUM_THREADS=4
</pre>

For csh/tcsh:

<pre class="gcommand">
setenv OMP_NUM_THREADS 4
</pre>

===Setting thread-core binding===

In general, an OpenMP code will run much more efficiently when each thread is pinned to a given core on a compute node. Please set the environment variable '''OMP_PROC_BIND=true''' to specify the threads' affinity policy (thread-core binding).

For bash/sh:

<pre class="gcommand">
export OMP_PROC_BIND=true
</pre>

For csh/tcsh:

<pre class="gcommand">
setenv OMP_PROC_BIND true
</pre>


===Sample script to run a shared memory job using OpenMP===

Sample job submission script (sub.sh) to run a program that uses 4 OpenMP threads:

<pre class="gcommand">
#!/bin/bash
#SBATCH --job-name=mctest             # Job name
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --ntasks=1                    # Run a single task
#SBATCH --cpus-per-task=4             # Number of CPU cores per task
#SBATCH --mem=4gb                     # Job memory request
#SBATCH --time=02:00:00               # Time limit hrs:min:sec
#SBATCH --output=mctest.%j.out        # Standard output log
#SBATCH --error=mctest.%j.err         # Standard error log
#SBATCH --mail-type=END,FAIL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=username@uga.edu  # Where to send mail

cd $SLURM_SUBMIT_DIR

export OMP_NUM_THREADS=4
export OMP_PROC_BIND=true

module load foss/2022a                # load the appropriate module, e.g. foss/2022a

time ./a.out
</pre>

'''Please note''' that the number of cores requested by this job (--cpus-per-task=4) is the same as the number of OpenMP threads that the application will use (export OMP_NUM_THREADS=4).

===Sample job submission command on Sapelo2===

To submit a shared memory job that uses 4 OpenMP threads:

<pre class="gcommand">
sbatch sub.sh
</pre>

For more information about running jobs on Sapelo2, please see Running Jobs on Sapelo2.