Code Compilation on Sapelo2

Where should I compile my code?

IMPORTANT NOTE: Please do not compile code on the login node. Instead, first start an interactive session with qlogin and compile the code on the interactive node.



Code compilation can be done in an interactive session, which you can start with the command

qlogin

For information on how to access a compute node interactively for code compilation, please see [[Running Jobs on Sapelo2 using Slurm]].
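As a minimal sketch (the module version and the file name hello.c are illustrative, not prescribed), a typical compile session looks like this:

<pre class="gcommand">
qlogin                       # start an interactive session on a compute node
ml load GCC/8.3.0            # load a compiler module (see the Compilers section below)
gcc -O2 -o hello hello.c     # compile the code
exit                         # leave the interactive session when done
</pre>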




Compilers

A number of Fortran and C/C++ compilers, as well as Java and scripting languages such as Perl and Python, are available on the Slurm test cluster.

Summary of main Fortran and C/C++ compilers installed:

{| class="wikitable"
|-
! Language !! Portland Group (PGI) !! Intel !! GNU !! OpenMPI !! File extension
|-
| Fortran77 || pgf77 || ifort || || mpif77 || .f
|-
| Fortran90 || pgf90 || ifort || gfortran || mpif90 || .f90
|-
| Fortran95 || pgf95 || ifort || gfortran || mpifort || .f95
|-
| C || pgcc || icc || gcc || mpicc || .c
|-
| C++ || pgCC || icpc || g++ || mpicxx || .C, .cpp, .cc
|}
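For example, once a compiler module is loaded (see below), a serial Fortran 90 code and an MPI C code could be compiled as follows; the file names prog.f90 and prog_mpi.c are placeholders:

<pre class="gcommand">
gfortran -O2 -o prog prog.f90       # serial Fortran 90 code with the GNU compiler
mpicc -O2 -o prog_mpi prog_mpi.c    # MPI C code with the OpenMPI wrapper
</pre>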

The various compiler suites are made available through environment modules.

GNU compiler suites:

The following GCC compiler suite modules are installed:


  • Version 6.4.0, provided by the GCC/6.4.0-2.28 module, includes C, C++, and Fortran compilers.
  • Version 7.2.0, provided by the GCC/7.2.0-2.30 module, includes C, C++, and Fortran compilers.
  • Version 8.3.0, provided by the GCC/8.3.0 module, includes C, C++, and Fortran compilers.
  • Version 9.2.0, provided by the GCC/9.2.0 module, includes C, C++, and Fortran compilers.



PGI compiler suites:

  • Version 17.9, provided by the PGI/17.9 module.


Intel compiler suites:

  • Version 13 SP1, provided by the iccifort/2013_sp1.0.080 module.
  • Version 15.2, provided by the iccifort/2015.2.164-GCC-4.8.5 module.
  • Version 18.0.1.163, provided by the iccifort/2018.1.163-GCC-6.4.0-2.28 module.


LLVM compiler suites:

  • Version 3.8.1, provided by the LLVM/3.8.1-foss-2016b module.
  • Version 4.0.0, provided by the LLVM/4.0.0-foss-2016b module.
  • Version 5.0.1, provided by the LLVM/5.0.1-GCCcore-6.4.0 module.
  • Version 6.0.0, provided by the LLVM/6.0.0-GCCcore-7.2.0 module.


The module spider command can be used to see information on the various modules available. For example, to check all GCC compiler suites installed, use

ml spider gcc

To use any of the compiler suites, please first load the corresponding module. For example, to use the GNU 6.4.0 compiler suite, load the module with

ml load GCC/6.4.0-2.28

Please note that you can only have one compiler module loaded at a time. For more information about Environment Modules, please see lmod.
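Because only one compiler module can be loaded at a time, a safe way to switch compiler suites is to purge the loaded modules first, for example:

<pre class="gcommand">
ml                       # list the modules currently loaded
module purge             # unload all loaded modules
ml load GCC/8.3.0        # load the compiler suite you want to use
</pre>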

Some commonly used compiler options

PGI compiler suite:

{| class="wikitable"
|-
! Option !! Description
|-
| -O0 || Specifies no optimization, recommended for code debugging
|-
| -O1 || Specifies local optimization
|-
| -O2 || Specifies global optimization (this is the default, same as using -O)
|-
| -O3 || Includes -O1, -O2 and more aggressive optimization. Use with care.
|-
| -fast || Chooses generally good optimization options for the platform. Type pgcc -fast -help to see the equivalent options.
|-
| -Mbounds || Performs runtime array bound check, recommended for code debugging
|-
| -g || Produces symbolic debug information in the object files.
|-
| -r8 || Interpret REAL variables as DOUBLE PRECISION.
|-
| -B || Allow C++ style comments in C source code; these begin with ‘//’ and continue until the end of the current line. pgcc only.
|-
| -Kieee || Perform floating-point operations in strict conformance with the IEEE 754 standard. The default compilation is -Knoieee, which uses faster but very slightly less accurate methods.
|-
| -mp || Interpret OpenMP directives to explicitly parallelize regions of code for execution by multiple threads.
|-
| -acc || Enable OpenACC pragmas and directives to explicitly parallelize regions of code for execution by accelerator devices. Use with the -ta option.
|}

NOTE: When using optimization options, please check whether your code becomes more efficient (in some cases optimization options slow the code down) and whether it still generates correct results. Many other compiler options are available. For more information on the PGI compilers, you can view the manual pages with commands such as man pgf90 and man pgcc after loading the PGI module.
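As an illustration of these options (the source file name mycode.f90 is a placeholder), a debugging build and an optimized build with the PGI compilers might look like:

<pre class="gcommand">
ml load PGI/17.9
pgf90 -O0 -g -Mbounds -o mycode_debug mycode.f90   # debugging build with runtime bounds checking
pgf90 -fast -o mycode mycode.f90                   # optimized build
</pre>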

Intel compiler suite:

{| class="wikitable"
|-
! Option !! Description
|-
| -O0 || Specifies no optimization, recommended for code debugging
|-
| -O2 || Enables optimizations for speed. This is the generally recommended optimization level.
|-
| -O3 || Performs -O2 optimizations and more aggressive loop transformations. Use with care.
|-
| -fast || Chooses generally good optimization options for the platform. See the icc or ifort man page for the options it enables.
|-
| -check bounds || Performs runtime array bound check (ifort), recommended for code debugging
|}
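For example (again with placeholder file names), after loading one of the Intel compiler modules listed above:

<pre class="gcommand">
ml load iccifort/2018.1.163-GCC-6.4.0-2.28
icc -O2 -o mycode mycode.c                               # recommended optimization level
ifort -O0 -g -check bounds -o mycode_debug mycode.f90    # debugging build with bounds checking
</pre>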





Compiler Toolchains

On Sapelo2 we use the EasyBuild framework to install software applications. The EasyBuild toolchains are also available for users to compile their own code. Each toolchain provides a compiler suite and some basic libraries, such as MPI, BLAS, LAPACK, FFTW, etc.


foss toolchains:

Most software applications are installed with the foss toolchain, where foss is short for “Free and Open Source Software”.

The foss toolchain consists of:

  • the GNU compiler suite (GCC)
  • the OpenMPI libraries
  • the OpenBLAS (with LAPACK) libraries
  • the FFTW libraries
  • the ScaLAPACK libraries

The following foss toolchains are available:

  • foss/2016b, includes GCC 5.4.0, OpenMPI 1.10.3, OpenBLAS 0.2.18, FFTW 3.3.4, ScaLAPACK 2.0.2
  • foss/2018a, includes GCC 6.4.0, OpenMPI 2.1.2, OpenBLAS 0.2.20, FFTW 3.3.7, ScaLAPACK 2.0.2

When you load a toolchain, all its components will be loaded. For example:

ml foss/2016b

will load these modules:

  • GCCcore/5.4.0
  • binutils/2.26-GCCcore-5.4.0
  • GCC/5.4.0-2.26
  • numactl/2.0.11-GCC-5.4.0-2.26
  • hwloc/1.11.3-GCC-5.4.0-2.26
  • OpenMPI/1.10.3-GCC-5.4.0-2.26
  • OpenBLAS/0.2.18-GCC-5.4.0-2.26-LAPACK-3.6.1
  • gompi/2016b
  • FFTW/3.3.4-gompi-2016b
  • ScaLAPACK/2.0.2-gompi-2016b-OpenBLAS-0.2.18-LAPACK-3.6.1
  • foss/2016b
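For instance, to build MPI codes with this toolchain (file names are placeholders):

<pre class="gcommand">
ml foss/2016b
mpicc -O2 -o mpi_prog_c mpi_prog.c        # C code, OpenMPI wrapper around GCC 5.4.0
mpif90 -O2 -o mpi_prog_f mpi_prog.f90     # Fortran 90 equivalent
</pre>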


iomkl toolchains:

The iomkl toolchain consists of

  • the Intel compiler suite
  • the OpenMPI libraries
  • the Intel Math Kernel Libraries (MKL)

The following iomkl toolchains are available:

  • iomkl/2013_sp1.0.080, includes the Intel 2013.SP1 compiler suite, OpenMPI 1.8.4, MKL 11.1.1.106
  • iomkl/2015.02, includes the Intel 2015.2.164 compiler suite, OpenMPI 1.8.4, MKL 11.2.2.164
  • iomkl/2018a, includes the Intel 2018.1.163 compiler suite, OpenMPI 2.1.2, MKL 2018.1.163
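As a sketch (placeholder file names), a code that calls MKL routines could be built with the iomkl toolchain using the Intel compiler's -mkl linking option:

<pre class="gcommand">
ml iomkl/2018a
icc -O2 -mkl -o mycode mycode.c       # serial code linked against MKL
mpicc -O2 -o mpi_prog mpi_prog.c      # MPI code via the OpenMPI wrapper
</pre>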


imvmkl toolchains:

The imvmkl toolchain consists of

  • the Intel compiler suite
  • the MVAPICH2 libraries
  • the Intel Math Kernel Libraries (MKL)

The following imvmkl toolchains are available:

  • imvmkl/2013_sp1.0.080, includes the Intel 2013_sp1.0.080 compiler suite, MVAPICH2 2.2, MKL 11.1.1.106
  • imvmkl/2015.02, includes the Intel 2015.2.164 compiler suite, MVAPICH2 2.2, MKL 11.2.2.164
  • imvmkl/2018a, includes the Intel 2018.1.163 compiler suite, MVAPICH2 2.2, MKL 2018.1.163


gmvolf toolchains:

The gmvolf toolchain consists of:

  • the GNU compiler suite (GCC)
  • the MVAPICH2 libraries
  • the OpenBLAS (with LAPACK) libraries
  • the FFTW libraries
  • the ScaLAPACK libraries

The following gmvolf toolchains are available:

  • gmvolf/2016b, includes GCC 5.4.0, MVAPICH2 2.2, OpenBLAS 0.2.18, FFTW 3.3.4, ScaLAPACK 2.0.2


