Code Compilation on Sapelo2

[[Category:Sapelo2]]


==Where should I compile my code?==


====<span style="color:darkred"><big>IMPORTANT: Please DO NOT compile source code on the login node. Instead, compile your code in an interactive session started with the interact command.</big></span>====




Code compilation can be done in an interactive session. To start an interactive session, first log in to Sapelo2 and from there issue the <code>'''[[Running Jobs on Sapelo2#How to open an interactive session|interact]]'''</code> command
<pre class="gcommand">
interact
</pre>
If you plan to run the code on an AMD node, you can start an interactive session on an AMD node to compile the code. To start an interactive session on an AMD node, use the command
<pre class="gcommand">
interact --constraint AMD
</pre>

If you plan to run the code on an Intel node, you can start an interactive session on an Intel node to compile the code. To start an interactive session on an Intel node, use the command
<pre class="gcommand">
interact --constraint Intel
</pre>


For detailed information on how to access the compute node interactively for code compilation, please see [[Running Jobs on Sapelo2]].
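Putting these steps together, a minimal compile workflow could look like the sketch below; the source file name <code>mycode.c</code> is hypothetical, and the compiler module shown is just one example from the sections below.
<pre class="gcommand">
interact
module load GCC/11.3.0
gcc -O2 -o mycode mycode.c
</pre>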


----
==Compilers==


A number of Fortran and C/C++ compilers, as well as Java and scripting languages such as Perl and Python, are available on Sapelo2.


 
=== Summary of main Fortran and C/C++ compilers installed ===

{| width="100%" border="1" cellspacing="0" cellpadding="2" align="center" class="wikitable unsortable"
|-
! scope="col" | Language
! scope="col" | Portland Group (PGI)
! scope="col" | Intel
! scope="col" | GNU
! scope="col" | OpenMPI
! scope="col" | File extension
|-
| Fortran77 || pgfortran || ifort || gfortran || mpif77 || .f
|-
| Fortran90 || pgfortran || ifort || gfortran || mpif90 || .f90
|-
| Fortran95 || pgfortran || ifort || gfortran || mpifort || .f95
|-
| Fortran2003 || pgfortran || ifort || gfortran || mpifort || .f
|-
| C || pgcc || icc || gcc || mpicc || .c
|-
| C++ || pgc++ || icpc || g++ || mpicxx || .C, .cpp, .cc
|}

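For example, once a compiler and MPI module are loaded (see the sections below), a Fortran90 source file could be built either with the native compiler or with the corresponding MPI wrapper; the file names here are hypothetical.
<pre class="gcommand">
gfortran -O2 -o mycode mycode.f90
mpif90 -O2 -o mycode_mpi mycode_mpi.f90
</pre>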
The various compiler suites are provided by their environment modules.


=== GNU compiler suites ===

The following command will show all the modules that provide GCC compiler suites:
<pre class="gcommand">
module spider GCC
</pre>

Sample partial output of this command:
<pre class="gcomment">
[shtsai@d2-13 ~]$ module spider GCC

-----------------------------------------------------------------------------------------------------------------
  GCC:
-----------------------------------------------------------------------------------------------------------------
    Description:
      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as
      libraries for these languages (libstdc++, libgcj,...).

     Versions:
        GCC/8.3.0
        GCC/10.2.0
        GCC/11.2.0
        GCC/11.3.0
        GCC/12.2.0
     Other possible modules matches:
        GCCcore
-----------------------------------------------------------------------------------------------------------------
</pre>


This output indicates that the following versions of GCC compilers are available:


*Version 8.3.0, with binutils 2.32, provided by the GCC/8.3.0 module, includes C, C++, and Fortran compilers.
*Version 10.2.0, with binutils 2.35, provided by the GCC/10.2.0 module, includes C, C++, and Fortran compilers.
*Version 11.2.0, with binutils 2.37, provided by the GCC/11.2.0 module, includes C, C++, and Fortran compilers.
*Version 11.3.0, with binutils 2.38, provided by the GCC/11.3.0 module, includes C, C++, and Fortran compilers.
*Version 12.2.0, with binutils 2.39, provided by the GCC/12.2.0 module, includes C, C++, and Fortran compilers.


We suggest that you run the <code>module spider GCC</code> command to check an updated list of GCC compilers available on the cluster.
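The <code>module spider</code> command also accepts a full module name if you want details on one specific version, for example:
<pre class="gcommand">
module spider GCC/11.3.0
</pre>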


 
=== Intel compiler suites ===
The following command will show all the modules that provide Intel compiler suites:
<pre class="gcommand">
module spider intel-compilers
</pre>

Sample output of this command:
<pre class="gcomment">
[shtsai@d2-13 ~]$ ml spider intel-compilers

-----------------------------------------------------------------------------------------------------------------
  intel-compilers:
-----------------------------------------------------------------------------------------------------------------
    Description:
      Intel C, C++ & Fortran compilers (classic and oneAPI)

     Versions:
        intel-compilers/2021.4.0
        intel-compilers/2022.1.0
        intel-compilers/2022.2.1
        intel-compilers/2023.1.0

-----------------------------------------------------------------------------------------------------------------
</pre>

This output indicates that the following versions of the Intel compiler suites are available:


*Version 2021.4.0, provided by the intel-compilers/2021.4.0 module.
*Version 2022.1.0, provided by the intel-compilers/2022.1.0 module.
*Version 2022.2.1, provided by the intel-compilers/2022.2.1 module.
*Version 2023.1.0, provided by the intel-compilers/2023.1.0 module.
*Version 2019.5.281, provided by the iccifort/2019.5.281 module.


We suggest that you run the <code>module spider intel-compilers</code> or <code>module spider iccifort</code> command to check an updated list of Intel compilers available on the cluster.


=== LLVM compiler suites ===
The following command will show all the modules that provide LLVM compilers:
<pre class="gcommand">
module spider LLVM
</pre>
Sample output of this command:
<pre class="gcomment">
[shtsai@d2-13 ~]$ module spider LLVM
------------------------------------------------------------------------------------------------------------------------------------
  LLVM:
------------------------------------------------------------------------------------------------------------------------------------
    Description:
      The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified
      code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an
      optimizer and code generator.


    Versions:
        LLVM/12.0.1-GCCcore-11.2.0
        LLVM/14.0.3-GCCcore-11.3.0
-----------------------------------------------------------------------------------------------------------------------------------
</pre>


=== How to load a compiler module ===


To use any of the compiler suites, please first load the corresponding module. For example, to use the GNU 11.3.0 compiler suite, load the module with
<pre class="gcommand">
module load GCC/11.3.0
</pre>
Once this module is loaded, the gcc, g++, and gfortran compilers for GCC v. 11.3.0 will be available in your path.
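You can verify which compiler the module has placed on your path, for example:
<pre class="gcommand">
which gcc
gcc --version
</pre>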


Please note that you can only have one compiler module loaded at a time.
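Because only one compiler module can be active, loading a different version will typically swap out the one that is currently loaded rather than add a second one; for example:
<pre class="gcommand">
module load GCC/11.3.0
module load GCC/12.2.0
module list
</pre>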


=== Some commonly used compiler options ===


====PGI compiler suite (To be added on Sapelo2)====
 
{| class="wikitable unsortable" width="100%" border="1" cellspacing="0" cellpadding="2" align="center"
|-
! scope="col" |Option
! scope="col" |Description
|-
| -O0||Specifies no optimization, recommended for code debugging
|-
| -O1||Specifies local optimization
|-
| -O2||Specifies global optimization (this is the default, same as using -O)
|-
| -O3||Includes -O1, -O2 and more aggressive optimization. Use with care.
|-
| -fast ||Chooses generally good optimization options for the platform. Type pgcc -fast -help to see the equivalent options.
|-
| -Mbounds ||Performs runtime array bound check, recommended for code debugging
|-
| -g||Produces symbolic debug information in the object files.
|-
| -r8||Interpret REAL variables as DOUBLE PRECISION.
|-
| -B||Allow C++ style comments in C source code; these begin with ‘//’ and continue until the end of the current line. pgcc only.
|-
| -Kieee ||Perform floating-point operations in strict conformance with the IEEE 754 standard. The default compilation is -Knoieee, which uses faster but very slightly less accurate methods.
|-
| -mp||Interpret OpenMP directives to explicitly parallelize regions of code for execution by multiple threads
|-
| -acc ||Enable OpenACC pragmas and directives to explicitly parallelize regions of code for execution by accelerator devices. Use with the -ta option
|}


'''NOTE'''
When using optimization options, please check if your code becomes more efficient (in some cases optimization options will slow the code down) and if it still generates correct results. Many other compiler options are available. For more information on the PGI compilers, you can view the manual pages with the commands '''man pgf90''', '''man pgcc''', etc, after loading the PGI module.
 
====Intel compiler suite====
{| class="wikitable unsortable" width="100%" border="1" cellspacing="0" cellpadding="2" align="center"
|-
! scope="col" |Option
! scope="col" |Description
|-
| -O0||Specifies no optimization, recommended for code debugging
|-
| -O2 ||Enables optimizations for speed. This is the generally recommended optimization level.
|-
| -O3||Performs -O2 optimizations and more aggressive loop transformations. Use with care.
|-
| -fast ||Maximizes speed across the entire program by enabling several aggressive optimization options.
|-
| -check bounds ||Performs runtime array bound checking (Fortran compiler); recommended for code debugging
|}
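As a sketch of how these options might be combined (the source file <code>myprog.f90</code> is hypothetical, and an Intel compiler or intel toolchain module must be loaded first):
<pre class="gcommand">
ifort -O2 -o myprog myprog.f90
ifort -O0 -g -check bounds -o myprog_debug myprog.f90
</pre>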


====GNU compiler suite====
{| class="wikitable unsortable" width="100%" border="1" cellspacing="0" cellpadding="2" align="center"
|-
! scope="col" |Option
! scope="col" |Description
|-
| -O0||Specifies no optimization, recommended for code debugging
|-
| -O2 ||Enables optimizations for speed. This is the generally recommended optimization level.
|-
| -O3||Performs -O2 optimizations and more aggressive loop transformations. Use with care.
|-
| -std=||Determine the language standard. This option is currently only supported when compiling C or C++.
|-
| -fopenmp||Enable handling of OpenMP directives "#pragma omp" in C/C++ and "!$omp" in Fortran.
|-
| -fopenacc||Enable handling of OpenACC directives "#pragma acc" in C/C++ and "!$acc" in Fortran.
|-
|<nowiki>-Wpedantic</nowiki>||Issue all the warnings demanded by strict ISO C and ISO C++; reject all programs that use forbidden extensions, and some other programs that do not follow ISO C and ISO C++.
|-
| -Wall||This enables all the warnings about constructions that some users consider questionable.
|}
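As a sketch, an OpenMP C program could be compiled with the GNU compilers along these lines (the source file <code>omp_prog.c</code> is hypothetical):
<pre class="gcommand">
gcc -O2 -fopenmp -Wall -o omp_prog omp_prog.c
</pre>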


----
[[#top|Back to Top]]


 
==Compiler Toolchains==
 
On Sapelo2 we use the [https://easybuild.readthedocs.io/en/latest/ EasyBuild] framework to install software applications. The EasyBuild toolchains are also available for users to compile their own code. Each toolchain provides a compiler suite and some basic libraries, such as MPI, BLAS, LAPACK, FFTW, etc.


For more information about compiler toolchains, please see [[Available Toolchains and Toolchain Compatibility]].


===foss toolchains===
 
Most software applications are installed with the '''foss''' toolchain, where '''foss''' is short for “Free and Open Source Software”.


The foss toolchain consists of:

*the GNU Compiler Collection (GCC, https://gcc.gnu.org/), i.e. gcc (C), g++ (C++) and gfortran (Fortran)
*the OpenMPI library (https://www.open-mpi.org/)
*the OpenBLAS (http://www.openblas.net/) + LAPACK (http://netlib.org/lapack) libraries
*the ScaLAPACK (http://netlib.org/scalapack) library
*the FFTW library (http://fftw.org/)


You can check the foss toolchain modules that are installed on the cluster with the command
<pre class="gcommand">
module spider foss
</pre>

When you load a foss toolchain, all its components will be loaded. For example:
<pre class="gcommand">
[shtsai@d2-13 ~]$ module list
No modules loaded
[shtsai@d2-13 ~]$ module load foss/2022a
[shtsai@d2-13 ~]$ module list
 
Currently Loaded Modules:
  1) GCCcore/11.3.0                5) numactl/2.0.14-GCCcore-11.3.0      9) hwloc/2.7.1-GCCcore-11.3.0     13) libfabric/1.15.1-GCCcore-11.3.0  17) OpenBLAS/0.3.20-GCC-11.3.0 21) FFTW.MPI/3.3.10-gompi-2022a
  2) zlib/1.2.12-GCCcore-11.3.0    6) XZ/5.2.5-GCCcore-11.3.0          10) OpenSSL/1.1                    14) PMIx/4.1.2-GCCcore-11.3.0       18) FlexiBLAS/3.2.0-GCC-11.3.0  22) ScaLAPACK/2.2.0-gompi-2022a-fb
  3) binutils/2.38-GCCcore-11.3.0  7) libxml2/2.9.13-GCCcore-11.3.0     11) libevent/2.1.12-GCCcore-11.3.0  15) UCC/1.0.0-GCCcore-11.3.0        19) FFTW/3.3.10-GCC-11.3.0      23) foss/2022a
  4) GCC/11.3.0                    8) libpciaccess/0.16-GCCcore-11.3.0  12) UCX/1.12.1-GCCcore-11.3.0       16) OpenMPI/4.1.4-GCC-11.3.0        20) gompi/2022a
</pre>
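With a foss toolchain loaded, the GCC compilers and the OpenMPI wrappers are on your path, so an MPI program could be built roughly as follows (a sketch; the source file name is hypothetical):
<pre class="gcommand">
module load foss/2022a
mpicc -O2 -o mpi_prog mpi_prog.c
</pre>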
 
===intel toolchains===
The intel toolchain consists of:

*the Intel compiler suite
*the Intel MPI libraries
*the Intel Math Kernel Libraries (MKL)

You can check the intel toolchain modules that are installed on the cluster with the command
<pre class="gcommand">
module spider intel
</pre>

When you load an intel toolchain, all its components will be loaded. For example:
<pre class="gcommand">
[shtsai@d2-13 ~]$ module list
No modules loaded
[shtsai@d2-13 ~]$ module load intel/2022a
[shtsai@d2-13 ~]$ module list

Currently Loaded Modules:
  1) GCCcore/11.3.0              3) binutils/2.38-GCCcore-11.3.0  5) numactl/2.0.14-GCCcore-11.3.0  7) impi/2021.6.0-intel-compilers-2022.1.0  9) iimpi/2022a                    11) intel/2022a
  2) zlib/1.2.12-GCCcore-11.3.0  4) intel-compilers/2022.1.0      6) UCX/1.12.1-GCCcore-11.3.0      8) imkl/2022.1.0                          10) imkl-FFTW/2022.1.0-iimpi-2022a
</pre>
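With an intel toolchain loaded, the Intel compilers and the Intel MPI wrappers become available. The sketch below shows one way to build an MPI program and an MKL-linked program; the source file names are hypothetical, and <code>-qmkl</code> is the MKL link flag used by recent Intel compilers.
<pre class="gcommand">
module load intel/2022a
mpiicc -O2 -o mpi_prog mpi_prog.c
icc -O2 -qmkl -o blas_prog blas_prog.c
</pre>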


<!--
===iomkl toolchains===
The iomkl toolchain consists of:

*the Intel compiler suite
*the OpenMPI libraries
*the Intel Math Kernel Libraries (MKL)

You can check the iomkl toolchain modules that are installed on the cluster with the command
<pre class="gcommand">
module spider iomkl
</pre>

The iomkl toolchains available on the cluster include:

*iomkl/2013_sp1.0.080, includes the Intel 2013.SP1 compiler suite, OpenMPI 1.8.4, MKL 11.1.1.106
*iomkl/2015.02, includes the Intel 2015.2.164 compiler suite, OpenMPI 1.8.4, MKL 11.2.2.164
*iomkl/2018a, includes the Intel 2018.1.163 compiler suite, OpenMPI 2.1.2, MKL 2018.1.163
-->


 
<!--
'''imvmkl toolchains:'''

*imvmkl/2015.02, includes the Intel 2015.2.164 compiler suite, MVAPICH2 2.2, MKL 11.2.2.164
*imvmkl/2018a, includes the Intel 2018.1.163 compiler suite, MVAPICH2 2.2, MKL 2018.1.163
-->
<!--
When you load an iomkl toolchain, all its components will be loaded. For example:
<pre class="gcommand">
[shtsai@b1-1 ~]$ module list
No modules loaded
[shtsai@b1-1 ~]$ module load iomkl/2018a
[shtsai@b1-1 ~]$ module list

Currently Loaded Modules:
1) GCCcore/6.4.0                4) icc/2018.1.163-GCC-6.4.0-2.28        7) numactl/2.0.11-GCCcore-6.4.0  10) libpciaccess/0.14-GCCcore-6.4.0                  13) iompi/2018a
2) zlib/1.2.11-GCCcore-6.4.0    5) ifort/2018.1.163-GCC-6.4.0-2.28      8) XZ/5.2.3-GCCcore-6.4.0        11) hwloc/1.11.8-GCCcore-6.4.0                        14) imkl/2018.1.163-iompi-2018a
3) binutils/2.28-GCCcore-6.4.0  6) iccifort/2018.1.163-GCC-6.4.0-2.28  9) libxml2/2.9.7-GCCcore-6.4.0  12) OpenMPI/2.1.2-iccifort-2018.1.163-GCC-6.4.0-2.28  15) iomkl/2018a
</pre>
-->
<!--
===gmvolf toolchains===
The gmvolf toolchain consists of:

*the GNU Compiler Collection (GCC, https://gcc.gnu.org/), i.e. gcc (C), g++ (C++) and gfortran (Fortran)
*the MVAPICH2 library (http://mvapich.cse.ohio-state.edu/)
*the OpenBLAS (http://www.openblas.net/) + LAPACK (http://netlib.org/lapack) libraries
*the ScaLAPACK (http://netlib.org/scalapack) library is also included
*the FFTW library (http://fftw.org/)

You can check the gmvolf toolchain modules that are installed on the cluster with the command
<pre class="gcommand">
module spider gmvolf
</pre>

When you load a gmvolf toolchain, all its components will be loaded. For example:
<pre class="gcommand">
[shtsai@b1-1 ~]$ module list
No modules loaded
[shtsai@b1-1 ~]$ module load gmvolf/2020a
[shtsai@b1-1 ~]$ module list

Currently Loaded Modules:
1) icc/2018.1.163-GCC-6.4.0-2.28        6) libxml2/2.9.7-GCCcore-6.4.0                      11) imkl/2018.1.163-iompi-2018a  16) GCC/9.3.0                  21) FFTW/3.3.8-gmvapich2-2020a
2) ifort/2018.1.163-GCC-6.4.0-2.28      7) libpciaccess/0.14-GCCcore-6.4.0                  12) iomkl/2018a                  17) Bison/3.5.3-GCCcore-9.3.0  22) ScaLAPACK/2.0.2-gmvapich2-2020a-OpenBLAS-0.3.9
3) iccifort/2018.1.163-GCC-6.4.0-2.28  8) hwloc/1.11.8-GCCcore-6.4.0                        13) GCCcore/9.3.0                18) MVAPICH2/2.3.6-GCC-9.3.0  23) gmvolf/2020a
4) numactl/2.0.11-GCCcore-6.4.0        9) OpenMPI/2.1.2-iccifort-2018.1.163-GCC-6.4.0-2.28  14) zlib/1.2.11-GCCcore-9.3.0    19) OpenBLAS/0.3.9-GCC-9.3.0
5) XZ/5.2.3-GCCcore-6.4.0              10) iompi/2018a                                      15) binutils/2.34-GCCcore-9.3.0  20) gmvapich2/2020a
</pre>
-->
----
[[#top|Back to Top]]
 
== Linking with libraries==
Some library packages are installed along with some compiler toolchains. Examples of these are OpenBLAS, MKL, FFTW, etc. Other libraries are installed as separate modules, for example, Boost and GSL.
 
If you want to compile code that uses a library that is not included with the compiler toolchain, you will have to load a library module that uses a [[Available Toolchains and Toolchain Compatibility|compatible]] toolchain. For example, if you want to compile your code with GCC 11.3.0 (or with the foss/2022a toolchain), and you need to use GSL, you can load the GSL/2.7-GCC-11.3.0 module.
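For example, to see which builds of a library are installed and which toolchains they were built with, you can query the module system:
<pre class="gcommand">
module spider GSL
</pre>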


Also note that when you load a module for a library or an application, the full path to its installation directory will be stored in an environment variable called '''EBROOT''NAME''''', where ''NAME'' is the name of the application or library. For example, when you load a GSL module, the directory where the GSL libraries are installed will be in an environment variable called EBROOTGSL.


For example:
<pre class="gcommand">
[shtsai@d2-13 ~]$ module list
No modules loaded
[shtsai@d2-13 ~]$ module load GCC/11.3.0
[shtsai@d2-13 ~]$ echo $EBROOTGCC
/apps/eb/GCCcore/11.3.0
[shtsai@d2-13 ~]$ echo $EBROOTGSL
[shtsai@d2-13 ~]$ module load GSL/2.7-GCC-11.3.0
[shtsai@d2-13 ~]$ echo $EBROOTGSL
/apps/eb/GSL/2.7-GCC-11.3.0
[shtsai@d2-13 ~]$
</pre>
As shown in the example above, when you load a GSL module, an environment variable called '''EBROOTGSL''' is defined, and it points to the installation path for GSL.
When you compile your code, you can add the compiler option:
<code> -I${EBROOTGSL}/include </code>
and the linker option
<code> -L${EBROOTGSL}/lib -lgsl -lgslcblas </code>
'''Example of program compilation that uses GCC 11.3.0 and GSL v. 2.7:'''
<pre class="gcommand">
module load GSL/2.7-GCC-11.3.0
gcc -O program.c -I${EBROOTGSL}/include -L${EBROOTGSL}/lib -lgsl -lgslcblas -Wl,-rpath=${EBROOTGSL}/lib
</pre>


Users can include a linker option such as '''-Wl,-rpath=${EBROOTGSL}/lib''' to record the library directory in the executable's '''runtime path'''. If this option is not included, then at runtime the user has to load the GSL module again, so that the library directory is added to the environment variable LD_LIBRARY_PATH.
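A quick way to confirm that the GSL libraries will be found at runtime is to inspect the executable's shared-library resolution, assuming the <code>a.out</code> produced by the example above:
<pre class="gcommand">
ldd ./a.out | grep gsl
</pre>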
----
[[#top|Back to Top]]
