GROMACS-Sapelo2

Revision as of 12:23, 5 September 2023

Category

Chemistry

Program On

Sapelo2

Version

2021.5, 2023.1

Author / Distributor

First developed in Herman Berendsen's group at Groningen University.

Current head authors and project leaders:

Erik Lindahl (Stockholm Center for Biomembrane Research, Stockholm, SE), David van der Spoel (Biomedical Centre, Uppsala, SE), and Berk Hess (Max Planck Institute for Polymer Research, Mainz, DE).

Description

GROMACS is a package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

Running Program

Also refer to Running Jobs on Sapelo2.

For more information on Environment Modules on Sapelo2, please see the Lmod page.
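
For example, to see which GROMACS modules are currently available and how to load them, you can query Lmod:

module spider GROMACS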


Version 2021.5

This version was compiled with foss-2021b, CUDA 11.4.1, and PLUMED 2.8.0. It is installed in /apps/eb/GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0. To use this version of Gromacs, please first load its module with

module load GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0
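
Because this module is patched with PLUMED 2.8.0, its mdrun engine also accepts a PLUMED input file. As a minimal sketch only (the file name plumed.dat and the run name md are placeholders), a PLUMED-biased run could look like:

source $EBROOTGROMACS/bin/GMXRC
gmx mdrun -plumed plumed.dat -deffnm md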

Version 2023.1

This version was compiled with foss-2022a and CUDA 11.7.0. It is installed in /apps/eb/GROMACS/2023.1-foss-2022a-CUDA-11.7.0. To use this version of Gromacs, please first load its module with

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
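
After loading the module, you can confirm that the expected build is on your path by printing its version and build information, for example:

source $EBROOTGROMACS/bin/GMXRC
gmx --version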


For GROMACS modules newer than version 4.5.6, all of the tools are essentially subcommands of a single binary named "gmx". This is a departure from earlier versions, in which each tool was invoked as its own command. On Sapelo2, after loading one of the GROMACS modules above, first source the file $EBROOTGROMACS/bin/GMXRC and then run gmx followed by the name of the tool. For example, to print the help output of the mdrun command, run the following:

source $EBROOTGROMACS/bin/GMXRC
gmx mdrun --help
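
A full list of the tools bundled in the gmx binary can be printed with gmx help commands. As a further sketch (the input file names md.mdp, conf.gro, and topol.top are placeholders), a typical preprocessing step that builds a run input file looks like:

gmx help commands
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr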


If using a P100 GPU node, it is advised to request all 32 CPUs there.
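
As a minimal sketch of the corresponding resource request, assuming the P100 gres name follows the same gpu:&lt;type&gt;:&lt;count&gt; pattern used for the A100 examples below:

#SBATCH --partition=gpu_p
#SBATCH --gres=gpu:P100:1               # Request one P100 GPU (gres name assumed)
#SBATCH --ntasks=1                      # Run a single task
#SBATCH --cpus-per-task=32              # Request all 32 CPUs on the P100 node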


Sample job submission script sub.sh to run v. 2023.1 using 12 CPU cores on one node and one A100 GPU card:

#!/bin/bash
#SBATCH --job-name=testgromacs          # Job name
#SBATCH --partition=gpu_p               # Partition (queue) name
#SBATCH --gres=gpu:A100:1               # Request one A100 GPU
#SBATCH --ntasks=1                      # Run a single task
#SBATCH --cpus-per-task=12              # 12 CPU cores per task
#SBATCH --mem=50gb                      # Job memory request
#SBATCH --time=4:00:00                  # Time limit hrs:min:sec
#SBATCH --output=%x.%j.out              # Standard output log
#SBATCH --error=%x.%j.err               # Standard error log

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0

source $EBROOTGROMACS/bin/GMXRC
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

gmx mdrun [options]

where [options] needs to be replaced by the arguments you wish to use. The job name testgromacs should be replaced by a name that is appropriate for your job. Also, choose an appropriate number of cores per node (--cpus-per-task), a suitable wall time (the example above specifies 4 hours), and a suitable amount of memory.
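
Assuming the script above is saved as sub.sh, it can be submitted and monitored with the standard Slurm commands:

sbatch sub.sh
squeue -u $USER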


Sample job submission script using MPI and OpenMP to run v. 2023.1, using 12 CPU cores per task and two A100 GPU cards:

#!/bin/bash
#SBATCH --job-name=testgromacs          # Job name
#SBATCH --partition=gpu_p               # Partition (queue) name
#SBATCH --gres=gpu:A100:2               # Request two A100 GPUs
#SBATCH --ntasks=2                      # Run two MPI tasks
#SBATCH --cpus-per-task=12              # 12 CPU cores per task
#SBATCH --mem=50gb                      # Job memory request
#SBATCH --time=4:00:00                  # Time limit hrs:min:sec
#SBATCH --output=%x.%j.out              # Standard output log
#SBATCH --error=%x.%j.err               # Standard error log

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0

source $EBROOTGROMACS/bin/GMXRC
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun gmx_mpi mdrun [options]

where [options] needs to be replaced by the arguments you wish to use. Make sure the number of GPUs requested is equal to the number of tasks requested (--ntasks). The job name testgromacs should be replaced by a name that is appropriate for your job. Also, choose an appropriate number of cores per node (--cpus-per-task), a suitable wall time (the example above specifies 4 hours), and a suitable amount of memory.
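
As an illustration only (the run name md is a placeholder and the offload flags are optional), a typical mdrun line in the script above might look like:

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun gmx_mpi mdrun -deffnm md -ntomp $SLURM_CPUS_PER_TASK -nb gpu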

Documentation

Please see http://www.gromacs.org/

Installation

System

64-bit Linux