GROMACS-Sapelo2


===Category===
Chemistry

===Program On===
Sapelo2

===Version===
2021.5, 2023.1, 2023.4

===Author / Distributor===
First developed in Herman Berendsen's group at Groningen University.

Current head authors and project leaders:

Erik Lindahl (Stockholm Center for Biomembrane Research, Stockholm, SE)
David van der Spoel (Biomedical Centre, Uppsala, SE)
Berk Hess (Max Planck Institute for Polymer Research, Mainz, DE)

===Description===
GROMACS is a package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

=== Running Program ===
Also refer to [[Running Jobs on Sapelo2]].

For more information on Environment Modules on Sapelo2, please see the [[Lmod]] page.


'''Version 2021.5'''

This version was compiled with foss-2021b, CUDA 11.4.1, and PLUMED 2.8.0. It is installed in /apps/eb/GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0. To use this version of Gromacs, please first load its module with
<pre class="gcommand">
module load GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0
</pre>

'''Version 2023.1'''

This version was compiled with foss-2022a and CUDA 11.7.0. It is installed in /apps/eb/GROMACS/2023.1-foss-2022a-CUDA-11.7.0. To use this version of Gromacs, please first load its module with
<pre class="gcommand">
module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
</pre>

'''Version 2023.4'''

This version was compiled with foss-2022a and CUDA 11.7.0. It is installed in /apps/eb/GROMACS/2023.4-foss-2022a-CUDA-11.7.0. To use this version of Gromacs, please first load its module with
<pre class="gcommand">
module load GROMACS/2023.4-foss-2022a-CUDA-11.7.0
</pre>
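
To see all of the GROMACS modules currently installed on Sapelo2, you can query Lmod, for example with:
<pre class="gcommand">
module spider GROMACS
</pre>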


For each GROMACS module after version 4.5.6, all of the tools are provided as subcommands of a single binary named "gmx". This is a departure from previous versions, in which each tool was invoked as its own command. On Sapelo2, these later versions of GROMACS can be used by first sourcing the file $EBROOTGROMACS/bin/GMXRC and then running gmx followed by the name of the tool. For example, to print the help output of the mdrun command, run the following:
<pre class="gcommand">
source $EBROOTGROMACS/bin/GMXRC
gmx mdrun --help
</pre>
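
Once GMXRC has been sourced, you can also confirm which build is active and list the available subcommands with, for example:
<pre class="gcommand">
gmx --version
gmx help commands
</pre>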


If using a P100 GPU node, it is advised to request all 32 CPU cores on that node, as in the sketch below.
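
A minimal sketch of the corresponding resource request, assuming the same gpu_p partition and that P100 devices are requested with the gres name P100 (adjust to match the actual cluster configuration):
<pre class="gscript">
#SBATCH --partition=gpu_p               # Partition (queue) name
#SBATCH --gres=gpu:P100:1               # Request 1 P100 GPU device (gres name assumed)
#SBATCH --ntasks=1                      # Run a single task on one GPU node
#SBATCH --cpus-per-task=32              # Request all 32 CPU cores on the P100 node
</pre>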


Sample job submission script sub.sh to run v. 2023.1 and use 12 CPU cores on one GPU node and 1 A100 GPU card:

<pre class="gscript">
#!/bin/bash
#SBATCH --job-name=testgromacs          # Job name
#SBATCH --partition=gpu_p               # Partition (queue) name
#SBATCH --gres=gpu:A100:1               # Request 1 A100 GPU device
#SBATCH --ntasks=1                      # Run a single task on one GPU node
#SBATCH --cpus-per-task=12              # 12 CPU cores per task
#SBATCH --mem=50gb                      # Job memory request
#SBATCH --time=4:00:00                  # Time limit hrs:min:sec or days-hrs:min:sec
#SBATCH --output=%x.%j.out              # Standard output log
#SBATCH --error=%x.%j.err               # Standard error log

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0

source $EBROOTGROMACS/bin/GMXRC

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

gmx mdrun -ntomp 12 [options]
</pre>

where [options] need to be replaced by the arguments you wish to use. The job name '''testgromacs''' should be replaced by a name that is appropriate for your job. Also, choose an appropriate number of cores per task (--cpus-per-task), a suitable wall time (the example above specifies 4 hours), and a suitable amount of memory.
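
For illustration only, assuming a run input file named md.tpr prepared beforehand with gmx grompp (the file names here are hypothetical), the last line of the script could look like:
<pre class="gcommand">
gmx mdrun -ntomp 12 -s md.tpr -deffnm md
</pre>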


Sample job submission script sub.sh using MPI and OpenMP to run v. 2023.1 on 2 GPU nodes, with 4 MPI ranks (2 per node), 12 CPU cores per rank, and 2 A100 GPU cards per node:

<pre class="gscript">
#!/bin/bash
#SBATCH --job-name=testgromacs          # Job name
#SBATCH --partition=gpu_p               # Partition (queue) name
#SBATCH --gres=gpu:A100:2               # Request 2 A100 GPU devices per node
#SBATCH --nodes=2                       # Request 2 GPU nodes
#SBATCH --ntasks=4                      # Run 4 MPI ranks
#SBATCH --ntasks-per-node=2             # 2 MPI ranks per node
#SBATCH --cpus-per-task=12              # 12 CPU cores per task
#SBATCH --mem-per-cpu=4gb               # Memory request for each CPU core
#SBATCH --time=7-00:00:00               # Time limit hrs:min:sec or days-hrs:min:sec
#SBATCH --output=%x.%j.out              # Standard output log
#SBATCH --error=%x.%j.err               # Standard error log

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0

source $EBROOTGROMACS/bin/GMXRC

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun -n 4 gmx_mpi mdrun -ntomp 12 [options]
</pre>

where [options] need to be replaced by the arguments you wish to use. Make sure the total number of GPUs requested is equal to the number of tasks requested (--ntasks). The job name '''testgromacs''' should be replaced by a name that is appropriate for your job. Also, choose an appropriate number of cores per task (--cpus-per-task), a suitable wall time (the example above specifies 7 days), and a suitable amount of memory.
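
Either of the above scripts can then be submitted to the queue with, for example:
<pre class="gcommand">
sbatch sub.sh
</pre>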

=== Documentation ===

Please see http://www.gromacs.org/

===Installation===

===System===

64-bit Linux