GROMACS-Sapelo2
Category
Chemistry
Program On
Sapelo2
Version
2021.5, 2023.1
Author / Distributor
First developed in Herman Berendsen's group at the University of Groningen.
Current head authors and project leaders:
Erik Lindahl (Stockholm Center for Biomembrane Research, Stockholm, SE), David van der Spoel (Biomedical Centre, Uppsala, SE), and Berk Hess (Max Planck Institute for Polymer Research, Mainz, DE).
Description
GROMACS is a package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
Running Program
Also refer to Running Jobs on Sapelo2.
For more information on Environment Modules on Sapelo2 please see the Lmod page.
Version 2021.5
This version was compiled with foss-2021b, CUDA 11.4.1, and PLUMED 2.8.0. It is installed in /apps/eb/GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0. To use this version of GROMACS, please first load its module with
module load GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0
Version 2023.1
This version was compiled with foss-2022a and CUDA 11.7.0. It is installed in /apps/eb/GROMACS/2023.1-foss-2022a-CUDA-11.7.0. To use this version of GROMACS, please first load its module with
module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
For GROMACS versions after 4.5.6, all of the tools are provided as subcommands of a single binary named gmx. This is a departure from earlier versions, in which each tool was invoked as its own command. On Sapelo2, after loading one of the modules above, first source the file $EBROOTGROMACS/bin/GMXRC, then run gmx followed by the name of the tool. For example, to print the help output of the mdrun command, run the following:
source $EBROOTGROMACS/bin/GMXRC
gmx mdrun --help
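A typical workflow is to first preprocess the simulation inputs into a portable run input (.tpr) file with gmx grompp and then pass that file to gmx mdrun. The sketch below is only an illustration and assumes hypothetical input files named md.mdp, system.gro, and topol.top; substitute your own file names:

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
source $EBROOTGROMACS/bin/GMXRC
# Combine parameter (.mdp), coordinate (.gro), and topology (.top) files into a run input file
# (md.mdp, system.gro, and topol.top are placeholder names)
gmx grompp -f md.mdp -c system.gro -p topol.top -o md.tpr
# Run the simulation described by md.tpr
gmx mdrun -s md.tpr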
If using a P100 GPU node, it is advised to request all 32 CPU cores available on that node.
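As a sketch only, the corresponding Slurm header lines might look as follows; the gres name P100 is assumed here by analogy with the A100 examples below, so please verify it against the current Sapelo2 GPU documentation:

#SBATCH --partition=gpu_p              # Partition (queue) name
#SBATCH --gres=gpu:P100:1              # Request one P100 GPU (assumed gres name)
#SBATCH --ntasks=1                     # Run a single task
#SBATCH --cpus-per-task=32             # Request all 32 CPU cores on the P100 node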
Sample job submission script sub.sh to run v. 2023.1 using 12 CPU cores on one node and one A100 GPU card:
#!/bin/bash
#SBATCH --job-name=testgromacs         # Job name
#SBATCH --partition=gpu_p              # Partition (queue) name
#SBATCH --gres=gpu:A100:1              # Request one A100 GPU
#SBATCH --ntasks=1                     # Run a single task
#SBATCH --cpus-per-task=12             # 12 CPU cores per task
#SBATCH --mem=50gb                     # Job memory request
#SBATCH --time=4:00:00                 # Time limit hrs:min:sec
#SBATCH --output=%x.%j.out             # Standard output log
#SBATCH --error=%x.%j.err              # Standard error log

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
source $EBROOTGROMACS/bin/GMXRC

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

gmx mdrun [options]
where [options] need to be replaced by the arguments you wish to use. The job name testgromacs should be replaced by a name that is appropriate for your job. Also choose an appropriate number of cores per task (--cpus-per-task), a suitable wall time (the example above requests 4 hours), and a suitable amount of memory.
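For example, [options] might be filled in as in the sketch below. The default file name md is a placeholder for your own run input (an md.tpr produced by gmx grompp), and the GPU offload flag is an optional tuning choice rather than a requirement:

# Run using the inputs md.tpr, with OpenMP threads matching the Slurm allocation
# and non-bonded interactions offloaded to the GPU
gmx mdrun -deffnm md -ntomp $SLURM_CPUS_PER_TASK -nb gpu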
Sample job submission script using MPI and OpenMP to run v. 2023.1 with 12 CPU cores per task and two A100 GPU cards:
#!/bin/bash
#SBATCH --job-name=testgromacs         # Job name
#SBATCH --partition=gpu_p              # Partition (queue) name
#SBATCH --gres=gpu:A100:2              # Request two A100 GPUs
#SBATCH --ntasks=2                     # Run two MPI tasks
#SBATCH --cpus-per-task=12             # 12 CPU cores per task
#SBATCH --mem=50gb                     # Job memory request
#SBATCH --time=4:00:00                 # Time limit hrs:min:sec
#SBATCH --output=%x.%j.out             # Standard output log
#SBATCH --error=%x.%j.err              # Standard error log

module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
source $EBROOTGROMACS/bin/GMXRC

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun gmx_mpi mdrun [options]
where [options] need to be replaced by the arguments you wish to use. Make sure the number of GPUs requested equals the number of tasks requested (--ntasks). The job name testgromacs should be replaced by a name that is appropriate for your job. Also choose an appropriate number of cores per task (--cpus-per-task), a suitable wall time (the example above specifies 4 hours), and a suitable amount of memory.
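As an illustration only, [options] could be filled in as below; md is again a placeholder default file name, and with two MPI ranks and two GPUs GROMACS will by default assign one GPU to each rank:

# Launch one MPI rank per Slurm task, each using the allocated OpenMP threads
srun gmx_mpi mdrun -deffnm md -ntomp $SLURM_CPUS_PER_TASK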
Documentation
Please see http://www.gromacs.org/
Installation
System
64-bit Linux