GROMACS-Sapelo2
Category: Chemistry
Program On: Sapelo2
Version: 4.5.6, 2019.4, 2020, 2020.3, 2021.2, 2021.3, 2021.4
Author / Distributor
First developed in Herman Berendsen's group at Groningen University.
Current head authors and project leaders:
Erik Lindahl (Stockholm Center for Biomembrane Research, Stockholm, SE), David van der Spoel (Biomedical Centre, Uppsala, SE), and Berk Hess (Max Planck Institute for Polymer Research, Mainz, DE).
Description
GROMACS is a package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
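Concretely, for each particle i in an N-particle system, GROMACS numerically integrates Newton's equations of motion, with the force on each particle obtained as the negative gradient of the potential energy function V:

m_i \frac{d^2 \mathbf{r}_i}{dt^2} = \mathbf{F}_i, \qquad \mathbf{F}_i = -\frac{\partial V(\mathbf{r}_1, \ldots, \mathbf{r}_N)}{\partial \mathbf{r}_i}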
Running Program
Also refer to Running Jobs on Sapelo2.
For more information on Environment Modules on Sapelo2 please see the Lmod page.
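To see which GROMACS versions are currently installed, you can also query Lmod directly with its standard search command:

module spider GROMACS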
Version 4.5.6
This version was compiled with gompi-2019b. It was patched with PLUMED. It is installed in /apps/eb/GROMACS/4.5.6-gompi-2019b-PLUMED-2.5.3. To use this version of Gromacs, please first load its module with
module load GROMACS/4.5.6-gompi-2019b-PLUMED-2.5.3
Version 2019.4
This version was compiled with fosscuda-2019b. It was patched with PLUMED. It is installed in /apps/eb/GROMACS/2019.4-fosscuda-2019b-PLUMED-2.5.3. To use this version of Gromacs, please first load its module with
module load GROMACS/2019.4-fosscuda-2019b-PLUMED-2.5.3
Version 2020
This version was compiled with fosscuda-2019b. It is installed in /apps/eb/GROMACS/2020-fosscuda-2019b. To use this version of Gromacs, please first load its module with
module load GROMACS/2020-fosscuda-2019b
Version 2020.3
This version was compiled with fosscuda-2019b. It is installed in /apps/eb/GROMACS/2020.3-fosscuda-2019b. To use this version of Gromacs, please first load its module with
module load GROMACS/2020.3-fosscuda-2019b
Version 2021.2
This version was compiled with fosscuda-2020b. It is installed in /apps/eb/GROMACS/2021.2-fosscuda-2020b. To use this version of Gromacs, please first load its module with
module load GROMACS/2021.2-fosscuda-2020b
Version 2021.3
This version was compiled with fosscuda-2020b. It is installed in /apps/eb/GROMACS/2021.3-fosscuda-2020b. To use this version of Gromacs, please first load its module with
module load GROMACS/2021.3-fosscuda-2020b
Note that this version of Gromacs works on nodes with K20Xm, K40, K80, P100, and V100 GPU cards.
Version 2021.4
This version was compiled with fosscuda-2020b. It is installed in /apps/eb/GROMACS/2021.4-fosscuda-2020b. To use this version of Gromacs, please first load its module with
module load GROMACS/2021.4-fosscuda-2020b
With the release of version 5.0 of GROMACS, all of the tools became subcommands of a single binary named gmx. This is a departure from previous versions, in which each tool was invoked as its own command. On Sapelo2, GROMACS/4.5.6 commands are therefore run directly by name. For example, to print the help output of the mdrun command, one would run
mdrun -h
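As a sketch of a typical pre-5.0 workflow (the input files md.mdp, conf.gro, and topol.top are placeholder names for your own files, not files provided by the module):

grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mdrun -deffnm topol

Here grompp preprocesses the parameter, structure, and topology files into a portable run input file (topol.tpr), which mdrun then uses for the actual simulation.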
For all GROMACS modules after version 4.5.6, the tools are instead subcommands of the gmx binary. On Sapelo2, commands using these later versions of GROMACS are run by first sourcing the file $EBROOTGROMACS/bin/GMXRC and then invoking gmx followed by the name of the tool. For example, to print the help output of the mdrun command, one would run
source $EBROOTGROMACS/bin/GMXRC
gmx mdrun -h
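The equivalent of the pre-5.0 workflow above, using the same placeholder file names, would be:

gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
gmx mdrun -deffnm topol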
All versions of GROMACS except 2021.4 can run with K40, K80, or P100 GPUs. It is not advisable to try to run GROMACS using a K20 GPU.
If using a P100 GPU node, it is advised to request all 32 CPUs there. If running an older version of GROMACS (up to 2021.3) on a K40 node, it is advised to request a low number of CPUs (~ 4), to leave enough CPUs for other GPU jobs to use the other K40 cards on the node.
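As a sketch, the corresponding resource-request lines for a P100 node would be the following, assuming the P100 cards are requested with the type string P100, by analogy with the K40 example below:

#SBATCH --gres=gpu:P100:1
#SBATCH --cpus-per-task=32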
Sample job submission script sub.sh to run v. 2021.3 and use 16 CPU cores and 1 K40 GPU card:
#!/bin/bash
#SBATCH --job-name=testgromacs      # Job name
#SBATCH --partition=gpu_p           # Partition (queue) name
#SBATCH --gres=gpu:K40:1            # Request one K40 GPU
#SBATCH --ntasks=1                  # Run a single task
#SBATCH --cpus-per-task=16          # 16 CPU cores per task
#SBATCH --mem=20gb                  # Job memory request
#SBATCH --time=4:00:00              # Time limit hrs:min:sec
#SBATCH --output=%x.%j.out          # Standard output log
#SBATCH --error=%x.%j.err           # Standard error log

module load GROMACS/2021.3-fosscuda-2020b

source $EBROOTGROMACS/bin/GMXRC

gmx mdrun -nt [options]
where [options] need to be replaced by the arguments you wish to use. The job name testgromacs should be replaced by a name that is appropriate for your job. Also, choose an appropriate number of cores per node (--cpus-per-task), a suitable wall time (the example above specifies 4 hours), and a suitable amount of memory.
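As one example of filling in [options], the following sketch makes mdrun start as many threads as there are cores requested from Slurm (the run name md is a placeholder):

gmx mdrun -nt $SLURM_CPUS_PER_TASK -deffnm md

The script can then be submitted to the queue with

sbatch sub.sh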
Documentation
Please see http://www.gromacs.org/
Installation
System: 64-bit Linux