AMBER-Sapelo2

=== Category ===

Chemistry

=== Program On ===

Sapelo2

=== Version ===

22

=== Author / Distributor ===

Please see http://ambermd.org/#developers

When citing Amber22 or AmberTools22 please use the following: D.A. Case, H.M. Aktulga, K. Belfon, I.Y. Ben-Shalom, J.T. Berryman, S.R. Brozell, D.S. Cerutti, T.E. Cheatham, III, G.A. Cisneros, V.W.D. Cruzeiro, T.A. Darden, R.E. Duke, G. Giambasu, M.K. Gilson, H. Gohlke, A.W. Goetz, R. Harris, S. Izadi, S.A. Izmailov, K. Kasavajhala, M.C. Kaymak, E. King, A. Kovalenko, T. Kurtzman, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, M. Machado, V. Man, M. Manathunga, K.M. Merz, Y. Miao, O. Mikhailovskii, G. Monard, H. Nguyen, K.A. O'Hearn, A. Onufriev, F. Pan, S. Pantano, R. Qi, A. Rahnamoun, D.R. Roe, A. Roitberg, C. Sagui, S. Schott-Verdugo, A. Shajan, J. Shen, C.L. Simmerling, N.R. Skrynnikov, J. Smith, J. Swails, R.C. Walker, J Wang, J. Wang, H. Wei, R.M. Wolf, X. Wu, Y. Xiong, Y. Xue, D.M. York, S. Zhao, and P.A. Kollman (2022), Amber 2022, University of California, San Francisco.

=== Description ===

From http://ambermd.org/: "'Amber' refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos."

=== Running Program ===

Also refer to Running Jobs on Sapelo2.

For more information on Environment Modules on Sapelo2 please see the [[Lmod]] page.
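The Amber versions installed as modules can also be listed with Lmod's spider command (a generic Lmod command, shown here only as an optional check):

<pre class="gcommand">
module spider Amber
</pre>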


'''AMBER 22 and AmberTools 22.3'''

AMBER 22 and AmberTools 22.3 were compiled with the foss-2021b toolchain (GNU 11.2.0 compiler suite, OpenMPI 4.1.1), and CUDA 11.4.1. This version includes serial binaries, and binaries with CUDA and MPI support.

To use this version of AMBER on the GPU, first load the Amber/22.0-foss-2021b-AmberTools-22.3-CUDA-11.4.1 module with

<pre class="gcommand">
ml Amber/22.0-foss-2021b-AmberTools-22.3-CUDA-11.4.1
</pre>

This module will automatically load its dependencies.
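As an optional sanity check (generic commands, not specific to this module), you can list the modules that were loaded and confirm that the GPU binary is on your PATH:

<pre class="gcommand">
ml
which pmemd.cuda
</pre>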


To use this version of AMBER on the CPU only, first load the Amber/22.0-foss-2021b-AmberTools-22.3 module with

<pre class="gcommand">
ml Amber/22.0-foss-2021b-AmberTools-22.3
</pre>

This module will automatically load its dependencies.
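A similar optional check can be run for the CPU-only module; sander, pmemd.MPI, and cpptraj are standard Amber/AmberTools executables, though the exact set of installed binaries may differ:

<pre class="gcommand">
which sander pmemd.MPI cpptraj
</pre>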


1. Running the MPI version:

In the example below the name of the input file is prod.in.

Example of shell script (submpi.sh) to run Amber22 in the batch queue, using 96 processes:

<pre class="gscript">
#!/bin/bash
#SBATCH --job-name=amber             # Job name
#SBATCH --partition=batch            # Partition (queue) name
#SBATCH --ntasks=96                  # Number of MPI tasks
#SBATCH --cpus-per-task=1            # Number of CPU cores per task
#SBATCH --mem-per-cpu=2gb            # Job memory per MPI process request
#SBATCH --time=10:00:00              # Time limit hrs:min:sec
cd $SLURM_SUBMIT_DIR

export OMP_NUM_THREADS=1
ml Amber/22.0-foss-2021b-AmberTools-22.3
source ${AMBERHOME}/amber.sh
srun $AMBERHOME/bin/pmemd.MPI -O -i prod.in -o prod_${SLURM_JOB_ID}.out -p prod.prmtop -c restart.rst -r prod.rst -x prod.mdcrd
</pre>

Note: Here the input parameter file is called prod.in and the files prod.prmtop and restart.rst are also needed. The job parameters, such as the maximum wall clock time, the memory per MPI process, the number of tasks (ntasks), and the job name, need to be modified appropriately as well.

Sample job submission command:

<pre class="gcommand">
sbatch submpi.sh
</pre>
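Once submitted, the job status can be checked with a standard Slurm command such as squeue (generic Slurm usage, not specific to Amber):

<pre class="gcommand">
squeue -u $USER
</pre>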


2. Running the CUDA version:

In the example below the name of the input file is prod.in.

Example of shell script (subcuda.sh) to run AMBER22 in the GPU partition (gpu_p), using one GPU card:

<pre class="gscript">
#!/bin/bash
#SBATCH --job-name=amber             # Job name
#SBATCH --partition=gpu_p            # Partition (queue) name
#SBATCH --gres=gpu:A100:1            # Request one A100 GPU device
#SBATCH --ntasks=24                  # Number of tasks
#SBATCH --cpus-per-task=1            # Number of CPU cores per task
#SBATCH --mem-per-cpu=2gb            # Memory per CPU core
#SBATCH --time=10:00:00              # Time limit hrs:min:sec
cd $SLURM_SUBMIT_DIR

ml Amber/22.0-foss-2021b-AmberTools-22.3-CUDA-11.4.1

source ${AMBERHOME}/amber.sh

$AMBERHOME/bin/pmemd.cuda -O -i prod.in -o prod_${SLURM_JOB_ID}.out -p prod.prmtop -c restart.rst -r prod.rst -x prod.mdcrd
</pre>

Note: Here the input parameter file is called prod.in and the files prod.prmtop and restart.rst are also needed. The job parameters, such as the maximum wall clock time, the memory per CPU core, the number of tasks (ntasks), and the job name, need to be modified appropriately as well.
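If you want the job output to record which GPU device was assigned, one option is to add an nvidia-smi call to the script before the pmemd.cuda line (nvidia-smi is a standard NVIDIA utility; this addition is optional and not part of the example above):

<pre class="gscript">
# Optional: log the GPU model and memory available to this job
nvidia-smi --query-gpu=name,memory.total --format=csv
</pre>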


Sample job submission command:

<pre class="gcommand">
sbatch subcuda.sh
</pre>

=== Documentation ===

Please see http://ambermd.org/

=== Installation ===

* Amber 22: Source code compiled with the foss-2021b toolchain (GNU 11.2.0 compiler suite, OpenMPI 4.1.1), and CUDA 11.4.1.

=== System ===

64-bit Linux