AMBER-Sapelo2
Category
Chemistry
Program On
Sapelo2
Version
18
Author / Distributor
Please see http://ambermd.org/#developers
When citing Amber14 or AmberTools15 please use the following: D.A. Case, J.T. Berryman, R.M. Betz, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, N. Homeyer, S. Izadi, P. Janowski, J. Kaus, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, T. Luchko, R. Luo, B. Madej, K.M. Merz, G. Monard, P. Needham, H. Nguyen, H.T. Nguyen, I. Omelyan, A. Onufriev, D.R. Roe, A. Roitberg, R. Salomon-Ferrer, C.L. Simmerling, W. Smith, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, D.M. York and P.A. Kollman (2015), AMBER 2015, University of California, San Francisco.
Description
From http://ambermd.org/: "'Amber' refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos."
Running Program
Also refer to Running Jobs on Sapelo2.
For more information on Environment Modules on Sapelo2 please see the Lmod page.
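For example, the Amber modules that are currently installed, and the exact module names to load, can be listed with the Lmod spider command (the output will vary as versions are added or retired):

module spider Amber                                                     # list all installed Amber modules
module spider Amber/18-fosscuda-2018b-AmberTools-18-patchlevel-10-8    # details and dependencies for this specific version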
AMBER18 and AmberTools 18
AMBER18 and AmberTools 18 were compiled with the fosscuda-2018b toolchain (GNU 7.3.0 compiler suite, OpenMPI 3.1.1, and CUDA 9.2.88). This version includes serial binaries, and binaries with CUDA and MPI support.
To use this version of AMBER, first load the Amber/18-fosscuda-2018b-AmberTools-18-patchlevel-10-8 module with
ml Amber/18-fosscuda-2018b-AmberTools-18-patchlevel-10-8
This module will automatically load its dependencies. Note: Some of the CUDA-enabled programs, such as antechamber, are not compatible with the K20 and K40 GPU devices.
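As a quick sanity check, for example from an interactive session on a compute node, you can confirm that the module and its dependencies loaded and that the Amber binaries are on your path (a minimal sketch; the exact list of loaded modules will depend on the toolchain, and AMBERHOME is expected to be set by the module):

ml Amber/18-fosscuda-2018b-AmberTools-18-patchlevel-10-8
ml                            # with no arguments, lists the currently loaded modules
echo $AMBERHOME               # Amber 18 installation directory set by the module
which pmemd.MPI pmemd.cuda    # confirm the MPI and CUDA binaries are on the PATH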
1. Running the MPI version:
In the example below the name of the input file is prod.in.
Example of a shell script (submpi.sh) to run AMBER18 in the batch partition (batch), using 96 MPI processes:
#!/bin/bash
#SBATCH --job-name=amber             # Job name
#SBATCH --partition=batch            # Partition (queue) name
#SBATCH --ntasks=96                  # Number of MPI tasks
#SBATCH --cpus-per-task=1            # Number of CPU cores per task
#SBATCH --mem-per-cpu=2gb            # Memory per MPI process
#SBATCH --time=10:00:00              # Time limit hrs:min:sec

cd $SLURM_SUBMIT_DIR
export OMP_NUM_THREADS=1

ml Amber/18-fosscuda-2018b-AmberTools-18-patchlevel-10-8
source ${AMBERHOME}/amber.sh

srun $AMBERHOME/bin/pmemd.MPI -O -i prod.in -o prod_${SLURM_JOB_ID}.out -p prod.prmtop -c restart.rst -r prod.rst -x prod.mdcrd
Note: Here the input parameter file is called prod.in, and the topology file prod.prmtop and the coordinate/restart file restart.rst are also needed. The parameters of the job, such as the maximum wall clock time, the memory per MPI process, the number of tasks (--ntasks), and the job name, need to be modified appropriately as well.
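For reference, a prod.in for a short constant-pressure production run might look like the sketch below; the &cntrl values are illustrative only and are not Sapelo2-specific, so consult the Amber 18 reference manual for settings appropriate to your system. It is written here as a bash here-document so that it can be pasted directly into a terminal:

cat > prod.in <<'EOF'
Production MD (illustrative settings only)
 &cntrl
  imin=0, irest=1, ntx=5,           ! restart dynamics from restart.rst
  nstlim=500000, dt=0.002,          ! 1 ns of MD with a 2 fs time step
  ntc=2, ntf=2, cut=8.0,            ! SHAKE on bonds to H, 8 Angstrom cutoff
  ntb=2, ntp=1, taup=2.0,           ! constant pressure, periodic boundaries
  ntt=3, gamma_ln=2.0, temp0=300.0, ! Langevin thermostat at 300 K
  ntpr=5000, ntwx=5000, ntwr=5000,  ! energy, trajectory, and restart output frequencies
 /
EOF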
Sample job submission command:
sbatch submpi.sh
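After submission, the job can be followed with the usual Slurm commands; for example (replace <jobid> with the job ID that sbatch prints):

squeue -u $USER                                        # list your pending and running jobs
sacct -j <jobid> --format=JobID,State,Elapsed,MaxRSS   # accounting summary once the job has started
tail -f prod_<jobid>.out                               # follow the Amber output file as it is written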
2. Running the CUDA version:
In the example below the name of the input file is prod.in.
Example of a shell script (subcuda.sh) to run AMBER18 in the GPU partition (gpu_p), using one GPU card:
#!/bin/bash
#SBATCH --job-name=amber             # Job name
#SBATCH --partition=gpu_p            # Partition (queue) name
#SBATCH --gres=gpu:P100:1            # Request one P100 GPU device
#SBATCH --ntasks=24                  # Number of tasks
#SBATCH --cpus-per-task=1            # Number of CPU cores per task
#SBATCH --mem-per-cpu=2gb            # Memory per CPU core
#SBATCH --time=10:00:00              # Time limit hrs:min:sec

cd $SLURM_SUBMIT_DIR

ml Amber/18-fosscuda-2018b-AmberTools-18-patchlevel-10-8
source ${AMBERHOME}/amber.sh

$AMBERHOME/bin/pmemd.cuda -O -i prod.in -o prod_${SLURM_JOB_ID}.out -p prod.prmtop -c restart.rst -r prod.rst -x prod.mdcrd
Note: Here the input parameter file is called prod.in, and the topology file prod.prmtop and the coordinate/restart file restart.rst are also needed. The parameters of the job, such as the maximum wall clock time, the memory per CPU core, the number of tasks (--ntasks), and the job name, need to be modified appropriately as well.
Note: Some of the CUDA-enabled programs, such as antechamber, are not compatible with the K20 and K40 GPU devices.
Sample job submission command:
sbatch subcuda.sh
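To confirm which GPU the job was assigned, lines such as the following can be added to subcuda.sh just before the pmemd.cuda command (this assumes nvidia-smi is on the PATH of the GPU node; the header of the pmemd.cuda output file also reports the GPU device that was used):

nvidia-smi --query-gpu=name,memory.total --format=csv    # report the GPU model and memory on the node
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"        # GPU index made visible to the job by Slurm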
Documentation
Please see http://ambermd.org/
Installation
- Amber 18: Source code compiled with the fosscuda-2018b toolchain (GNU 7.3.0 compiler suite, OpenMPI 3.1.1, and CUDA 9.2.88).
System
64-bit Linux