AMBER-Teaching

From Research Computing Center Wiki

Category

Chemistry

Program On

Teaching

Version

14

Author / Distributor

Please see http://ambermd.org/#developers

When citing Amber14 or AmberTools15 please use the following: D.A. Case, J.T. Berryman, R.M. Betz, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, N. Homeyer, S. Izadi, P. Janowski, J. Kaus, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, T. Luchko, R. Luo, B. Madej, K.M. Merz, G. Monard, P. Needham, H. Nguyen, H.T. Nguyen, I. Omelyan, A. Onufriev, D.R. Roe, A. Roitberg, R. Salomon-Ferrer, C.L. Simmerling, W. Smith, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, D.M. York and P.A. Kollman (2015), AMBER 2015, University of California, San Francisco.

Description

From http://ambermd.org/: "'Amber' refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos."

Running Program

Also refer to Running Jobs on the teaching cluster.


AMBER14 and AmberTools 15

AMBER14 and AmberTools 15 were compiled with the GNU 5.4.0 compiler suite (fully patched as of 1/20/2019). The MPI version uses MPICH 3.2, and the CUDA version uses CUDA toolkit 7.5.18.

To use this version of AMBER, first load the Amber/14-at15 module with

ml Amber/14-at15

This module will automatically load its dependencies, namely GCC/5.4.0-2.26, MPICH/3.2-GCC-5.4.0-2.26, and CUDA/7.5.18.
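
As a quick interactive sanity check after loading the module (a minimal sketch; the exact module list and installation path depend on the cluster setup):

ml Amber/14-at15
module list                    # should list Amber/14-at15 together with its GCC, MPICH and CUDA dependencies
echo $AMBERHOME                # Amber installation directory set by the module
ls $AMBERHOME/bin/pmemd.MPI $AMBERHOME/bin/pmemd.cuda    # executables used in the scripts below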


1. Running the MPI version:

In the example below, the input file is named prod.in.

Example of a shell script (submpi.sh) to run AMBER14 in the batch queue, using 4 MPI processes (matching --ntasks=4):

#!/bin/bash
#SBATCH --job-name=amberjob
#SBATCH --partition=batch
#SBATCH --mail-type=ALL
#SBATCH --mail-user=username@uga.edu
#SBATCH --ntasks=4
#SBATCH --mem=10gb
#SBATCH --time=08:00:00
#SBATCH --output=Amberjob.%j.out
#SBATCH --error=Amberjob.%j.err

cd $SLURM_SUBMIT_DIR
ml Amber/14-at15
source ${AMBERHOME}/amber.sh
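# pmemd.MPI flags: -O overwrite existing output files, -i MD input, -o output,
# -p topology (prmtop), -c starting coordinates, -r restart file written, -x trajectory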
mpiexec $AMBERHOME/bin/pmemd.MPI -O -i prod.in -o prod_${SLURM_JOB_ID}.out -p prod.prmtop -c restart.rst -r prod.rst -x prod.mdcrd

Note: the MD input parameter file is called prod.in; the topology file prod.prmtop and the starting coordinate (restart) file restart.rst are also required. A minimal example of a prod.in file is sketched below.

In a real submission script, at minimum the job name, email address, number of tasks, memory, walltime, and file names shown above need to be reviewed and replaced with values appropriate for your job.
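
For reference, a prod.in file for a short production run might look like the sketch below: it restarts from restart.rst and runs 50,000 steps of constant-pressure MD at 300 K with a Langevin thermostat. The settings are illustrative only, not a recommendation for any particular system; consult the AMBER manual for appropriate values. It is written as a shell here-document so it can be pasted directly at the command line:

cat > prod.in << 'EOF'
short production MD run (illustrative settings only)
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=50000, dt=0.002,
  ntc=2, ntf=2, cut=8.0,
  ntb=2, ntp=1, taup=2.0,
  ntt=3, gamma_ln=2.0, temp0=300.0,
  ntpr=1000, ntwx=1000, ntwr=10000,
 /
EOF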


Sample job submission command:

sbatch submpi.sh
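
After submission, the job can be monitored with standard Slurm commands, for example (replace username with your cluster user name and jobid with the number reported by sbatch):

squeue -u username     # list your pending and running jobs
scancel jobid          # cancel the job if necessary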

2. Running the CUDA version:

In the example below, the input file is named prod.in.

Example of a shell script (subcuda.sh) to run AMBER14 in the GPU queue (gpu), using one GPU card:

#!/bin/bash
#SBATCH --job-name=amberjob
#SBATCH --partition=gpu
#SBATCH --mail-type=ALL
#SBATCH --mail-user=username@uga.edu
#SBATCH --ntasks=2
#SBATCH --mem=10gb
#SBATCH --time=08:00:00
#SBATCH --output=Amberjob.%j.out
#SBATCH --error=Amberjob.%j.err

cd $SLURM_SUBMIT_DIR
ml Amber/14-at15
source ${AMBERHOME}/amber.sh
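# pmemd.cuda runs the simulation on a single GPU; the flags have the same meaning as for pmemd.MPI above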
$AMBERHOME/bin/pmemd.cuda -O -i prod.in -o prod_${SLURM_JOB_ID}.out -p prod.prmtop -c restart.rst -r prod.rst -x prod.mdcrd

Note: as in the MPI example, the input parameter file is called prod.in, and the files prod.prmtop and restart.rst are also required.

In a real submission script, at minimum the job name, email address, number of tasks, memory, walltime, and file names shown above need to be reviewed and replaced with values appropriate for your job.


Sample job submission command:

sbatch subcuda.sh
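
Progress can be followed in the output files while the job runs, for example (assuming the script above and a hypothetical job ID of 12345):

tail -f Amberjob.12345.out       # Slurm standard output of the job
grep NSTEP prod_12345.out        # energy records written by pmemd every ntpr steps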

Documentation

Please see http://ambermd.org/

Installation

The source code was compiled with the GNU 5.4.0 compiler suite. The MPI version uses MPICH 3.2 and the CUDA version uses CUDA toolkit 7.5.18.
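
The build followed the standard AMBER14 procedure, roughly as sketched below; the exact configure options used for the site installation may differ:

cd $AMBERHOME
./configure gnu            # serial build
make install
./configure -mpi gnu       # MPI build (pmemd.MPI)
make install
./configure -cuda gnu      # GPU build (pmemd.cuda)
make install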

System

64-bit Linux