GAUSSIAN-Sapelo2
Category
Chemistry
Program On
Sapelo2
Version
09, 16
Author / Distributor
Gaussian, Inc.
Description
Gaussian is a set of programs for performing semi-empirical, density functional theory and ab initio molecular orbital calculations.
NOTE: Users are required to sign a license agreement form before being allowed to run this software. Please fill out the GACRC General Support Form to check if you have permission to use this software.
After you sign the license agreement form and get added to the Gaussian group, please check whether there is a directory in /scratch/gtemp with your MyID as its name. If you don't see a directory there, please fill out the GACRC General Support Form to request that one be created for you.
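As a quick check from a login node, you can test for the directory with a short shell snippet (a generic sketch, not a GACRC-provided tool; it assumes your cluster username in $USER matches your MyID):

```shell
# Check whether the Gaussian scratch directory exists for this account.
d="/scratch/gtemp/$USER"
if [ -d "$d" ]; then
    echo "scratch directory exists: $d"
else
    echo "no scratch directory yet; request one via the GACRC General Support Form"
fi
```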
Running Program
Also refer to Running Jobs on Sapelo2.
For more information on Environment Modules on Sapelo2 please see the Lmod page.
GAUSSIAN 09
Gaussian09 (g09) is installed in /apps/eb/gaussian. It was built without TCP-Linda, so it can only use cores within a single node (you cannot specify %LindaWorkers greater than one). By default the number of threads is not set; please use the %NProcShared keyword in your input file to specify the number of threads to use. Note that in this version of Gaussian, the %NProc directive used in earlier versions is obsolete.
For AMD processors:
Gaussian binaries optimized for AMD processors are installed in /apps/eb/gaussian/09-AMD-SSE4a. To use this version of Gaussian, please first load the gaussian/09-AMD-SSE4a module and then source g09.profile:
module load gaussian/09-AMD-SSE4a
. $g09root/g09/bsd/g09.profile
For Intel processors:
Gaussian binaries optimized for Intel processors are installed in /apps/eb/gaussian/09-Intel-SSE4_2. To use this version of Gaussian, please first load the gaussian/09-Intel-SSE4_2 module and then source g09.profile:
module load gaussian/09-Intel-SSE4_2
. $g09root/g09/bsd/g09.profile
GAUSSIAN 16
Gaussian16 (g16) is installed in /apps/eb/gaussian. It was built without TCP-Linda, so it can only use cores within a single node (you cannot specify %LindaWorkers greater than one). By default the number of threads is not set; please use the %NProcShared keyword in your input file to specify the number of threads to use. Note that in this version of Gaussian, the %NProc directive used in earlier versions is obsolete.
With AVX2 optimization
Gaussian binaries that have AVX2 optimization are installed in /apps/eb/gaussian/16-AVX2. To use this version of Gaussian16, please first load the gaussian/16-AVX2 module and source g16.profile with
module load gaussian/16-AVX2
. $g16root/g16/bsd/g16.profile
With AVX optimization
Gaussian binaries that have AVX optimization are installed in /apps/eb/gaussian/16-AVX. To use this version of Gaussian16, please first load the gaussian/16-AVX module and source g16.profile with
module load gaussian/16-AVX
. $g16root/g16/bsd/g16.profile
With SSE4 optimization
Gaussian binaries that have SSE4 optimization are installed in /apps/eb/gaussian/16-SSE4. To use this version of Gaussian16, please first load the gaussian/16-SSE4 module and source g16.profile with
module load gaussian/16-SSE4
. $g16root/g16/bsd/g16.profile
Which g16 binaries should I use?
In general, the binaries that have AVX2 optimization perform better (run faster) than the AVX optimized binaries, which in turn perform better than the binaries with SSE4 optimization.
The Intel Broadwell, Intel Skylake, and the AMD EPYC processors support AVX2. For nodes with these processors we recommend that you use the binaries with AVX2 optimization. If you run jobs on the queue called 'batch' and want to target any of these node types, you can request the node feature EDR.
The AMD Opteron processors do not support AVX2; if you run g16 jobs on nodes with Opteron processors, we suggest that you use the binaries with AVX optimization.
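If you are unsure which instruction sets a node's CPU supports, you can inspect the flags Linux reports in /proc/cpuinfo. The snippet below is a convenience sketch, not a GACRC-provided tool; the helper name suggest_g16 is ours:

```shell
# Suggest which Gaussian16 binaries match a CPU, based on its flags.
# Checks the most capable instruction set first (AVX2 > AVX > SSE4).
suggest_g16() {
    case " $1 " in
        *" avx2 "*) echo "16-AVX2" ;;
        *" avx "*)  echo "16-AVX" ;;
        *sse4*)     echo "16-SSE4" ;;
        *)          echo "unknown" ;;
    esac
}

# On a compute node, feed it the real CPU flags:
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)
suggest_g16 "$flags"
```

Run this on the compute node itself (e.g. inside an interactive job), since login nodes may have different processors.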
Important Note: You might want to verify that the binaries with different optimizations (AVX2, AVX, and SSE4) produce identical and correct results for your calculations.
Example of a shell script sub.sh to run g09 on the batch queue using an Intel node:
#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=myjobname
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=10gb
#SBATCH --time=04:00:00
#SBATCH --constraint=Intel

cd $SLURM_SUBMIT_DIR

module load gaussian/09-Intel-SSE4_2
. $g09root/g09/bsd/g09.profile

g09 < params.com > params.log
where params.com is a sample input parameter file name; replace it with the name of the file you want to use. Other job parameters, such as the maximum wall clock time, maximum memory, the number of cores per node, and the job name, need to be adjusted appropriately as well. In this example, the standard output of the g09 command will be saved into a file called params.log.
Please set --cpus-per-task to the same number of threads specified in your parameter file (params.com) with the %NProcShared directive, e.g. %NProcShared=4. Please also request at least as much memory for the job (--mem) as you set with the %Mem keyword in your parameter file.
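One way to keep the input header and the job request in sync is to generate the header from the same values. The sketch below is illustrative only: the route section, checkpoint file name, and the 8GB/4-thread figures are example values, not GACRC recommendations (note %Mem is kept below the 10gb requested from Slurm to leave headroom):

```shell
# Write a Gaussian input header consistent with the job's resources.
NPROC=4    # should match #SBATCH --cpus-per-task
GMEM=8GB   # should stay below #SBATCH --mem (10gb here)

cat > params.com <<EOF
%NProcShared=$NPROC
%Mem=$GMEM
%Chk=params.chk
#P B3LYP/6-31G(d) Opt
EOF

# Show the resource-related header lines that were written.
head -2 params.com
```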
Example of a shell script sub.sh to run g16 on the batch queue using an Intel node or an AMD EPYC node:
#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=myjobname
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=10gb
#SBATCH --time=04:00:00
#SBATCH --constraint=EDR

cd $SLURM_SUBMIT_DIR

module load gaussian/16-AVX2
. $g16root/g16/bsd/g16.profile

g16 < params.com > params.log
where params.com is a sample input parameter file name; replace it with the name of the file you want to use. Other job parameters, such as the maximum wall clock time, maximum memory, the number of cores per node, and the job name, need to be adjusted appropriately as well. In this example, the standard output of the g16 command will be saved into a file called params.log.
Please set --cpus-per-task to the same number of threads specified in your parameter file (params.com) with the %NProcShared directive, e.g. %NProcShared=4. Please also request at least as much memory for the job (--mem) as you set with the %Mem keyword in your parameter file.
Submit the job to the queue with
sbatch sub.sh
Gaussian scratch files:
g09 and g16 jobs generate temporary files called Gau-* in a scratch area called /scratch/gtemp/username. The job automatically deletes these files when it completes successfully. However, if the job crashes (or if you cancel a running g09 or g16 job), these files are left behind. Please remove leftover temporary files manually so they do not accumulate; they can be huge and fill up the scratch area very easily.
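A cleanup pass can be scripted; the sketch below is a generic example, not a GACRC-provided tool. It only deletes Gau-* files untouched for more than a day, to reduce the risk of removing scratch files belonging to a job that is still running (still, run it only when you have no active g09/g16 jobs):

```shell
# Delete stale Gaussian scratch files from your /scratch/gtemp directory.
SCR="/scratch/gtemp/$USER"
if [ -d "$SCR" ]; then
    # -mtime +1: only files not modified for more than one day.
    find "$SCR" -maxdepth 1 -name 'Gau-*' -mtime +1 -print -delete
fi
```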
Documentation
Installation
Built without TCP-Linda; therefore it can only run within a single node.
- Gaussian09: the version for Intel processors is installed in /apps/eb/gaussian/09-Intel-SSE4_2
- Gaussian09: the version for AMD processors is installed in /apps/eb/gaussian/09-AMD-SSE4a
- Gaussian16: the version with AVX2 optimization is installed in /apps/eb/gaussian/16-AVX2
- Gaussian16: the version with AVX optimization is installed in /apps/eb/gaussian/16-AVX
- Gaussian16: the version with SSE4 optimization is installed in /apps/eb/gaussian/16-SSE4
System
64-bit Linux