ORCA-Sapelo2

From Research Computing Center Wiki
Revision as of 22:56, 19 August 2021

Category

Chemistry

Program On

Sapelo2

Version

3.0.3, 4.2.1

Author / Distributor

F. Neese and other contributors; see https://orcaforum.cec.mpg.de/

Description

From https://orcaforum.cec.mpg.de/: "The program ORCA is a modern electronic structure program package written by F. Neese, with contributions from many current and former coworkers and several collaborating groups. The binaries of ORCA are available free of charge for academic users for a variety of platforms. ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects."

See also https://orcaforum.cec.mpg.de/license.html: "If results obtained with ORCA package are published in scientific literature, you will reference the program as F. Neese: The ORCA program system (WIREs Comput Mol Sci 2012, 2: 73-78). Using specific methods included in ORCA may require citing additional articles, as described in the manual."

Running Program

Also refer to Running Jobs on Sapelo2.

For more information on Environment Modules on Sapelo2 please see the Lmod page.


Version 4.2.1

Pre-compiled binaries for 64-bit Linux that use OpenMPI 3.1.4 libraries are installed in /apps/eb/ORCA/4.2.1-gompi-2019b, a directory that is not on users' default path.

To use this version of ORCA, first load the module with

ml ORCA/4.2.1-gompi-2019b

Version 3.0.3

Pre-compiled binaries for 64-bit Linux that use OpenMPI 1.6.5 libraries are installed in /apps/eb/ORCA/3_0_3-linux_x86-64, a directory that is not on users' default path.

To use this version of ORCA, first load the module with

ml ORCA/3_0_3-linux_x86-64


Running ORCA in parallel:

If you are going to use up to 8 MPI processes, then in your input file (called example.inp here) you can use the PALX keyword to specify the number of MPI processes. In the example.inp file below, 4 MPI processes are requested:

! BP86 def2-SVP Opt PAL4
# BP86 is the method (DFT functional), def2-SVP is the basis set, and Opt is
# the job type (geometry optimization). Order of the keywords is not important.

*xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 1.0
*

To use more than 8 MPI processes, please do not set the PALX header option, but use the following line in the input file

%pal nprocs X end

where X should be replaced by an integer. For example, to request 12 MPI processes, add the line

%pal nprocs 12 end
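For clarity, here is what a complete input file looks like when the %pal block is used instead of the PALX keyword. The molecule and method are the same illustrative H2 example shown above; this is a sketch of the layout, not an officially documented template:

! BP86 def2-SVP Opt
%pal nprocs 12 end

*xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 1.0
*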

Example shell script (sub.sh) to run ORCA 4.2.1 in parallel on the batch queue, using 64 MPI processes:

#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=testorca
#SBATCH --nodes=2
#SBATCH --ntasks=64
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=48:00:00
#SBATCH --mem-per-cpu=1g
#SBATCH --mail-user=username@uga.edu
#SBATCH --mail-type=ALL
#SBATCH --constraint=EDR

cd $SLURM_SUBMIT_DIR

ml ORCA/4.2.1-gompi-2019b

echo
echo "Job ID: $SLURM_JOB_ID"
echo "Partition:  $SLURM_JOB_PARTITION"
echo "Cores:  $SLURM_NTASKS"
echo "Nodes:  $SLURM_NODELIST"
echo "mpirun: $(which mpirun)"
echo

/apps/eb/ORCA/4.2.1-gompi-2019b/orca example.inp > example.${SLURM_JOB_ID}.log

Note that the orca binary needs to be invoked with its full path; otherwise, the program will fail when it invokes other binaries.

In the sample submission script, example.inp is the name of your input parameter file, and the stdout of the program will be saved in a file called example.${SLURM_JOB_ID}.log, where ${SLURM_JOB_ID} will be replaced by the job id number. Other parameters of the job, such as the maximum wall clock time, maximum memory, the number of nodes and cores per node, the job name, and the email address, need to be modified appropriately as well. Note that the total number of cores requested needs to match the number of MPI processes specified in the input parameter file (the X in PALX, or the number X in the %pal nprocs X end line).
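Since a mismatch between the Slurm allocation and the ORCA process count is an easy mistake to make, a small sanity check can be added to the submission script before the orca line. The helper below is a sketch, not part of the documented workflow; the function name and the example.inp file it creates are illustrative:

```shell
#!/bin/bash
# Hypothetical sanity check: read the MPI process count from an ORCA input
# file so it can be compared against the core count Slurm granted the job.
nprocs_from_input() {
    local inp=$1 n
    # First look for a "%pal nprocs X end" block ...
    n=$(grep -oiE '%pal[[:space:]]+nprocs[[:space:]]+[0-9]+' "$inp" | grep -oE '[0-9]+')
    # ... then fall back to a PALX keyword on the "!" line; default is 1.
    [ -z "$n" ] && n=$(grep -oE 'PAL[0-9]+' "$inp" | grep -oE '[0-9]+')
    echo "${n:-1}"
}

# Illustrative input file (same content as the 12-process example above)
cat > example.inp <<'EOF'
! BP86 def2-SVP Opt
%pal nprocs 12 end
*xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 1.0
*
EOF

n=$(nprocs_from_input example.inp)
echo "$n"   # prints 12
```

Inside a job script, one would then compare $n to $SLURM_NTASKS and abort on a mismatch, e.g. [ "$n" -eq "$SLURM_NTASKS" ] || exit 1.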

Example shell script (sub.sh) to run ORCA 3.0.3 in parallel on the batch queue, using 64 MPI processes:

#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=testorca
#SBATCH --nodes=2
#SBATCH --ntasks=64
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=48:00:00
#SBATCH --mem-per-cpu=1g
#SBATCH --mail-user=username@uga.edu
#SBATCH --mail-type=ALL
#SBATCH --constraint=EDR

cd $SLURM_SUBMIT_DIR

ml ORCA/3_0_3-linux_x86-64

/apps/eb/ORCA/3_0_3-linux_x86-64/orca example.inp > example.${SLURM_JOB_ID}.log

Note that the orca binary needs to be invoked with the full path, otherwise the program will fail when it invokes other binaries.


Sample job submission command:

sbatch sub.sh

Documentation

Please see links from https://orcaforum.cec.mpg.de/

A user manual is available at https://orcaforum.cec.mpg.de/OrcaManual.pdf

Installation

  • Version 4.2.1: Pre-compiled binaries for 64-bit Linux that use OpenMPI 3.1.4 libraries are installed in /apps/eb/ORCA/4.2.1-gompi-2019b
  • Version 3.0.3: Pre-compiled binaries for 64-bit Linux that use OpenMPI 1.6.5 libraries are installed in /apps/eb/ORCA/3_0_3-linux_x86-64

System

64-bit Linux