ORCA-Sapelo2

From Research Computing Center Wiki

=== Category ===

Chemistry

=== Program On ===

Sapelo2

=== Version ===

4.2.1, 5.0.4

=== Author / Distributor ===

F. Neese and other contributors; please see https://orcaforum.cec.mpg.de/

=== Description ===

From https://orcaforum.cec.mpg.de/: "The program ORCA is a modern electronic structure program package written by F. Neese, with contributions from many current and former coworkers and several collaborating groups. The binaries of ORCA are available free of charge for academic users for a variety of platforms. ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects."

See also https://orcaforum.cec.mpg.de/license.html: "If results obtained with ORCA package are published in scientific literature, you will reference the program as F. Neese: The ORCA program system (WIREs Comput Mol Sci 2012, 2: 73-78). Using specific methods included in ORCA may require citing additional articles, as described in the manual."

=== Running Program ===

Also refer to [[Running Jobs on Sapelo2]].

For more information on Environment Modules on Sapelo2 please see the [[Lmod]] page.

'''Version 5.0.4'''

Pre-compiled binaries for 64-bit Linux that use OpenMPI 4.1.4 libraries are installed in /apps/eb/ORCA/5.0.4-gompi-2022a, a directory that is not on users' default path.

To use this version of ORCA, first load the module with

<pre class="gcommand">
ml ORCA/5.0.4-gompi-2022a
</pre>

'''Version 4.2.1'''

Pre-compiled binaries for 64-bit Linux that use OpenMPI 3.1.4 libraries are installed in /apps/eb/ORCA/4.2.1-gompi-2019b, a directory that is not on users' default path.

To use this version of ORCA, first load the module with

<pre class="gcommand">
ml ORCA/4.2.1-gompi-2019b
</pre>
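After loading the module for either version, you can verify the environment interactively before submitting a job; for example, the commands below list the loaded modules and print the installation directory that the $EBROOTORCA variable (set by the module and used in the job scripts below) points to.
<pre class="gcommand">
module list          # confirm that the ORCA module (and its MPI dependency) is loaded
echo $EBROOTORCA     # installation directory of the loaded ORCA version
</pre>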


'''Running ORCA in parallel:'''

If you are going to use up to 8 MPI processes, then in your input file, called example.inp here, you can use the PAL''X'' keyword to specify the number of MPI processes to use. In the example.inp file below, 4 MPI processes are requested (PAL4):

<pre class="gscript">
! BP86 def2-SVP Opt PAL4
# BP86 is here the method (DFT functional), def2-SVP is the basis set, and Opt is
# the job type (geometry optimization). The order of the keywords is not important.

*xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 1.0
*
</pre>

To use more than 8 MPI processes, please do not use the PAL''X'' keyword; instead, add the following line to the input file

<pre class="gscript">
%pal nprocs X end
</pre>

where X should be replaced by an integer number. For example, to request 12 MPI processes, add the line

<pre class="gscript">
%pal nprocs 12 end
</pre>
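Putting this together, a complete input file requesting 12 MPI processes for the same small geometry optimization shown above could look like this (the %pal block replaces the PAL''X'' keyword):
<pre class="gscript">
! BP86 def2-SVP Opt
%pal nprocs 12 end

*xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 1.0
*
</pre>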

Example shell script (sub.sh) to run ORCA 4.2.1 in parallel on the batch queue, using 64 MPI processes:

<pre class="gscript">
#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=testorca
#SBATCH --nodes=2
#SBATCH --ntasks=64
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=48:00:00
#SBATCH --mem-per-cpu=1g

cd $SLURM_SUBMIT_DIR

ml ORCA/4.2.1-gompi-2019b

$EBROOTORCA/bin/orca example.inp > example.${SLURM_JOB_ID}.log
</pre>

Note that the orca binary needs to be invoked with the full path; otherwise the program will fail when it invokes other binaries.

In the sample submission script, example.inp is the name of your input parameter file, and the stdout of the program will be saved in a file called example.${SLURM_JOB_ID}.log, where ${SLURM_JOB_ID} will be replaced by the job id number. Other parameters of the job, such as the maximum wall clock time, the maximum memory, the number of nodes and cores per node, and the job name, need to be modified appropriately as well. Note that the total number of cores requested needs to match the number of MPI processes specified in the input parameter file (the ''X'' in PAL''X'' or the number X in the '''%pal nprocs X end''' line).
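For instance, with the 64-task submission scripts on this page, the input file must request the same number of MPI processes:
<pre class="gscript">
%pal nprocs 64 end
</pre>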

Example shell script (sub.sh) to run ORCA 5.0.4 in parallel on the batch queue, using 64 MPI processes:

<pre class="gscript">
#!/bin/bash
#SBATCH --partition=batch
#SBATCH --job-name=testorca
#SBATCH --nodes=2
#SBATCH --ntasks=64
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=48:00:00
#SBATCH --mem-per-cpu=1g

cd $SLURM_SUBMIT_DIR

ml ORCA/5.0.4-gompi-2022a

$EBROOTORCA/bin/orca example.inp > example.${SLURM_JOB_ID}.log
</pre>

Note that the orca binary needs to be invoked with the full path; otherwise the program will fail when it invokes other binaries.


Sample job submission command:

<pre class="gcommand">
sbatch sub.sh
</pre>
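After submission, the job can be followed with standard Slurm commands (these are general cluster commands, not specific to ORCA); for example:
<pre class="gcommand">
squeue -u $USER      # list your pending and running jobs
</pre>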

=== Documentation ===

Please see links from https://orcaforum.cec.mpg.de/

A user manual is available at https://orcaforum.cec.mpg.de/OrcaManual.pdf

=== Installation ===

*Version 4.2.1: Pre-compiled binaries for 64-bit Linux that use OpenMPI 3.1.4 libraries are installed in /apps/eb/ORCA/4.2.1-gompi-2019b
*Version 5.0.4: Pre-compiled binaries for 64-bit Linux that use OpenMPI 4.1.4 libraries are installed in /apps/eb/ORCA/5.0.4-gompi-2022a
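To check which ORCA versions are currently available as modules on Sapelo2, you can query Lmod, for example with:
<pre class="gcommand">
module spider ORCA
</pre>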

=== System ===

64-bit Linux