Sample Scripts

From Research Computing Center Wiki

[[Category:sapelo2]]

See also: [[Sample batch job submission scripts on Sapelo2]] and [[Sample batch job submission scripts on the teaching cluster]]

===Sample batch job submission scripts on zcluster===

====To run a serial job====
<pre class="gscript">
#!/bin/bash
cd working_directory
time ./myprog < myin > myout
</pre>
 
 
 
Note that the 'time' command included in the sample script above is optional (it measures how long the executable takes to run). Specifying a standard input file (myin) and a standard output file (myout) is only necessary if your executable reads from standard input and writes to standard output.
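The redirection pattern can be tried out with any standard utility standing in for myprog; the sketch below uses tr as the stand-in (the file names myin/myout match the example above):

```shell
#!/bin/bash
# Create a small standard-input file for the stand-in program.
printf 'hello zcluster\n' > myin

# 'time' measures the run; '<' feeds myin to standard input and
# '>' captures standard output in myout.  'tr' stands in for ./myprog.
time tr 'a-z' 'A-Z' < myin > myout

cat myout    # HELLO ZCLUSTER
```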
 
 
 
You can specify the ''working_directory'' as in the example below:
 
<pre class="gscript">
#!/bin/bash
cd ${HOME}/projectA
time ./myprog < myin > myout
</pre>
 
 
 
If the job is submitted from within the working directory, you can use the following sample script:
 
<pre class="gscript">
#!/bin/bash
cd `pwd`
time ./myprog < myin > myout
</pre>
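This works because the backticks substitute the output of pwd, i.e. the directory the job process starts in, so the cd simply re-enters that directory. A minimal sketch:

```shell
#!/bin/bash
# Backticks run a command and substitute its output, so `pwd`
# expands to the directory the script is currently in.
start=`pwd`
cd `pwd`
# The script is still in the same directory it started in.
[ "$PWD" = "$start" ] && echo "still in $PWD"
```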
 
 
 
====To run R====
 
Sample script to run an R program called program.R:
 
<pre class="gscript">
#!/bin/bash
cd `pwd`
time /usr/local/R/2.15.2/bin/R CMD BATCH program.R
</pre>
 
 
 
====Parallel Jobs using MPI libraries====
 
 
 
Several MPI libraries are installed on zcluster. Here are sample scripts that use some versions of these MPI libraries; for other versions, change the path to mpirun appropriately. For a list of all MPI libraries installed on zcluster, please see [[Code Compilation on zcluster]].
 
 
 
In the examples below, the executable is called myprog. The second line in the scripts below, namely cd working_directory or cd ${HOME}/subdirectory, can be replaced by cd `pwd` if the job is submitted from within the working directory.
 
 
 
NOTE:
 
MPICH jobs executed with mpirun have to use the -machinefile option, as shown in the MPICH/PGI example below; otherwise your MPI job will not use the processors assigned to it by the queueing system. In contrast, when using MPICH2, OpenMPI, and MVAPICH2 it is not necessary to use the -machinefile option.
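For reference, the machinefile generated by the queueing system in $TMPDIR is just a plain-text list of the hosts allocated to the job, one line per slot (the hostnames below are made up for illustration):

```
compute-1-10.local
compute-1-10.local
compute-1-11.local
compute-1-11.local
```

A host that appears twice in the list contributes two of the $NSLOTS processes.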
 
 
 
<u>To run a parallel MPI job using MPICH/PGI</u> (e.g. script name submpi.sh):
 
 
 
<pre class="gscript">
#!/bin/bash
cd working_directory
echo "Got $NSLOTS processors."
echo "Machines:"
cat $TMPDIR/machines
/usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./myprog
</pre>
 
 
 
Note that lines 3, 4, and 5 in this script are optional.
 
 
 
<u>To run a parallel MPI job using the default MPICH2 (PGI) compilers</u>, for example if you compiled the code with mpicc, mpif90, etc. without the full path (e.g. script name submpich2.sh):
 
 
 
<pre class="gscript">
#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/mpich2/1.4.1p1/pgi123/lib:${LD_LIBRARY_PATH}
mpirun -np $NSLOTS ./myprog
</pre>
 
 
 
<u>To run a parallel MPI job using MPICH2 and e.g. GNU 4.4.4 compilers</u> (e.g. script name submpich2.sh)
 
 
 
<pre class="gscript">
#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/mpich2/1.4.1p1/gcc_4.4.4/lib:${LD_LIBRARY_PATH}
/usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin/mpirun -np $NSLOTS ./myprog
</pre>
 
 
 
Note that with MPICH2 it is not necessary to include the -machinefile option when submitting the job to a batch queue. When using other MPICH2 compilations, such as the one for the PGI compilers, adjust the paths to the libraries and to mpirun appropriately in the script.
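The export line prepends the chosen MPI library directory to the dynamic loader's search path so the matching shared libraries are found first at run time; the pattern is the same for any compilation (the directory below is the MPICH2/PGI example from above):

```shell
#!/bin/bash
# Prepend the MPI library directory to the dynamic-loader search path.
libdir=/usr/local/mpich2/1.4.1p1/pgi123/lib
export LD_LIBRARY_PATH=${libdir}:${LD_LIBRARY_PATH}

# The new directory is now searched first.
echo "${LD_LIBRARY_PATH%%:*}"    # /usr/local/mpich2/1.4.1p1/pgi123/lib
```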
 
 
 
<u>To run a parallel MPI job using OpenMPI 1.4.4 and e.g. GNU 4.1.2 compilers</u> (e.g. script name subompi.sh)
 
 
 
<pre class="gscript">
#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/openmpi/1.4.4/gcc412/lib:${LD_LIBRARY_PATH}
/usr/local/openmpi/1.4.4/gcc412/bin/mpirun --mca btl "tcp,self" -np $NSLOTS ./myprog
</pre>
 
 
 
<u>To run a parallel MPI job using OpenMPI 1.6.2 and e.g. GNU 4.7.1 compilers</u> (e.g. script name subompi.sh)
 
 
 
<pre class="gscript">
#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/gcc/4.7.1/lib64:/usr/local/openmpi/1.6.2/gcc471/lib:${LD_LIBRARY_PATH}
/usr/local/openmpi/1.6.2/gcc471/bin/mpirun --mca btl "tcp,self" -np $NSLOTS ./myprog
</pre>
 
 
 
Note that with OpenMPI you can use the mpirun command and there is no need to include the -machinefile option. When using other OpenMPI compilations, such as the one for the PGI compilers, adjust the paths to the libraries and to mpirun appropriately in the script. To use OpenMPI over Infiniband, remove the mpirun option
 
 
 
--mca btl "tcp,self"
 
 
 
from the script.
 
 
 
<u>To run a parallel MPI job using MVAPICH2/GNU over Infiniband</u> (e.g. script name submvapich2.sh)
 
 
 
<pre class="gscript">
#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/mvapich2/1.8/gcc444/lib:${LD_LIBRARY_PATH}
/usr/local/mvapich2/1.8/gcc444/bin/mpirun -np $NSLOTS ./myprog
</pre>
 

Latest revision as of 14:14, 7 January 2021