Sample batch job submission scripts on zcluster

To run a serial job

#!/bin/bash
cd working_directory
time ./myprog < myin > myout

Note that the 'time' command included in the sample script above is optional (it measures how long the executable takes to run). Redirecting a standard input file (myin) and a standard output file (myout) is only necessary if your executable reads from standard input and writes to standard output.
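
If your executable takes no standard input and writes its results directly to files, a minimal version of the script (a sketch using the same placeholder name myprog) could simply be:

#!/bin/bash
# Minimal serial job: no 'time' command and no redirection,
# for an executable that does not read standard input
cd working_directory
./myprog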

You can specify the working_directory as in the example below:

#!/bin/bash
cd ${HOME}/projectA
time ./myprog < myin > myout

If the job is submitted from within the working directory, you can use the following sample script:

#!/bin/bash
cd `pwd`
time ./myprog < myin > myout

To run R

Sample script to run an R program called program.R:

#!/bin/bash
cd `pwd`
time /usr/local/R/2.15.2/bin/R CMD BATCH program.R

To run Matlab

Sample script to run a Matlab program called plotsin.m:

#!/bin/bash
cd `pwd`
matlab -nodisplay < plotsin.m > matlab.out
exit

Parallel Jobs using MPI libraries

Several MPI libraries are installed on zcluster. The sample scripts below use some of these MPI libraries; for other versions, you need to change the path to mpirun appropriately. For a list of all MPI libraries installed on zcluster, please see Code Compilation on zcluster.

In the examples below, the executable is called myprog. The second line of each script, namely cd working_directory or cd ${HOME}/subdirectory, can be replaced by cd `pwd` if the job is submitted from within the working directory.

To run a parallel MPI job using MPICH/PGI (e.g. script name submpi.sh):

#!/bin/bash
cd working_directory
echo "Got $NSLOTS processors."
echo "Machines:"
cat $TMPDIR/machines
/usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./myprog

Note that lines 3, 4, and 5 in this script are optional. MPICH jobs executed with mpirun must use the -machinefile option as shown in this example; otherwise your MPI job will not use the processors assigned to it by the queueing system.
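
For reference, the same MPICH/PGI job with the optional lines removed (a sketch; the mpirun path is the one shown above) reduces to:

#!/bin/bash
cd working_directory
# -machinefile is still required so that MPICH runs on the processors assigned by the queueing system
/usr/local/pgi/linux86-64/2012/mpi/mpich/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./myprog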

To run a parallel MPI job using the default MPICH2 (PGI) compilers, for example if you compiled the code with mpicc or mpif90, etc., without specifying the full path (e.g. script name submpich2.sh):

#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/mpich2/1.4.1p1/pgi123/lib:${LD_LIBRARY_PATH}
mpirun -np $NSLOTS ./myprog

To run a parallel MPI job using MPICH2 built with the GNU 4.4.4 compilers (e.g. script name submpich2.sh):

#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/mpich2/1.4.1p1/gcc_4.4.4/lib:${LD_LIBRARY_PATH}
/usr/local/mpich2/1.4.1p1/gcc_4.4.4/bin/mpirun -np $NSLOTS ./myprog

Note that with MPICH2 it is not necessary to include the -machinefile option when submitting the job to a batch queue. When using other MPICH2 compilations, such as for PGI compilers, users need to adjust the path to the libraries and to mpirun appropriately in the script.
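
As an illustration, an MPICH2 (PGI) script that calls mpirun by its full path might look like the following sketch; the bin directory under /usr/local/mpich2/1.4.1p1/pgi123 is assumed to mirror the lib directory shown above, so verify the path on your system:

#!/bin/bash
cd working_directory
# Assumed location of the PGI build of MPICH2; adjust both paths to match your installation
export LD_LIBRARY_PATH=/usr/local/mpich2/1.4.1p1/pgi123/lib:${LD_LIBRARY_PATH}
/usr/local/mpich2/1.4.1p1/pgi123/bin/mpirun -np $NSLOTS ./myprog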

To run a parallel MPI job using OpenMPI 1.4.4 built with the GNU 4.1.2 compilers (e.g. script name subompi.sh):

#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/openmpi/1.4.4/gcc412/lib:${LD_LIBRARY_PATH}
/usr/local/openmpi/1.4.4/gcc412/bin/mpirun --mca btl "tcp,self" -np $NSLOTS ./myprog

To run a parallel MPI job using OpenMPI 1.6.2 built with the GNU 4.7.1 compilers (e.g. script name subompi.sh):

#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/gcc/4.7.1/lib64:/usr/local/openmpi/1.6.2/gcc471/lib:${LD_LIBRARY_PATH}
/usr/local/openmpi/1.6.2/gcc471/bin/mpirun --mca btl "tcp,self" -np $NSLOTS ./myprog

Note that with OpenMPI you can use the mpirun command and there is no need to include the -machinefile option. When using other OpenMPI compilations, such as the one for PGI compilers, users need to adjust the path to the libraries and to mpirun appropriately in the script. To use OpenMPI over Infiniband, remove the mpirun option

--mca btl "tcp,self"

from the script.
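
For example, the OpenMPI 1.6.2 script above, adapted to run over Infiniband by dropping that option, becomes (a sketch based on the paths shown above):

#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/gcc/4.7.1/lib64:/usr/local/openmpi/1.6.2/gcc471/lib:${LD_LIBRARY_PATH}
# Without the --mca btl "tcp,self" option, OpenMPI is free to use the Infiniband transport
/usr/local/openmpi/1.6.2/gcc471/bin/mpirun -np $NSLOTS ./myprog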

To run a parallel MPI job using MVAPICH2/GNU over Infiniband (e.g. script name submvapich2.sh):

#!/bin/bash
cd working_directory
export LD_LIBRARY_PATH=/usr/local/mvapich2/1.8/gcc444/lib:${LD_LIBRARY_PATH}
/usr/local/mvapich2/1.8/gcc444/bin/mpirun -np $NSLOTS ./myprog

To run a shared memory job using OpenMP

Sample script to run a program that uses 4 OpenMP threads:

#!/bin/bash
cd working_directory
export OMP_NUM_THREADS=4
time ./myprog
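
If the job is submitted to a parallel environment that sets $NSLOTS (as in the MPI examples above), the thread count can be taken from that variable instead of being hard-coded; this is a sketch, assuming $NSLOTS is defined for your job:

#!/bin/bash
cd working_directory
# Assumes the queueing system sets $NSLOTS to the number of slots granted to the job
export OMP_NUM_THREADS=$NSLOTS
time ./myprog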