MATLAB-Sapelo2

Category

Other, Programming, Graphics

Program On

Sapelo2

Version

R2023a, R2023b

Author / Distributor

The MathWorks (see http://www.mathworks.com)

Description

MATLAB is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation.

Running Program

Also refer to Running Jobs on Sapelo2

For more information on Environment Modules on Sapelo2 please see the Lmod page.

  • Version R2023a is installed in /apps/gb/MATLAB/R2023a. In order to use this version of MATLAB, please first load the matlab/R2023a module with
ml matlab/R2023a
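
As a quick optional check that the expected release is on your path, you can list the loaded modules and locate the MATLAB executable (a minimal sketch; the exact binary path is an assumption based on the install location listed above):

module list
which matlab      # expected to point under /apps/gb/MATLAB/R2023a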


Running MATLAB interactively

Please do not run MATLAB interactively on the Sapelo2 login node; instead, please run it using the interactive partition (without GUI) or using OnDemand (with GUI).

The best way to run MATLAB interactively with a graphical front-end (GUI) is to run the MATLAB interactive application in the OnDemand interface to Sapelo2.

To run MATLAB interactively without a GUI, please first start an interactive job with the interact command.

For example:


1. To run without the graphical front-end on a regular compute node:

interact

ml matlab/R2023a

matlab -nodisplay

2. To run without the graphical front-end on a node in a different partition, e.g. in abc_p, or to request more resources (cores or memory), use for example

interact -p abc_p -c 4 --mem 20gb 

ml matlab/R2023a

matlab -nodisplay

For more information on how to run interactive jobs, please see the interactive partition documentation.

Running MATLAB as a batch job

MATLAB can also be run as a batch job, for example in the batch partition. To do this, first create a MATLAB M-file with the MATLAB commands. Then use a job submission file to submit this job to the batch partition.

Sample MATLAB M-file (matrixinv.m):

n = 500; 
Q = orth(randn(n,n));
d = logspace(0,-10,n);
A = Q*diag(d)*Q';
x = randn(n,1);
b = A*x;
tic, z = A\b, toc
err = norm(z-x)
res = norm(A*z-b)

Sample job submission script file (sub.sh) to run a serial (single-core) MATLAB program:

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=1              
#SBATCH --mem=5gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

matlab -nodisplay < matrixinv.m 

The parameters of the job, such as the maximum wall clock time, maximum memory, email address, the number of cores per task, and the job name, need to be modified appropriately.
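
As an optional alternative (a sketch, not the documented GACRC method), MATLAB R2019a and later also provide a -batch flag that runs a script non-interactively, implies -nodisplay, and exits with a nonzero code on error, so the last line of the script above could instead be:

matlab -batch "matrixinv"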

If you are using functions, you might have to use a sample script like this:

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=1              
#SBATCH --mem=5gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

echo functionname | matlab -nodisplay -nosplash 
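
If the function takes input arguments, another common pattern (a sketch; functionname and its arguments are placeholders) is to invoke it with the -r flag instead of piping its name in:

matlab -nodisplay -nosplash -r "functionname(arg1, arg2); exit"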

To submit either of the two sample files sub.sh to the queue:

sbatch sub.sh
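
To check on the job after submission, the standard Slurm commands can be used (a sketch; MyID and <jobID> are placeholders for your username and the job ID reported by sbatch):

squeue -u MyID
sacct -j <jobID>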

Parallel Computing - Using multiple CPU cores on a single compute node

The Parallel Computing Toolbox allows a user to use many CPU cores. If you want to use cores on a single node for the job, you can use the local cluster profile. Here is a simple example using a parfor loop with 24 MATLAB workers.

Sample code psine.m

p=parcluster('local');
p.NumWorkers=25;
ppool=parpool(p,24);

parfor i=1:1024
  A(i) = sin(i*2*pi/1024);
end
p = gcp;
delete(p)

Sample job submission script sub.sh

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --nodes=1
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=25            
#SBATCH --mem=50gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

matlab -nodisplay < psine.m 

Note that the number that follows --cpus-per-task needs to match the number of MATLAB workers defined with NumWorkers in the MATLAB code.
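
To keep the two values in sync automatically, one option (a sketch, assuming the job script sets --cpus-per-task as in the example above) is to read the allocated core count from the Slurm environment inside the MATLAB code:

nw = str2double(getenv('SLURM_CPUS_PER_TASK'));   % cores allocated by Slurm
p = parcluster('local');
p.NumWorkers = nw;
ppool = parpool(p, nw - 1);                       % e.g. 25 cores allocated, 24 workers, as above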

Sample job submission command

sbatch sub.sh


Parallel Computing - Using cores from one or more compute nodes

In order to use the Parallel Computing toolbox to run MATLAB using either cores on a single node, or to use cores on multiple nodes, you need to configure MATLAB and create a new cluster profile. To do this, please login to Sapelo2, start an interactive session with interact, load the matlab module you want to use and start matlab. For example, to configure this for matlab/R2023a:

interact

ml matlab/R2023a

matlab -nodisplay

In MATLAB, call the configCluster function:

>> configCluster

This function only needs to be called once per version of MATLAB. Please be aware that running configCluster more than once per version will reset your cluster profile back to default settings and erase any saved modifications to the profile. If calling the configCluster function returns the error Unrecognized function or variable 'configCluster', then run the following command in an interactive MATLAB session:

>>rehash toolboxcache
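
To confirm that the profile created by configCluster is now the default, you can inspect the cluster object (a sketch; the exact profile name may differ):

>> c = parcluster;
>> c.Profile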

Sample MATLAB code that can be run in a Slurm batch partition and use more than one node (psine.m)

c = parcluster;
c.AdditionalProperties.QueueName = 'batch';
c.AdditionalProperties.WallTime = '24:00:00';
c.AdditionalProperties.MemUsage = '5G';
c.saveProfile

p = c.parpool(5);

parfor i=1:1024
  A(i) = sin(i*2*pi/1024);
end
p.delete

Note that the resources you want this parallel job to use (e.g. partition name, the walltime limit, and the memory per CPU) need to be specified in this MATLAB code. This parallel job will automatically request as many cores as needed for the parpool defined in the code, and the cores can be allocated on more than one node, if needed. For more details on how to request resources in the MATLAB code, please see the Configuring Jobs from within MATLAB section below.

Sample job submission script sub.sh:

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --nodes=1
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=1           
#SBATCH --mem-per-cpu=5gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

matlab -nodisplay < psine.m 

Note that this job only needs to request one core. When this job runs, MATLAB will submit another parallel job using the resources specified in the psine.m code. It is important that this script (sub.sh) uses the --mem-per-cpu option to request memory per CPU and not the total memory with --mem.

Sample job submission command

sbatch sub.sh

Using a MATLAB client installed on your local machine to run jobs on the cluster

In order to use a MATLAB client installed on your local machine and have it offload work onto the cluster (Sapelo2), you will need to first install some cluster integration files on your local machine. Please note that the MATLAB client installed on your local machine needs to be the same version as the MATLAB version you will use on the cluster.

The Sapelo2 MATLAB support package can be found on Sapelo2, at the following location:

For R2023a

Windows: /apps/gb/MATLAB/UGA.nonshared.R2023a.zip

Linux/macOS: /apps/gb/MATLAB/UGA.nonshared.R2023a.tar.gz


Download the appropriate archive file to your local machine and start MATLAB on your local machine. The archive file should be untarred/unzipped in the location returned by calling

>> userpath
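
One way to unpack the archive into that location from within MATLAB itself is sketched below for Linux/macOS; the download path is an assumption, and on Windows the unzip function would be used with the .zip archive instead:

>> cd(userpath)
>> untar(fullfile(getenv('HOME'),'Downloads','UGA.nonshared.R2023a.tar.gz'), userpath)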

Configure MATLAB to run parallel jobs on your cluster by calling configCluster, which only needs to be called once per version of MATLAB.

>> configCluster

Submission to the remote cluster (Sapelo2) requires SSH credentials, so you will need to configure key-based SSH. For information on how to set up key-based SSH, please see https://www.ssh.com/academy/ssh/keygen. If your local machine is a Mac, please generate the SSH key with ssh-keygen -t rsa -m PEM. Once you have key-based SSH set up, you can submit jobs to the cluster from within the MATLAB client on your local machine. You will be prompted for your SSH username and your identity file (private key). The username and location of the private key will be stored in MATLAB for future sessions.
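
A minimal sketch of the key setup on a Linux/macOS local machine follows; MyID is a placeholder for your cluster username and the login hostname is assumed to be the standard Sapelo2 address:

ssh-keygen -t rsa -m PEM -f ~/.ssh/id_rsa_sapelo2
ssh-copy-id -i ~/.ssh/id_rsa_sapelo2.pub MyID@sapelo2.gacrc.uga.edu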

Jobs will now default to the cluster rather than submit to the local machine.

NOTE: If you would like to submit to the local machine then run the following command:

>> % Get a handle to the local resources
>> c = parcluster('local');

Configuring Jobs from within MATLAB on the cluster or on your local machine

Prior to having MATLAB submit a job to the cluster, we can specify various parameters to pass to our jobs, such as partition, e-mail, walltime, etc. The WallTime, MemUsage, and QueueName (partition name) fields are mandatory in order to submit a job.

>> % Get a handle to the cluster
>> c = parcluster;

[REQUIRED]

>> % Specify memory to use for MATLAB jobs, per core
>> c.AdditionalProperties.MemUsage = '5G';

>> % Specify a queue to use for MATLAB jobs				
>> c.AdditionalProperties.QueueName = 'partition-name';

>> % Specify the walltime (e.g. 5 hours)
>> c.AdditionalProperties.WallTime = '05:00:00';

[OPTIONAL]

>> % Specify an account to use for MATLAB jobs
>> c.AdditionalProperties.AccountName = 'account-name';

>> % Specify e-mail address to receive notifications about your job
>> c.AdditionalProperties.EmailAddress = 'user-id@uga.edu';

>> % Specify a constraint for your job
>> c.AdditionalProperties.Constraint = 'Intel';

>> % Specify number of GPUs
>> c.AdditionalProperties.GpusPerNode = 1;

>> % Specify GPU type
>> c.AdditionalProperties.GpuType = 'K40';

Save changes after modifying AdditionalProperties for the above changes to persist between MATLAB sessions.

>> c.saveProfile

To see the values of the current configuration options, display AdditionalProperties.

>> % To view current properties
>> c.AdditionalProperties

Unset a value when no longer needed.

>> % Turn off email notifications 
>> c.AdditionalProperties.EmailAddress = '';
>> c.saveProfile

Submitting Independent Batch Jobs from within locally installed MATLAB

Use the batch command to submit asynchronous jobs to the cluster. Users can run either a single function or a MATLAB script as a batch job. You must make sure the script is in the MATLAB path. If running MATLAB locally, the command userpath displays the location where you can save your .m scripts. See the MATLAB documentation for userpath if you would like to add directories to that path. Since your local file system is different from the worker file system (Sapelo2), if you are submitting jobs to Sapelo2 from local MATLAB you also have to set the 'AutoAddClientPath' option to false. The batch command will return a job object which is used to access the output of the submitted job. See the MATLAB documentation for more help on batch.

Shown below is an example submitting a batch job from local MATLAB which runs the script testscript.m. Notice that the .m extension is left off.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit a job that runs the script testscript.m on the cluster
>> j = batch('testscript','AutoAddClientPath',false);

>> % Query job for state
>> j.State

>> % If state is finished, fetch the results
>> load(j)

>> % Delete the job after results are no longer needed
>> j.delete
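
If you prefer to block until the job completes rather than polling j.State, the wait method can be used before loading the results (a sketch):

>> wait(j)
>> load(j)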

You can also run functions with multiple arguments using the batch command. For a function to be available, it must be in your MATLAB path. Built-in functions will automatically be included, but your own functions must be saved in your userpath. When running functions, you must specify that you are using the cluster object established with the parcluster command, so use c.batch instead of batch. The example below shows a job which prints the working directory. When retrieving the data, use the command fetchOutputs instead of load.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit job to query where MATLAB is running on the cluster
>> j = c.batch(@pwd, 1, {}, 'AutoAddClientPath', false);

>> % Query job for state
>> j.State

>> j.fetchOutputs{:}

ans =

    '/home/keekov'

To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object. The cluster object stores an array of jobs that were run, are running, or are queued to run. This allows us to fetch the results of completed jobs. Retrieve and view the list of jobs as shown below. The Jobs property also lets you see the job ID and status of previously submitted jobs.

>> c = parcluster;
>> jobs = c.Jobs;
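
findJob also accepts property filters, which can be handy for pulling out only the completed jobs (a sketch):

>> finished = c.findJob('State','finished');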

Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).

To fetch the outputs of a previously completed job, use the fetchOutputs or load command. The job in the example below used the function form of batch, so its data is retrieved with fetchOutputs instead of load.

>> % Get a handle to the job with ID 2
>> j2 = c.Jobs(2);
>> % Fetch results for job with ID 2
>> j2.fetchOutputs{:}

Submitting Parallel Batch Jobs from within locally installed MATLAB

Users can also submit parallel workflows with the batch command. Let’s use the following example for a parallel job, which is saved as parallel_example.m.

function t = parallel_example(iter)

if nargin==0, iter = 8; end

disp('Start sim')

t0 = tic;
parfor idx = 1:iter
     A(idx) = idx;
     pause(2)
end
t = toc(t0);

disp('Sim Completed')

This time, when we use the c.batch command to run a parallel job, we will also specify a MATLAB Pool.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit a batch pool job using 4 workers for 16 simulations
>> j = c.batch(@parallel_example, 1, {16}, 'Pool', 4, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);

>> % View current job status
>> j.State

>> % Fetch the results after a finished state is retrieved
>> j.fetchOutputs{:}
ans = 
	8.8872

The job ran in 8.89 seconds using four workers. Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers. For example, a job that needs eight workers will consume nine CPU cores.

We will run the same simulation but increase the Pool size. This time, to retrieve the results later, we will keep track of the job ID.

NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.

 
>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit a batch pool job using 8 workers for 16 simulations
>> j = c.batch(@parallel_example, 1, {16}, 'Pool', 8, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);

>> % Get the job ID
>> id = j.ID
id =
	4

>> % Clear j from workspace (as though we quit MATLAB)
>> clear j

Once we have a handle to the cluster, we will call the findJob method to search for the job with the specified job ID.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Find the old job
>> j = c.findJob('ID', 4);

>> % Retrieve the state of the job
>> j.State
ans = 
    finished

>> % Fetch the results
>> j.fetchOutputs{:}
ans = 
    4.7270

The job now runs in 4.73 seconds using eight workers. Run the code with different numbers of workers to determine the ideal number to use.
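
A rough way to do that sweep from the client, reusing parallel_example.m from above, is sketched below; the worker counts shown are arbitrary examples:

>> for w = [2 4 8]
       j = c.batch(@parallel_example, 1, {16}, 'Pool', w, ...
           'CurrentFolder', '.', 'AutoAddClientPath', false);
       wait(j);                                    % block until this job finishes
       fprintf('%d workers: %.2f seconds\n', w, j.fetchOutputs{:});
       j.delete
   end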

Parallel Interactive Jobs using MATLAB on the cluster

To run an interactive pool job on the cluster, continue to use parpool as you have done before. Start an interactive session with the interact command (or with srun), load the matlab module, and start MATLAB with matlab -nodisplay. In MATLAB, run

>> % Get a handle to the cluster
>> c = parcluster;

>> % Open a pool of 64 workers on the cluster
>> p = c.parpool(64);

Rather than running locally on your machine, the pool can now run across multiple nodes on the cluster.

>> % Run a parfor over 1000 iterations
>> parfor idx = 1:1000
      a(idx) = …
   end

Once we are done with the pool, delete it.

>> % Delete the pool
>> p.delete

Debugging a MATLAB job

If a serial job produces an error, call the getDebugLog method to view the error log file. When submitting independent jobs with multiple tasks, specify the task number.

>> c.getDebugLog(j.Tasks(3))

For Pool jobs, only specify the job object.

>> c.getDebugLog(j)

When troubleshooting a job, the cluster admin may request the scheduler ID of the job. This can be obtained by calling schedID:

>> schedID(j)
ans = 
    25539

Documentation

MATLAB documentation is available at https://www.mathworks.com/help/matlab/

Some documentation and sample files are available on Sapelo2, in /apps/gb/MATLAB/R2023a/help

To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:

Parallel Computing Coding Examples: http://www.mathworks.com/help/parallel-computing/examples.html
Parallel Computing Documentation: http://www.mathworks.com/help/distcomp/index.html
Parallel Computing Overview: http://www.mathworks.com/products/parallel-computing/index.html

Installation

Version 2023a

Installed in /apps/gb/MATLAB/R2023a.

Available toolboxes: Almost all toolboxes for which UGA has a license. For details, see the directories in /apps/gb/MATLAB/R2023a/toolbox
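
From within a MATLAB session on the cluster you can also list the toolboxes visible to your installation with the ver command (a sketch):

>> ver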


System

64-bit Linux