MATLAB-Sapelo2

Category

Other, Programming, Graphics

Program On

Sapelo2

Version

R2023a (9.14.0.2286388)

Author / Distributor

The MathWorks (see http://www.mathworks.com)

Description

MATLAB is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation.

Running Program

Also refer to Running Jobs on Sapelo2

For more information on Environment Modules on Sapelo2 please see the Lmod page.

  • Version R2023a is installed in /apps/gb/MATLAB/R2023a. In order to use this version of MATLAB, please first load the matlab/R2023a module with
ml matlab/R2023a


Running MATLAB interactively

Please do not run MATLAB interactively on the Sapelo2 login node; instead, please run it using the interactive partition (without a GUI) or using OnDemand (with a GUI).

The best way to run MATLAB interactively with a graphical front-end (GUI) is to run the MATLAB interactive application in the OnDemand interface to Sapelo2.

To run MATLAB interactively without a GUI, please first start an interactive job with the interact command.

For example:


1. To run without the graphical front-end on a regular compute node:

interact

ml matlab/R2023a

matlab -nodisplay

2. To run without the graphical front-end on a node in a different partition, e.g. in abc_p, or to request more resources (cores or memory), use, for example:

interact -p abc_p -c 4 --mem 20gb 

ml matlab/R2023a

matlab -nodisplay

For more information on how to run interactive jobs, please see the interactive partition page.

Running MATLAB as a batch job

MATLAB can also be run as a batch job, for example in the batch partition. To do this, first create a MATLAB M-file with the MATLAB commands. Then use a job submission file to submit this job to the batch partition.

Sample MATLAB M-file (matrixinv.m):

n = 500; 
Q = orth(randn(n,n));
d = logspace(0,-10,n);
A = Q*diag(d)*Q';
x = randn(n,1);
b = A*x;
tic, z = A\b, toc
err = norm(z-x)
res = norm(A*z-b)

Sample job submission script file (sub.sh) to run a serial (single-core) matlab program:

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=1              
#SBATCH --mem=5gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

matlab -nodisplay < matrixinv.m 

The parameters of the job, such as the maximum wall clock time, maximum memory, the number of cores per task, and the job name, need to be modified appropriately; additional directives, such as an email address for notifications, can also be added.
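
For example, email notifications can be requested by adding the standard Slurm mail directives to the script; --mail-type=END,FAIL sends a message when the job ends or fails (the address below is a placeholder, replace it with your own):

#SBATCH --mail-user=user-id@uga.edu
#SBATCH --mail-type=END,FAIL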

If you are using functions, you might have to use a sample script like this:

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=1              
#SBATCH --mem=5gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

echo functionname | matlab -nodisplay -nosplash 
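
Here functionname is a placeholder for the name of your own function, defined in a file functionname.m located in the job's working directory (or elsewhere on the MATLAB path). As a minimal, hypothetical illustration, such a file could look like:

function functionname
% functionname - hypothetical example function executed in batch mode
n = 500;
A = rand(n);                                   % generate a random n-by-n matrix
fprintf('Largest singular value: %g\n', max(svd(A)));
end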

To submit either of the two sample submission scripts (sub.sh) to the queue:

sbatch sub.sh
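
After submission, the status of the job can be checked with standard Slurm commands, for example (MyID and the job ID below are placeholders):

squeue -u MyID
sacct -j 1234567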

Parallel Computing - Using multiple CPU cores on a single compute node

The Parallel Computing Toolbox allows a user to use multiple CPU cores. If you want to use cores on a single node for the job, you can use the default 'local' cluster profile. Here is a simple example using the parfor loop with 24 MATLAB workers.

Sample code psine.m

p = parcluster('local');    % use the local (single-node) cluster profile
p.NumWorkers = 25;          % should match --cpus-per-task in the job script
ppool = parpool(p, 24);     % open a pool of 24 workers

parfor i = 1:1024
  A(i) = sin(i*2*pi/1024);
end

delete(gcp)                 % shut down the parallel pool

Sample job submission script sub.sh

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --nodes=1
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=25            
#SBATCH --mem=50gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

matlab -nodisplay < psine.m 

Note that the number that follows --cpus-per-task needs to match the number of MATLAB workers defined with NumWorkers in the MATLAB code (here, 25).

Sample job submission command

sbatch sub.sh


Parallel Computing - Using cores from one or more compute nodes

In order to use the Parallel Computing Toolbox to run MATLAB using cores on a single node or on multiple nodes, you need to configure MATLAB and create a new cluster profile. To do this, please log in to Sapelo2, start an interactive session with interact, load the matlab module you want to use, and start MATLAB. For example, to configure this for matlab/R2023a:

interact

ml matlab/R2023a

matlab -nodisplay

In MATLAB, call the configCluster function:

>> configCluster

This function only needs to be called once per version of MATLAB. Please be aware that running configCluster more than once per version will reset your cluster profile back to default settings and erase any saved modifications to the profile. If calling the configCluster function returns the error Unrecognized function or variable 'configCluster', then run the following command in an interactive MATLAB session:

>> rehash toolboxcache

Sample MATLAB code that can be run in a Slurm batch partition and use more than one node (psine.m)

c = parcluster;                              % handle to the Sapelo2 cluster profile
c.AdditionalProperties.QueueName = 'batch';  % partition for the parallel job
c.AdditionalProperties.WallTime = '24:00:00';
c.AdditionalProperties.MemUsage = '5G';      % memory per CPU core
c.saveProfile

p = c.parpool(5);                            % pool of 5 workers, possibly across nodes

parfor i=1:1024
  A(i) = sin(i*2*pi/1024);
end
p.delete

Note that the resources you want this parallel job to use (e.g. partition name, the walltime limit, and the memory per CPU) need to be specified in this MATLAB code. This parallel job will automatically request as many cores as needed for the parpool defined in the code, and the cores can be allocated on more than one node, if needed. For more details on how to request resources in the MATLAB code, please see the Configuring Jobs from within MATLAB section below.

Sample job submission script sub.sh:

#!/bin/bash
#SBATCH --job-name=myjobname 
#SBATCH --partition=batch  
#SBATCH --nodes=1
#SBATCH --ntasks=1    
#SBATCH --cpus-per-task=1           
#SBATCH --mem-per-cpu=5gb                  
#SBATCH --time=48:00:00               
#SBATCH --output=%x.%j.out    
#SBATCH --error=%x.%j.err    
cd $SLURM_SUBMIT_DIR

ml matlab/R2023a

matlab -nodisplay < psine.m 

Note that this job only needs to request one core. When this job runs, MATLAB will submit another parallel job using the resources specified in the psine.m code. It is important that this script (sub.sh) uses the --mem-per-cpu option to request memory per CPU, and not the total memory with --mem.

Sample job submission command

sbatch sub.sh

Using a MATLAB client installed on your local machine to run jobs on the cluster

In order to use a MATLAB client installed on your local machine and have it offload work onto the cluster (Sapelo2), you will need to first install some cluster integration files on your local machine. Please note that you need to have the same MATLAB version installed on your local machine as the version you will use on the cluster.

The Sapelo2 MATLAB support package can be found on Sapelo2, at the following location:

For R2023a

Windows: /apps/gb/MATLAB/UGA.nonshared.R2023a.zip

Linux/macOS: /apps/gb/MATLAB/UGA.nonshared.R2023a.tar.gz


Download the appropriate archive file to your local machine and start MATLAB there. The archive file should be untarred/unzipped in the location returned by calling

>> userpath
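
As an illustration, on a Linux or macOS local machine the download and extraction might look like the following sketch (the transfer hostname, MyID, and the ~/Documents/MATLAB destination are assumptions; use the hostname you normally use to copy files from Sapelo2 and the folder that userpath actually reports):

# copy the support package from Sapelo2 to the local machine
scp MyID@xfer.gacrc.uga.edu:/apps/gb/MATLAB/UGA.nonshared.R2023a.tar.gz .

# extract it into the folder reported by MATLAB's userpath command
tar -xzf UGA.nonshared.R2023a.tar.gz -C ~/Documents/MATLAB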

Configure MATLAB to run parallel jobs on your cluster by calling configCluster, which only needs to be called once per version of MATLAB.

>> configCluster

Submission to the remote cluster (Sapelo2) requires SSH credentials, and you will need to configure key-based SSH. For information on how to set up key-based SSH, please see https://www.ssh.com/academy/ssh/keygen. If your local machine is a Mac, please generate the SSH key with ssh-keygen -t rsa -m PEM. Once you have key-based SSH set up, you can submit jobs to the cluster from within the MATLAB client on your local machine. You will be prompted for your SSH username and your identity file (private key). The username and the location of the private key will be stored in MATLAB for future sessions.
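
As a sketch, key-based SSH could be set up with commands along these lines (MyID and the cluster hostname shown are placeholders/assumptions):

# generate an RSA key pair; -m PEM is required if your local machine is a Mac
ssh-keygen -t rsa -m PEM

# install the public key on the cluster (appends it to ~/.ssh/authorized_keys)
ssh-copy-id MyID@sapelo2.gacrc.uga.edu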

Jobs will now default to the cluster rather than submit to the local machine.

NOTE: If you would like to submit to the local machine then run the following command:

>> % Get a handle to the local resources
>> c = parcluster('local');

Configuring Jobs from within MATLAB on the cluster or on your local machine

Prior to having MATLAB submit a job to the cluster, we can specify various parameters to pass to our jobs, such as partition, e-mail, walltime, etc. The WallTime, MemUsage, and QueueName (partition name) fields are mandatory in order to submit a job.

>> % Get a handle to the cluster
>> c = parcluster;

[REQUIRED]

>> % Specify memory to use for MATLAB jobs, per core
>> c.AdditionalProperties.MemUsage = '5G';

>> % Specify a queue to use for MATLAB jobs				
>> c.AdditionalProperties.QueueName = 'partition-name';

>> % Specify the walltime (e.g. 5 hours)
>> c.AdditionalProperties.WallTime = '05:00:00';

[OPTIONAL]

>> % Specify an account to use for MATLAB jobs
>> c.AdditionalProperties.AccountName = 'account-name';

>> % Specify e-mail address to receive notifications about your job
>> c.AdditionalProperties.EmailAddress = 'user-id@uga.edu';

>> % Specify constraint for your job
>> c.AdditionalProperties.Constraint = 'Intel';

>> % Specify number of GPUs
>> c.AdditionalProperties.GpusPerNode = 1;

>> % Specify GPU type
>> c.AdditionalProperties.GpuType = 'K40';

Save the profile after modifying AdditionalProperties so that the above changes persist between MATLAB sessions.

>> c.saveProfile

To see the values of the current configuration options, display AdditionalProperties.

>> % To view current properties
>> c.AdditionalProperties

Unset a value when no longer needed.

>> % Turn off email notifications 
>> c.AdditionalProperties.EmailAddress = '';
>> c.saveProfile

Submitting Independent Batch Jobs from within locally installed MATLAB

Use the batch command to submit asynchronous jobs to the cluster. Users can run either a single function or a MATLAB script as a batch job. You must make sure the script is in the MATLAB path. If running MATLAB locally, the userpath command displays the location where you can save your .m scripts. See the MATLAB documentation for userpath if you would like to add directories to that path. Since your local file system is different from the worker file system (Sapelo2), if you are submitting jobs to Sapelo2 from local MATLAB you also have to set the 'AutoAddClientPath' option to false. The batch command returns a job object which is used to access the output of the submitted job. See the MATLAB documentation for more help on batch.

Shown below is an example of submitting a batch job from local MATLAB which runs the script testscript.m. Notice that the .m extension is left off.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit a job that runs the script testscript.m on the cluster
>> j = batch('testscript','AutoAddClientPath',false);

>> % Query job for state
>> j.State

>> % If state is finished, fetch the results
>> load(j)

>> % Delete the job after results are no longer needed
>> j.delete

You can also run functions with multiple arguments using the batch command. For a function to be available it must be in your MATLAB path. Built-in functions will automatically be included, but you will have to save your own functions in your userpath. When running functions, you must specify that you are using the cluster object established with the parcluster command, so use c.batch instead of batch. The example below shows a job which prints the working directory. When retrieving the data, use the command fetchOutputs instead of load.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit job to query where MATLAB is running on the cluster
>> j = c.batch(@pwd, 1,'AutoAddClientPath',false);

>> % Query job for state
>> j.State

>> j.fetchOutputs{:}

ans =

    '/home/keekov'

To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object. The cluster object stores an array of jobs that were run, are running, or are queued to run. This allows us to fetch the results of completed jobs. Retrieve and view the list of jobs as shown below. The Jobs property also shows the job ID and state of previously submitted jobs.

>> c = parcluster;
>> jobs = c.Jobs;

Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).
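
For example, a result file written by the job could be copied back with scp (a sketch; the hostname, MyID, and the file path are placeholders):

scp MyID@xfer.gacrc.uga.edu:/scratch/MyID/results.mat .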

To fetch the outputs of a previously completed job, use the fetchOutputs or load command. This job used the function method, and so data is retrieved with fetchOutputs instead of load.

>> % Get a handle to the job with ID 2
>> j2 = c.Jobs(2);
>> % Fetch results for job with ID 2
>> j2.fetchOutputs{:}

Submitting Parallel Batch Jobs from within locally installed MATLAB

Users can also submit parallel workflows with the batch command. Let’s use the following example for a parallel job, which is saved as parallel_example.m.

function t = parallel_example(iter)

if nargin==0, iter = 8; end

disp('Start sim')

t0 = tic;
parfor idx = 1:iter
     A(idx) = idx;
     pause(2)
end
t = toc(t0);

disp('Sim Completed')

This time, when we use the c.batch command to run a parallel job, we will also specify a MATLAB Pool.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit a batch pool job using 4 workers for 16 simulations
>> j = c.batch(@parallel_example, 1, {16}, 'Pool',4, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);

>> % View current job status
>> j.State

>> % Fetch the results after a finished state is retrieved
>> j.fetchOutputs{:}
ans = 
	8.8872

The job ran in 8.89 seconds using four workers. Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers. For example, a job that needs eight workers will consume nine CPU cores.

We will run the same simulation but increase the Pool size. This time, to retrieve the results later, we will keep track of the job ID.

NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.

 
>> % Get a handle to the cluster
>> c = parcluster;

>> % Submit a batch pool job using 8 workers for 16 simulations
>> j = c.batch(@parallel_example, 1, {16}, 'Pool', 8, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);

>> % Get the job ID
>> id = j.ID
id =
	4

>> % Clear j from workspace (as though we quit MATLAB)
>> clear j

Once we have a handle to the cluster, we will call the findJob method to search for the job with the specified job ID.

>> % Get a handle to the cluster
>> c = parcluster;

>> % Find the old job
>> j = c.findJob('ID', 4);

>> % Retrieve the state of the job
>> j.State
ans = 
    finished

>> % Fetch the results
>> j.fetchOutputs{:}
ans = 
    4.7270

The job now runs in 4.73 seconds using eight workers. Run the code with different numbers of workers to determine the ideal number to use.
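
For instance, one could submit the same simulation with a few different pool sizes and compare the run times (a sketch reusing parallel_example from above; the pool sizes chosen here are arbitrary):

>> c = parcluster;
>> for w = [2 4 8 16]
       j = c.batch(@parallel_example, 1, {16}, 'Pool', w, ...
           'CurrentFolder','.', 'AutoAddClientPath',false);
       j.wait;                                  % block until this job finishes
       fprintf('Pool of %d workers: %.2f s\n', w, j.fetchOutputs{1});
       j.delete
   end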

Parallel Interactive Jobs using MATLAB on the cluster

To run an interactive pool job on the cluster, continue to use parpool as you've done before. Start an interactive session with interact (or with srun), load the matlab module, and start MATLAB with matlab -nodisplay. In MATLAB, run

>> % Get a handle to the cluster
>> c = parcluster;

>> % Open a pool of 64 workers on the cluster
>> p = c.parpool(64);

Rather than running locally on your machine, the pool can now run across multiple nodes on the cluster.

>> % Run a parfor over 1000 iterations
>> parfor idx = 1:1000
      a(idx) = …
   end

Once we are done with the pool, delete it.

>> % Delete the pool
>> p.delete

Debugging a MATLAB job

If a serial job produces an error, call the getDebugLog method to view the error log file. When submitting independent jobs with multiple tasks, specify the task number.

>> c.getDebugLog(j.Tasks(3))

For Pool jobs, only specify the job object.

>> c.getDebugLog(j)

When troubleshooting a job, the cluster admin may request the scheduler ID of the job. This can be retrieved by calling schedID:

>> schedID(j)
ans = 
    25539

Documentation

MATLAB documentation is available at https://www.mathworks.com/help/matlab/

Some documentation and sample files are available on Sapelo2, in /apps/gb/MATLAB/R2023a/help

To learn more about the MATLAB Parallel Computing Toolbox, see the Parallel Computing Toolbox documentation at https://www.mathworks.com/help/parallel-computing/

Installation

Version R2023a

Installed in /apps/gb/MATLAB/R2023a.

Available toolboxes: Almost all toolboxes for which UGA has a license. For details, see the directories in /apps/gb/MATLAB/R2023a/toolbox


System

64-bit Linux