CryoSPARC-Sapelo2

Category: Engineering

Program On: Sapelo2

Version: 3.3.1

Author / Distributor: See https://guide.cryosparc.com/

Description

"CryoSPARC (Cryo-EM Single Particle Ab-Initio Reconstruction and Classification) is a state of the art HPC software solution for complete processing of single-particle cryo-electron microscopy (cryo-EM) data. CryoSPARC is useful for solving cryo-EM structures of membrane proteins, viruses, complexes, flexible molecules, small particles, phase plate data and negative stain data." For more information, please see https://guide.cryosparc.com/.

NOTE: Users must be added to the GACRC cryosparc group before they can run this software. Please fill out the GACRC General Support form to request access. We will reach out to you once we have received your request.

Configurations

Master node VM:

  • Host name: ss-cryo.gacrc.uga.edu
  • Intel Xeon processors (8 cores) and 24GB of RAM
  • MongoDB is installed.

Worker nodes:

  • Two NVIDIA Tesla K40m nodes, each with Intel Xeon processors (16 cores and 128GB of RAM) and 8 NVIDIA K40m GPU cards.
  • cryoSPARC recommends using an SSD for caching particle data. /lscratch/gacrc-cryo is set up on the worker nodes for this purpose.
  • The amount of space that cryoSPARC can use in /lscratch/gacrc-cryo is capped at 100GB.

cryoSPARC group: cryosparc

cryoSPARC service account: gacrc-cryo

  • gacrc-cryo is the service user account that runs cryoSPARC workflow jobs on behalf of all regular cryoSPARC users, on the master node and on each worker node used for computation.
  • Some tasks can only be performed by gacrc-cryo, such as starting or stopping cryosparcm on the master node, user management, and connecting or updating worker nodes to the master.
  • Regular cryoSPARC users can still run cryosparcm on the master node to check the status of the master and its database, using cryosparcm status and cryosparcm checkdb (see the example below).
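
For example, a member of the cryosparc group can run the following from the master node (ss-cryo.gacrc.uga.edu); this is a minimal sketch using only the two commands named above, and the exact output depends on the cryoSPARC version:

# check that the cryoSPARC master processes are running
cryosparcm status

# verify that the cryoSPARC database is reachable and healthy
cryosparcm checkdb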

cryoSPARC group space: /work/cryosparc/, with a per-group quota of 500GB and a maximum of 100,000 files.

There are 6 subfolders in /work/cryosparc/:

  • cryosparc_master/, cryosparc_worker/ : Master and worker installation folders
  • database/ : cryoSPARC database folder
  • users/ : cryoSPARC user project folder
  • cryosparc_cluster/ : The folder storing cluster integration scripts
  • testdataset/ : The folder storing cryoSPARC test data
  • src_v3.3.1/ : The folder storing the cryoSPARC v3.3.1 source code

Running cryoSPARC from Sapelo2

User login

Users need to establish an SSH tunnel to expose port 39000 on the master node to their local computer.

If you are using a Linux or Apple desktop or laptop, you can use the following command in a terminal to establish the SSH tunnel:

ssh -N -L 39000:128.192.75.59:39000 username@ss-cryo.gacrc.uga.edu

If you are using a Windows desktop or laptop, please download the plink program to use in place of the ssh client:

plink -ssh -N -L 39000:128.192.75.59:39000 username@ss-cryo.gacrc.uga.edu

Note: Please put plink.exe in the directory where you have a command window open.

Unless you have an SSH public key configured, you will be prompted for your MyID password and for Archpass Duo authentication. Once authentication succeeds, the session will appear to hang; leave it open, as the tunnel is now active and you are ready to access the cryoSPARC user interface.

Once you have established the SSH tunnel by running the above command, you can open a browser (e.g., Chrome) on the local machine and navigate to http://localhost:39000. The cryoSPARC user interface should be presented, showing the cryoSPARC login page.
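
If you establish the tunnel frequently, the same port forwarding can be stored in your OpenSSH client configuration instead of retyping the full command. This is an optional convenience sketch; the host alias cryo-tunnel is a placeholder and username should be replaced with your MyID:

# ~/.ssh/config on your local Linux or macOS machine
Host cryo-tunnel
    HostName ss-cryo.gacrc.uga.edu
    User username
    LocalForward 39000 128.192.75.59:39000

With this entry in place, running ssh -N cryo-tunnel establishes the same tunnel.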

Please refer to https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/accessing-cryosparc

How to run cryoSPARC workflow jobs
Project space selection

A project in cryoSPARC is a high-level container corresponding to a directory on the file system, which stores all jobs associated with that project. Each project is entirely contained within its project directory: all the jobs and their respective intermediate and output data created within the project are stored there.

  • In the cryoSPARC group space /work/cryosparc/, a default project folder is created for each cryoSPARC user at /work/cryosparc/users/<username>. Please note that this folder is not suitable for running a project with large data, since /work/cryosparc has a per-group quota of 500GB and a maximum of 100,000 files. If you want to process large EM data, we highly recommend using your own scratch space for the project.
  • You can use your scratch space to run a project with large data (recommended). Steps to set up a project folder, for example cryo_project/, in your scratch space are shown below (a consolidated example follows this list):
    1. cd /scratch/username
    2. mkdir ./cryo_project
    3. chgrp cryosparc ./cryo_project
    4. chmod g+rwx ./cryo_project
    5. chmod o+rx /scratch/username
  • gacrc-cryo is the service user account that launches and runs jobs for all regular cryoSPARC users. Steps 3, 4, and 5 above enable gacrc-cryo to read from and write to /scratch/username/cryo_project. Once a project is completed, we suggest turning off the rx permission on your scratch folder with chmod o-rx /scratch/username.
  • When you start a new project in cryoSPARC, please select the appropriate project space.
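
As a consolidated sketch, the full setup and cleanup for a scratch project folder could look like the following, using a hypothetical MyID jdoe and the folder name cryo_project/ from the steps above:

# create the project folder in your scratch space
cd /scratch/jdoe
mkdir ./cryo_project

# give the cryosparc group (and therefore gacrc-cryo) read/write access to the folder
chgrp cryosparc ./cryo_project
chmod g+rwx ./cryo_project

# allow gacrc-cryo to traverse your scratch folder
chmod o+rx /scratch/jdoe

# once the project is completed, close your scratch folder again
chmod o-rx /scratch/jdoe
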
Launch job on the master node

cryoSPARC will decide on its own to run some types of workflow jobs on the master node, such as "Import Movies", "Inspect Picks", and the interactive job "Select 2D Classes".

Launch job via Slurm (recommended)

In cryoSPARC, queue a job to "Lane Sapelo2 (cluster)". The job will run on a worker node via the Sapelo2 Slurm scheduler under the gacrc-cryo account. We highly recommend using "Lane Sapelo2 (cluster)".
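
For reference, cryoSPARC's cluster integration submits each queued job to Slurm through a script template; the template actually used on Sapelo2 is kept in /work/cryosparc/cryosparc_cluster/ and is managed by gacrc-cryo. The following is only an illustrative sketch based on the generic template from the cryoSPARC cluster integration guide (the partition name is a placeholder, and the {{ ... }} fields are filled in by cryoSPARC at submission time):

#!/usr/bin/env bash
# illustrative cryoSPARC cluster_script.sh template for Slurm
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --ntasks={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ (ram_gb*1000)|int }}MB
#SBATCH --partition=<partition>
#SBATCH --output={{ job_dir_abs }}/slurm-%j.out
#SBATCH --error={{ job_dir_abs }}/slurm-%j.err

{{ run_cmd }}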

Launch and run job on a worker node

Documentation

About cryoSPARC: https://guide.cryosparc.com/

User Interface and Usage Guide: https://guide.cryosparc.com/processing-data/user-interface-and-usage-guide

Accessing the cryoSPARC User Interface: https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/accessing-cryosparc

All Job Types in cryoSPARC: https://guide.cryosparc.com/processing-data/all-job-types-in-cryosparc

Management and Monitoring: https://guide.cryosparc.com/setup-configuration-and-management/management-and-monitoring

Cluster (Slurm) integration: https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc#connect-a-cluster-to-cryosparc

Introductory Tutorial: https://guide.cryosparc.com/processing-data/cryo-em-data-processing-in-cryosparc-introductory-tutorial

Tutorials and Usage Guides: https://guide.cryosparc.com/processing-data/tutorials-and-case-studies

Installation

  • Version 3.3.1 of the master is installed on the master node (ss-cryo.gacrc.uga.edu). Its source code is downloaded to /work/cryosparc/cryosparc_master on the master node.
  • Version 3.3.1 of the worker is installed on the two worker nodes (NVIDIA Tesla K40m GPU nodes rb6-[3-4]). Its source code is downloaded to /work/cryosparc/cryosparc_worker on the master node.

System

64-bit Linux