CryoSPARC-Sapelo2


Category

Engineering

Program On

Sapelo2

Version

3.3.1

Author / Distributor

See https://guide.cryosparc.com/

Description

"CryoSPARC (Cryo-EM Single Particle Ab-Initio Reconstruction and Classification) is a state of the art HPC software solution for complete processing of single-particle cryo-electron microscopy (cryo-EM) data. CryoSPARC is useful for solving cryo-EM structures of membrane proteins, viruses, complexes, flexible molecules, small particles, phase plate data and negative stain data." For more information, please see https://guide.cryosparc.com/.

NOTE: Users must be added to the GACRC cryosparc group before they can run this software. To request access, please fill out the GACRC General Support form (https://uga.teamdynamix.com/TDClient/2060/Portal/Requests/ServiceDet?ID=25844). We will reach out to you after we receive your request.

Configurations on Sapelo2

Master node VM:

  1. Host name: ss-cryo.gacrc.uga.edu
  2. 8 CPUs (Intel Xeon Gold 6230); 24 GB total RAM
  3. MongoDB is installed

Worker nodes:

  1. Two NVIDIA Tesla K40m GPU nodes, rb6-[3-4]
  2. cryoSPARC recommends using an SSD for caching particle data; /lscratch/gacrc-cryo was set up on the worker nodes for this purpose.
  3. The amount of space that cryoSPARC can use in /lscratch/gacrc-cryo is capped at 100 GB (see the sketch after this list).
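
A minimal sketch of how a worker's local SSD cache is typically registered with the master, using the cryoSPARC worker CLI (cryosparcw connect). The flag names come from the cryoSPARC documentation; the port, node name, and quota value shown here are illustrative assumptions, not the exact GACRC settings.

  # Run from the worker installation (illustrative values only).
  # --ssdpath points the particle cache at the node-local scratch area;
  # --ssdquota caps cache usage in MB (102400 MB is roughly 100 GB).
  /work/cryosparc/cryosparc_worker/bin/cryosparcw connect \
      --worker rb6-3 \
      --master ss-cryo.gacrc.uga.edu \
      --port 39000 \
      --ssdpath /lscratch/gacrc-cryo \
      --ssdquota 102400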

Running Program

Also refer to Running Jobs on Sapelo2.
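
cryoSPARC is operated through a web interface served by the master node rather than through a batch script you submit yourself. One common way to reach such an interface from off the cluster is an SSH tunnel; the sketch below assumes cryoSPARC's default base port (39000) and uses placeholders for your MyID and the login host, so confirm the actual host name and port with GACRC.

  # Forward a local port to the cryoSPARC web interface on the master node.
  # 39000 is cryoSPARC's default base port; MyID and the login host are
  # placeholders -- confirm the actual values with GACRC.
  ssh -N -L 39000:ss-cryo.gacrc.uga.edu:39000 MyID@sapelo2.gacrc.uga.edu

  # Then open http://localhost:39000 in a local browser and log in with
  # the cryoSPARC account created for you.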

Documentation

About cryoSPARC: https://guide.cryosparc.com/

User Interface and Usage Guide: https://guide.cryosparc.com/processing-data/user-interface-and-usage-guide

All Job Types in cryoSPARC: https://guide.cryosparc.com/processing-data/all-job-types-in-cryosparc

Management and Monitoring: https://guide.cryosparc.com/setup-configuration-and-management/management-and-monitoring

Introductory Tutorial: https://guide.cryosparc.com/processing-data/cryo-em-data-processing-in-cryosparc-introductory-tutorial

Tutorials and Usage Guides: https://guide.cryosparc.com/processing-data/tutorials-and-case-studies

Cluster (Slurm) integration: https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure/downloading-and-installing-cryosparc#connect-a-cluster-to-cryosparc
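
The last link above describes how a Slurm cluster can be registered with cryoSPARC as a "lane" that the master submits jobs to. As a rough orientation, a minimal cluster_info.json for such a lane is sketched below; the field names and {{ }} template variables come from the cryoSPARC guide, while the lane name, paths, and Slurm commands are illustrative assumptions rather than the GACRC production configuration.

  {
      "name"            : "sapelo2-slurm",
      "worker_bin_path" : "/work/cryosparc/cryosparc_worker/bin/cryosparcw",
      "cache_path"      : "/lscratch/gacrc-cryo",
      "send_cmd_tpl"    : "{{ command }}",
      "qsub_cmd_tpl"    : "sbatch {{ script_path_abs }}",
      "qstat_cmd_tpl"   : "squeue -j {{ cluster_job_id }}",
      "qdel_cmd_tpl"    : "scancel {{ cluster_job_id }}",
      "qinfo_cmd_tpl"   : "sinfo"
  }

This file is paired with a cluster_script.sh sbatch template (requesting {{ num_cpu }} CPUs and {{ num_gpu }} GPUs and running {{ run_cmd }}), and both are registered by running "cryosparcm cluster connect" from the directory that contains them.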

Installation

  • The version 3.3.1 master is installed on the master node (ss-cryo.gacrc.uga.edu). Its source code was downloaded to /work/cryosparc/cryosparc_master on the master node.
  • The version 3.3.1 workers are installed on the two worker nodes (NVIDIA Tesla K40m GPU nodes rb6-[3-4]). Their source code was downloaded to /work/cryosparc/cryosparc_worker on the master node.
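
If you have shell access to these installation directories (normally an administrative task rather than something end users need to do), the installs can be sanity-checked with the standard cryoSPARC command-line tools. The commands below exist in cryoSPARC 3.x, but whether ordinary users are permitted to run them on Sapelo2 is an assumption to confirm with GACRC.

  # On the master node: report whether the cryoSPARC services
  # (database, command servers, web application) are running.
  /work/cryosparc/cryosparc_master/bin/cryosparcm status

  # On a worker node: list the GPUs that the worker installation can see.
  /work/cryosparc/cryosparc_worker/bin/cryosparcw gpulist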

System

64-bit Linux