Cactus-Sapelo2

From Research Computing Center Wiki
Latest revision as of 10:25, 9 May 2024

Category

Bioinformatics

Program On

Sapelo2

Version

1.2.3, 2.0.3, 2.4.3, 2.4.4, 2.5.0, 2.6.0, 2.6.7, 2.6.9, 2.7.0

Author / Distributor

Please see https://github.com/ComparativeGenomicsToolkit/cactus

Description

From https://github.com/ComparativeGenomicsToolkit/cactus: "Cactus is a reference-free whole-genome alignment program, as well as a pangenome graph construction toolkit."

Running Program

Also refer to Running Jobs on Sapelo2

For more information on Environment Modules on Sapelo2 please see the Lmod page.

  • Version 2.6.7 is installed as a module called Cactus/2.6.7-GCCcore-11.3.0-Python-3.10.4
  • Version 1.2.3 is installed as a singularity image at /apps/singularity-images/cactus_v1.2.3.sif
  • Version 2.0.3 is installed as a singularity image at /apps/singularity-images/cactus_v2.0.3.sif  
  • Version 2.4.3 is installed as a singularity image at /apps/singularity-images/cactus_v2.4.3.sif
  • Version 2.4.4 is installed as a singularity image at /apps/singularity-images/cactus_v2.4.4.sif
  • Version 2.5.0 is installed as a singularity image at /apps/singularity-images/cactus_v2.5.0.sif
  • Version 2.6.0 is installed as a singularity image at /apps/singularity-images/cactus_v2.6.0.sif  
  • Version 2.6.9 is installed as a singularity image at /apps/singularity-images/cactus_v2.6.9.sif
  • Version 2.7.0 is installed as a singularity image at /apps/singularity-images/cactus_v2.7.0.sif
  • Version 2.7.0 with GPU support is installed as a singularity image at /apps/singularity-images/cactus_v2.7.0-gpu.sif
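
For the module installation, a typical Lmod session might look like this (a sketch; the commands below assume a Sapelo2 login or compute session):

module spider Cactus                            # list the available Cactus modules
ml Cactus/2.6.7-GCCcore-11.3.0-Python-3.10.4    # load the listed version
cactus --help                                   # confirm the command is on the PATH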


To run the commands in the singularity containers, please use an overlay for the /tmp partition. This can be done with the following steps:

# Create a per-job temporary directory on node-local scratch space
export CACTUS_TMPDIR=/lscratch/$USER/cactus-$SLURM_JOB_ID
mkdir -p -m 700 $CACTUS_TMPDIR/upper $CACTUS_TMPDIR/work

# Create a sparse 300MB image and format it as ext3, seeded with the
# upper/work directories that the writable overlay requires
truncate -s 300M jobStore.img
apptainer exec /apps/singularity-images/cactus_v2.7.0.sif mkfs.ext3 -d $CACTUS_TMPDIR jobStore.img

# Private /tmp for the container and a working directory for Cactus
mkdir -m 700 -p $CACTUS_TMPDIR/tmp
mkdir cactus_wd

# Run the command with the overlay attached and /tmp bind-mounted
apptainer exec --cleanenv --overlay jobStore.img --bind $CACTUS_TMPDIR/tmp:/tmp \
	--env PYTHONNOUSERSITE=1 /apps/singularity-images/cactus_v2.7.0.sif cactus-pangenome \
	--workDir=cactus_wd [options]

# Remove the temporary directory when the run is finished
cd /lscratch/$USER
rm -r -f cactus-$SLURM_JOB_ID

where the cactus-pangenome command in the example can be replaced by the cactus command.
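
As a side note, the truncate step above allocates the image size without writing any data, so jobStore.img starts out as a sparse file. The layout steps can be sketched portably (placeholder paths here stand in for the /lscratch locations; no cluster-specific tools are needed):

```shell
# Placeholder standing in for /lscratch/$USER/cactus-$SLURM_JOB_ID
CACTUS_TMPDIR=$(mktemp -d)

# upper/work directories that a writable Apptainer overlay expects
mkdir -p -m 700 "$CACTUS_TMPDIR/upper" "$CACTUS_TMPDIR/work"

# Sparse 300MB image; disk blocks are only allocated as data is written
truncate -s 300M jobStore.img
stat -c '%s' jobStore.img   # apparent size: 314572800 bytes
```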


Sample job submission script (sub.sh) to run cactus version 2.7.0:

#!/bin/bash
#SBATCH --job-name=testcactus         # Job name
#SBATCH --partition=batch             # Partition (queue) name
#SBATCH --ntasks=1                    # Run on a single CPU
#SBATCH --mem=5gb                     # Job memory request
#SBATCH --time=02:00:00               # Time limit hrs:min:sec
#SBATCH --output=%x.%j.out            # Standard output log
#SBATCH --error=%x.%j.err             # Standard error log

cd $SLURM_SUBMIT_DIR

export CACTUS_TMPDIR=/lscratch/$USER/cactus-$SLURM_JOB_ID
mkdir -p -m 700 $CACTUS_TMPDIR/upper $CACTUS_TMPDIR/work
truncate -s 300M jobStore.img
apptainer exec /apps/singularity-images/cactus_v2.7.0.sif mkfs.ext3 -d $CACTUS_TMPDIR jobStore.img

mkdir -m 700 -p $CACTUS_TMPDIR/tmp
mkdir cactus_wd

apptainer exec --cleanenv --overlay jobStore.img --bind $CACTUS_TMPDIR/tmp:/tmp \
	--env PYTHONNOUSERSITE=1 /apps/singularity-images/cactus_v2.7.0.sif cactus-pangenome \
	--workDir=cactus_wd  ./js ./evolverPrimates.txt --outDir primates-pg --outName primates-pg \
	--reference simChimp --vcf --giraffe --gfa --gbz

cd /lscratch/$USER
rm -r -f cactus-$SLURM_JOB_ID

where the sample options used here need to be replaced by the options (command and arguments) you want to use. Other parameters of the job, such as the maximum wall clock time, maximum memory, the number of cores per node, and the job name need to be modified appropriately as well.
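
The script can then be submitted and monitored with the standard Slurm commands (the file name sub.sh matches the example above):

sbatch sub.sh   # submit the job; prints the assigned job ID
squeue --me     # check the state of your queued and running jobs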

Documentation

Please see https://github.com/ComparativeGenomicsToolkit/cactus

Installation

The Singularity images were built from the Docker containers provided by the authors. For example:

singularity pull docker://quay.io/comparative-genomics-toolkit/cactus:v2.7.0

singularity pull docker://quay.io/comparative-genomics-toolkit/cactus:v2.7.0-gpu

System

64-bit Linux