Disk Storage

From Research Computing Center Wiki
Revision as of 13:18, 19 November 2018 by Shtsai (talk | contribs)


Storage Overview

Network-attached storage systems at the GACRC are tiered in three levels based on speed and capacity. Ranked in decreasing order of speed, the tiers are "scratch" and "work" (fastest), then "home", then "offline" storage.

The home filesystem is the "landing zone" when users log in, and the scratch filesystem is where jobs should be run. Scratch is considered temporary, and files are not to be left on it long-term. The work file system is a group-shared space that can be used to store common files needed by jobs. The offline storage filesystem is where data associated with current projects should be stored when it is not actively being used on scratch.

Each compute node has local physical hard drives that the user can utilize as temporary storage, known as lscratch. The lscratch device is very fast compared to the network-attached storage systems. The drawbacks are that its capacity is low and it cannot be accessed from outside the compute node. The data in lscratch is not backed up, and it can be deleted at any time after the job on the compute node is finished.


Home file system

When you log in to a system (e.g. sapelo2 or xfer nodes), you will land in your home directory. Home directories are "auto mounted" on the login nodes and xfer nodes when you log in. Your home directory on the xfer nodes is the same as your home directory on sapelo2. Sapelo2 interactive ("qlogin") nodes will mount a user's home directory when the qlogin happens, and compute nodes will mount a user's home directory when a job submitted by that user is dispatched to those compute nodes. Users of the teaching cluster have a separate home directory, which is not the same as on Sapelo2.

Home directories have a per-user quota and have snapshots. Snapshots are like backups in that they are read-only moment-in-time captures of files and directories, which can be used to restore files that may have been accidentally deleted or overwritten. A user's snapshots are stored within that user's home file system, so snapshots consume the home directory quota. If files are created and deleted frequently, the snapshots will grow and might end up using a large fraction (or all) of the space available within a user's home file system.

The recommended data workflow is to have files in the home directory *change* as little as possible. These should be databases, applications that you use frequently but do not need to modify often, and other things that you primarily *read from*. Think of snapshots as the memory of the files that were stored there: whether you add, change, or delete files, the total sum of that activity will build up over time and may exceed your quota.

Summary of the home directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
home dir quota = 100GB
home dir path = /home/jsmith
snapshots = yes


Scratch file system

The scratch file system resides on a high-speed storage device and it should be used to store temporary files needed for current jobs. Files that are not needed for current jobs should not be left on the scratch file system. This file system is mounted on the login nodes, xfer nodes, and compute nodes.


The recommended data workflow has jobs write their output files, including intermediate data such as checkpoint files and final results, into the scratch file system. Result files should then be transferred out of the scratch file system if they are not needed for other jobs that will be submitted soon.
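The workflow above can be sketched as a short shell sequence. This is an illustrative example only: the directory names are stand-ins created with mktemp, since on the cluster the real locations would be the user's scratch directory (e.g. /scratch/jsmith) and a destination outside scratch.

```shell
# Illustrative sketch of the recommended scratch workflow.
# Temporary directories stand in for the real cluster paths.
SCRATCH_DIR=$(mktemp -d)   # stands in for /scratch/jsmith
DEST_DIR=$(mktemp -d)      # stands in for a destination outside scratch

# 1. The job writes intermediate data and results into scratch.
echo "checkpoint data" > "$SCRATCH_DIR/checkpoint.chk"
echo "final results"   > "$SCRATCH_DIR/results.txt"

# 2. After the job finishes, copy the results off scratch...
cp "$SCRATCH_DIR/results.txt" "$DEST_DIR/"

# 3. ...and remove files no longer needed for upcoming jobs,
#    since scratch is temporary space with no backups.
rm "$SCRATCH_DIR/checkpoint.chk" "$SCRATCH_DIR/results.txt"
```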

Because the scratch file system stores large amounts of data that change frequently, it does not have snapshots turned on and it is not backed up in any way. Files deleted from a scratch directory cannot be recovered.

There is no per-user quota on the scratch file system, but a file retention policy is implemented to help prevent this file system from filling up. More information on the file retention policy is available at (to be added).

Summary of the scratch directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
scratch dir quota = no per user quota
scratch dir path = /scratch/jsmith
snapshots = no


Work file system

The work file system resides on a high-speed storage device, and it should be used to store files needed for jobs. Each group has a directory in the work file system, and this space can be used to store files needed by multiple users within a group. The work file system has a per-group quota, and files stored there are not subject to the auto-purge policy that is applied to the scratch file system.

The work file system is mounted on the login nodes, xfer nodes, and compute nodes.

The recommended data workflow is to have files needed for jobs, possibly by multiple users within a group, such as reference data and model data, be stored in the group work directory.
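One common setup for group-shared reference data is to make it group-readable but not group-writable, so lab members can use the files without modifying them by accident. The sketch below is a hypothetical example: a temporary directory stands in for the real group work directory (e.g. /work/abclab).

```shell
# Hypothetical sketch: shared, read-only reference data in the group
# work directory. A temporary directory stands in for /work/abclab.
WORK_DIR=$(mktemp -d)
mkdir -p "$WORK_DIR/reference_data"
echo "genome index" > "$WORK_DIR/reference_data/index.txt"

# Give the group read access and directory traversal (capital X sets
# execute only on directories), and remove group write permission.
chmod -R g+rX,g-w "$WORK_DIR/reference_data"
```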

The work file system does not have snapshots turned on and it is not backed up in any way. Files deleted from a work directory cannot be recovered.

Summary of the work directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
work dir group quota = (to be added)
work dir path = /work/abclab
snapshots = no


lscratch file system

Each compute node has local physical hard drives that the user can utilize as temporary storage. The file system defined on these hard drives is called /lscratch. The lscratch device is very fast compared to the network-attached storage systems. The drawbacks are that its capacity is low and it cannot be accessed from outside the compute node. This file system can be used for single-core jobs and for multi-thread jobs that run within a single node. In general, parallel jobs that use more than one node (e.g. MPI jobs) cannot use the /lscratch file system.

The data in lscratch is not backed up, and it needs to be deleted when the job on the compute node is finished.

Jobs that do not need to write large output files, but that need to access files often (for example, to write small amounts of data to disk frequently), can benefit from using /lscratch. Jobs that use /lscratch should request the amount of /lscratch space they need. For information on how to request lscratch space for jobs, please refer to [https://wiki.gacrc.uga.edu/wiki/Running_Jobs_on_Sapelo2#How_to_run_a_job_from_the_compute_node.27s_local_disk_.28.2Flscratch.29 How to run a job from lscratch]
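The usual /lscratch pattern inside a job script is: stage input from network storage onto the node-local disk, run there, then copy results back before the job ends. The sketch below illustrates this pattern only; temporary directories stand in for the real /lscratch and /scratch/jsmith paths, and the actual resource-request syntax is on the wiki page linked above.

```shell
# Illustrative sketch of the /lscratch staging pattern.
# Temporary directories stand in for the real cluster paths.
LSCRATCH=$(mktemp -d)    # stands in for the job's /lscratch area
SCRATCH=$(mktemp -d)     # stands in for /scratch/jsmith

echo "input data" > "$SCRATCH/input.dat"

# Stage input onto the fast node-local disk and do the work there.
cp "$SCRATCH/input.dat" "$LSCRATCH/"
( cd "$LSCRATCH" && tr 'a-z' 'A-Z' < input.dat > output.dat )

# Copy results back to network storage before the job ends;
# data left in /lscratch is removed once the job is finished.
cp "$LSCRATCH/output.dat" "$SCRATCH/"
rm -rf "$LSCRATCH"
```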

Summary of the lscratch directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
quota = Limited by device size
path = /lscratch
snapshots = no


Project file system

For home and scratch directories, users are assigned the following quotas (maximum space allowed):

zcluster
home= 100GB
scratch= 4TB
/lscratch= Limited by device size

sapelo 
home= 100GB
scratch= Currently none
/lscratch= Limited by device size (Approx. 250GB)

sapelo2
home= 100GB
scratch= Currently none
/lscratch= Limited by device size (Approx. 250GB)

Note: A user's home and scratch directories on Sapelo2 are the same as on Sapelo, so users don't have to transfer data between these two clusters.

The offline storage filesystem is named "project" and is configured for use by lab groups; by default, each lab group has a 1TB quota. Individual members of a lab group can create subdirectories under their lab's project directory. PIs of lab groups can request additional storage on project as needed. Please note that this storage is not meant for long-term (e.g., archive) storage of data. That type of storage is the responsibility of the user.


Storage Architecture Summary

The home and scratch filesystems are mounted on the zcluster, Sapelo, and Sapelo2 cluster as follows, using an example user 'jsmith' in a lab group 'abclab':

zcluster-

home= /home/abclab/jsmith
scratch= /escratch4/jsmith/jsmith_Month_Day
lscratch= /lscratch/jsmith

sapelo-

home= /home/jsmith
scratch= /lustre1/jsmith
lscratch= /lscratch

sapelo2-

home= /home/jsmith
scratch= /lustre1/jsmith
lscratch= /lscratch

Note that Sapelo and Sapelo2 users already have a scratch directory. Users of the zcluster need to type

make_escratch

while on the login node (not interactive nodes) to create a scratch directory - the command will return the name of the directory.


The project filesystem is not mounted on the compute nodes and cannot be accessed by running jobs. It is mounted on the "xfer" nodes. The xfer nodes (discussed under Transferring Files) are the preferred servers to use for copying and moving files between all of the filesystems, and to and from the outside world.
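On an xfer node, moving finished results from scratch into the lab's project space is a plain copy between the two mount points. The example below is illustrative: temporary directories stand in for /scratch/jsmith and /project/abclab, and the run directory name is made up.

```shell
# Illustrative sketch of moving results to project space on an xfer node.
# Temporary directories stand in for /scratch/jsmith and /project/abclab.
SCRATCH=$(mktemp -d)
PROJECT=$(mktemp -d)

mkdir -p "$SCRATCH/run42"                  # hypothetical finished run
echo "summary" > "$SCRATCH/run42/summary.txt"

# Copy the whole run directory into the lab's project directory.
cp -r "$SCRATCH/run42" "$PROJECT/"
```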

The project filesystem has a consistent mount point of:

/project/abclab

Auto Mounting Filesystems

Some filesystems are "auto mounted" when they are first accessed on a server. For the xfer nodes, this includes Sapelo and Sapelo2 home directories and the project filesystems. Sapelo interactive ("qlogin") nodes will mount a user's home directory when the qlogin happens.


Snapshots

Home directories are snapshotted. Snapshots are like backups in that they are read-only moment-in-time captures of files and directories which can be used to restore files that may have been accidentally deleted or overwritten.

Home directories on sapelo have snapshots taken once a day and maintained for 4 days, giving the user the ability to retrieve old files for up to 4 days after they have deleted them. On the zcluster, some home directories have snapshots taken once a day, and some have snapshots taken once every 2 days; these are maintained for 4 days.

Any directory on the /home filesystem contains an invisible directory named ".snapshot". This directory does not appear in "ls" listings and cannot be browsed by programs; it can only be entered with the "cd" command. Users of /home directories may retrieve files from these snapshots by using "cd" to enter the appropriate snapshot and copying files from it to any location they would like.

Note: ANY user, from any HOME directory, can access the snapshots *from that directory* to restore files.

Here is the example for zcluster:

[cecombs@sites test]$ cd .snapshot
[cecombs@sites .snapshot]$ ls
2013.04.16.00.00.01.daily  2013.04.17.00.00.01.daily  2013.04.18.00.00.01.daily
[cecombs@sites .snapshot]$ cd 2013.04.18.00.00.01.daily/
[cecombs@sites 2013.04.18.00.00.01.daily]$ cp my-to-restore-file /home/rccstaff/cecombs/test

For Sapelo, please submit a ticket for such requests; the restore procedure on the backend is different.

Current Storage Systems

(1) Seagate (Xyratex) ClusterStor1500 Lustre appliance (480TB) - $SCRATCH on Sapelo2

(2) DDN SFA14KX Lustre appliance (1.26PB) - $SCRATCH & $WORK on Sapelo2

(3) Penguin IceBreakers ZFS storage chains (84TB usable capacity) - $HOME on Sapelo2

(4) Penguin IceBreakers ZFS storage chains (374TB usable capacity) - $PROJECT research groups' long-term space - only for active projects requiring Sapelo2 access

(5) Panasas ActiveStor 100H (1PB) - $PROJECT research groups' long-term space - only for active projects requiring Sapelo2 access

(6) ZFS storage chains (720TB) - backup environment for $HOME and $PROJECT.