Disk Storage
Storage Overview
Network-attached storage systems at the GACRC are tiered in three levels based on speed and capacity. In order of decreasing speed, the file systems are "scratch" and "work", "home", and "offline" storage.
The home filesystem is the "landing zone" when users log in, and the scratch filesystem is where jobs should be run. Scratch is considered temporary, and files should not be left on it long-term. The work filesystem is a group-shared space that can be used to store common files needed by jobs. The offline storage filesystem is where data should be kept while it is not actively being used on scratch.
Each compute node has local physical hard drives that users can use as temporary storage, known as lscratch. The lscratch device is very fast compared to the network-attached storage systems; the drawbacks are that its capacity is low and it cannot be accessed from outside the compute node. Data in lscratch is not backed up and may be deleted at any time after the job on the compute node finishes.
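The lscratch workflow described above (stage in, compute locally, stage out before the job ends) can be sketched as a job-script fragment. This is a minimal illustration, not a GACRC-provided script: the temp directories created with mktemp stand in for the real paths (e.g. /lustre1/jsmith and /lscratch), and the tr command stands in for an actual computation.

```shell
#!/bin/bash
# Sketch of the lscratch staging pattern. The mktemp dirs below are
# stand-ins so this runs anywhere; in a real job they would be a
# network scratch path (e.g. /lustre1/jsmith) and /lscratch.
NETSCRATCH=$(mktemp -d)   # stand-in for network scratch
LSCRATCH=$(mktemp -d)     # stand-in for the node-local /lscratch

echo "input data" > "$NETSCRATCH/input.txt"   # pretend input file

cp "$NETSCRATCH/input.txt" "$LSCRATCH/"       # stage in to fast local disk
tr 'a-z' 'A-Z' < "$LSCRATCH/input.txt" > "$LSCRATCH/output.txt"  # the "computation"
cp "$LSCRATCH/output.txt" "$NETSCRATCH/"      # stage out before the job ends

rm -rf "$LSCRATCH"        # clean up local storage; it is not backed up
cat "$NETSCRATCH/output.txt"
```

Staging out before the job finishes is the critical step, since lscratch contents can be deleted once the job is done.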
Home file system
When you log in to a system (e.g. Sapelo2 or the xfer nodes), you will land in your home directory. Home directories are "auto-mounted" on the login nodes and xfer nodes when you log in. Your home directory on the xfer nodes is the same as your home directory on Sapelo2. Sapelo2 interactive ("qlogin") nodes mount a user's home directory when the qlogin happens, and compute nodes mount a user's home directory when a job submitted by that user is dispatched to them. Users of the teaching cluster have a separate home directory, which is not the same as on Sapelo2.
Home directories have a per-user quota and have snapshots. Snapshots are like backups in that they are read-only, moment-in-time captures of files and directories which can be used to restore files that may have been accidentally deleted or overwritten. A user's snapshots are stored within the user's home file system, so snapshots consume the home directory quota. If files are created and deleted frequently, the snapshots will grow and can end up using a large fraction (or all) of the space available in a user's home file system.
The recommended data workflow is to have files in the home directory *change* as little as possible. These should be databases, applications that you use frequently but do not need to modify often, and other things that you primarily *read from*. Think of snapshots as the memory of the files that were stored there: no matter whether you add, change, or delete files, the total sum of that activity will build up over time and may exceed your quota.
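One common way to follow this read-mostly layout is to keep frequently changing job output on scratch and reach it from home through a symlink, so home snapshots do not accumulate deleted-file history. The sketch below is illustrative only; the temp directories stand in for paths like /home/jsmith and /lustre1/jsmith, and the directory name job_output is hypothetical.

```shell
#!/bin/bash
# Sketch: churning data lives on scratch; home holds only a symlink.
# mktemp dirs stand in for the real home and scratch paths.
HOMEDIR=$(mktemp -d)      # stand-in for /home/jsmith
SCRATCHDIR=$(mktemp -d)   # stand-in for /lustre1/jsmith

mkdir "$SCRATCHDIR/job_output"                         # output written here
ln -s "$SCRATCHDIR/job_output" "$HOMEDIR/job_output"   # convenient handle in home
readlink "$HOMEDIR/job_output"                         # prints the scratch path
```

With this layout, deleting and rewriting files under job_output touches only scratch, which has no snapshots, so the home quota is unaffected.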
The home directory quota (maximum space allowed) and access path, using an example user 'jsmith', are:
sapelo2: home dir quota = 100GB, home dir path = /home/jsmith, snapshots = yes
Scratch file system
Work file system
lscratch file system
Project file system
For home and scratch directories, users are assigned the following quotas (maximum space allowed):
zcluster: home = 100GB, scratch = 4TB, /lscratch = limited by device size
sapelo: home = 100GB, scratch = currently none, /lscratch = limited by device size (approx. 250GB)
sapelo2: home = 100GB, scratch = currently none, /lscratch = limited by device size (approx. 250GB)
Note: A user's home and scratch directories on Sapelo2 are the same as on Sapelo, so users don't have to transfer data between these two clusters.
The offline storage filesystem is named "project" and is configured for use by lab groups; by default, each lab group has a 1TB quota. Individual members of a lab group can create subdirectories under their lab's project directory. PIs of lab groups can request additional storage on project as needed. Please note that this storage is not meant for long-term (e.g., archival) storage of data. That type of storage is the responsibility of the user.
Storage Architecture Summary
The home and scratch filesystems are mounted on the zcluster, Sapelo, and Sapelo2 cluster as follows, using an example user 'jsmith' in a lab group 'abclab':
zcluster: home = /home/abclab/jsmith, scratch = /escratch4/jsmith/jsmith_Month_Day, lscratch = /lscratch/jsmith
sapelo: home = /home/jsmith, scratch = /lustre1/jsmith, lscratch = /lscratch
sapelo2: home = /home/jsmith, scratch = /lustre1/jsmith, lscratch = /lscratch
Note that Sapelo and Sapelo2 users already have a scratch directory. Users of the zcluster need to type
make_escratch
while on the login node (not the interactive nodes) to create a scratch directory; the command will return the name of the directory.
The project filesystem is not mounted on the compute nodes and cannot be accessed by running jobs. It is mounted on the "xfer" nodes. The xfer nodes (discussed under Transferring Files) are the preferred servers to use for copying and moving files between all of the filesystems, and to and from the outside world.
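Since running jobs cannot see the project filesystem, results are typically copied from scratch into project space on an xfer node after a job completes. A minimal sketch of such a transfer follows; the temp directories stand in for the page's example paths /lustre1/jsmith and /project/abclab, and result.dat is a hypothetical file name.

```shell
#!/bin/bash
# Sketch of a transfer on an xfer node: copy finished results from
# scratch into the lab's project space. mktemp dirs stand in for
# /lustre1/jsmith/results and /project/abclab/jsmith.
SRC=$(mktemp -d)    # stand-in for a results dir on scratch
DEST=$(mktemp -d)   # stand-in for a dir under /project/abclab

echo "final result" > "$SRC/result.dat"
cp -a "$SRC/." "$DEST/"   # copy directory contents, preserving attributes
ls "$DEST"
```

For large transfers, rsync -av is often preferred over cp because it preserves attributes the same way and can resume an interrupted copy.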
The project filesystem has a consistent mount point of:
/project/abclab
Auto Mounting Filesystems
Some filesystems are "auto mounted" when they are first accessed on a server. For the xfer nodes, this includes Sapelo and Sapelo2 home directories and the project filesystems. Sapelo interactive ("qlogin") nodes will mount a user's home directory when the qlogin happens.
Snapshots
Home directories are snapshotted. Snapshots are like backups in that they are read-only moment-in-time captures of files and directories which can be used to restore files that may have been accidentally deleted or overwritten.
Home directories on Sapelo have snapshots taken once a day and maintained for 4 days, giving users the ability to retrieve files for up to 4 days after deleting them. On the zcluster, some home directories have snapshots taken once a day and some once every 2 days; these are maintained for 4 days.
Any directory on the /home filesystem contains a hidden directory named ".snapshot". This directory cannot be listed with ls or viewed by any program; only the "cd" command can be used to enter it. Users of /home directories may retrieve files from these snapshots by cd'ing into the appropriate snapshot and copying files to any location they would like.
Note: ANY user, from any home directory, can access the snapshots *from that directory* to restore files.
Here is the example for zcluster:
[cecombs@sites test]$ cd .snapshot
[cecombs@sites .snapshot]$ ls
2013.04.16.00.00.01.daily  2013.04.17.00.00.01.daily  2013.04.18.00.00.01.daily
[cecombs@sites .snapshot]$ cd 2013.04.18.00.00.01.daily/
[cecombs@sites 2013.04.18.00.00.01.daily]$ cp my-to-restore-file /home/rccstaff/cecombs/test
For Sapelo, please submit a ticket for such requests; the restore procedure on the back end is different.
Current Storage Systems
(1) Seagate (Xyratex) ClusterStor1500 Lustre appliance (480TB) - $SCRATCH on Sapelo2
(2) DDN SFA14KX Lustre appliance (1.26PB) - $SCRATCH & $WORK on Sapelo2
(3) Penguin IceBreakers ZFS storage chains (84TB usable capacity) - $HOME on Sapelo2
(4) Penguin IceBreakers ZFS storage chains (374TB usable capacity) - $PROJECT research groups' long-term space - only for active projects requiring Sapelo2 access
(5) Panasas ActiveStor 100H (1PB) - $PROJECT research groups' long-term space - only for active projects requiring Sapelo2 access
(6) ZFS storage chains (720TB) - backup environment for $HOME and $PROJECT.