Disk Storage
Storage Overview
Network-attached storage systems at the GACRC are tiered in three levels based on speed and capacity. Ranked in order of decreasing speed, the filesystems are "scratch", "home", and "offline" storage. The home filesystem is the "landing zone" when users log in, and the scratch filesystem is where jobs should be run. Scratch is considered temporary, and files should not be left on it long-term. The offline storage filesystem is where data belonging to active projects should be kept while it is not actively being used on scratch.
For home and scratch directories, users are assigned the following quotas (maximum space allowed):
 zcluster
   home    = 100GB
   scratch = 4TB
 sapelo
   home    = 100GB
   scratch = currently none
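To see how much space a home directory is using relative to its quota, standard Linux tools can be run on a login node. The commands below are a minimal sketch assuming an ordinary Linux environment; the exact quota-reporting tool may differ between the Panasas, NFS, and Lustre filesystems, so check with GACRC documentation or staff for the authoritative method.

 # Total space used by your home directory (run on a login node)
 du -sh ~
 
 # Per-subdirectory usage, sorted, to see what is consuming the most space
 du -sh ~/* | sort -h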
The offline storage filesystem is named "project" and is configured for use by lab groups; by default, each lab group has a 1TB quota. Individual members of a lab group can create subdirectories under their lab's project directory. PIs of lab groups can request additional storage on project as needed. Please note that this storage is not meant for long-term (e.g., archival) storage of data; that type of storage is the responsibility of the user.
Storage Architecture
The home and scratch filesystems are mounted on the zcluster and the sapelo cluster as follows, using an example user 'jsmith' in a lab group 'abclab':
 zcluster
   home    = /home/abclab/jsmith
   scratch = /escratch4/jsmith/jsmith_Month_Day
 sapelo
   home    = /home/jsmith
   scratch = /lustre1/jsmith
Note that sapelo users are given a scratch directory automatically. Users of the zcluster need to run 'make_escratch' to create a scratch directory; the command returns the name of the newly created directory.
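For example, a zcluster user could create a scratch directory and then change into it as follows. The directory name shown is illustrative only; the actual name returned by make_escratch includes the month and day on which it was created.

 # On the zcluster login node: create a new scratch directory
 make_escratch
 
 # The command prints the new directory's path, e.g. (illustrative):
 #   /escratch4/jsmith/jsmith_Nov_10
 cd /escratch4/jsmith/jsmith_Nov_10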
The project filesystem is not mounted on the compute nodes and cannot be accessed by running jobs. It is mounted on the zcluster login node and on the dedicated "copy" and "xfer" file-transfer nodes. The copy and xfer nodes (discussed under Transferring Files) are the preferred servers for copying and moving files between all of the filesystems, and to and from the outside world.
The project filesystem has a consistent mount point of:
/project/abclab
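As an illustration, a user could stage finished results from scratch into the lab's project space while logged in to a copy or xfer node. The paths below reuse the example user and lab group from above; the personal subdirectory and the "results" directory are hypothetical names chosen for the sketch.

 # On a copy or xfer node (project is not mounted on the compute nodes)
 mkdir -p /project/abclab/jsmith                  # personal subdirectory under the lab's project space (hypothetical)
 cp -r /escratch4/jsmith/jsmith_Nov_10/results /project/abclab/jsmith/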
Auto Mounting Filesystems
Some filesystems are "auto-mounted" when they are first accessed on a server. For the xfer nodes, this includes Sapelo home directories and the project filesystems. For the zcluster copy nodes, this includes the project filesystems. Sapelo interactive ("qlogin") nodes mount a user's home directory when the qlogin session starts.
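In practice, an auto-mounted filesystem may not show up in a directory listing until it is referenced by its full path; accessing the path is what triggers the mount. A hedged illustration, again using the example lab group, is shown below.

 # On an xfer node: /project/abclab may not appear until it is accessed by name
 ls /project            # the lab directory may not be listed yet
 ls /project/abclab     # referencing the full path triggers the automount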
Snapshots
Home directories are snapshotted. Snapshots are like backups in that they are read-only moment-in-time captures of files and directories which can be used to restore files that may have been accidentally deleted or overwritten.
Home directories on sapelo have snapshots taken once a day and maintained for 4 days, giving the user the ability to retrieve old files for up to 4 days after they have deleted them. On the zcluster, some home directories have snapshots taken once a day, and some have snapshots taken once every 2 days; these are maintained for 4 days.
Contact the GACRC staff if you need to recover data from a snapshot.
Current Storage Systems
(1) Panasas ActiveStor 12 storage cluster with 133TB usable capacity, running the PanFS parallel filesystem. Currently supporting the home filesystem on the zcluster.
(1) Seagate (Xyratex) Lustre appliance with 240TB usable capacity. Currently supporting the scratch filesystem on sapelo.
(3) Penguin IceBreaker storage chains running ZFS, mounted through NFS, for a total of 84TB usable capacity. Currently supporting home directories on sapelo.
(2) Penguin IceBreaker storage chains running ZFS, mounted through NFS, for a total of 374TB usable capacity. This storage is used as an active project repository.
(1) Penguin IceBreaker storage chain running ZFS, mounted through NFS, with 142TB usable capacity. This storage is used as a backup resource for the home and project filesystems.