Disk Storage

Storage Overview

Network-attached storage systems at the GACRC are tiered in three levels based on speed and capacity. Ranked in order of decreasing speed, the file systems are "scratch" and "work", "home", and "offline" (project) storage.

The home filesystem is the "landing zone" when users log in, and the scratch filesystem is where jobs should be run. Scratch is considered temporary and files are not to be left on it long-term. The work file system is a group-shared space that can be used to store common files needed by jobs. The offline storage filesystem is where data should be kept when it is not needed by current jobs on scratch but is still in active use on the cluster.

Each compute node has local physical hard drives that the user can use as temporary storage, also known as lscratch. The lscratch device is very fast compared to the network-attached storage systems. The drawbacks are that the capacity is low and it cannot be accessed from outside the compute node. The data in lscratch is not backed up and it can be deleted at any time after the job on the compute node finishes.


Home file system

When you log in to a system (e.g. sapelo2 or the xfer nodes), you will land in your home directory. Home directories are "auto mounted" on the login nodes and xfer nodes when you log in. Your home directory on the xfer nodes is the same as your home directory on Sapelo2. Sapelo2 compute nodes will also mount a user's home directory when a job starts (whether interactive or batch). Users of the teaching cluster have a separate home directory, which is not the same as on Sapelo2.

Home directories have a per user quota and have snapshots. Snapshots are like backups in that they are read-only moment-in-time captures of files and directories which can be used to restore files that may have been accidentally deleted or overwritten.

The recommended data workflow is to have files in the home directory *change* as little as possible. These should be databases, applications that you use frequently but rarely need to modify, and other things that you primarily *read from*.

Summary of the home directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
home dir quota = 200GB
home dir path = /home/jsmith
snapshots = yes
subject to 30-day purge = no
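
To see how much of the home quota is currently in use, standard Linux tools such as du can be run on a login or xfer node. Below is a minimal sketch; the reported size is illustrative only.

[jsmith@xfer1 ]$ du -sh /home/jsmith
87G     /home/jsmith

[jsmith@xfer1 ]$ du -sh /home/jsmith/* | sort -h     # per-directory breakdown, largest last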


Scratch file system

The scratch file system resides on a high-speed storage device and it should be used to store temporary files needed for current jobs. Files that are not needed for current jobs should not be left on the scratch file system. This file system is mounted on the login nodes, xfer nodes, and compute nodes.

The recommended data workflow has jobs write their output files, including intermediate data such as checkpoint files, and final results into the scratch file system. Final results, intermediate files, and other data should then be transferred out of the scratch file system and deleted from it right away, if they are not needed for other jobs that will be submitted soon.
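
A minimal sketch of this workflow as a batch job script is shown below, assuming the Slurm scheduler used on Sapelo2; the application name, directory names, and resource requests are illustrative only.

#!/bin/bash
#SBATCH --job-name=scratch-workflow     # illustrative resource requests
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

# run from the user's scratch area so all output lands on /scratch
cd /scratch/jsmith/myproject || exit 1

# the application writes checkpoints and results into the scratch directory
./my_app --checkpoint-dir checkpoints/ --output results/

# when the results are final, copy them off /scratch (e.g. to the group's
# project space from an xfer node) and delete them from /scratch, unless
# they are needed by jobs that will be submitted soon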

Because the scratch file system stores large amounts of data that change a lot, it does not have snapshots turned on and it is not backed up in any way. Files deleted from a scratch directory cannot be recovered.

There is no per user quota in the scratch file system, but a file retention policy is implemented to help prevent this file system from filling up.


Scratch file system "30-day purge" policy

Any file that is not accessed or modified by a compute job in a time period of at least 30 days will be automatically deleted from the /scratch file system. Measures circumventing this policy will be monitored and actively discouraged.

There is no storage size quota for /scratch usage. Space is only limited by the physical size of the scratch space being used. If usage across the entire file system is more than 80% of total capacity, the GACRC will take additional measures to reduce usage to a more suitable level. Possible actions include asking or requiring users to clean up their /scratch directories, or temporarily reducing the 30-day limit to a shorter one.

Please see the purge policy for more information.
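
To get a rough idea of which files in a scratch directory have not been read or modified recently, and are therefore candidates for the purge, the standard find command can be used on a login or xfer node. This is only an approximate check; the purge itself is carried out by the GACRC.

[jsmith@xfer1 ]$ find /scratch/jsmith -type f -atime +30     # not accessed in the last 30 days
[jsmith@xfer1 ]$ find /scratch/jsmith -type f -mtime +30     # not modified in the last 30 days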


Summary of the scratch directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
scratch dir quota = Currently no per user quota
scratch dir path = /scratch/jsmith
snapshots = no
subject to 30-day purge = yes

Work file system

The work file system resides on a high-speed storage device and it should be used to store files needed for jobs. Each group has a directory in the work file system and this space can be used to store files needed by multiple users within a group. The work file system has a per group quota and files stored there are not subject to the auto-purge policy that is applied to the scratch file system.

The work file system is mounted on the login nodes, xfer nodes, and compute nodes.

The recommended data workflow is to store files that are needed repeatedly by jobs, possibly by multiple users within a group, such as reference data and model data, in the group's work directory. This directory is not intended as a place for jobs to write output files.
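
For example, a reference file can be staged into the group's work directory once (from an xfer node) and then read by jobs of any group member, instead of each user keeping a private copy on /scratch; the file and directory names below are illustrative only.

[jsmith@xfer1 ]$ mkdir -p /work/abclab/reference
[jsmith@xfer1 ]$ cp /scratch/jsmith/genome.fa /work/abclab/reference/

# inside a job script, read the shared copy and write output to /scratch:
./my_app --reference /work/abclab/reference/genome.fa --output /scratch/jsmith/results/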

The work file system does not have snapshots turned on and it is not backed up in any way. Files deleted from a work directory cannot be recovered.

Summary of the work directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
work dir group quota = 500GB and a maximum of 100,000 files
work dir path = /work/abclab
snapshots = no
subject to 30-day purge = no

lscratch file system

Each compute node has local physical hard drives that the user can utilize as temporary storage. The file system defined on the hard drives is called /lscratch. The lscratch device is a very fast storage device compared to the network attached storage systems. The drawback is that the capacity is low and it cannot be accessed from outside the compute node. This file system can be used for single-core jobs and for multi-thread jobs that run within a single node. In general, parallel jobs that use more than one node (e.g. MPI jobs) cannot use the /lscratch file system.

The data in lscratch is not backed up and it needs to be deleted when the job on the compute node is finished.

Jobs that do not need to write large output files, but that need to access files often (for example, to write small amounts of data to disk frequently), can benefit from using /lscratch. Jobs that use /lscratch should request the amount of /lscratch space they need. For information on how to request lscratch space for jobs, please refer to How to run a job from lscratch; a sketch of the typical usage pattern is shown below.
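
As a rough sketch of the typical /lscratch pattern, a job copies its inputs to the node's local disk, runs there, and copies the results back to /scratch before the local space is cleared. The directory layout, application name, and the directive for requesting lscratch space (described on the page linked above) are illustrative or omitted here.

# create a private working directory on the node's local disk
mkdir -p /lscratch/${USER}/myjob
cd /lscratch/${USER}/myjob || exit 1

# stage the input from network storage and run the application locally
cp /scratch/jsmith/input.dat .
./my_app input.dat > output.dat

# copy results back to /scratch and clean up the local space
cp output.dat /scratch/jsmith/myjob/
cd /
rm -rf /lscratch/${USER}/myjob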

Summary of the lscratch directory characteristics for a sample user 'jsmith' in 'abclab':

sapelo2
quota = Limited by device size (Approx. 210GB on the AMD nodes and 800GB on the Intel nodes)
path = /lscratch
snapshots = no
subject to purge = yes (files to be deleted when job exits the node) 


Project file system

The offline storage filesystem is named "project" and is configured for use by lab groups. By default, each lab group has a 1TB quota. Individual members of a lab group can create subdirectories under their lab's project directory. PIs of lab groups can request additional storage on project as needed. Please note that this storage is not meant for long-term (e.g., archive) storage of data. That type of storage is the responsibility of the user.

The project filesystem is not mounted on the compute nodes and cannot be accessed by running jobs. It is mounted on the "xfer" nodes when it is first accessed using its full path.

The project filesystem has snapshots turned on.

The recommended data workflow is to transfer data that is not needed for current jobs, but is still needed for future jobs on the cluster, into the project file system and to delete it from the scratch area.
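
For example, finished results can be copied from scratch to the group's project space on an xfer node with standard tools such as rsync, and then removed from scratch once the copy has been verified; the directory names below are illustrative only.

[jsmith@xfer1 ]$ rsync -av /scratch/jsmith/results/ /project/abclab/jsmith/results/
[jsmith@xfer1 ]$ rm -rf /scratch/jsmith/results/     # only after confirming the copy completed successfully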

Summary of the project directory characteristics for a sample group 'abclab':

sapelo2
quota = default of 1TB per group
path = /project/abclab
snapshots = yes
subject to 30-day purge = no


Back to Top


Storage Architecture Summary

Mount path for home, scratch, work, and lscratch filesystems using an example user 'jsmith' in a lab group 'abclab':

sapelo2

home= /home/jsmith
scratch= /scratch/jsmith
work= /work/abclab 
lscratch= /lscratch


Quota for home, scratch, work, and lscratch filesystems:

sapelo2

home= 200GB
scratch= Currently no quota
work= 500GB per group and a maximum of 100,000 files
lscratch= Limited by device size (Approx. 210GB on the AMD nodes and 800GB on the Intel nodes)


Auto Mounting Filesystems

Some filesystems are "auto mounted" when they are first accessed on a server. For the xfer nodes, this includes Sapelo2 home directories and the project filesystems. Sapelo2 compute nodes will mount a user's home directory when a job starts.
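
For example, on an xfer node a group's project directory is mounted the first time it is referenced by its full path (the group name below is illustrative):

[jsmith@xfer1 ]$ ls /project/abclab     # first access by full path triggers the auto mount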


Snapshots

Home directories

Home directories are snapshotted. Snapshots are like backups in that they are read-only moment-in-time captures of files and directories which can be used to restore files that may have been accidentally deleted or overwritten.

Home directories on Sapelo2 have snapshots taken once a day and are maintained on Sapelo2 for 14 days, giving the user the ability to retrieve old files for up to 14 days after they have deleted them.

Note: Users can access the previous 14 days of snapshots of their own home directories and restore their files.

If you would like to recover a file that you have deleted from your home directory within the last 14 days, you can check if the file is available in any of the snapshots and, if so, copy the file back. This can be done on a transfer node (xfer.gacrc.uga.edu) or on a Sapelo2 compute node.

Here is an example for user jsmith, on an xfer node:

[jsmith@xfer1 ]$ pwd
/home/jsmith

[jsmith@xfer1 ]$ ls /home/.zfs/snapshot/
zrepl_20220907_063420_000  zrepl_20220920_220422_000  zrepl_20220921_200422_000
zrepl_20220908_070420_000  zrepl_20220920_230422_000  zrepl_20220921_210422_000
zrepl_20220909_073421_000  zrepl_20220921_000422_000  zrepl_20220921_220422_000
zrepl_20220910_073421_000  zrepl_20220921_010422_000  zrepl_20220921_230422_000
zrepl_20220911_073421_000  zrepl_20220921_020422_000  zrepl_20220922_000422_000
zrepl_20220912_073421_000  zrepl_20220921_030422_000  zrepl_20220922_010421_000
zrepl_20220913_073421_000  zrepl_20220921_040422_000  zrepl_20220922_020422_000
zrepl_20220914_073421_000  zrepl_20220921_050422_000  zrepl_20220922_030422_000
zrepl_20220915_073421_000  zrepl_20220921_060422_000  zrepl_20220922_040422_000
zrepl_20220916_073421_000  zrepl_20220921_070422_000  zrepl_20220922_050421_000
zrepl_20220917_073421_000  zrepl_20220921_080422_000  zrepl_20220922_060422_000
zrepl_20220918_080421_000  zrepl_20220921_090421_000  zrepl_20220922_070422_000
zrepl_20220919_080422_000  zrepl_20220921_100422_000  zrepl_20220922_080422_000
zrepl_20220920_083422_000  zrepl_20220921_113421_000  zrepl_20220922_090422_000
zrepl_20220920_143422_000  zrepl_20220921_123422_000  zrepl_20220922_100422_000
zrepl_20220920_153422_000  zrepl_20220921_133423_000  zrepl_20220922_110422_000
zrepl_20220920_163422_000  zrepl_20220921_150422_000  zrepl_20220922_120422_000
zrepl_20220920_173422_000  zrepl_20220921_160422_000  zrepl_20220922_130422_000
zrepl_20220920_190421_000  zrepl_20220921_170422_000  zrepl_20220922_140422_000
zrepl_20220920_200422_000  zrepl_20220921_180421_000  zrepl_20220922_143422_000
zrepl_20220920_210422_000  zrepl_20220921_190422_000  zrepl_20220922_150422_000

[jsmith@xfer1 ]$ cd /home/.zfs/snapshot/zrepl_20220907_063420_000/jsmith

[jsmith@xfer1 ]$ cp my-to-restore-file /home/jsmith

Weekly and monthly snapshots are also made going as far back as 6 months, but GACRC staff must retrieve these snapshots for you upon request.


Project file systems

The project file systems are also snapshotted and the method to recover a file from a snapshot depends on whether your project directory is located on the Panasas storage device or on a ZFS storage device (SN13). One of the two methods below should work for you.

Note: ANY user in a lab can access the snapshots of their group's project file system and restore the files they have there.

Method 1 (for project folders on the Panasas)

Each /project filesystem on the Panasas contains a completely invisible directory named ".snapshot". This directory cannot be listed with ls or viewed by any program; only the "cd" command can be used to enter it. Users of /project directories may retrieve files from these snapshots by changing into the snapshot directory /project/abclab/.snapshot, then changing into an appropriate snapshot directory, and copying files from that snapshot to any location they would like.

Here is an example for user jsmith who is in the abclab group, on an xfer node:

[jsmith@xfer1 ]$ pwd
/home/jsmith

[jsmith@xfer1 ]$ cd /project/abclab/.snapshot

[jsmith@xfer1 .snapshot]$ ls    
2019.02.17.04.00.03.Weekly  2019.03.01.06.00.03.Daily
2019.02.26.06.00.03.Daily   2019.03.02.06.00.03.Daily
2019.02.27.06.00.03.Daily   2019.03.03.04.00.03.Weekly
2019.02.28.06.00.03.Daily   2019.03.03.06.00.03.Daily

[jsmith@xfer1 snapshot]$ cd 2019.03.03.06.00.03.Daily

[jsmith@xfer1 2019.03.03.06.00.03.Daily]$ cp my-to-restore-file /home/jsmith/test

Method 2 (for project folders on SN13)

Each /project filesystem on SN13 contains a hidden directory called .zfs and the snapshots are located in the directory /project/abclab/.zfs/snapshot (you can cd into this directory and list the snapshots with the ls command).

Here is an example for user jsmith who is in the abclab group, on an xfer node:

[jsmith@xfer1 ]$ pwd
/home/jsmith
[jsmith@xfer1 ]$ date
Wed Aug 18 11:40:59 EDT 2021

[jsmith@xfer1 ]$ cd /project/abclab/.zfs/snapshot

[jsmith@xfer1 snapshot]$ ls    
zrepl_20210729_211245_000  zrepl_20210811_052246_000  zrepl_20210818_012411_000
zrepl_20210730_215245_000  zrepl_20210812_054245_000  zrepl_20210818_022410_000
zrepl_20210731_222244_000  zrepl_20210813_201246_000  zrepl_20210818_035041_000
zrepl_20210801_225245_000  zrepl_20210816_152410_000  zrepl_20210818_052018_000
zrepl_20210802_235244_000  zrepl_20210817_152411_000  zrepl_20210818_062051_000
zrepl_20210804_003245_000  zrepl_20210817_172411_000  zrepl_20210818_075046_000
zrepl_20210805_021244_000  zrepl_20210817_182411_000  zrepl_20210818_092039_000
zrepl_20210806_025245_000  zrepl_20210817_202410_000  zrepl_20210818_115019_000
zrepl_20210807_035245_000  zrepl_20210817_212410_000  zrepl_20210818_125036_000
zrepl_20210808_045245_000  zrepl_20210817_222410_000  zrepl_20210818_135021_000
zrepl_20210809_050244_000  zrepl_20210817_232411_000  zrepl_20210818_145018_000
zrepl_20210810_050245_000  zrepl_20210818_002411_000  zrepl_20210818_152028_000

[jsmith@xfer1 snapshot]$ cd zrepl_20210818_152028_000
[jsmith@xfer1 zrepl_20210818_152028_000]$ cp my-to-restore-file /home/jsmith/test

Back to Top


Current Storage Systems

(1) ZFS storage chain (300TB) - $HOME on Sapelo2

(2) DDN SFA14KX Lustre appliance (2.5PB) - $SCRATCH & $WORK on Sapelo2

(3) Panasas ActiveStor 100H (1PB) - $PROJECT research groups' long-term space - only for active projects requiring Sapelo2 access

(4) ZFS storage chain (1.2PB) - $PROJECT research groups' long-term space - only for active projects requiring Sapelo2 access

(5) ZFS storage chains (2.4PB) - backup environment for $HOME and $PROJECT