Disk Storage
[[Category:Zcluster]]
Revision as of 16:48, 13 February 2013
The home and ephemeral scratch directories on the GACRC clusters reside on a Panasas ActiveStor 12 storage cluster, and these filesystems are mounted on the different compute clusters.
All users have a default 100GB quota (i.e., maximum limit) on their home directory; however, justifiable requests for quotas of up to 2TB can be made by contacting the GACRC IT Manager (currently Greg Derda: derda@uga.edu). Using the home directory simply to avoid archival storage fees is not a justifiable request. Requests for home quotas greater than 2TB must be submitted by the PI of a lab group and approved by the GACRC advisory committee (via the IT Manager). Users may create lab directories for data shared by a lab group, but those directories count against the quota of the user who created them. For example, users in the "abclab" group could use /home/abclab/labdata. Home directories are backed up.
The current scratch file system is mounted on the compute clusters as escratch. Researchers who need to use scratch space can type
make_escratch
A sub-directory will be created, and the user will be told its path, e.g., /escratch/jsmith_Oct_22. The life span of the directory is one week longer than the longest-duration queue, which is currently 30 days (i.e., a life span of 37 days). After that time, the directory and its contents will be deleted. Users can create one escratch directory per day if needed. The total space a user may consume on scratch (all scratch directories combined) is 2TB. Scratch directories are not backed up.
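The 37-day figure is the 30-day longest queue plus a one-week grace period. As a rough sketch (not an official tool; GNU date is assumed, which may not be available everywhere), the earliest deletion date for a directory created today could be estimated like this:

```shell
# Sketch only: escratch life span = longest queue (30 days) + one week.
QUEUE_DAYS=30
GRACE_DAYS=7
LIFESPAN=$((QUEUE_DAYS + GRACE_DAYS))
echo "escratch life span: ${LIFESPAN} days"
# GNU date assumed; prints the earliest date the directory may be deleted.
date -d "+${LIFESPAN} days" "+deleted on or after: %Y-%m-%d"
```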
To see how much space you are consuming on the home and scratch file systems, please use the command
quota_rep
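quota_rep is specific to the GACRC clusters and reports the authoritative numbers; its exact output format is not shown here. As an illustrative fallback using only standard tools, you can estimate home-directory usage against the default 100GB quota:

```shell
# Sketch: compare home-directory usage to the default 100GB quota.
# quota_rep (cluster-specific) is authoritative; plain du is a
# portable approximation of space consumed under $HOME.
QUOTA_GB=100
USED_KB=$(du -sk "$HOME" 2>/dev/null | cut -f1)
USED_GB=$((USED_KB / 1024 / 1024))
echo "home usage: ${USED_GB}GB of ${QUOTA_GB}GB quota"
```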
Some labs also have subscription-based archival storage space, which is mounted on the zcluster login node and on the copy nodes as /oflow (note that /oflow is not mounted on the compute nodes).