Available Filesystems and Quotas
Roar Collab (RC) offers several file storage options, each with its own quotas and data retention policy, so that users can choose the location best suited to each stage of their workflow.
Storage Information
Storage | Location | Space Quota | Files Quota | Backup Policy | Use Case |
---|---|---|---|---|---|
Home | /storage/home | 16 GB | 500,000 files | Daily snapshot | Configuration files |
Work | /storage/work | 128 GB | 1 million files | Daily snapshot | Primary user-level data |
Scratch | /scratch | None | 1 million files | No backup; files purged after 30 days | Temporary files |
Group | /storage/group | Specific to allocation | 1 million files per TB allocated | Daily snapshot | Primary shared data |
Home should primarily be used for configuration files and should not be used as a primary storage location for data. Work should be used as the primary personal data storage location. Scratch should be used for temporary files and for reading and writing large data files.
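The guidance above suggests a common pattern: stage large inputs into Scratch for fast I/O during a job, then copy results back to Work before the 30-day purge removes them. A minimal sketch of that pattern is below; the function name and both paths are illustrative placeholders, not RC-provided tooling.

```shell
# Hedged sketch of a scratch workflow; stage_to_scratch and the
# example paths are assumptions for illustration only.
stage_to_scratch() {
  work="$1"      # e.g. /storage/work/<userid>/project
  scratch="$2"   # e.g. /scratch/<userid>/project
  mkdir -p "$scratch"
  # Heavy reads and writes happen against the scratch copy.
  cp -r "$work/input" "$scratch/"
}
```

After the job finishes, copy results back (for example, `cp -r "$scratch/output" "$work/"`) so they are not lost to the 30-day purge.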
To grant a user access to a paid group storage allocation, the owner of the allocation should submit a request to icds@psu.edu asking that the user be added to the owner's <owner>_collab group.
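Once the request has been processed, a user can confirm the membership took effect with the standard `groups` command, which lists the Unix groups for the current user:

```shell
# List the current user's Unix groups; the <owner>_collab group should
# appear after the request to icds@psu.edu has been processed.
groups
```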
Check Usage
To check storage usage against the storage quotas, run the following command on RC:
$ check_quotas
The output of this command is not generated in real time, but the underlying quota information is updated several times per day. After removing many files, for instance, the reduced usage will not be reflected in the output until the next update period.
For a real-time look at the disk usage within a particular storage location, navigate to the storage location and run the following command:
$ du -sch .[!.]* * | sort -h
For a real-time look into the number of files in a storage location, navigate to the storage location and run the following command:
$ find . -type f | wc -l
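The two real-time checks above can be combined into a small helper that reports both space usage and file count for a given directory. This is a convenience sketch, not an RC-provided utility; the function name is an assumption.

```shell
# usage_report DIR: print total space used and file count under DIR.
# A convenience wrapper around du and find; not an RC-provided tool.
usage_report() {
  dir="${1:-.}"
  space=$(du -sh "$dir" | cut -f1)                  # total space, human-readable
  files=$(find "$dir" -type f | wc -l | tr -d ' ')  # regular-file count
  printf 'Space used: %s\nFiles: %s\n' "$space" "$files"
}
usage_report .
```

Running `usage_report /storage/work/<userid>` (with the actual path substituted) reports both figures against the quotas in the table above.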
A user can check the overall usage and capacity of an accessible group storage location by navigating to the group storage location and running the following command (use -i in place of -h to report file counts instead of space):
$ df -h .