Difference between revisions of "Storage"

From UFRC

[[Category:Data]][[Category:Essentials]]
{|align=right
  |__TOC__
  |}
UF Research Computing maintains several shared storage systems that are intended for different user activities. A summary of the filesystems and their use can be found in the [https://www.rc.ufl.edu/documentation/policies/storage/ RC storage policy]. Here we discuss practical use of the filesystems on HiPerGator. See our [[FAQ]] for what to do when you run out of storage space.
  
 
==Home Storage==
Your home directory is the first directory you see when you log into HiPerGator. It is always found at <code>~</code>, <code>/home/$USER</code>, or <code>$HOME</code>; the <code>$USER</code> and <code>$HOME</code> shell variables can be used in scripts. The home directories are the smallest storage allocations available to our users. They contain files important for setting up the user shell environment and secure shell connections. Do not remove any <code>.bash*</code> files or the <code>.ssh</code> directory, or you will have problems using your HiPerGator account. [https://support.rc.ufl.edu Let us know] if any of them were removed by accident so we can reset the files to standard versions.

The first rule of using the home directory is to not use it for reading or writing data files in any analyses run on HiPerGator. It is permissible to keep software builds, conda environments, text documents, and valuable scripts in <code>$HOME</code>, as it is somewhat protected by daily [[Snapshots|snapshots]].
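As a quick illustration, the three path spellings above all refer to the same place; a minimal sketch you can run in any shell session:

```shell
# All three of these print the same home directory path on HiPerGator.
echo "$HOME"
echo ~
echo "/home/$USER"
# In scripts, prefer "$HOME": it expands even in contexts where
# tilde expansion does not apply (e.g. inside double quotes).
```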
  
 
==Blue Storage==
Blue Storage is our main high-performance parallel filesystem. This is where all job input/output (a.k.a. 'job i/o', i.e. reading and writing files) must happen. By default your personal directory tree will start at <code>/blue/GROUP/USER</code>. That directory cannot be modified by other group members. There is a shared directory at <code>/blue/GROUP/share</code> for groups that prefer to share all their data between group members. The parallel nature of Blue Storage makes it very efficient at reading and writing large files, which can be 'striped', or broken into pieces, to be stored on different storage servers. It does not deal well with directories that hold a large number of very small files. If a job produces those, it is advisable to use the [[Temporary Directories]] to alleviate the burden on Blue Storage and keep it responsive and performant for everyone. For groups that purchased separate storage for additional projects, the default path to the project directories is <code>/blue/PROJECT</code>. Such a directory is set up similarly to the 'share' directory in the primary group directory tree.

===Storage Automounting===
Do not be alarmed if you do not see your Blue Storage directory when looking in <code>/blue</code> at first. Blue Storage is connected (mounted) on demand, so you have to 'use' it before it becomes visible. For example, changing into your blue directory or listing its contents will connect it and make it visible, so use one of the following commands to avoid surprises:
<pre>cd /blue/GROUP  # moves you to the folder
ls /blue/GROUP  # lists the contents of the folder</pre>

If you are using Jupyter Notebook or other GUI or web applications that make it difficult to browse to a specific path, you can [[Jupyter_Notebooks#Create_the_Link | create a symlink (shortcut)]]. Example: <code>ln -s path_to_link_to name_of_link</code>
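The symlink pattern can be sketched as follows; to keep the commands safe to run anywhere, the link is created in a temporary directory here, while on HiPerGator you would typically place it in your home directory (e.g. <code>~/blue</code>):

```shell
# Sketch: link a long storage path to a short, easy-to-browse name.
# GROUP is a placeholder for your actual group name.
d=$(mktemp -d)                       # stand-in for your home directory
ln -s "/blue/GROUP/$USER" "$d/blue"  # on HiPerGator: ln -s /blue/GROUP/$USER ~/blue
readlink "$d/blue"                   # prints the path the link points to
```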
  
 
==Orange Storage==
Orange Storage is cheaper than Blue, but its hardware is also more limited. Therefore, Orange Storage cannot support the full brunt of the applications running on HiPerGator. Limit its use to long-term storage of data that is not currently in use, or to very gentle access such as serial reading of raw data for QC/Filtering, with the output of that first step in many workflows going to your Blue Storage directory tree.

We only create the <code>/orange/$GROUP</code> directory when a new quota is added (no user or share directories like the ones pre-created for a group in <code>/blue</code>). Users in a group are expected to work out their own approach to storing and sharing data in their orange directory tree.

===Storage Automounting===
Do not be alarmed if you do not see your Orange Storage directory with <code>ls</code> at first. Orange Storage is connected (mounted) on demand, so you have to 'use' it before it becomes visible. For example, changing into your orange directory or listing its contents will connect it and make it visible, so use one of the following commands to avoid surprises:
<pre>cd /orange/mygroup
ls /orange/mygroup</pre>

==Red Storage==
Red Storage is fully flash-based and can support high rates of i/o. The point to remember about Red Storage is that allocations are short-term: the data is removed within 24 hours of the allocation's end date. See the policy page for how to request an allocation.

==Local Scratch Storage==
All HiPerGator compute nodes have local storage. That storage is flash-based on HPG3 and newer nodes and can support high i/o rates; older nodes use spinning disks with lower i/o rates. Using local scratch storage on HiPerGator compute nodes (see [[Temporary Directories]]) is a way to insulate an analysis from most of the other jobs running on HiPerGator, which are generally using Blue Storage. Therefore it may be possible to use <code>$TMPDIR</code> to get much higher i/o rates, as the job competes for local scratch i/o only with the limited number of jobs on the same compute node that also chose to use local scratch. The caveat is that using local scratch requires staging in the input data (copying it from <code>/blue</code> to <code>$TMPDIR</code> within the job) and staging out the results (copying result files back to <code>/blue</code>), since the job's temporary scratch directory is automatically removed at the end of the job and any files left on it are irretrievably lost.
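The stage-in/compute/stage-out pattern above can be sketched as a job script. This is an illustrative outline only: the resource requests, paths, <code>input.dat</code>, and <code>my_program</code> are placeholders, not a tested recipe.

```shell
#!/bin/bash
#SBATCH --job-name=local_scratch_demo
#SBATCH --time=01:00:00
# Placeholder job script; GROUP, input.dat, and my_program are hypothetical.

# Stage in: copy the input from Blue to the job's node-local scratch directory.
cp "/blue/GROUP/$USER/input.dat" "$TMPDIR/"
cd "$TMPDIR"

# Run the analysis against the fast local copy.
# my_program input.dat > results.out

# Stage out: copy results back to Blue before the job ends;
# $TMPDIR is removed automatically when the job finishes.
cp results.out "/blue/GROUP/$USER/"
```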
==Checking Quotas and Managing Space==
The [[UFRC_environment_module|ufrc environment module]] has several tools useful for checking storage use and quotas as well as exploring directories and their space use.
* <code>home_quota</code> - show your HiPerGator Home directory quota usage.
* <code>blue_quota</code> - show HiPerGator Blue Storage (<code>/blue</code>) quota usage for your user and group.
* <code>orange_quota</code> - show HiPerGator Orange Storage (<code>/orange</code>) quota usage for your project(s).
* <code>ncdu</code> - an interactive program for showing directory sizes, browsing a directory tree, and removing files and directories in a terminal (ssh session).
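Alongside these tools, standard utilities give a quick non-interactive view of space use; for example, this one-liner lists the largest immediate subdirectories of the current directory (it assumes GNU <code>du</code> and <code>sort</code>, which are standard on Linux):

```shell
# Summarize each subdirectory's size, largest first, top ten only.
du -sh -- */ 2>/dev/null | sort -rh | head -n 10
```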
 
==Shared Work and Storage Management==
Note that HiPerGator is a RedHat Enterprise Linux based cluster. Its main shared filesystems are based on [https://www.opensfs.org/lustre/ Lustre]. All filesystem management limitations follow from this setup.

The sponsor of a group on HiPerGator has the ultimate authority over any data produced by their group members, within the limitations of the Linux kernel, the Linux [https://linuxize.com/post/understanding-linux-file-permissions/ filesystem permission model], and the Lustre filesystem [https://wiki.lustre.org/images/5/57/PosixAccessControlInLinux.pdf implementation] of POSIX [https://www.linux.com/news/posix-acls-linux/ Access Control Lists].
*In practical terms this means that the sponsor can decide on any action pertaining to the disposition of the files under their control, but how a particular change can be implemented, if it is possible at all, can vary in scope, in the amount of initial and maintenance effort required, and in the support request timeline.
*It is important to understand both the security model and the limitations imposed by the system when considering what approaches to data management within a group or project directory are possible.

In a default setup each primary or project group with a Blue storage quota will have a <code>/blue/GROUP</code> top-level directory, which contains individual user directories and a <code>share</code> sub-directory for collaborative projects. The initial permissions are set such that only individual users have write access to their personal directories, while the share directory is group-writable. By default, group members have read access to other group members' folders.

For the Orange filesystem the <code>/orange/GROUP</code> directory is group-writable, with no other directories or permissions created by default.

Access to files in the <code>/orange/GROUP</code> or <code>/blue/GROUP/share</code> directories depends on individual <code>umask</code> settings or the results of <code>chmod</code> commands that change permissions, and is within the purview of the group members with no RC staff involvement.
*It is not expected that other users are given write access to another group member's files outside of the share directory.
*When an account is deleted because of inactivity or an explicit sponsor support request, the default action is to move the user's personal directory to <code>/blue/GROUP/share</code> and <code>chown</code> all files in the Blue or Orange group directory trees that were owned by the user account in question to the username of the group's sponsor.
*A sponsor opening a [https://support.rc.ufl.edu support request] can ask for a different disposition of the former group member's files.
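The <code>chmod</code> adjustments mentioned above can be sketched as follows; the example uses a temporary directory as a stand-in so the commands are safe to run anywhere, and the <code>results</code>/<code>private</code> names are placeholders:

```shell
# Stand-in for a real /blue/GROUP/USER tree.
base=$(mktemp -d)
mkdir "$base/results" "$base/private"

# Let the group read and traverse the results directory
# (capital X sets execute only on directories and executables)...
chmod -R g+rX "$base/results"
# ...and keep the private directory accessible to the owner only.
chmod -R go-rwx "$base/private"

stat -c '%a %n' "$base/results" "$base/private"
```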

If the default directory and permission configuration and the account removal procedure fit how the group operates, no further changes are necessary. However, we have observed additional approaches to group data management, as well as support requests for changes that can be contrary to system limitations or are difficult to implement and maintain, and therefore may not be advisable even if somewhat feasible.

===Collaborative Approaches===
See also: [[Sharing Within A Cluster]]

In a fully collaborative research group on HiPerGator all members are encouraged to set <code>umask 007</code> in their <code>~/.bashrc</code> file to make sure that group write permissions are set on all files and directories created by group members.
*This will allow all users in the group to manage (read, write, execute) all group files. If security against accidental file deletion is desirable, the group is advised to purchase [[Tivoli Backup]] from UFIT ICT.
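The effect of <code>umask 007</code> can be checked directly in a shell session; the file name below is arbitrary:

```shell
cd "$(mktemp -d)"        # work in a scratch directory for the demo
umask 007                # new files: rw for user and group, nothing for others
touch demo_file
stat -c '%a' demo_file   # prints 660
```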

If there is a need to share files with members of other groups or with external collaborators, multiple approaches are possible. The most straightforward way is to share a directory via a [[Globus]] Collection. This approach lets both HPG and external collaborators make a copy of the data, and permissions are controlled in Globus.

* If a selected set of users from multiple HPG groups must work on a project, the preferred approach is for a sponsor or sponsors of the project to request creation of a project group and purchase a storage quota for the project. In that case all members of the project will be added to the project group as secondary members and will be able to manage project files in a manner similar to how the Blue share or Orange directory is managed.

If it is necessary to give access to a directory to a member of a different HPG group, the most straightforward solution from a system administration viewpoint is to add that user as a secondary member of the sharing group. However, this change gives the user access to all group-readable files and the ability to use the group's computational resources on HiPerGator, which may not be desirable for one reason or another. The request to add a user to a group should be made by, and will require approval of, the group's sponsor via a [https://support.rc.ufl.edu support request].

* There are more complex situations. Unfortunately, there is no general mechanism in Linux to allow hierarchical access permissions on filesystems. It may be possible to set Lustre filesystem ACLs on <code>/blue</code> and <code>/orange</code> directories to allow more complex permissions - for example, allowing a user to manage another user's files, which means they will be able to modify, rename, move, or remove files and directories they do not own or to which they have no group-level write access. Using Lustre ACLs is not a straightforward approach, and the ACL interactions with the Linux filesystem permissions may still preclude write access to some files and directories. We are willing to make such changes or work with the group to determine a usable <code>setfacl</code> command. However, success is not guaranteed, and the ACL permissions may be lost or not applied to new files. Please use the [https://support.rc.ufl.edu RC Support System] to get in touch if you really need to use filesystem ACLs and are having trouble or need group-wide settings.
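For illustration only, a typical pair of <code>setfacl</code> commands looks like the sketch below. A temporary directory stands in for a real <code>/blue/GROUP/share</code> path, and the ACL is granted to your own account just to show the syntax (in practice you would name the collaborator's username); whether this works at all depends on the filesystem's ACL support.

```shell
dir=$(mktemp -d)                    # stand-in for /blue/GROUP/share/project
setfacl -m "u:$USER:rwx" "$dir"     # grant one extra user rwx on the directory
setfacl -d -m "u:$USER:rwx" "$dir"  # default ACL, inherited by new files inside
getfacl "$dir"                      # review the resulting ACL entries
```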

Latest revision as of 16:11, 3 June 2024
