The National Institute for Computational Sciences

File Systems - SIP

Summary

The table below describes the ACF file systems.
File System Purpose        Path to User's Directory                 Quota, Purge Policy
Home Directory             /nics/[a,b,c,d]/home/{username}          10 GB quota, not purged
Lustre Scratch Directory   /lustre/sip/proj/{project}/{username}    No quota, purged
Project Directory          /projects/{project}                      By request, not purged
ACF file systems are generally very reliable; however, data may still be lost or corrupted. Users are responsible for backing up critical data unless arrangements are made in advance. Backups of critical data can be provided on request for a fee.

Backups are performed on the Home Directory file system. Lustre SIP project directories are only backed up by request for a fee. Lustre SIP scratch directories are NOT backed up.

NFS Home Directories

Home directories are provided over NFS, with approximately 500 gigabytes (GB) of space available in this file system. Home directories are not purged and have a default quota of 10 gigabytes (GB) of storage space. This is the location to store user files up to the quota limit. The environment variable $HOME points to your home directory path. To request an increase in your home directory quota limit, submit a request to help@nics.utk.edu.
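
For example, to confirm your home directory location and check your current usage against the quota (a quick sketch; it assumes the standard Linux quota utility is available on the login nodes, with -s printing sizes in human-readable units):

    > echo $HOME
    > quota -s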

Home directories are regularly backed up.

SIP Project Space

Project space is available on each login node for each project, located at /project/{project-name}. Each user should create their own subdirectory under this project space the first time they use it. There is a special process to mount this project space: each project is given a passphrase that allows its users to mount and use the space. Use the following command:
sudo luksmount {project-name}

For example:
[victor@sip-login1 ~]$ sudo luksmount SIP-STA0001
Enter passphrase for /dev/VolGroup/lv_SIP-STA0001:
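
After the project space is mounted, create your own subdirectory the first time you use it. Continuing the example above (the path assumes the /project/{project-name} location described earlier):

[victor@sip-login1 ~]$ mkdir /project/SIP-STA0001/victor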

Scratch Directories on Lustre SIP

The Lustre SIP file system provides about 15 terabytes (TB) of global high-performance scratch space for data sets related to running jobs on the SIP resources and for transferring data in and out of the data transfer nodes. Every user has a scratch directory, created at account creation time, located in their Lustre project space at /lustre/sip/proj/{project}/{username}. The environment variable $SCRATCHDIR points to each user's scratch directory. Scratch space on SIP can be purged weekly, but it has no quota or storage space limit.
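
For example, to stage input files from your home directory into scratch before a run (input-data is an illustrative directory name):

    > cp -r $HOME/input-data $SCRATCHDIR/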

Lustre SIP Scratch directories are NOT backed up.

Important Points for Users of Lustre SIP Scratch

  • The Lustre SIP Scratch file system is scratch space, intended for job setup, running jobs, and job cleanup and post-processing on SIP resources, not for long-term data storage. Files in scratch directories are not backed up, and data that has not been used for 30 days is subject to being purged. It is the user's responsibility to back up all important data to another storage resource.

    The lfs find command can be used to identify files that are eligible to be purged:

    > lfs find /lustre/sip/proj/{project}/$USER -mtime +30 -type f
    
    This will recursively list all regular files in your Lustre scratch area that are eligible to be purged.

  • Striping is an important concept with Lustre. Striping is the ability to break files into chunks and spread them across multiple storage targets (called OSTs). The striping defaults set up for NICS resources are usually sufficient but may need to be altered in certain use cases, such as when dealing with very large files. Please see our Lustre Striping Guide for details; a brief example of checking and setting striping appears after this list.

  • Beware of using normal Linux commands for inspecting and managing your files and directories in Lustre scratch space. Using ls -l can cause undue load and may hang because it necessitates access to all OSTs holding your files. Make sure that your ls is not aliased to ls -l.

  • Use lfs quota to see your total usage on the Lustre system. You must specify your username and the Lustre path with this command, for example:

    > lfs quota -u <username> /lustre/sip
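
As noted in the striping point above, the lfs utility can also report and change stripe settings. The commands below are a minimal sketch: the file and directory names are illustrative, and the stripe count of 4 is only an example; consult the Lustre Striping Guide before changing the defaults.

    > lfs getstripe $SCRATCHDIR/large_input.dat
    > lfs setstripe -c 4 $SCRATCHDIR/striped_dir

Here lfs getstripe reports the stripe layout of an existing file or directory, and lfs setstripe -c 4 on a directory causes new files created in it to be striped across four OSTs.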
    

For more detailed information regarding Lustre usage, see the Lustre documentation pages on the NICS website.

NICS will be developing additional storage policies and will notify users about any storage policy changes.