Introduction
The AI.Panther cluster provides multiple storage types to support a wide range of research and computing workflows. Each storage type is designed for a specific purpose and differs in performance, access model, and data-retention policy.
This article describes the four primary storage types available on AI.Panther:
- Network Home Directories (/home1)
- Local Scratch (/localscratch)
- Project Storage (/shared/projects)
- Shared Scratch Storage (/shared/scratch)
Storage Overview
| Storage Type | Path | Intended Use | Backups | Retention |
| --- | --- | --- | --- | --- |
| Network Home | /home1/<user> | User home directories, configs, scripts (all users) | Yes | Persistent |
| Local Scratch | /localscratch | Temporary, high-churn workloads (all users) | No | Auto-purged |
| Project | /shared/projects/<project_name> | Shared research project data (requires project approval) | No | Time-limited |
| Shared Scratch | /shared/scratch | Temporary, high-churn workloads across nodes | No | Auto-purged |
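A quick way to confirm which of these filesystems are visible from the node you are logged into is to run df against each mount point (note that /localscratch exists only on H200 nodes, so the second command will fail elsewhere):

df -h /home1 /shared/projects /shared/scratch
df -h /localscratch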
Network Home Directories
Overview
User home directories are provided via NFS and mounted at:
/home1
All user home directories reside under this path, and the filesystem is mounted and available on all nodes.
Intended Use
- Shell configuration files
- Source code
- Job scripts
- Small to medium-sized datasets
- Results that need long-term persistence
Access Model
- Each user has a private home directory
- Users cannot access other users' home directories
Characteristics
- Mounted via NFS on all nodes
- Backed up
- Persistent (no automatic deletion)
- Not optimized for large-scale or high-throughput I/O
Notes
Large datasets (roughly 50-100 GB and above), model checkpoints, and data for high-throughput I/O workloads should not be stored in home directories. Consider requesting Project Storage or using Scratch instead.
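To check how much space your home directory is currently using, and to relocate a large dataset once project storage has been granted, something along these lines works; the dataset and project names below are placeholders:

du -sh /home1/$USER
rsync -av /home1/$USER/large_dataset/ /shared/projects/<project_name>/large_dataset/
# Remove the original only after verifying the copy
rm -rf /home1/$USER/large_dataset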
Local Scratch
Overview
Local scratch storage is provided on individual H200 GPU nodes and is intended for temporary, node-local workloads. This storage is mounted at:
/localscratch
Intended Use
- Temporary job working directories
- Intermediate files
- Short-lived, node-local data
Access Model
- Available to all users running jobs on a given node
- Data only visible on the node where it was written
- Only available on H200 nodes in the h200 partition
Characteristics
- Local to the node
- No backups
- Automatically purged
- Highest locality and lowest latency for node-local workloads
Purge Policy
Files in local scratch are automatically deleted after 60 days from the last modification time.
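To see which of your files are approaching the purge window, you can query by modification time; the per-user directory layout shown here is an assumption, so adjust the path to wherever your job writes its data:

find /localscratch/$USER -type f -mtime +50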
Notes
Local scratch data should be treated as ephemeral. Do not store data that cannot be easily regenerated.
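A common pattern is to stage data into local scratch at the start of a job, run against the local copy, and copy results back to persistent storage before the job ends. The sketch below assumes the cluster uses Slurm (suggested by the h200 partition mentioned above) and uses placeholder paths and program names; adapt it to your own workflow:

#!/bin/bash
#SBATCH --partition=h200
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00

# Private per-job working directory in node-local scratch (layout is an assumption)
WORKDIR=/localscratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# Stage input data from persistent storage
cp /home1/$USER/inputs/dataset.tar "$WORKDIR/"
cd "$WORKDIR"
tar -xf dataset.tar

# ./run_analysis ...   (your actual workload writes into ./results)

# Copy results back before the job ends; local scratch is unbacked-up and auto-purged
cp -r results /home1/$USER/results/$SLURM_JOB_ID

# Free node-local space for other users
rm -rf "$WORKDIR"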
Project Storage
Overview
As of 2026, the AI.Panther cluster includes a new, high-capacity DDN ExaScaler (Lustre) storage system providing approximately 1 pebibyte (PiB) of shared research storage. This storage is designed for research group project storage, not individual home directories.
Intended Use
- Shared datasets
- Simulation outputs
- Model checkpoints
- Collaborative research data
- Performance-sensitive, large-scale workloads
Access Model
Project storage is group-based:
- A project corresponds to a group
- Access is granted via group membership
- Project data resides under a project directory: /shared/projects/<project_name>
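Because access follows group membership, you can check from any login node whether your account is in the right group; the assumption that the group name matches the project name may not hold for every project:

groups                                  # list the groups your account belongs to
ls -ld /shared/projects/<project_name>  # the group shown here is the one that grants access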
Characteristics
- High-performance parallel filesystem (Lustre)
- No backups
- Shared across all nodes
- Designed for large capacity and high throughput
Allocation and Term
- Minimum allocation: 100 GB per project
- Default term: 1 year
- Renewals: annual; renewal requests are submitted by the faculty sponsor (PI)
Notifications and Expiration
To avoid unintentional data loss, the project team will receive notifications in two scenarios:
- 30 days before the project storage allocation is set to expire
- 30 days before the renewal deadline
Failure to renew may result in data deletion.
Deletion and Recoverability
⚠ Warning: Deleted files are not recoverable at this time. Treat all deletions as permanent. Please maintain a backup policy for your data.
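One minimal backup sketch, assuming the irreplaceable subset of your data fits within your home quota (home directories are backed up), is a periodic rsync to home; the critical/ subdirectory is a placeholder:

rsync -av /shared/projects/<project_name>/critical/ /home1/$USER/backups/<project_name>/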
Requesting Project Storage
Project storage is requested through TeamDynamix (TDX). Requests should include:
- Project name
- Project description
- Justification for request
- Amount of storage requested
- Expected duration of storage
- List of users in the group
Common User Commands
The following commands may be used by project members to view storage usage, quotas, and ownership:
# Show filesystem usage and free space for the project storage system
df -h /shared/projects/<project_name>
# Show the Lustre group quota for the project group
lfs quota -h -g <project_group> /shared
# List all project directories and their group ownership
ls -l /shared/projects/
# List the contents and ownership of a specific project directory
ls -l /shared/projects/<project_name>
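When several members write into the project area, new files should be owned by the project group and group-readable. A minimal sketch, assuming <project_group> owns the project directory and shared_data is a placeholder for a directory you own inside it:

chgrp -R <project_group> /shared/projects/<project_name>/shared_data
chmod -R g+rwX /shared/projects/<project_name>/shared_data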
Shared Scratch
Overview
Shared Scratch storage is provided through the DDN ExaScaler system and is mounted at:
/shared/scratch
This storage is intended for temporary, high-churn workloads that require shared access across nodes.
Intended Use
- Temporary simulation outputs
- Intermediate files for large jobs
- Shared working directories for short-lived workflows
Access Model
- World-writable
- Available to all users
- Users may not delete files owned by other users
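Since the top level is world-writable, a common convention (not a site requirement) is to create a personal subdirectory and restrict its permissions:

mkdir -p /shared/scratch/$USER
chmod 700 /shared/scratch/$USER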
Characteristics
- High-performance parallel filesystem
- No backups
- Automatically purged
- Optimized for throughput, not persistence
Purge Policy
Files in shared scratch are automatically deleted after 60 days from the last modification time.
Notes
Shared Scratch should never be used for long-term storage or as the sole copy of important data. All data stored here should be considered temporary.