1. Cluster Architecture & Storage Types
AI.Panther is a high-performance computing (HPC) cluster at Florida Tech. It is not a single computer — it is a collection of interconnected nodes managed by a job scheduler.
How Access Works
- Off-Campus Users first connect through the VPN (FortiClient), then SSH into the Login Node.
- On-Campus Users SSH directly into the Login Node.
From the Login Node, Slurm dispatches your jobs to the appropriate compute nodes.
Compute Nodes
| Node Group | Nodes |
| --- | --- |
| GPU Nodes 09–12 (H200) | 4 nodes |
| GPU Nodes 01–08 (A100) | 8 nodes |
| CPU Nodes 01–16 | 16 nodes |
Storage & Where It's Mounted
| Storage | Path | Mount Type | Accessible From |
| --- | --- | --- | --- |
| User Home | /home1 | NFS Mount | Login Node, all GPU nodes, all CPU nodes |
| Project Storage | /shared/projects | LFS Mount (DDN Servers) | Login Node, all GPU nodes, all CPU nodes |
| Shared Scratch | /shared/scratch | LFS Mount (DDN Servers) | Login Node, all GPU nodes, all CPU nodes |
| Local Scratch | /localscratch | Local Mount | GPU Nodes 09–12 (H200) only |
| Archive | /archive | NFS Mount | Login Node, CPU nodes |

Storage Details
| Path | Type | Description | Lifecycle |
| --- | --- | --- | --- |
| /home1 | Network Home Directories | User home directories, configs, scripts (all users) | Persistent |
| /localscratch | Local Scratch | Temporary, high-churn, node-local (all users, H200 nodes) | Auto-purged |
| /shared/projects | Project Storage | Shared research project data (requires project approval) | Time-limited |
| /shared/scratch | Shared Scratch | Temporary, high-churn, across nodes (all users) | Auto-purged |
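The tables above distinguish persistent storage from auto-purged scratch. A minimal sketch of the usual pattern — do heavy I/O in scratch, then copy results back to persistent storage — assuming a per-user directory under /shared/scratch (the directory name job_demo is illustrative; the mktemp fallback only exists so the sketch also runs on a machine without that mount):

```shell
#!/bin/bash
# Stage work in shared scratch; keep final results in persistent storage.
SCRATCH_BASE=/shared/scratch
if [ -d "$SCRATCH_BASE" ]; then
    WORKDIR="$SCRATCH_BASE/$USER/job_demo"   # illustrative per-user job directory
else
    WORKDIR="$(mktemp -d)"                   # fallback for machines without the mount
fi
mkdir -p "$WORKDIR"
echo "Working in: $WORKDIR"
# ... run your job here, writing outputs into "$WORKDIR" ...
# Copy results back to persistent storage before scratch is purged:
# cp -r "$WORKDIR/results" "$HOME/"
# rm -rf "$WORKDIR"
```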
Key takeaway: The Login Node is for light tasks (editing files, submitting jobs). Compute nodes handle your actual workloads. Running heavy processes on the Login Node is prohibited.
📄 KB Article: AI.Panther Storage Types
2. SSH & Connecting
SSH (Secure Shell) is the standard way to log into a remote system over a network. All communication is encrypted. You will use SSH to connect to the AI.Panther login node.
2.1 What You Need
To connect, you need three things:
- Hostname: ai-panther.fit.edu
- Username: your Florida Tech username (e.g. aissitt2019)
- Password: your Florida Tech password (or an SSH key later)
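If you later want to log in with an SSH key instead of typing your password each time, the standard setup is to generate a key pair locally and copy the public half to the cluster. A sketch, assuming the dedicated key file name id_ed25519_aipanther (any name works):

```shell
# Run these on your local machine, not on the cluster.
mkdir -p "$HOME/.ssh"
# Generate a key pair if one does not exist yet; -N "" means no passphrase
# (consider setting one for better security).
[ -f "$HOME/.ssh/id_ed25519_aipanther" ] || \
    ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519_aipanther" -N "" -q
# Copy the public key to the cluster (prompts for your password one last time):
# ssh-copy-id -i "$HOME/.ssh/id_ed25519_aipanther.pub" username@ai-panther.fit.edu
```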
2.2 Connecting
Open a terminal (Command Prompt, PowerShell, or macOS/Linux Terminal) and run:
ssh username@ai-panther.fit.edu
Note: On your first connection, you will see a message asking whether to trust the host. This is normal — type yes to continue.
2.3 Verifying Your Connection
Once connected, try these commands to confirm you are on the cluster:
hostname # Should display the login node name
whoami # Should display your username
2.4 Disconnecting
To end your SSH session:
exit
# or press Ctrl + D
3. Linux CLI (Navigating Directories, File Sizes)
After logging in with SSH, you interact with AI.Panther through the Linux command line interface (CLI). Commands you type run on the login node.
Your prompt looks like this:
username@ai-panther:~$
This tells you your username, the system you are on, and your current directory (~ means your home directory).
3.1 Navigation Commands
pwd # Show your current directory
ls # List files in the current directory
ls -lh # List files with human-readable sizes
cd dirname # Change into a directory
cd .. # Go up one level
cd ~ # Go to your home directory
3.2 File Size & Disk Usage
du -sh * # Show size of each file/directory
df -h # Show available disk space
3.3 Try It — Explore the File System
Run each of these commands after logging in and observe the output:
See where you are:
pwd
Expected output:
/home1/username
List what's in your home directory:
ls -lh
Expected output (yours will vary):
total 4.0K
drwxr-xr-x 2 username username 4.0K Feb 4 10:00 Documents
Navigate into a directory and back:
mkdir test_folder # Create a new directory
cd test_folder # Move into it
pwd # Confirm your location
cd .. # Go back up one level
pwd # Confirm you're back
Expected output:
/home1/username/test_folder
/home1/username
Check how much space you're using:
du -sh ~
Expected output:
12K /home1/username
Check available disk space on the system:
df -h /home1
Expected output (values will vary):
Filesystem Size Used Avail Use% Mounted on
server:/home1 5.0T 1.2T 3.8T 24% /home1
Create a file and verify it exists:
echo "Hello AI Panther" > hello.txt
cat hello.txt # Print the file contents
ls -lh hello.txt # Check its size
rm hello.txt # Clean up
Expected output:
Hello AI Panther
-rw-r--r-- 1 username username 17 Feb 4 10:05 hello.txt
4. File Transfers (rsync, VS Code)
You will frequently need to move files between your local machine and AI.Panther. You can use the VS Code GUI (drag-and-drop), or command-line tools like scp and rsync.
4.1 Local → Cluster
Using scp (Windows, macOS, Linux):
# Windows (PowerShell/CMD) — use backslashes for local paths:
scp -r .\local_folder\ username@ai-panther.fit.edu:/home1/username/
# macOS / Linux — use forward slashes:
scp -r ./local_folder/ username@ai-panther.fit.edu:/home1/username/
Using rsync (macOS, Linux only):
rsync -avh local_folder/ username@ai-panther.fit.edu:/home1/username/
Note: rsync is not available natively on Windows. You can install cwRsync to use it, or use scp or VS Code drag-and-drop instead.
4.2 Cluster → Local
Using scp (Windows, macOS, Linux):
# Windows (PowerShell/CMD):
scp -r username@ai-panther.fit.edu:/home1/username/data/ .\data\
# macOS / Linux:
scp -r username@ai-panther.fit.edu:/home1/username/data/ ./data/
Using rsync (macOS, Linux only):
rsync -avh username@ai-panther.fit.edu:/home1/username/data/ ./data/
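Before a large transfer, rsync's dry-run flag (-n) previews what would be copied without transferring anything:

```
rsync -avhn username@ai-panther.fit.edu:/home1/username/data/ ./data/
```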
4.3 VS Code Drag-and-Drop
If you have VS Code connected via Remote-SSH (see Section 5), you can simply drag and drop files between your local file explorer and the VS Code file panel. This is often the easiest option for Windows users.
Tip: You can also use FileZilla (GUI) for file transfers on any platform.
5. VS Code SSH Setup
Visual Studio Code can connect directly to AI.Panther, giving you a full editor, file browser, and integrated terminal on the cluster.
5.1 Install the Remote-SSH Extension
Open Visual Studio Code. Click the Extensions icon in the left sidebar (or press Ctrl+Shift+X). Search for "Remote - SSH" and install the extension by Microsoft.

5.2 Open the Remote Explorer
After installing the extension, a new Remote Explorer icon will appear in the left sidebar. Click it, then make sure the dropdown at the top is set to Remotes (Tunnels/SSH).

5.3 Add a New SSH Connection
In the Remote Explorer panel, expand SSH and click the + (plus) button to add a new remote. When prompted to enter an SSH connection command, type:
ssh username@ai-panther.fit.edu
Replace username with your Florida Tech TRACKS username.

5.4 Select the SSH Configuration File
When prompted to select an SSH configuration file, choose the default option (typically C:\Users\username\.ssh\config on Windows or ~/.ssh/config on macOS/Linux).
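You can also pre-fill this file yourself so the host appears under a short alias. A minimal entry (the alias ai-panther is just an example):

```
Host ai-panther
    HostName ai-panther.fit.edu
    User username
```

Afterwards, `ssh ai-panther` in a terminal — and the Remote Explorer — will resolve to the full address with your username.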

5.5 Connect to AI Panther
The ai-panther.fit.edu host should now appear under SSH in the Remote Explorer. Expand it and click Connect in Current Window or Connect in New Window next to your username entry. If prompted for the host operating system, select Linux. Enter your TRACKS password when prompted.
Note: On your first connection, you may see a message stating that the authenticity of the host can't be established. This is normal — type yes and press Enter to continue.
5.6 Open Your Home Directory
Once connected, go to File → Open Folder and enter the path to your home directory:
/home1/username
Click OK. You now have full file explorer access to your files on AI Panther.
Troubleshooting
- Connection timed out — Make sure FortiClient VPN is connected if you are off campus.
- Permission denied — Double-check your TRACKS username and password. Ensure DUO authentication is set up correctly.
- Host key verification failed — Remove the old entry from your known_hosts file. On Windows: C:\Users\username\.ssh\known_hosts. On macOS/Linux: ~/.ssh/known_hosts.
- Cannot open folder — Verify the path is /home1/username (note: home1, not home).
📄 KB Article: Setting Up VS Code for SSH on AI.Panther
6. Slurm (Job Scripts, Submitting/Monitoring Jobs)
Slurm is the job scheduler that manages access to AI.Panther's compute nodes. It controls who gets access to CPUs, GPUs, and memory, and in what order.
The workflow is:
- Prepare your job on the Login Node
- Submit it to Slurm
- Slurm places it in a queue and runs it on a compute node when resources become available
Login Node: Edit files, monitor jobs, submit jobs. Compute Nodes: Run your actual workloads (ML training, simulations, etc.).
6.1 Monitoring the Cluster
Before submitting jobs, check what hardware is available and whether nodes are busy.
View running jobs:
squeue
View available partitions and node states:
sinfo
This shows partition names, node availability, and node states (idle, mix, drain).
View detailed node info:
sinfo -N -l
This displays individual node names, CPU counts, memory, and state.
View GPU availability:
sinfo -o "%P %G %D"
Use this to determine which partitions have GPUs, what GPU types exist, and how many nodes are available.
6.2 Partitions
AI.Panther offers several partitions, each with different time limits and node counts. Choose the partition that fits your workload:
| Partition Name | Max Compute Time | Max Nodes |
| --- | --- | --- |
| short | 45 minutes | 16 |
| med | 4 hours | 16 |
| long | 7 days | 16 |
| eternity | Infinite | 16 |
| gpu1 | Infinite | 4 |
| gpu2 | Infinite | 4 |
| h200 | Infinite | 4 |
📄 KB Article: AI.Panther Partitions
6.3 Job Scripts & Directives
A job script is a regular shell script (.sh) with Slurm directives at the top. These directives, prefixed with #SBATCH, tell Slurm what resources to allocate and what command to run.
Common Directives:
| Directive | Purpose |
| --- | --- |
| --job-name | Name for the job |
| --partition | Which partition to submit to |
| --nodes | Number of nodes |
| --ntasks | Number of tasks (use 1 unless using MPI/DDP) |
| --cpus-per-task | CPUs per task |
| --mem | Memory allocation (e.g. 50GB) |
| --time | Max wall time (HH:MM:SS) |
| --gres | Generic resources (e.g. gpu:1) |
| --output | Path for stdout (e.g. job.%J.out) |
| --error | Path for stderr (e.g. job.%J.err) |
Important: If you did not explicitly design your code to run multiple processes (MPI / DDP), set --ntasks to 1.
6.4 Submitting & Managing Jobs
sbatch job.sh # Submit a job
squeue -u $USER # View your running jobs
scancel <jobid> # Cancel a job
6.5 Before You Submit
Before submitting, decide what hardware your job needs (CPU vs. GPU), how long it will run, and which partition to use. If possible, profile your application to estimate its resource requirements.
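One practical way to estimate requirements is to check what a finished job actually used. Slurm's accounting tools report this (run on the login node, using a job ID from squeue or your sbatch output):

```
seff <jobid>      # Summary: CPU efficiency, memory used vs. requested
sacct -j <jobid> --format=JobID,Elapsed,MaxRSS,AllocCPUS   # Detailed accounting
```

seff is a contributed script and may not be installed on every cluster; if it is unavailable on AI.Panther, sacct alone provides the same information.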
6.6 Try It — Your First Job
Create a test job script:
nano test_job.sh
Type the following into the file:
#!/bin/bash
#SBATCH --job-name TestJob
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --mem=50MB
#SBATCH --time=00:15:00
#SBATCH --partition=short
#SBATCH --error=testjob.%J.err
#SBATCH --output=testjob.%J.out
module load mpich
echo "Starting at $(date)"
echo "Running on hosts: $SLURM_NODELIST"
echo "Running on $SLURM_NNODES nodes."
echo "Running on $SLURM_NPROCS processors."
echo "Current working directory is $(pwd)"
sleep 60
Save and exit (in nano: Ctrl+O, Enter to save, then Ctrl+X to exit). Then submit:
sbatch test_job.sh
Monitor your job:
squeue -u $USER # View your job in the queue
cat testjob.<jobid>.out # View the output after completion
📄 KB Article: Slurm Job Submission Examples
7. Virtual Environments (Python venv / Conda)
On AI.Panther, Python and Conda are provided through the module system. You should always work inside a virtual environment to manage your packages.
7.1 Option A: Python venv
module load python # Load Python
python -m venv myenv # Create environment
source ~/myenv/bin/activate # Activate it
pip install jupyterlab # Install packages
jupyter lab --version # Verify installation
In your Slurm job scripts, add these lines to activate the environment:
module load python
source ~/myenv/bin/activate
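Putting Sections 6 and 7 together, a sketch of a complete job script that runs a Python program inside the environment — the script name train.py and the resource numbers are placeholders to adjust for your workload:

```
#!/bin/bash
#SBATCH --job-name=PyJob
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=4GB
#SBATCH --time=00:30:00
#SBATCH --partition=short
#SBATCH --output=pyjob.%J.out
#SBATCH --error=pyjob.%J.err

module load python
source ~/myenv/bin/activate
python train.py
```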
📄 KB Article: Python Virtual Environments on AI.Panther
7.2 Option B: Conda (Module)
module load anaconda3 # Load Conda
source $(conda info --base)/etc/profile.d/conda.sh # Initialize conda for shell
conda create -n myenv python=3.13 -y # Create environment
conda activate myenv # Activate it
conda install -c conda-forge jupyterlab -y # Install packages
jupyter lab --version # Verify installation
In your Slurm job scripts, add:
module load anaconda3
source $(conda info --base)/etc/profile.d/conda.sh
conda activate myenv
📄 KB Article: Conda Environments on AI.Panther
7.3 Option C: Miniforge3 (User-installed Conda)
If you prefer a user-managed Conda installation:
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh
source ~/.bashrc
conda create -n myenv python=3.13.0 # Create environment
conda activate myenv # Activate it
conda install -c conda-forge jupyterlab # Install packages
🔗 Miniforge3 on GitHub | Conda Documentation
8. JupyterLab & Port Forwarding
JupyterLab provides a browser-based IDE for running notebooks on AI.Panther compute nodes. The process involves three steps: install prerequisites, start a compute session, and connect via port forwarding.
8.1 Prerequisites
Install the ipykernel package so JupyterLab can use your virtual environment:
conda install -c conda-forge ipykernel
or
pip install ipykernel
then:
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
8.2 Start an Interactive Session
Request a compute node (choose the appropriate partition):
# CPU partition:
srun -p med --nodes=1 --ntasks=1 --mem=50GB --time=01:00:00 --pty bash -i
# GPU partition:
srun -p gpu1 --nodes=1 --ntasks=1 --mem=50GB --time=01:00:00 --pty bash -i
8.3 Launch Jupyter
On the compute node, run:
NODE=$(hostname -f)
BASE=$(( 8000 + ($UID % 1000) ))
jupyter lab --no-browser --ip="$NODE" --port=$BASE --port-retries=200
Jupyter will print a URL like: http://node01:8123/lab?token=abc123...
8.4 Port Forwarding
On your local machine, open a new terminal and create an SSH tunnel:
ssh -N -L LOCAL_PORT:COMPUTE_NODE:REMOTE_PORT username@ai-panther.fit.edu
# Example: if Jupyter printed node01:8123
ssh -N -L 8123:node01:8123 username@ai-panther.fit.edu
Note: If you see an error like bind [127.0.0.1]:8123: Permission denied, it means the local port you requested is already in use. Simply choose a different local port.
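For example, if local port 8123 is taken, forward a different local port to the same remote endpoint and browse to that local port instead:

```
ssh -N -L 8124:node01:8123 username@ai-panther.fit.edu
# Then open http://localhost:8124/lab (the token stays the same)
```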
8.5 Open in Your Browser
Navigate to http://localhost:8123/lab in your browser. On the login page, enter the token from the Jupyter output (all characters after token=).
📄 KB Article: Using JupyterLab on AI.Panther
9. Additional Resources
AI.Panther KB Articles
External Resources
Workshop Survey
AI.Panther Basics Workshop Survey – Fill out form