NCSA Delta GPU (Delta GPU)

An RP account with two-factor authentication (2FA/MFA) is required.

Delta offers a highly capable GPU-focused computing environment designed to support both GPU and CPU workloads. It provides specialized GPU-dense nodes equipped with both NVIDIA and AMD GPUs, enabling large-scale, high-performance computing tasks. In addition, Delta features a high-performance tiered storage system that includes node-local NVMe SSDs and shared Lustre parallel filesystems. Relaxed-POSIX filesystems are in development but have not yet been deployed.


File Transfer

Supported Method | Data Transfer Node / URL
SCP | dt-login[01-04].delta.ncsa.illinois.edu
RSYNC | dt-login[01-04].delta.ncsa.illinois.edu
GLOBUS (recommended) | https://www.globus.org/globus-connect-personal
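For command-line transfers, both SCP and RSYNC target the dedicated data-transfer nodes listed above. A sketch of typical invocations; the username, filenames, and remote paths are placeholders:

```shell
# Copy a local archive to a home directory on Delta over SCP
# (username and destination path are placeholders).
scp results.tar.gz myuser@dt-login01.delta.ncsa.illinois.edu:/u/myuser/

# Mirror a local directory into a project area with rsync:
# -a preserves permissions and timestamps, -P shows progress and allows resuming.
rsync -aP ./dataset/ myuser@dt-login02.delta.ncsa.illinois.edu:/projects/myproject/dataset/
```

rsync is generally preferable for large or interrupted transfers because it only re-sends data that differs from the destination.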

Storage

File System

Directory | Path | Quota | Purge | Backup | Notes
HOME | /u | 100 GB; 750,000 files per user | None | 30 days | Area for software, scripts, job files, and so on. Not intended as a source/destination for I/O during jobs.
PROJECTS | /projects | 500 GB; up to 25 TB by request | None | None | Area for shared data for a project, common data sets, software, results, and so on.
WORK-HDD | /work/hdd | 1,000 GB; up to 100 TB by request | None | None | Area for computation; largest allocations; where I/O from jobs should occur. (This is now your scratch volume.)
WORK-NVME | /work/nvme | NVMe space available upon request | None | None | Area for computation; NVMe is best for lots of small I/O from jobs.
TMP | /tmp | 0.74 TB (CPU) or 1.5 TB (GPU); shared or dedicated depending on node usage by jobs | Purged after each job | None | Locally attached disk for fast small-file I/O.
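Because /tmp is node-local and purged after each job, a common pattern is to stage small-file-heavy input onto it, compute there, and copy results back to the shared filesystem before the job ends. A minimal runnable sketch; temporary directories stand in for the real /work/hdd and node-local /tmp paths:

```shell
# Stage small-file I/O onto node-local fast disk, then copy results back.
# On Delta, WORK would be your /work/hdd project area and SCRATCH would be
# /tmp on the compute node; temp dirs stand in here so the sketch is runnable.
WORK=$(mktemp -d)      # stand-in for /work/hdd/<project>
SCRATCH=$(mktemp -d)   # stand-in for node-local /tmp (purged after the job)

mkdir -p "$WORK/input"
printf 'sample\n' > "$WORK/input/data.txt"

cp -r "$WORK/input" "$SCRATCH/"                  # stage input in
tr 'a-z' 'A-Z' < "$SCRATCH/input/data.txt" \
  > "$SCRATCH/result.txt"                        # compute against local disk
cp "$SCRATCH/result.txt" "$WORK/"                # stage results back before purge

cat "$WORK/result.txt"                           # prints: SAMPLE
```

Anything left in /tmp when the job exits is lost, so the stage-out copy must happen inside the job script.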

Jobs

Queue specifications

Metrics updated 2026-05-02

Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time, avg (range, 30 days) | Wall Time, avg (range, 30 days)
gpuA100x4 | Default A100 queue with 4 GPUs per node for standard GPU jobs | 64 | 4× NVIDIA A100 per node | 256 GB | 71,204 | 9.1 h (2.3–31.5 h) | 2.1 h (0.1–6.2 h)
gpuA100x4-interactive | Interactive A100 sessions for debugging and testing | 64 | 4× NVIDIA A100 per node | 256 GB | 31,250 | 0.9 h (0–2.7 h) | 0.2 h (0.1–0.7 h)
gpuA100x4-preempt | Preemptible A100 jobs | 64 | 4× NVIDIA A100 per node | 256 GB | 212 | 8.7 h (0–65.6 h) | 0.2 h (0–1.5 h)
gpuA100x8 | Large-scale multi-GPU workloads using 8 A100 GPUs | 128 | 8× NVIDIA A100 per node | ~2 TB | 10,761 | 7.2 h (1.4–29.6 h) | 1.2 h (0.2–4.8 h)
gpuA100x8-interactive | Interactive large-scale A100 jobs for development/testing | 128 | 8× NVIDIA A100 per node | ~2 TB | 206 | 0.3 h (0–6.3 h) | 0.7 h (0–1 h)
gpuA40x4 | Moderate GPU workloads | 64 | 4× NVIDIA A40 per node | 256 GB | 93,233 | 6.0 h (0.4–136.5 h) | 1.3 h (0.1–5.9 h)
gpuA40x4-interactive | Interactive A40 usage for testing and development | 64 | 4× NVIDIA A40 per node | 256 GB | 21,081 | 3.6 h (0–11.3 h) | 0.2 h (0–0.3 h)
gpuA40x4-preempt | Preemptible A40 jobs for flexible, lower-priority workloads | 64 | 4× NVIDIA A40 per node | 256 GB | 2,215 | 2.4 h (0–37.6 h) | 0.2 h (0–3.9 h)
gpuH200x8 | High-end GPU queue for memory-intensive workloads | 96 | 8× NVIDIA H200 per node | ~2 TB | 3,527 | 12.3 h (1.3–30.3 h) | 5.1 h (0.1–8.7 h)
gpuH200x8-interactive | Interactive H200 sessions for development and tuning | 96 | 8× NVIDIA H200 per node | ~2 TB | 2,097 | 1.5 h (0–3.7 h) | 0.3 h (0.1–0.7 h)
gpuMI100x8 | AMD GPU workloads optimized for MI100 architecture | 128 | 8× AMD MI100 (+1 MI210) | ~2 TB | 692 | 2.1 h (0–48 h) | 0.8 h (0–24 h)
gpuMI100x8-interactive | Interactive AMD GPU sessions for development/testing | 128 | 8× AMD MI100 (+1 MI210) | ~2 TB | 72 | 0.1 h (0–0.2 h) | 0.3 h (0–0.7 h)
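A minimal Slurm batch script for the default A100 queue might look like the following sketch. The account name and the Python script are placeholders, and the resource requests should stay within the per-node limits shown in the queue table:

```shell
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpuA100x4     # queue name from the table above
#SBATCH --account=my_project      # placeholder: your Delta allocation account
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16        # up to 64 CPUs per gpuA100x4 node
#SBATCH --gpus-per-node=1         # up to 4 A100s per node
#SBATCH --mem=64g
#SBATCH --time=01:00:00

# Confirm GPU visibility, then launch the workload (train.py is a placeholder).
nvidia-smi
srun python train.py
```

Submit with `sbatch script.sh`; an equivalent `srun --partition=gpuA100x4-interactive ... --pty bash` request is the usual route for interactive sessions.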

Datasets

Name | Description
alphafold | Protein structure prediction datasets used for bioinformatics research with AlphaFold.
models (ollama) | Pretrained large language models used with Ollama for local inference and AI experimentation.
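If the shared model store is exposed as a directory, Ollama can typically be pointed at it through its `OLLAMA_MODELS` environment variable rather than re-downloading models into `~/.ollama/models`. The path and model name below are hypothetical examples; check the dataset's documentation for the actual location on Delta:

```shell
# Point Ollama at a shared model directory (path is a hypothetical example).
export OLLAMA_MODELS=/path/to/shared/ollama/models

ollama list                                   # show models in the shared store
ollama run llama3 "Briefly explain Lustre."   # local inference with an example model
```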