Delta provides a GPU-focused computing environment that supports both GPU and CPU workloads. Its GPU-dense nodes are equipped with NVIDIA and AMD GPUs, enabling large-scale, high-performance computing tasks. Delta also features a high-performance tiered storage system that includes node-local NVMe SSDs and shared Lustre parallel filesystems; relaxed-POSIX filesystems are in development but have not yet been implemented.
Login to Delta GPU
Direct access to the Delta login nodes is available via SSH. To log in, users need their NCSA username and password and must complete NCSA Duo multi-factor authentication. Delta provides four login nodes, labeled 01 through 04.
When setting up access, users should have in place their NCSA account, the Duo app for authentication, an SSH client and configuration, and, where applicable, SSH key pairs. Note that SSH key pairs are disabled for general use; even principal investigators are not permitted to use key pairs in place of two-factor authentication. The only exception is for principal investigators with a Gateway allocation, which does not apply to most projects. If Gateway account key-pair access is required, submit a support ticket.
In addition to SSH, Delta can also be accessed through other portals such as Open OnDemand, which provides a browser-based interface and additional functionality. For persistent login sessions, the tmux terminal multiplexer is available on all Delta login nodes. It allows users to run multiple programs within a single terminal, detach a session while its processes keep running, and reattach later from a different terminal.
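For example, a typical tmux workflow on a login node looks like the following (the session name is arbitrary):

```
$ tmux new -s mysession      # start a named session
$ tmux detach                # or press Ctrl-b then d; programs keep running
$ tmux ls                    # after logging back in, list sessions
$ tmux attach -t mysession   # reattach to the named session
```

A tmux session lives on the specific login node where it was started, so reattach from that same node.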
SSH Login
$ ssh <your_username>@login.delta.ncsa.illinois.edu
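For convenience, an entry in ~/.ssh/config lets you connect with a short alias; a minimal sketch (the alias name is arbitrary):

```
Host delta
    HostName login.delta.ncsa.illinois.edu
    User your_username
```

With this in place, `ssh delta` is equivalent to the command above; Duo authentication is still required.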
File Transfer
| Supported Methods | Data Transfer Node | URL |
|---|---|---|
| SCP | dt-login[01-04].delta.ncsa.illinois.edu | |
| RSYNC | dt-login[01-04].delta.ncsa.illinois.edu | |
| Globus (recommended) | | https://www.globus.org/globus-connect-personal |
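As a sketch of command-line transfers (the destination directory under /work/hdd is a placeholder; adjust to your own layout), data can be pushed to one of the data transfer nodes with scp or rsync:

```
$ scp input.tar.gz <your_username>@dt-login01.delta.ncsa.illinois.edu:/work/hdd/<your_directory>/
$ rsync -avz --progress ./my_dataset/ \
      <your_username>@dt-login01.delta.ncsa.illinois.edu:/work/hdd/<your_directory>/my_dataset/
```

For large or repeated transfers, Globus is the recommended method.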
Storage
File System
| Directory | Path | Quota | Purge | Backup | Notes |
|---|---|---|---|---|---|
| HOME | /u | 100 GB, 750,000 files per user | None | 30 days | Area for software, scripts, job files, and so on. Not intended as a source/destination for I/O during jobs. |
| PROJECTS | /projects | 500 GB; up to 25 TB by request | None | None | Area for data shared by a project: common data sets, software, results, and so on. |
| WORK-HDD | /work/hdd | 1,000 GB; up to 100 TB by request | None | None | Area for computation and the largest allocations; where I/O from jobs should occur. This is now the scratch volume. |
| WORK-NVME | /work/nvme | NVMe space is available upon request | None | None | Area for computation; NVMe is best for jobs with many small I/O operations. |
| TMP | /tmp | 0.74 TB (CPU nodes) or 1.5 TB (GPU nodes); shared or dedicated depending on node usage by jobs | Purged after each job | None | Locally attached disk for fast, small-file I/O. |
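As an illustration of the intended usage pattern (all paths and the application name are placeholders), jobs with many small files can stage input to node-local /tmp, compute there, and copy results back to /work/hdd before the job ends, since /tmp is purged after each job:

```
# Inside a batch job; all paths and ./run_analysis are illustrative.
cp -r /work/hdd/<your_directory>/inputs /tmp/inputs
cd /tmp
./run_analysis --input /tmp/inputs --output /tmp/results
cp -r /tmp/results /work/hdd/<your_directory>/results_${SLURM_JOB_ID}
```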
Jobs
See https://docs.ncsa.illinois.edu/systems/delta/en/latest/user_guide/running_jobs.html#sample-scripts for more information.
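Delta jobs are scheduled with Slurm; the page above has the official sample scripts. As an unofficial minimal sketch (the account name, module name, and train.py are placeholders), a single-GPU batch job on the gpuA100x4 partition might look like:

```
#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --account=<your_account>     # allocation account to charge
#SBATCH --partition=gpuA100x4        # queue name from the table below
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --gpus-per-node=1            # request 1 of the 4 A100s on the node
#SBATCH --mem=64g
#SBATCH --time=01:00:00

module load cuda                     # module name is an assumption; check `module avail`
srun python3 train.py                # train.py stands in for your application
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.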
Queue specifications
Metrics updated 2026-05-02
| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) |
|---|---|---|---|---|---|
| gpuA100x4 | Default A100 queue with 4 GPUs per node for standard GPU jobs | 64 | 4× NVIDIA A100 per node | 256 GB | 71,204 |
| gpuA100x4-interactive | Interactive A100 sessions for debugging and testing | 64 | 4× NVIDIA A100 per node | 256 GB | 31,250 |
| gpuA100x4-preempt | Preemptible A100 jobs | 64 | 4× NVIDIA A100 per node | 256 GB | 212 |
| gpuA100x8 | Large-scale multi-GPU workloads using 8 A100 GPUs | 128 | 8× NVIDIA A100 per node | ~2 TB | 10,761 |
| gpuA100x8-interactive | Interactive large-scale A100 jobs for development/testing | 128 | 8× NVIDIA A100 per node | ~2 TB | 206 |
| gpuA40x4 | Moderate GPU workloads | 64 | 4× NVIDIA A40 per node | 256 GB | 93,233 |
| gpuA40x4-interactive | Interactive A40 usage for testing and development | 64 | 4× NVIDIA A40 per node | 256 GB | 21,081 |
| gpuA40x4-preempt | Preemptible A40 jobs for flexible, lower-priority workloads | 64 | 4× NVIDIA A40 per node | 256 GB | 2,215 |
| gpuH200x8 | High-end GPU queue for memory-intensive workloads | 96 | 8× NVIDIA H200 per node | ~2 TB | 3,527 |
| gpuH200x8-interactive | Interactive H200 sessions for development and tuning | 96 | 8× NVIDIA H200 per node | ~2 TB | 2,097 |
| gpuMI100x8 | AMD GPU workloads optimized for MI100 architecture | 128 | 8× AMD MI100 (+1 MI210) per node | ~2 TB | 692 |
| gpuMI100x8-interactive | Interactive AMD GPU sessions for development/testing | 128 | 8× AMD MI100 (+1 MI210) per node | ~2 TB | 72 |

Wait time and wall time 30-day trends are tracked per queue but are not reproduced in this table.
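For the *-interactive queues, a minimal srun sketch (the account name is a placeholder) requesting one A100 and a shell for an hour might be:

```
$ srun --account=<your_account> --partition=gpuA100x4-interactive \
       --nodes=1 --gpus-per-node=1 --cpus-per-task=16 --mem=32g \
       --time=01:00:00 --pty /bin/bash
```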
Datasets
| Name | Description |
|---|---|
| alphafold | Protein structure prediction datasets used for bioinformatics research with AlphaFold. |
| models(ollama) | Pretrained large language models used with Ollama for local inference and AI experimentation. |