NCSA DeltaAI

An RP account with 2FA/MFA is needed.

DeltaAI tripled NCSA's AI compute capacity and offers 152 combined CPU and GPU nodes. It is designed to target the computational needs of Artificial Intelligence/Machine Learning (AI/ML) workloads.


File Transfer

For large transfers, DeltaAI recommends Globus; for smaller transfers, SCP or rsync.

Supported Method | Data Transfer Node | URL
GLOBUS (recommended) | | https://app.globus.org/
RSYNC (recommended) | dtai-login.delta.ncsa.illinois.edu | https://dtai-login.delta.ncsa.illinois.edu
SCP (recommended) | dtai-login.delta.ncsa.illinois.edu | https://dtai-login.delta.ncsa.illinois.edu
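As a sketch, typical rsync and scp invocations against the data transfer node would look like the following. The username `jdoe`, the allocation name `my_alloc`, and the source paths are placeholders, not real values; the snippet only prints the commands so they can be reviewed before running:

```shell
# Placeholders: substitute your own NCSA username and destination path.
USERNAME=jdoe
DTN=dtai-login.delta.ncsa.illinois.edu

# rsync resumes interrupted transfers and skips unchanged files:
# -a preserves permissions/timestamps, -v is verbose, -z compresses.
echo "rsync -avz ./dataset/ ${USERNAME}@${DTN}:/projects/my_alloc/dataset/"

# scp is a simple one-shot copy, fine for a single smaller file.
echo "scp results.tar.gz ${USERNAME}@${DTN}:/u/${USERNAME}/"
```

For repeated or large transfers, rsync's ability to resume and send only changed files is why it (or Globus) is preferred over a plain scp copy.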

Storage

File System

Directory | Path | Quota | Purge | Backup | Notes
HOME | /u | 90 GB, 600,000 files per user | No | | Area for software, scripts, job files, and so on. Not intended as a source or destination for I/O during jobs.
Projects | /projects | 500 GB, up to 1-25 TB by allocation | No | | Area for shared data for a project, common data sets, software, results, and so on.
Work - HDD | /work/hdd | 1,000 GB, up to 1-100 TB by allocation | No | | Area for computation; the largest allocations; where I/O from jobs should occur. Shared between Delta and DeltaAI.
Work - NVME | /work/nvme | 1,000 GB, up to 1-100 TB by allocation | No | | Area for computation; NVMe is best for lots of small I/O from jobs. Shared between Delta and DeltaAI.
/tmp | /tmp | 3.9 TB | After each job | | Locally attached disk for fast small-file I/O.

External Storage

For large data transfers, Globus is recommended. For smaller transfers, SCP or Rsync are recommended.

 

Helpful commands:
- quota - view your usage of the file systems and their quotas.
- accounts - map ACCESS projects to local projects on DeltaAI.


Jobs

Slurm Batch Environment - Additional information is available at Running Jobs — UIUC NCSA DeltaAI User Guide

Queue specifications

Name | CPUs | GPUs | RAM
ghx4* | 4 NVIDIA superchips and 1 ARM-based CPU per node | 1 H100 per node | 3,500 GB per node
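A minimal sketch of a batch script targeting the ghx4 partition follows. The account name `my_alloc`, the job script name, and `train.py` are placeholders, not real values; use the `accounts` command and the DeltaAI user guide to find your actual account and partition details:

```shell
#!/bin/bash
#SBATCH --job-name=train-demo
#SBATCH --partition=ghx4          # GPU partition from the table above
#SBATCH --account=my_alloc        # placeholder: use the name shown by `accounts`
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --time=00:30:00

# Per the storage table, job I/O belongs in /work, not HOME.
cd /work/hdd/my_alloc/$USER

srun python train.py
```

Submit with `sbatch` from a login node and monitor with `squeue -u $USER`.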