Derecho is a high-performance computing system at NCAR designed for large-scale CPU-based scientific workloads. It consists primarily of CPU-only compute nodes powered by 3rd-generation AMD EPYC (Milan) processors, with 128 cores and 256 GB of memory per node.
The system is optimized for parallel computing, large-scale simulations, and data-intensive workloads using MPI and OpenMP. Derecho provides high-throughput compute capabilities and is connected by a high-speed HPE Slingshot interconnect for low-latency communication between nodes.
This resource is best suited for traditional HPC workloads that rely on distributed-memory parallelism and large CPU core counts.
File Transfer
Derecho uses NCAR’s GLADE storage system with multiple file spaces.
| Supported Methods | Data Transfer Node | URL |
|---|---|---|
| SCP | derecho.hpc.ucar.edu | |
| RSYNC | derecho.hpc.ucar.edu | |
| SFTP | derecho.hpc.ucar.edu | |
| RCLONE (RECOMMENDED) | derecho.hpc.ucar.edu | https://rclone.org/downloads/ |
| GLOBUS (RECOMMENDED) | | https://www.globus.org/ |
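As a quick sketch of the command-line methods above, the following SCP and RSYNC examples copy data through the data transfer node; the username and paths are placeholders to replace with your own.

```bash
# Copy a single local file to scratch via the data transfer node
scp results.nc username@derecho.hpc.ucar.edu:/glade/derecho/scratch/username/

# Mirror a local directory into work space; -a preserves permissions and
# timestamps, -v lists each file as it transfers
rsync -av ./model_output/ username@derecho.hpc.ucar.edu:/glade/work/username/model_output/
```

For large or recurring transfers, the recommended RCLONE and GLOBUS methods are generally more robust than single SCP sessions.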
Storage
File System
| Directory | Path | Quota | Purge | Backup | Notes |
|---|---|---|---|---|---|
| Home | /glade/u/home/<username> | 50 GB | Not purged | Yes | User home directory. Ideal for small scripts, source code, and configuration files that benefit from backup. |
| Scratch | /glade/derecho/scratch/<username> | 30 TB / 10M files | 180 days | No | Temporary space. Derecho's scratch file system also limits a user's total number of files to 10 million. |
| Work | /glade/work/<username> | 2 TB | Not purged | No | User work space. Ideal for compiled code, conda environments, and similar large holdings that do not require backup. |
| Campaign Storage | /glade/campaign | N/A | Not purged | No | Project space allocations (via allocation request) |
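To check current usage against these quotas, a minimal sketch follows; it assumes the gladequota utility provided on GLADE-mounted NCAR systems is available in your environment.

```bash
# Report usage and quotas across your GLADE file spaces
# (gladequota is NCAR's quota-reporting tool; availability assumed here)
gladequota

# Portable fallback: total size of a single directory tree
du -sh /glade/work/$USER
```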
Jobs
Jobs on Derecho CPU resources are submitted using the PBS Professional scheduler and run on CPU-only compute nodes. Each node contains 128 cores, and nodes are typically allocated exclusively to jobs.
Users submit jobs with qsub and monitor them with qstat. Jobs are scheduled based on priority, fair-share usage, queue time, and job size.
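For example, a typical submit-and-monitor sequence looks like this (the script name is a placeholder):

```bash
# Submit a batch script; PBS prints the assigned job ID on success
qsub run_simulation.pbs

# List your own queued and running jobs
qstat -u $USER
```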
Workloads typically use MPI or hybrid MPI/OpenMP parallelism. Example resource requests specify CPU cores and threading configuration:
```bash
#PBS -l select=10:ncpus=128:mpiprocs=128:ompthreads=1
```

These CPU resources are designed for large-scale simulations, data processing, and high-throughput scientific computing.
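Below is a minimal batch-script sketch built around that resource request. The queue name, project code, and executable are assumptions to be replaced with your own values, and the MPI launcher name can vary by site.

```bash
#!/bin/bash
#PBS -N cpu_job                     # job name
#PBS -A PROJECT_CODE                # project/allocation code (placeholder)
#PBS -q main                        # queue name (assumed; check site documentation)
#PBS -l walltime=01:00:00           # wall-clock limit
#PBS -l select=10:ncpus=128:mpiprocs=128:ompthreads=1
#PBS -j oe                          # merge stdout and stderr into one file

# One OpenMP thread per MPI rank, matching ompthreads=1 above
export OMP_NUM_THREADS=1

# Launch 1280 MPI ranks (10 nodes x 128 ranks); launcher may vary by site
mpiexec ./my_mpi_application
```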
Queue specifications
| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time (30-day trend) | Wall Time (30-day trend) |
|---|---|---|---|---|---|---|---|
| Derecho | — | AMD EPYC 7763 (Milan), 2.45 GHz | — | 2 GB per core | — | — | — |