ACES is an innovative computing platform that provides a holistic solution for a wide range of users across diverse research communities, accommodating varying levels of computational adoption.
It leverages Liqid’s composable infrastructure framework over hybrid PCIe Gen4 and Gen5 architectures on Intel Sapphire Rapids processors, enabling a robust accelerator testbed featuring Intel Ponte Vecchio GPUs, Intel FPGAs, NVIDIA H100 GPUs, NEC Vector Engines, NextSilicon co-processors, and Graphcore IPUs.
These accelerators are integrated with Intel Optane memory and DDN Lustre storage, all interconnected via NVIDIA Mellanox NDR 400 Gbps InfiniBand, enabling high-throughput, low-latency data movement.
The platform supports the convergence of AI and machine learning with traditional simulation and modeling techniques. As edge computing and instrument-driven data collection continue to expand, ACES addresses the growing need to verify, process, store, analyze, and query massive volumes of unstructured data in real time.
Login to ACES
The recommended access method is the ACES OnDemand portal (https://portal-aces.hprc.tamu.edu), which provides browser-based access to files, terminals, and interactive applications.
For command-line access, users can connect via SSH using a secure jump-host configuration that routes through aces-jump.hprc.tamu.edu to the login node at login.aces.hprc.tamu.edu. Users must first download their SSH key pair from the ACES portal and add a corresponding entry to their .ssh/config file. Detailed instructions for SSH setup, including key configuration and connection commands, are available at https://hprc.tamu.edu/kb/User-Guides/ACES/#ssh-login.
SSH Login
$ ssh <your_username>@login.aces.hprc.tamu.edu
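The jump-host routing described above can be captured in ~/.ssh/config so that a single `ssh aces` works afterwards. The entry below is a minimal sketch, assuming the standard OpenSSH ProxyJump mechanism; the host alias and the key filename (the key pair downloaded from the ACES portal) are illustrative placeholders:

```
# ~/.ssh/config -- illustrative entry; alias and key filename are placeholders
Host aces
    HostName login.aces.hprc.tamu.edu
    User your_username
    ProxyJump your_username@aces-jump.hprc.tamu.edu
    IdentityFile ~/.ssh/aces_key
```

With that entry in place, `ssh aces` tunnels through the jump host automatically.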
File Transfer
| Supported Method | Data Transfer Node | URL |
|---|---|---|
| GLOBUS (recommended) | ACCESS TAMU ACES DTN | https://app.globus.org/dashboard |
| SCP/SFTP | ACCESS TAMU ACES DTN | |
| FTP | ACCESS TAMU ACES DTN | |
| RSYNC | ACCESS TAMU ACES DTN | |
| RCLONE | ACCESS TAMU ACES DTN | |
| PORTAL | ACCESS TAMU ACES DTN | https://portal.hprc.tamu.edu |
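For quick command-line transfers, scp and rsync can target the data transfer node directly. The commands below are a minimal sketch; `<dtn_hostname>` stands in for the ACCESS TAMU ACES DTN address (see the Globus dashboard or the HPRC documentation for the published endpoint), and the paths are examples only:

```bash
# Copy a single archive to your scratch directory (hostname is a placeholder)
scp results.tar.gz your_username@<dtn_hostname>:/scratch/user/your_username/

# Mirror a local directory, resuming partial transfers and showing progress
rsync -avP data/ your_username@<dtn_hostname>:/scratch/user/your_username/data/
```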
Storage
File System
| Directory | Path | Quota | Purge | Backup | Notes |
|---|---|---|---|---|---|
| $HOME | `/home/username` | 10 GB / ~10,000 files | 6 months after account deactivation | Nightly | Small scripts and config files; not for general use |
| $SCRATCH | `/scratch/user/username` | 1 TB / ~250,000 files | Not scheduled, but purged when quotas are exceeded | None | Primary working directory for jobs; not for long-term storage |
| $PROJECT | `/scratch/group/projectid` | 5 TB / ~500,000 files | None | None | Shared storage for group members |
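On the cluster these locations are exposed through environment variables, which keeps job scripts portable across users. The sketch below checks where they point and how much of the quota is in use; `showquota` is the usage-summary utility on HPRC clusters (treat it as an assumption and consult the ACES documentation if it is unavailable):

```bash
# Environment variables pointing at the directories in the table above
echo $HOME      # /home/username
echo $SCRATCH   # /scratch/user/username -- primary working directory for jobs
echo $PROJECT   # /scratch/group/projectid -- shared group storage

# Disk and file-count usage against the quotas above (assumed HPRC utility)
showquota
```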
Jobs
Queue specifications
Metrics updated 2026-05-02
| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time (30-day trend) | Wall Time (30-day trend) |
|---|---|---|---|---|---|---|---|
| cpu | General CPU-only jobs | Intel Sapphire Rapids: up to 96 cores/node, 64 nodes (6,144 cores max) | 0 | ~488 GB per node | 7,728 | — | — |
| gpu | NVIDIA GPU workloads (AI/ML, CUDA, parallel GPU jobs) | Intel Sapphire Rapids: 96 cores/node | H100 (up to 8 per node) | High-memory GPU nodes (~256–512+ GB per node, varies) | 1,745 | — | — |
| gpu-debug | Short GPU testing/debugging | Intel Sapphire Rapids: 96 cores (1 node max) | A30 (2 per node) | ~488 GB per node | — | — | — |
| pvc | Intel GPU Max (PVC) jobs | Intel Sapphire Rapids: up to 3,072 cores across 32 nodes | Up to 32 Intel PVC GPUs | ~488 GB per node | 395 | — | — |
| bittware | FPGA-based workloads and hardware acceleration | Intel Sapphire Rapids: up to 96 cores across 2 nodes | 0 (2 FPGA devices) | ~488 GB per node | 1 | 0 hours | — |
| memverge | Memory-intensive workloads and large dataset processing | Intel Sapphire Rapids: 96 cores (1 node) | 0 | ~488 GB per node | — | — | — |
| nextsilicon | Experimental NextSilicon accelerator workloads (restricted access) | Intel Sapphire Rapids: 96 cores (1 node) | 0 (NextSilicon coprocessor) | ~488 GB per node | — | — | — |
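Jobs on ACES are submitted through the batch scheduler (Slurm on HPRC systems), with the queues above mapping to partitions. The script below is a minimal sketch of a single-GPU job on the gpu queue; the resource sizes, module name, and the `--gres` GPU type string are assumptions to adapt to your allocation:

```bash
#!/bin/bash
#SBATCH --job-name=h100-demo          # illustrative job name
#SBATCH --partition=gpu               # queue from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24            # portion of the 96 cores on a node
#SBATCH --gres=gpu:h100:1             # one H100; exact type string is an assumption
#SBATCH --mem=64G
#SBATCH --time=02:00:00
#SBATCH --output=%x-%j.out

module load CUDA                      # module name is a placeholder
cd $SCRATCH/my_job                    # run from scratch, per the storage table
srun ./my_gpu_application
```

Submit with `sbatch job.slurm` and monitor with `squeue -u $USER`.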
Datasets
| Name | Description |
|---|---|
| pytorch-computer-vision-datasets | A collection of standard computer vision datasets formatted for PyTorch, supporting tasks like image classification and object detection. On ACES, these are used to benchmark GPU performance and test distributed deep learning workflows across accelerators. |
| pytorch-language-modelling-datasets | Text-based datasets for training NLP and language models in PyTorch. In ACES, they support benchmarking of large-scale, memory-intensive workloads and evaluating performance of transformer-based models across hardware. |
| tensorflow-computer-vision-datasets | Computer vision datasets optimized for TensorFlow, covering tasks such as classification and segmentation. Within ACES, they enable framework comparisons and validation of TensorFlow pipelines on heterogeneous accelerators. |
| tensorflow-language-modelling-datasets | NLP datasets prepared for TensorFlow, used for language modeling, translation, and text analysis. On ACES, they help evaluate distributed training performance and accelerator efficiency for sequential data workloads. |
| videollama_dataset | A multimodal dataset combining video and text for tasks like video understanding and captioning. In ACES, it is used to test high-throughput, multi-accelerator workflows and benchmark complex AI pipelines. |
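A brief sketch of staging one of these datasets into scratch space for a job; the source path is a placeholder, since the published location of the shared datasets on ACES is not listed here:

```bash
# Placeholder path -- check the HPRC documentation for where the shared
# datasets are published on ACES before staging.
DATASET_SRC=/path/to/pytorch-computer-vision-datasets
rsync -a "$DATASET_SRC/" $SCRATCH/datasets/pytorch-computer-vision-datasets/
```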