Stampede3 is an NSF-funded supercomputer operated by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, serving as a premier national resource for the open science community. The system is designed to accelerate discovery by offering an ecosystem of computing hardware tailored to a range of scientific needs. Rather than relying on a single type of processor, Stampede3 provides a mix of standard CPU nodes, specialized GPUs for AI, and high-bandwidth memory nodes. It also includes large-memory nodes for workloads that must hold massive datasets in RAM.
Login to TACC Stampede3
Users log into Stampede3 with standard SSH using their TACC account credentials. Project allocations are awarded through ACCESS, but login access is managed through TACC's own account system.
The Stampede3 login nodes are Intel Xeon Platinum 8468 "Sapphire Rapids" (SPR) nodes, each with 96 cores across two sockets (48 cores/socket) and 250 GB of DDR memory.
Do NOT run ssh-keygen on Stampede3 itself once logged in. It will overwrite the internal cluster key pairings and interfere with batch jobs.
Log in to a specific login node:
$ ssh yourusername@login2.stampede3.tacc.utexas.edu
Log in with X11 forwarding (for GUI applications):
$ ssh -X yourusername@stampede3.tacc.utexas.edu
Note: On Windows, we found it helpful to use WSL rather than the native Windows OpenSSH client.
Once you enter the ssh command, you will be prompted with:
(yourusername@stampede3.tacc.utexas.edu) Password:
(yourusername@stampede3.tacc.utexas.edu) TACC Token Code:
where you must enter your TACC password and MFA token code, respectively.
SSH Login
$ ssh <your_username>@stampede3.tacc.utexas.edu
File Transfer
Stampede3 supports two primary data transfer technologies: SSH-based tools (scp, sftp, rsync) and Globus.
For smaller transfers (less than ~200 GB), the SSH-based tools are typically easier to work with. See https://docs.tacc.utexas.edu/datatransfer/ssh/ for more information.
For transfers larger than ~200 GB, Globus is the recommended transfer method. For more information on using Globus with Stampede3, see https://docs.tacc.utexas.edu/datatransfer/globus/.
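As an illustration, the commands below sketch typical SSH-based transfers from a local machine to Stampede3; the file names and destination paths are placeholders to adapt to your own directories:
$ scp mydata.tar.gz yourusername@stampede3.tacc.utexas.edu:/path/to/destination/             # copy a single file
$ rsync -avP results/ yourusername@stampede3.tacc.utexas.edu:/path/to/destination/results/   # sync a directory with progress reporting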
| Supported Methods | Data Transfer Node | URL |
|---|---|---|
| SCP | stampede3.tacc.utexas.edu | |
| GLOBUS (recommended) | stampede3.tacc.utexas.edu | https://app.globus.org |
| SFTP | stampede3.tacc.utexas.edu | |
| RSYNC | stampede3.tacc.utexas.edu | |
Storage
File System
| Directory | Quota | Purge | Backup | Notes |
|---|---|---|---|---|
| $HOME | 15 GB | After account deactivation | Yes - backed up regularly | Personal scripts and config files; not intended for high-intensity file operations |
| $WORK | 1 TB and 3,000,000 files across all TACC systems | Not on a schedule | Not backed up | Not intended for parallel or high-intensity file operations |
| $SCRATCH | No quota | Files are subject to purge if their access time is more than 10 days old | Not backed up | Overall capacity ~10 PB. See TACC's Scratch File System Purge Policy: https://docs.tacc.utexas.edu/hpc/stampede3/#scratchpolicy |
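On the system, each of these locations is exposed through an environment variable, so they can be used directly in commands and job scripts. A brief sketch, with placeholder file names, of staging data to the scratch file system before a run and copying results back afterwards (since $SCRATCH is subject to purge):
$ cd $SCRATCH                               # run jobs from the scratch file system
$ cp $WORK/inputs/config.dat $SCRATCH/      # stage input data from $WORK
$ cp $SCRATCH/results.dat $WORK/            # copy results you want to keep back to $WORK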
External Storage
RANCH (Long-term Archival Tape Storage)
RANCH (Long-term Archival Tape Storage) is an allocated resource that is only available to users who have an allocation on a TACC system such as Stampede3. The default allocation for RANCH is 2 TB, and it is provided at no additional cost, meaning it does not consume ACCESS credits. It is a long-term tape storage system designed specifically for archiving data that is unlikely to change or require frequent access. RANCH can be accessed through the Globus endpoint Ranch3, and files can also be transferred using SCP with a command such as scp myfile ${ARCHIVER}:${ARCHIVE}/myfilepath.
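As an example, a minimal sketch of archiving a results directory from Stampede3 to RANCH using the environment variables mentioned above (the directory and file names are placeholders; bundling many small files into a single archive before writing to tape is a common practice rather than a requirement):
$ tar -czf results.tar.gz results/              # bundle small files into one archive
$ scp results.tar.gz ${ARCHIVER}:${ARCHIVE}/    # copy the archive to your RANCH directory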
Corral (Online Project Storage)
Corral (Online Project Storage) is a collection of storage and data management resources at Texas Advanced Computing Center, offering 40 PB of online storage in its primary data center, along with a tape-based replica stored in a secondary data center for enhanced security. Corral is available at no charge to University of Texas researchers and is offered at a low annual cost to non-UT researchers. It can be accessed from Stampede3 through Globus, providing a convenient way to manage and transfer project data.
Jobs
Like all TACC systems, Stampede3's accounting system is based on node-hours: one unadjusted Service Unit (SU) represents a single compute node used for one hour (a node-hour). For any given job, the total cost in SUs is the number of nodes used multiplied by the wall clock hours consumed, adjusted by any charges or discounts for the use of specialized queues, e.g. Stampede3's pvc queue, Lonestar6's gpu-a100 queue, and Frontera's flex queue. The queue charge rates are determined by the supply and demand for that particular queue or type of node and are subject to change.
Stampede3 SUs billed = (# nodes) x (job duration in wall clock hours) x (charge rate per node-hour)
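For example, a hypothetical job that uses 4 nodes for 2.5 wall clock hours in a queue with a charge rate of 1 SU per node-hour would be billed 4 x 2.5 x 1 = 10 SUs.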
The Slurm scheduler tracks and charges for usage to a granularity of a few seconds of wall clock time. The system charges only for the resources you actually use, not those you request. If your job finishes early and exits properly, Slurm will release the nodes back into the pool of available nodes. Your job will only be charged for as long as you are using the nodes.
To display a summary of your TACC project balances and disk quotas at any time, execute:
login1$ /usr/local/etc/taccinfo    # Generally more current than balances displayed on the portals.
Queue specifications
| Name | Purpose | CPUs/node | GPUs/node | RAM/node |
|---|---|---|---|---|
| skx | General CPU jobs on SKX nodes. | 48 | - | 192 GB DDR4 |
| skx-dev | Development and testing on SKX nodes. | 48 | - | 192 GB DDR4 |
| icx | General CPU jobs on ICX nodes. | 80 | - | 256 GB DDR4 |
| spr | High-throughput, memory-bandwidth-intensive jobs. | 112 | - | 128 GB |
| nvdimm | Ultra-high-memory jobs. | 80 | - | 4 TB NVDIMM |
| pvc | Intel GPU workloads. | 96 | 4x Intel PVC 1550 (124 GB/GPU) | 1 TB DDR5 |
| h100 | NVIDIA GPU workloads. | 96 | 4x NVIDIA H100 SXM5 (96 GB/GPU) | 1 TB DDR5 |
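For reference, the lines below sketch a minimal Slurm batch script, saved as, say, myjob.slurm, for one of the queues above. The job name, queue, node/task counts, time limit, project name, and executable are all placeholders to adapt to your own workload; ibrun is TACC's MPI launch wrapper, assumed to be available on Stampede3.
#!/bin/bash
#SBATCH -J myjob                 # job name (placeholder)
#SBATCH -o myjob.%j.out          # stdout file; %j expands to the job ID
#SBATCH -e myjob.%j.err          # stderr file
#SBATCH -p skx                   # queue (partition) from the table above
#SBATCH -N 2                     # number of nodes
#SBATCH -n 96                    # total MPI tasks (48 cores per SKX node)
#SBATCH -t 01:00:00              # wall clock limit, hh:mm:ss
#SBATCH -A your_project          # allocation/project to charge (placeholder)
ibrun ./my_mpi_program           # launch the MPI executable across the allocated nodes
Submit the script from a login node with:
$ sbatch myjob.slurm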
Datasets
| Name | Description |
|---|---|
| /scratch/data/ | Stampede3 does not host any centrally maintained datasets on the system itself. For shared datasets across TACC, check /scratch/data/ if it exists, or contact TACC support. |