Bridges-2 GPU is a high-performance computing resource at the Pittsburgh Supercomputing Center designed for GPU-accelerated workloads. It provides nodes equipped with NVIDIA GPUs for machine learning, AI, data analytics, and scientific simulations that benefit from parallel processing.
The system is part of the ACCESS program and is intended for researchers who require GPU resources in addition to traditional CPU-based computing.
## Login to Bridges-2 GPU
Bridges-2 is accessed via SSH using ACCESS credentials.
Users must:
- Have an active ACCESS allocation on Bridges-2
- Have an account with the Pittsburgh Supercomputing Center
- Connect with an SSH client from their local machine (sessions are initiated from the user's machine; the login nodes do not initiate outbound connections)
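Once the prerequisites above are met, a session can be opened with a standard SSH client. A minimal sketch (`username` is a placeholder for your PSC username):

```shell
# Connect to a Bridges-2 login node with your PSC username.
ssh username@bridges2.psc.edu
```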
## File Transfer
| Method | Endpoint | URL |
|---|---|---|
| rsync | data.bridges2.psc.edu | |
| scp | data.bridges2.psc.edu | |
| sftp | data.bridges2.psc.edu | |
| Globus (recommended) | PSC Bridges-2 /ocean and /jet filesystems | https://app.globus.org |
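As a sketch, an rsync transfer to the dedicated data transfer node looks like the following (`username`, `groupname`, and the local directory name are placeholders):

```shell
# Push a local directory to Bridges-2 project storage via the
# data transfer node (not the login nodes).
rsync -avz ./my_data/ \
    username@data.bridges2.psc.edu:/ocean/projects/groupname/username/my_data/
```

scp and sftp use the same `data.bridges2.psc.edu` endpoint with their usual syntax.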
## Storage
### File System
| Directory | Path | Quota | Purge | Backup | Notes |
|---|---|---|---|---|---|
| $HOME | $HOME | 25 GB | No automatic purge during allocation; after the allocation ends, accessible for 14 days, deleted after 3 months | Backed up daily | |
| $PROJECT | /ocean/projects/groupname/PSC-username | Defined by allocation | No automatic purge during allocation; after the allocation ends, accessible for 14 days, deleted after 3 months | Not backed up | |
| $LOCAL | Node-local (no global path) | Varies by node type | Deleted immediately when the job ends | Not backed up | |
| $RAMDISK | Node memory (no filesystem path) | Depends on allocated node memory | Deleted immediately when the job ends | Not backed up | |
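Because $LOCAL and $RAMDISK are wiped as soon as a job ends, job scripts typically stage input onto node-local storage and copy results back to the persistent $PROJECT filesystem before exiting. A minimal sketch (the `inputs` directory and `run_analysis` program are illustrative placeholders):

```shell
# Inside a batch job: stage input to fast node-local storage,
# run, then save results back to the persistent $PROJECT filesystem.
cp -r "$PROJECT/inputs" "$LOCAL/"
cd "$LOCAL"
./run_analysis inputs/       # hypothetical application
cp -r results "$PROJECT/"    # $LOCAL is deleted when the job ends
```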
## Jobs
Jobs on Bridges-2 GPU resources are submitted through the Slurm scheduler using either the GPU or GPU-shared partitions. The GPU partition is used for full-node jobs, where all CPUs, memory, and GPUs on one or more nodes are allocated to a single job. The GPU-shared partition allows jobs to request a subset of GPUs (up to 4) on a single node, reducing resource usage and cost.
GPU jobs are submitted using commands such as `sbatch` for batch jobs or `interact` for interactive sessions. Users must specify the GPU type and number of GPUs using options like `--gpus=type:n` or `--gres=gpu:type:n`. Valid GPU types include `h100-80`, `l40s-48`, `v100-32`, and `v100-16`.
In the GPU partition, jobs always use entire nodes, and GPU counts must be in multiples of 8 (or 16 for DGX-2 nodes). In the GPU-shared partition, jobs can request between 1 and 4 GPUs on a single node.
Walltime defaults to 1 hour, with a maximum runtime of 48 hours for both partitions. Resource usage is charged based on the number of GPUs used, and full-node allocations incur charges for all CPUs and GPUs on the node.
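Putting the options above together, a batch script for a single V100-32 GPU in the GPU-shared partition might look like the following sketch (the `module load cuda` step and `train.py` workload are illustrative placeholders; module names vary by installation):

```shell
#!/bin/bash
#SBATCH -p GPU-shared        # shared partition: 1-4 GPUs on one node
#SBATCH --gpus=v100-32:1     # GPU type and count
#SBATCH -t 02:00:00          # walltime (default 1 hour, maximum 48 hours)
#SBATCH -N 1                 # GPU-shared jobs run on a single node

module load cuda             # placeholder: load the site's CUDA environment
python train.py              # placeholder workload
```

Submit the script with `sbatch`; an interactive session with similar resource options is started with the `interact` command mentioned above.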
## Queue specifications
| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time (30-day trend) | Wall Time (30-day trend) |
|---|---|---|---|---|---|---|---|
| h100-80 | Accelerated workloads, deep learning, and GPU-intensive applications using H100 GPUs. | 2× Intel Xeon “Sapphire Rapids” 8470 (52 cores per CPU, 104 cores total) | 8 × NVIDIA H100 (80 GB VRAM per GPU) | 2 TB per node | — | — | — |
| l40s-48 | GPU workloads and AI/ML using L40S GPUs. | 2× Intel Xeon 6740E (96 cores total) | 8 × NVIDIA L40S (48 GB VRAM per GPU) | 1 TB per node | — | — | — |
| v100-32 | General GPU workloads, CUDA applications, and legacy GPU jobs. | 2× Intel Xeon Gold 6248 (40 cores total) | 8 × NVIDIA V100 (32 GB VRAM per GPU) | 512 GB per node | — | — | — |
| v100-16 | GPU workloads with lower memory requirements. | 2× Intel Xeon Gold 6148 (40 cores total) | 8 × NVIDIA V100 (16 GB VRAM per GPU) | 192 GB per node | — | — | — |
| dgx-2 (v100-32 special node) | Large GPU workloads requiring high GPU count per node. | 2× Intel Xeon Platinum 8168 (48 cores total) | 16 × NVIDIA V100 (32 GB VRAM per GPU) | 1.5 TB per node | — | — | — |
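Since usage is charged per GPU, the cost of a job can be estimated as GPUs × wall-clock hours. For example, a full node in one of the 8-GPU queues above running for 12 hours:

```shell
# GPU-hours charged = number of GPUs × wall-clock hours.
gpus=8     # one full node in the 8-GPU queues above
hours=12
echo "$(( gpus * hours )) GPU-hours"   # prints "96 GPU-hours"
```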
## Datasets
| Name | Description |
|---|---|
| 2019nCoVR: 2019 Novel Coronavirus Resource | The 2019 Novel Coronavirus Resource concerns the outbreak of novel coronavirus in Wuhan, China since December 2019. For more details about the statistics, metadata, publications, and visualizations of the data, please visit https://ngdc.cncb.ac.cn/ncov/. Available on Bridges-2 at /ocean/datasets/community/genomics/2019nCoVR. |
| AlphaFold | The AlphaFold protein structure database contains over 990,000 protein structure predictions for the human proteome and other key proteins of interest. For more information, see https://alphafold.ebi.ac.uk/. Available on Bridges-2 at /ocean/datasets/community/alphafold. |
| CIFAR-10 | The CIFAR-10 dataset is a subset of the 8 million tiny images dataset, which contains 60,000 images in ten classes. See https://www.cs.toronto.edu/~kriz/cifar.html for more details. Available on Bridges-2 at /ocean/datasets/community/cifar. |
| COCO | COCO (Common Objects in Context) is a large scale image dataset designed for object detection, segmentation, person keypoints detection, stuff segmentation, and caption generation. Please visit http://cocodataset.org/ for more information on COCO, including details about the data, paper, and tutorials. Available on Bridges-2 at /ocean/datasets/community/COCO. |
| CosmoFlow | CosmoFlow consists of data from around 10,000 cosmological N-body dark matter simulations. Anyone with a Bridges-2 allocation can use CosmoFlow data, but you must request access via the CosmoFlow request form. Please visit the CosmoFlow site at https://portal.nersc.gov/project/m3363/ for more information about this dataset. Available on Bridges-2 at /ocean/datasets/community/cosmoflow. |
| ImageNet | ImageNet is an image dataset organized according to WordNet hierarchy. See the ImageNet website for complete information https://image-net.org/. Available on Bridges-2 at /ocean/datasets/community/imagenet. |
| MNIST | Dataset of handwritten digits used to train image processing systems. Available on Bridges-2 at /ocean/datasets/community/mnist. |
| Natural Language Toolkit (NLTK) Data | NLTK comes with many corpora, toy grammars, trained models, etc. A complete list of the available data is posted at http://nltk.org/nltk_data/. Available on Bridges-2 at /ocean/datasets/community/nltk. |
| OpenWebText | Available on Bridges-2 at /ocean/datasets/community/openwebtext. |
| PREVENT-AD | The PREVENT-AD (Pre-symptomatic Evaluation of Experimental or Novel Treatments for Alzheimer Disease) cohort is composed of cognitively healthy participants over 55 years old, at risk of developing Alzheimer Disease (AD) as their parents and/or siblings were/are affected by the disease. These ‘at-risk’ participants have been followed for a naturalistic study of the presymptomatic phase of AD since 2011 using multimodal measurements of various disease indicators. Two clinical trials intended to test pharmaco-preventive agents have also been conducted. The PREVENT-AD research group is now releasing data openly with the intention to contribute to the community’s growing understanding of AD pathogenesis. Available on Bridges-2 at /ocean/datasets/community/prevent_ad. |
| TCGA Images | Available on Bridges-2 at /ocean/datasets/community/tcga_images. |
| Genomics datasets | These datasets are available to anyone with an allocation on Bridges-2. They are stored under /ocean/datasets/community/genomics and include AUGUSTUS, BLAST, CheckM, Dammit, Homer, Kraken2, Pfam, Prokka, and Repbase. |