Jetstream2 GPU is a cloud-based computing resource that provides on-demand virtual machines with GPU acceleration for research and education. Unlike traditional HPC systems, users launch and manage their own virtual machines rather than submitting jobs to a shared scheduler.
GPU-enabled instances are equipped with NVIDIA GPUs and are designed for workloads such as machine learning, AI model training, GPU-accelerated simulations, and visualization. These resources allow users to run interactive or long-running workloads with full control over the software environment.
Jetstream2 is built on OpenStack and provides flexible, scalable infrastructure that can be resized, restarted, or reconfigured as needed.
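Because Jetstream2 exposes a standard OpenStack API, instances can also be managed from the `openstack` command-line client in addition to Exosphere. A minimal sketch, assuming you have sourced OpenStack application credentials; the server name `my-gpu-vm` and flavor `g3.xl` are placeholders, not real resource names:

```shell
# List your running instances
openstack server list

# See which instance sizes (flavors) are available
openstack flavor list

# Resize an instance to a different flavor (names here are illustrative)
openstack server resize --flavor g3.xl my-gpu-vm

# Reboot an instance
openstack server reboot my-gpu-vm
```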
Login to Jetstream2 GPU
Jetstream2 is primarily accessed through a web-based portal called Exosphere. Users log in using ACCESS CI credentials via CILogon authentication. Multi-factor authentication is required, and users must have an active Jetstream2 allocation before accessing resources.
After logging in, users can:
- Launch virtual machines
- Manage storage volumes
- View instance IP addresses
- Upload SSH keys
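Once an SSH key has been uploaded and an instance is running, you can connect to it directly. A sketch with placeholder values: `<instance-ip>` is the public address shown on the instance page, and `exouser` is the default account Exosphere creates (verify the username in the portal):

```shell
# Connect to the VM using the key uploaded through Exosphere
# <instance-ip> and the key path are placeholders for your own values
ssh -i ~/.ssh/id_ed25519 exouser@<instance-ip>
```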
File Transfer
Jetstream2 does not use dedicated data transfer nodes. All transfers occur directly between the user’s local machine and the virtual machine.
| Supported Methods | Notes | URL |
|---|---|---|
| Globus | Recommended | https://www.globus.org/data-transfer |
| SCP | | |
| SFTP | | |
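SCP and SFTP transfers go against the VM's public address using the same SSH key as a login. A sketch; `exouser`, `<instance-ip>`, and the file paths are illustrative placeholders:

```shell
# Copy a local dataset to a volume attached to the VM
scp -i ~/.ssh/id_ed25519 dataset.tar.gz exouser@<instance-ip>:/media/volume/<volume-name>/

# Start an interactive transfer session
sftp -i ~/.ssh/id_ed25519 exouser@<instance-ip>
```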
Storage
Storage Filesystems
Jetstream2 does not use shared HPC file systems. Storage is attached to virtual machines.
Instance Storage
- Local disk tied to the VM
- Deleted when the instance is deleted
- Used for temporary data and runtime files
Persistent Volumes
- Independent block storage attached to VMs
- Survives instance restarts
- Can be detached and reattached to different instances
- Used for long-term datasets
Typical mount path:
/media/volume/<volume-name>
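Once a volume is attached (Exosphere typically mounts it automatically), it behaves like an ordinary directory. A sketch, assuming a hypothetical volume named `mydata`:

```shell
# Confirm the volume is mounted and check its capacity
df -h /media/volume/mydata

# Keep long-term results on the volume rather than the instance disk,
# since the instance disk is deleted with the VM
mkdir -p /media/volume/mydata/experiments
cp results.csv /media/volume/mydata/experiments/
```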
File Shares
- Shared storage accessible across multiple VMs
- Implemented using OpenStack Manila
- Used for collaboration and shared datasets
Typical mount path:
/media/share/<share-name>
Jobs
Jetstream2 GPU does not use a centralized job scheduler. Workloads are executed directly on GPU-enabled virtual machines.
Users run jobs interactively or through scripts within the VM environment. For example:
python train_model.py

If batch scheduling is required, users may deploy their own scheduler (e.g., Slurm, Kubernetes) within a virtual cluster.
Resources are consumed based on the size and runtime of the GPU instance rather than queued jobs.
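Because there is no batch scheduler, long-running work should be protected against SSH disconnects. A minimal sketch using `nohup`; the script name `train_model.py` follows the example above, and `tmux` or `screen` would work equally well:

```shell
# Check that the GPU is visible before starting
nvidia-smi

# Launch training detached from the terminal; output goes to train.log
nohup python train_model.py > train.log 2>&1 &

# Monitor progress
tail -f train.log
```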
Queue specifications
Jetstream2 has no job queues; the table below summarizes the available GPU hardware.

| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time (30-day trend) | Wall Time (30-day trend) |
|---|---|---|---|---|---|---|---|
| Indiana Jetstream2 GPU | | AMD Milan 7713 (2 GHz) | 4× NVIDIA A100 40 GB GPUs per node | 512 GB | — | — | — |