Expanse is a dedicated ACCESS cluster designed by Dell and SDSC. It delivers 5.16 peak petaflops and offers Composable Systems and Cloud Bursting.
Expanse's standard compute nodes are each powered by two 64-core AMD EPYC 7742 processors and contain 256 GB of DDR4 memory. Each GPU node contains four NVIDIA V100s (32 GB SXM2) connected via NVLINK and dual 20-core Intel Xeon 6248 CPUs. Expanse also has four 2 TB large-memory nodes.
Associated Resources
SDSC Expanse Projects Storage
SDSC Expanse GPU
Expanse is a Dell-integrated compute cluster with AMD Rome processors and NVIDIA V100 GPUs, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. The GPU component of Expanse features 52 GPU nodes, each containing four NVIDIA V100s (32 GB SXM2) connected via NVLINK and dual 20-core Intel Xeon 6248 CPUs. Each GPU node features 1.6 TB of NVMe storage and 256 GB of DRAM, with HDR100 connectivity to each node. The system also features 7 PB of Lustre-based performance storage (140 GB/s aggregate) and 5 PB of Ceph-based object storage.
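A rough sense of what the 52-node GPU partition adds up to can be sketched with back-of-envelope arithmetic. The per-GPU figure below (7.8 TFLOPS FP64 for a V100 SXM2) comes from NVIDIA's datasheet and is an assumption, not a number stated on this page.

```python
# Back-of-envelope FP64 peak of the Expanse GPU partition.
# Assumption: 7.8 TFLOPS FP64 per V100 SXM2 (NVIDIA datasheet figure).
gpu_nodes = 52
gpus_per_node = 4
v100_fp64_tflops = 7.8  # assumed per-GPU double-precision peak

partition_peak_tflops = gpu_nodes * gpus_per_node * v100_fp64_tflops
print(f"GPU partition FP64 peak: ~{partition_peak_tflops / 1000:.2f} PFLOPS")
```

Under these assumptions the GPU partition contributes roughly 1.6 PF of double-precision peak on top of the CPU partition described below.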
SDSC Expanse CPU
Expanse is a Dell-integrated compute cluster with AMD Rome processors, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. The compute-node section of Expanse has a peak performance of 3.373 PF. Full bisection bandwidth is available at rack level (56 compute nodes) with HDR100 connectivity to each node; HDR200 switches are used at the rack level, with 3:1 oversubscription across racks. Compute nodes feature 1 TB of NVMe storage and 256 GB of DRAM per node. The system also features 7 PB of Lustre-based performance storage (140 GB/s aggregate) and 5 PB of Ceph-based object storage.
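The per-node contribution to that CPU peak can be sketched the same way. The clock speed (2.25 GHz base) and 16 double-precision FLOPs per cycle per core (two 256-bit FMA units) are assumptions about the EPYC 7742, not figures from this page.

```python
# Back-of-envelope FP64 peak for one Expanse standard compute node.
# Assumptions: 2.25 GHz base clock; 16 DP FLOPs/cycle/core on the
# EPYC 7742 (2 FMA units x 4 doubles x 2 ops per FMA).
cores_per_node = 2 * 64   # dual 64-core AMD EPYC 7742
clock_hz = 2.25e9         # assumed base clock
flops_per_cycle = 16      # assumed per-core DP throughput

node_peak = cores_per_node * clock_hz * flops_per_cycle
print(f"Per-node FP64 peak: ~{node_peak / 1e12:.2f} TFLOPS")
```

At roughly 4.6 TF per node, the quoted 3.373 PF system peak implies on the order of several hundred compute nodes; the exact node count is not stated on this page.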