Bridges-2

Unlocking the power of data to accelerate discovery

Introducing Bridges-2

Bridges-2, PSC's newest supercomputer, will debut in the fall of 2020. It will be funded by a $10-million grant from the National Science Foundation.

Bridges-2 will provide transformative capability for rapidly evolving, computation-intensive and data-intensive research, creating opportunities for collaboration and convergence research. It will support both traditional and non-traditional research communities and applications. Bridges-2 will integrate new technologies for converged, scalable HPC, machine learning and data; prioritize researcher productivity and ease of use; and provide an extensible architecture for interoperation with complementary data-intensive projects, campus resources, and clouds.

Bridges-2 will be available at no cost for research and education, and at cost-recovery rates for other purposes. 

Early User Program

The Bridges-2 Early User Program is an opportunity for you to port, tune and optimize your applications early, and make progress on your research. There is no cost for the Early User Program, just as there will be no cost for XSEDE allocations when Bridges-2 enters production.

Learn more about the Early User Program

Core Concepts

  • Converged HPC + AI + Data
  • Custom topology optimized for data-centric HPC, AI and HPDA
  • Heterogeneous node types for different aspects of workflows
  • CPUs and AI-targeted GPUs
  • 3 tiers of per-node RAM: 256GB, 512GB, and 4TB
  • Extremely flexible software environment
  • Community data collections & Big Data as a Service

Innovation

  • AMD EPYC 7742 CPUs: 64 cores, 2.25–3.4 GHz
  • AI scaling to 192 V100-32GB SXM2 GPUs 
  • 100TB, 9M IOPS flash array accelerates deep learning training, genomics, and other applications
  • Mellanox HDR-200 InfiniBand doubles bandwidth & supports in-network MPI-Direct, RDMA, GPUDirect, SR-IOV, and data encryption (see the sketch after this list)
  • Cray ClusterStor E1000 Storage System 
  • HPE DMF single namespace across disk and tape for data security and expandable archiving
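
To make the interconnect features above concrete, the short sketch below shows the kind of MPI point-to-point traffic that HDR-200 InfiniBand carries over RDMA. It is illustrative only: mpi4py, NumPy, and an InfiniBand-aware MPI library are assumptions about the software environment, not confirmed Bridges-2 components.

    # Illustrative MPI point-to-point exchange (run as: mpirun -n 2 python demo.py).
    # Assumes mpi4py and NumPy; with an InfiniBand-aware MPI, large buffer
    # transfers like this one are carried over RDMA.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(1 << 20, dtype=np.float64)   # 8 MB payload

    if rank == 0:
        buf[:] = 1.0
        comm.Send(buf, dest=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        print("rank 1 received buffer; first element =", buf[0])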

Hardware Highlights

Three node types: Regular Memory, Extreme Memory, and GPU

Regular Memory

Regular Memory (RM) nodes will provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing.

488 RM nodes will have 256GB of RAM, and 16 will have 512GB of RAM. All RM nodes will have:

  • Two AMD EPYC "Rome" 7742 CPUs:
    • 64 cores, 128 threads
    • 2.25–3.40GHz
    • 256MB L3
    • 8 memory channels
  • NVMe SSD (3.84TB)
  • Mellanox ConnectX-6 HDR InfiniBand 200Gb/s Adapter
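
For a sense of scale, each RM node exposes 128 physical cores (256 hardware threads) to a single program. The sketch below is one hedged way to fill them from Python with an embarrassingly parallel, CPU-bound workload; the workload itself is a placeholder, not a Bridges-2 application.

    # Sketch: saturating the cores of a dual-EPYC RM node with independent,
    # CPU-bound work units. The core count comes from the spec above; the
    # "simulate" function is a stand-in for real work.
    import os
    from multiprocessing import Pool

    def simulate(seed: int) -> float:
        """Placeholder for one independent, CPU-bound task."""
        x = seed
        for _ in range(1_000_000):
            x = (1103515245 * x + 12345) % (1 << 31)   # toy LCG iteration
        return x / (1 << 31)

    if __name__ == "__main__":
        workers = os.cpu_count() or 1    # 256 hardware threads on an RM node
        with Pool(workers) as pool:
            results = pool.map(simulate, range(1024))
        print(len(results), "tasks completed on", workers, "workers")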

Extreme Memory

Bridges-2 Extreme Memory (EM) nodes will provide 4TB of shared memory for genome sequence assembly, graph analytics, statistics, and other applications requiring a large amount of memory for which distributed-memory implementations are not available. Each of Bridges-2’s 4 EM nodes will consist of:

  • Four Intel Xeon Platinum 8260M “Cascade Lake” CPUs:
    • 24 cores, 48 threads
    • 2.40–3.90GHz
    • 35.75MB LLC
    • 6 memory channels
  • 4TB of RAM: DDR4-2933
  • NVMe SSD (7.68TB)
  • Mellanox ConnectX-6 HDR InfiniBand 200Gb/s Adapter
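
The point of a node like this is that every worker can see the whole dataset at once. As a hedged, down-scaled illustration of that pattern, the Python sketch below shares a single in-memory array across processes without copying it; on an EM node the same approach scales toward arrays approaching 4TB.

    # Sketch: several workers reading one shared in-memory array without
    # copies. Sizes here are tiny placeholders; a 4TB EM node supports the
    # same pattern at full scale.
    import numpy as np
    from multiprocessing import Pool
    from multiprocessing import shared_memory

    N, DTYPE = 10_000_000, np.float64   # ~80 MB here; far larger on an EM node

    def partial_sum(args):
        name, start, stop = args
        shm = shared_memory.SharedMemory(name=name)    # attach, no copy
        arr = np.ndarray((N,), dtype=DTYPE, buffer=shm.buf)
        total = float(arr[start:stop].sum())
        shm.close()
        return total

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=N * 8)
        arr = np.ndarray((N,), dtype=DTYPE, buffer=shm.buf)
        arr[:] = 1.0
        chunks = [(shm.name, i, i + 2_500_000) for i in range(0, N, 2_500_000)]
        with Pool(4) as pool:
            print("total =", sum(pool.map(partial_sum, chunks)))
        shm.close()
        shm.unlink()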

GPU

Bridges-2’s 24 GPU nodes will provide exceptional performance and scalability for deep learning and accelerated computing, with 40,960 CUDA cores and 5,120 tensor cores per node. Each GPU node will contain:

  • Eight NVIDIA Tesla V100-32GB SXM2 GPUs:
    • 1 Pf/s aggregate tensor performance
  • Two Intel Xeon Gold 6248 “Cascade Lake” CPUs:
    • 20 cores, 40 threads, 2.50–3.90GHz, 27.5MB LLC, 6 memory channels
  • 512GB of RAM: DDR4-2933
  • 7.68TB NVMe SSD
  • Two Mellanox ConnectX-6 HDR InfiniBand 200Gb/s Adapters
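
A typical way to put all eight V100s in a node to work is single-node data-parallel training. The sketch below uses PyTorch for illustration (an assumption about the software stack, not a stated Bridges-2 component); the model and data are synthetic placeholders.

    # Sketch: single-node data-parallel training across all visible GPUs.
    # On a GPU node as described above, that is eight V100-32GB devices.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)    # splits each batch across the GPUs
    model = model.to(device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(512, 1024, device=device)         # synthetic inputs
        y = torch.randint(0, 10, (512,), device=device)   # synthetic labels
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()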

Data Management

A unified, high-performance filesystem for active project data, archive, and resilience

The data management system for Bridges-2, named Ocean, consists of two tiers, disk and tape, transparently managed as a single, highly usable namespace.

Ocean's disk subsystem, for active project data, is a high-performance, internally resilient Lustre parallel filesystem with 15PB of usable capacity, configured to deliver up to 129GB/s and 142GB/s of read and write bandwidth, respectively.

Ocean's tape subsystem, for archive and additional resilience, is a high-performance tape library with 7.2PB of uncompressed capacity, configured to deliver 50TB/hour. Data compression occurs in hardware, transparently, with no performance overhead.

Important dates for 2020

  • PEARC20: July 26–30
  • Early User Program: Late summer
  • Grant applications open: June 15 – July 15
  • Production operations begin: Fall
