Bridges-2, PSC's newest supercomputer, will debut in the fall of 2020. It will be funded by a $10-million grant from the National Science Foundation.
Bridges-2 will provide transformative capability for rapidly evolving, computation-intensive and data-intensive research, creating opportunities for collaboration and convergence research. It will support both traditional and non-traditional research communities and applications. Bridges-2 will integrate new technologies for converged, scalable HPC, machine learning and data; prioritize researcher productivity and ease of use; and provide an extensible architecture for interoperation with complementary data-intensive projects, campus resources, and clouds.
Bridges-2 will be available at no cost for research and education, and at cost-recovery rates for other purposes.
Early User Program
The Bridges-2 Early User Program is an opportunity for you to port, tune and optimize your applications early, and make progress on your research. There is no cost for the Early User Program, just as there will be no cost for XSEDE allocations when Bridges-2 enters production.
From Day One you will be able to:
- Consult the Bridges-2 User Guide
- Avail yourself of advanced user support
- Access commonly used software packages and datasets
What will you expect from me?
First and foremost, we expect that you will achieve real scientific progress during the Early User Program. We also expect you to provide us with feedback on how your experience is going. We will check in with you weekly to ask about your progress and experience, and at the end of the EUP, you will be asked to complete a short survey. You can send us questions, issues or comments at any time through the website.
What software and datasets will be available?
We are interested in what software and datasets would be useful to you in the Early User Program.
The list of software to be installed for the EUP is here.
The datasets that are installed on Bridges will also be installed on Bridges-2 for the EUP. You can see that list here.
If you need something that is not in those lists, you can ask that it be installed. Please use these forms to let us know what you need.
How can I apply?
- Converged HPC + AI + Data
- Custom topology optimized for data-centric HPC, AI and HPDA
- Heterogeneous node types for different aspects of workflows
- CPUs and AI-targeted GPUs
- 3 tiers of per-node RAM: 256GB, 512GB, 4TB
- Extremely flexible software environment
- Community data collections & Big Data as a Service
- AMD EPYC 7742 CPUs: 64 cores, 2.25–3.4 GHz
- AI scaling to 192 V100-32GB SXM2 GPUs
- 100TB, 9M IOPS flash array accelerates deep learning training, genomics, and other applications
- Mellanox HDR-200 InfiniBand doubles bandwidth & supports in-network MPI-Direct, RDMA, GPUDirect, SR-IOV, and data encryption
- Cray ClusterStor E1000 Storage System
- HPE DMF single namespace across disk and tape for data security and expandable archiving
- Converged HPC + AI + Data
Three node types: Regular Memory, Extreme Memory, and GPU
Regular Memory (RM) nodes will provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing.
488 RM nodes will have 256GB of RAM, and 16 will have 512GB of RAM. All RM nodes will have:
- Two AMD EPYC 7742 CPUs, each with:
- 64 cores, 128 threads
- 256MB L3 cache
- 8 memory channels
- NVMe SSD (3.84TB)
- Mellanox ConnectX-6 HDR InfiniBand 200Gb/s Adapter
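As a quick sanity check, the node counts and per-node specs above imply the following aggregate figures for the RM partition (a back-of-envelope sketch, using only the numbers stated in this section):

```python
# Back-of-envelope totals for the Bridges-2 RM partition,
# computed from the node counts and per-node specs listed above.
rm_nodes_256 = 488        # RM nodes with 256GB of RAM
rm_nodes_512 = 16         # RM nodes with 512GB of RAM
cores_per_node = 2 * 64   # two 64-core AMD EPYC 7742 CPUs per node

total_nodes = rm_nodes_256 + rm_nodes_512
total_cores = total_nodes * cores_per_node
total_ram_gb = rm_nodes_256 * 256 + rm_nodes_512 * 512

print(total_nodes)    # 504 RM nodes
print(total_cores)    # 64512 cores
print(total_ram_gb)   # 133120 GB, roughly 130 TB of RM RAM
```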
Bridges-2 Extreme Memory (EM) nodes will provide 4TB of shared memory for genome sequence assembly, graph analytics, statistics, and other applications requiring a large amount of memory for which distributed-memory implementations are not available. Each of Bridges-2’s 4 EM nodes will consist of:
- Four Intel Xeon Platinum 8260M “Cascade Lake” CPUs, each with:
- 24 cores, 48 threads
- 35.75MB LLC
- 6 memory channels
- 4TB of RAM: DDR4-2933
- NVMe SSD (7.68TB)
- Mellanox ConnectX-6 HDR InfiniBand 200Gb/s Adapter
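The per-node specs above give the following totals for the small EM partition (a simple tally from the figures stated in this section):

```python
# Aggregate figures for the four Bridges-2 EM nodes,
# from the per-node specs listed above.
em_nodes = 4
cpus_per_node = 4        # Intel Xeon Platinum 8260M CPUs per node
cores_per_cpu = 24

total_cores = em_nodes * cpus_per_node * cores_per_cpu
total_ram_tb = em_nodes * 4   # 4TB of shared memory per node

print(total_cores)   # 384 cores
print(total_ram_tb)  # 16 TB of RAM across the EM partition
```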
Bridges-2's 24 GPU nodes will provide exceptional performance and scalability for deep learning and accelerated computing, with 40,960 CUDA cores and 5,120 tensor cores per node. Each GPU node will contain:
- Eight NVIDIA Tesla V100-32GB SXM2 GPUs
- 1 Pf/s of tensor performance
- Two Intel Xeon Gold 6248 “Cascade Lake” CPUs:
- 20 cores, 40 threads, 2.50–3.90GHz, 27.5MB LLC, 6 memory channels
- 512GB of RAM: DDR4-2933
- 7.68TB NVMe SSD
- Two Mellanox ConnectX-6 HDR InfiniBand 200Gb/s Adapters
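The per-node figures above follow directly from NVIDIA's published V100 SXM2 specifications (5,120 CUDA cores, 640 tensor cores, and about 125 Tf/s of peak tensor throughput per GPU), as this sketch shows:

```python
# Deriving the per-node and system-wide GPU figures from
# per-GPU V100 SXM2 specifications (NVIDIA's published numbers).
gpu_nodes = 24
gpus_per_node = 8
cuda_cores_per_gpu = 5120
tensor_cores_per_gpu = 640
tensor_tflops_per_gpu = 125   # peak mixed-precision tensor throughput

print(gpu_nodes * gpus_per_node)              # 192 V100 GPUs system-wide
print(gpus_per_node * cuda_cores_per_gpu)     # 40960 CUDA cores per node
print(gpus_per_node * tensor_cores_per_gpu)   # 5120 tensor cores per node
print(gpus_per_node * tensor_tflops_per_gpu)  # 1000 Tf/s, i.e. ~1 Pf/s tensor per node
```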
A unified, high-performance filesystem for active project data, archive, and resilience
The data management system for Bridges-2, named Ocean, consists of two tiers, disk and tape, transparently managed as a single, highly usable namespace.
Ocean's disk subsystem, for active project data, is a high-performance, internally resilient Lustre parallel filesystem with 15PB of usable capacity, configured to deliver up to 129GB/s and 142GB/s of read and write bandwidth, respectively.
Ocean's tape subsystem, for archive and additional resilience, is a high-performance tape library with 7.2PB of uncompressed capacity, configured to deliver 50TB/hour. Data compression occurs in hardware, transparently, with no performance overhead.
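To put the stated tape rate in perspective, here is a rough calculation of how long streaming the library's full uncompressed capacity would take at 50TB/hour (an illustration using only the numbers above; real archive jobs would vary):

```python
# Rough time to stream Ocean's full uncompressed tape capacity
# at the stated 50 TB/hour rate.
tape_capacity_tb = 7.2 * 1000   # 7.2 PB expressed in TB
rate_tb_per_hour = 50

hours = tape_capacity_tb / rate_tb_per_hour
print(hours)        # 144.0 hours
print(hours / 24)   # 6.0 days
```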