Bridges for COVID-19 Research

Bridges and Bridges-AI are available at no cost for COVID-19 research. To apply, submit your request at the COVID-19 HPC Consortium website. Contact us at bridges@psc.edu with any questions.


Accessing Bridges and Bridges-AI for COVID-19 Research

The Pittsburgh Supercomputing Center’s Bridges supercomputer, including its Bridges-AI platform, enables high performance computing (HPC), scalable artificial intelligence (AI), high performance data analytics (HPDA), and workflows requiring advanced accelerators, large memory, web portals, and high performance access to Big Data. Bridges and Bridges-AI emphasize a flexible software environment and interactive access for user productivity. Bridges and Bridges-AI are supported by NSF award number 1445606.

How Bridges and Bridges-AI Can Help COVID-19 Research

Bridges and Bridges-AI offer capabilities for critical areas of research that are rarely found elsewhere, and PSC’s expert User Support staff are available to help new projects get started promptly. Areas of particular strength include:

  • Artificial Intelligence: Bridges-AI delivers scalable deep learning for memory- and compute-intensive networks (e.g., SciBERT) through an NVIDIA DGX-2 (16 tightly coupled Volta GPUs and 1.5TB RAM) and nine HPE servers with 8 Volta GPUs each; a brief sketch follows this list.
  • Genomics: Bridges’ large-memory servers with 12TB and 3TB of RAM are the premier resource for de novo sequence assembly, and Bridges as a whole is well-suited to variant calling and bioinformatic workflows. The 2019 Novel Coronavirus Resource, which concerns the outbreak of novel coronavirus in Wuhan, China, is now available on Bridges. This dataset contains the genomic and proteomic sequences shared by labs all over the world. For more details about the statistics, metadata, publications, and visualizations of the data, please visit https://bigd.big.ac.cn/ncov.
  • Collaborative workflows: Secure web portals can be configured on Bridges’ dedicated database and web servers, enabling efficient coordination among distributed teams. See https://covid19.galaxyproject.org for information on how the assembly portions of workflows on usegalaxy.org will run on Bridges’ LM nodes.
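As a small illustration of the AI use case above, the following sketch embeds paper abstracts with SciBERT on a GPU node. It assumes PyTorch and the Hugging Face transformers package are installed in your environment (for example through Anaconda or an NGC container); the model name is the public SciBERT checkpoint, and the abstracts are made-up examples.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "allenai/scibert_scivocab_uncased"  # public SciBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModel.from_pretrained(MODEL).to(device).eval()

abstracts = [
    "The spike glycoprotein mediates binding to the ACE2 receptor.",
    "We report the genome sequence of a novel betacoronavirus.",
]
batch = tokenizer(abstracts, padding=True, truncation=True,
                  return_tensors="pt").to(device)
with torch.no_grad():
    out = model(**batch)

# Mean-pool the token embeddings into one vector per abstract.
embeddings = out.last_hidden_state.mean(dim=1)
print(embeddings.shape)  # torch.Size([2, 768])
```

Scaling this up would typically use torch.nn.parallel.DistributedDataParallel with one process per GPU, which maps naturally onto the 8- and 16-GPU Bridges-AI nodes.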


Bridges and Bridges-AI Overview

The Pittsburgh Supercomputing Center’s Bridges supercomputer pioneered the convergence of high performance computing, artificial intelligence, and Big Data. Bridges includes 752 dual-CPU HPC servers, 4 extreme-memory servers each with 12TB of RAM, 42 large-memory servers each with 3TB of RAM, and 48 GPU-accelerated servers for HPC and AI. Bridges-AI, a recent expansion of Bridges, delivers extremely scalable artificial intelligence with an NVIDIA DGX-2 (16 Volta GPUs, NVSwitch, 1.5TB RAM) and 9 HPE Apollo 6500 servers, each with 8 NVLink-connected NVIDIA Volta GPUs. Bridges prioritizes user productivity and flexibility and is backed by PSC’s User Support experts.

For complete information on Bridges and Bridges-AI, please see PSC’s Bridges page and the Bridges User Guide.

The Bridges and Bridges-AI hardware and software configurations are as follows; each hardware resource type is listed with examples of the applications it best supports.

Bridges-AI: Deep learning, machine learning, graph analytics

  • 1 NVIDIA DGX-2 node with 16 NVIDIA V100 32GB SXM2 GPUs, 2 Intel Xeon Platinum 8168 CPUs, 1.5TB RAM, and 30TB NVMe SSD
  • 9 HPE Apollo 6500 Gen10 nodes, each with 8 NVIDIA V100 16GB SXM2 GPUs, 2 Intel Xeon Gold 6148 CPUs, 192GB RAM, and 8TB NVMe SSDs
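A quick way to confirm what a Bridges-AI allocation provides is to enumerate the GPUs visible to the job. A minimal sketch using PyTorch (any CUDA-aware tool would do):

```python
import torch

# Print each visible GPU; on Bridges-AI this distinguishes the 16GB Voltas
# of the Apollo 6500 nodes from the 32GB Voltas of the DGX-2.
for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {p.name}, {p.total_memory / 2**30:.0f} GiB")
```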

Bridges-RM: HPC and HTC

  • 752 Regular Memory nodes, each with 2 Intel Xeon E5-2695 v3 CPUs (14c, 2.3/3.3 GHz), 128GB RAM, and 4TB local HDD

Bridges-LM: Genomics (de novo assembly), graph analytics, data analytics, memory-intensive HPC

  • 2 Extreme Memory nodes, each with 12TB DDR4-2400 RAM, 16 Intel Xeon E7-8880 v4 CPUs (22c, 2.2/3.3 GHz), and 56TB local HDD
  • 2 Extreme Memory nodes, each with 12TB DDR4-2133 RAM, 16 Intel Xeon E7-8880 v3 CPUs (18c, 2.3/3.1 GHz), and 56TB local HDD
  • 34 Large Memory nodes, each with 3TB DDR4-2400 RAM, 4 Intel Xeon E7-8870 v4 CPUs (20c, 2.1/3.0 GHz), and 16TB local HDD
  • 8 Large Memory nodes, each with 3TB DDR4-2133 RAM, 4 Intel Xeon E7-8860 v3 CPUs (16c, 2.2/3.2 GHz), and 16TB local HDD
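For memory-bound work such as de novo assembly, it can help to check how much of a node’s RAM is actually free before launching the expensive step. A Linux-only sketch (nothing here is Bridges-specific):

```python
import os

# Query physical memory through POSIX sysconf; on a 3TB LM node or a
# 12TB ESM node the total should match the figures listed above.
page = os.sysconf("SC_PAGE_SIZE")
total_tib = os.sysconf("SC_PHYS_PAGES") * page / 2**40
free_tib = os.sysconf("SC_AVPHYS_PAGES") * page / 2**40
print(f"total RAM: {total_tib:.2f} TiB, currently free: {free_tib:.2f} TiB")
```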

Bridges-GPU: Deep learning, machine learning, GPU-accelerated HPC

  • 32 GPU-P100 nodes, each with 2 NVIDIA Tesla P100 16GB GPUs, 2 Intel Xeon E5-2683 v4 CPUs (16c, 2.1/3.0 GHz, 40MB LLC), 128GB RAM, and 4TB local HDD
  • 16 GPU-K80 nodes, each with 4 NVIDIA Tesla K80 GPUs (2 cards), 2 Intel Xeon E5-2695 v3 CPUs (14c, 2.3/3.3 GHz, 35MB LLC), 128GB RAM, and 4TB local HDD

Bridges-DB and Bridges-Web: Persistent databases, portals, web interfaces

  • 6 DB-s nodes, each with 2 Intel Xeon E5-2695 v3 CPUs (14c, 2.3/3.3 GHz, 35MB LLC), 128GB RAM, and 2TB SSD
  • 6 DB-h nodes, each with 2 Intel Xeon E5-2695 v3 CPUs (14c, 2.3/3.3 GHz, 35MB LLC), 128GB RAM, and 18TB HDD
  • 6 Web nodes, each with 2 Intel Xeon E5-2695 v3 CPUs (14c, 2.3/3.3 GHz, 35MB LLC), 128GB RAM, and 12TB HDD
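A team portal backed by these nodes might be queried by collaborators with an ordinary database client. The sketch below uses psycopg2 against a PostgreSQL instance; the hostname, database, credentials, and table are all placeholders, since an actual instance would be provisioned with PSC staff.

```python
import psycopg2

# Placeholder connection details; a real database on a Bridges DB node
# would be set up in consultation with PSC User Support.
conn = psycopg2.connect(
    host="db.example.psc.edu",   # placeholder hostname
    dbname="covid19_project",    # placeholder database
    user="analyst",
    password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM sequences;")  # placeholder table
    print("sequences on record:", cur.fetchone()[0])
conn.close()
```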

Interconnect

  • Intel Omni-Path Architecture (100Gbps), custom leaf-spine topology

Persistent Data Storage

  • 10PB Lustre parallel file system

Relevant Community Datasets

  • The 2019 Novel Coronavirus Resource (described above): genomic and proteomic sequences of the novel coronavirus shared by labs worldwide, with statistics, metadata, publications, and visualizations at https://bigd.big.ac.cn/ncov
  • CORD-19, the COVID-19 Open Research Dataset of scholarly literature on COVID-19 and related coronaviruses (see the sketch after this list)
  • Other community datasets as required
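As one example of working with these datasets, the sketch below takes a first look at the CORD-19 metadata index with pandas. It assumes a CORD-19 release has already been downloaded or staged on Bridges; metadata.csv is the index file the dataset ships with.

```python
import pandas as pd

# metadata.csv indexes every paper in a CORD-19 release.
meta = pd.read_csv("metadata.csv", low_memory=False)
print(len(meta), "papers indexed")
print(meta[["title", "journal", "publish_time"]].head())
```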

Software Environment

  • CentOS 7.6 (Ubuntu 18.04 on the NVIDIA DGX-2)
  • Interactive access
  • Jupyter, Anaconda, R, and MATLAB, available even on the 12TB, 352-core LM nodes
  • CUDA 10.1
  • NVIDIA GPU Cloud (NGC) containers, available via Singularity
  • Intel, PGI, and GNU compilers
  • Slurm
  • Singularity containers and virtual machines (VMs)
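To tie these pieces together, the sketch below generates and submits a Slurm batch job from Python. The partition and gres names follow the Bridges User Guide (GPU-AI partition; volta16 GPU type on the Apollo 6500 nodes) but should be confirmed there, and train.py stands in for your own workload.

```python
import subprocess
import textwrap

# Write a minimal batch script requesting all 8 GPUs of one Apollo 6500
# node for eight hours, then hand it to sbatch.
job = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH -p GPU-AI
    #SBATCH --gres=gpu:volta16:8
    #SBATCH -t 08:00:00
    python train.py
""")

with open("job.sh", "w") as fh:
    fh.write(job)

subprocess.run(["sbatch", "job.sh"], check=True)
```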