Pittsburgh Research Computing Initiative
Exclusive access to Bridges for CMU and Pitt faculty and staff
Bridges is a unique high-performance computing (HPC) system designed to enable applications – including those that have not traditionally used HPC – to integrate HPC with Big Data and artificial intelligence, and to help researchers tackle Big Data challenges more intuitively. Through the Pittsburgh Research Computing Initiative, faculty and staff at CMU and Pitt who haven't used Bridges yet can try it free of charge. Applying is simple. We hope that you will try Bridges and see how valuable it can be to your research. And when you do, you can expand your allocation, still at no charge.
Is Bridges right for you?
Capability: We are often asked whether a given problem is “big enough for a supercomputer.” Bridges is a new kind of supercomputer – merging HPC capability with cloud-like flexibility – designed to do many tasks at once. If your analytics or simulations exceed the capacity of your current resources, such as laptops, servers, or local clusters, then Bridges is probably a good fit.
Applications: Many popular applications for simulation, machine learning and data analytics are already installed and running on Bridges.
No charge: Access to Bridges is available at no charge to the open research community. There is no fee for this starter program, nor for the larger allocations available to those who need even greater resources.
Support: In most cases, applications that run on Linux simply work on Bridges. If you would like help in getting started, PSC has front-line consultants plus domain specialists in the physical sciences, computer science, engineering, and other fields who can help.
Bridges has the tools you need
Bridges prioritizes usability and flexibility. It supports a high degree of interactivity, science gateways, and a very flexible user environment. Widely-used languages and frameworks such as Hadoop, Spark, Python, R, MATLAB, and Java benefit transparently from Bridges’ unique hardware. Virtualization and containers enable reproducibility, support interoperation with clouds and enable hosting web services, NoSQL databases and application-specific environments.
Bridges features large memory – 4 compute nodes with 12 TB of RAM, 42 with 3 TB, and 800 with 128 GB – and powerful new Intel® Xeon CPUs and NVIDIA Tesla K80 and P100 GPUs for exceptional performance. Bridges also includes database and web servers to support gateways, collaboration, and data management, and it includes 10 PB of usable, shared parallel storage, plus local storage on each of its compute nodes. Bridges is the first production deployment of the Intel Omni-Path Architecture (OPA) Fabric, which interconnects its compute, storage, and utility nodes.
A complete description of Bridges’ configuration is available at https://www.psc.edu/bridges/user-guide/system-configuration
Customize your allocation to fit your research
A Bridges allocation can include storage, 128GB nodes, 3TB and 12TB nodes, GPUs, or any combination of those. Choose exactly what you need. Under the Pittsburgh Research Computing Initiative, you can request:
Bridges Regular: Request up to 50,000 core-hours on Bridges’ RM (28-core, 128GB RAM) nodes
Bridges Large: Request up to 1,000 TB-hours on Bridges’ LM (64- to 80-core, 3TB RAM) and ESM (256- to 320-core, 12TB RAM) nodes
Bridges GPU: Request up to 250 GPU-hours on Bridges’ GPU (NVIDIA K80 or P100) nodes
Bridges Pylon: Request up to 1 TB of storage on Bridges’ persistent file systems. Storage is not provided independently; it must accompany a corresponding request for computing time.
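The allocation caps above use different units per node type (core-hours, TB-hours, GPU-hours). As a rough illustration only – this assumes a simple billing model of resource units × hours, not an official PSC accounting formula – here is how each cap translates into wall-clock time on a single node or device:

```python
# Rough conversion of the Initiative's allocation caps into wall-clock
# time, under an ASSUMED billing model (not an official PSC formula):
#   RM  nodes: core-hours = cores in use x hours
#   LM  nodes: TB-hours   = TB of RAM in use x hours
#   GPU nodes: GPU-hours  = GPUs in use x hours

def node_hours(allocation_units: float, units_per_node_hour: float) -> float:
    """Hours of exclusive use of one node/device that an allocation buys."""
    return allocation_units / units_per_node_hour

# 50,000 core-hours on a 28-core RM node:
rm = node_hours(50_000, 28)    # roughly 1,786 node-hours

# 1,000 TB-hours on a 3 TB LM node:
lm = node_hours(1_000, 3)      # roughly 333 node-hours

# 250 GPU-hours used one GPU at a time:
gpu = node_hours(250, 1)       # 250 single-GPU hours

print(f"RM:  {rm:.0f} node-hours")
print(f"LM:  {lm:.0f} node-hours")
print(f"GPU: {gpu:.0f} single-GPU hours")
```

In other words, a maximal starter allocation buys on the order of two months of continuous exclusive use of one RM node, or about two weeks of one LM node.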
In the first year of this initiative, we plan to make available up to 4M core-hours on RM nodes, 34k TB-hours on LM nodes, 25k GPU-hours on GPU nodes, and 90 TB of storage.
To offer access as broadly as possible, each PI is limited to one request. Introductory allocations run for up to one year.
Applying is simple
Fill out a simple form and one of our consultants will be in touch with account and access details.
Your initial allocation is only the start