Get access to Bridges

You can get access to Bridges in two ways: apply for an allocation if you are a faculty member, researcher or educator at a US institution, or be added to an existing allocation if you work with a PI who already has one.


Apply for an allocation

We offer three programs to provide access to Bridges. Complete allocation and affiliation information can be found on the allocations page, but in a nutshell:

  • Researchers and educators at US academic or non-profit research institutions can apply for Bridges access through the NSF's XSEDE program. The FAQ below provides more information on how to apply to Bridges through XSEDE. Three types of access are available:
    • Start-up allocations allow you to explore high-performance computing and Bridges. The application process is streamlined, and you can request up to 2500 GPU-hours on Bridges' GPU nodes, up to 50,000 core-hours on Bridges' "regular" (128GB memory) nodes, up to 1000 TB-hours on Bridges' "large" (3 and 12TB memory) nodes, or any combination of those resources. See the allocations page for details on applying.
    • Research allocations provide expanded resource limits when you are ready to scale up your work. See the allocations page for details on applying.
    • Coursework allocations supplement educational activities, from workshops to semester courses, that benefit from the use of high-performance computing. See the allocations page for details on applying.
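As a quick sanity check before submitting, the Start-up caps above can be expressed in a few lines of Python. The cap values come from the list above; the function name and request format are illustrative, and how XSEDE evaluates a combination of resources is not specified here, so this only checks each cap individually:

```python
# Per-resource caps for a Bridges Start-up allocation (from the text above).
STARTUP_CAPS = {
    "gpu_hours": 2500,     # Bridges GPU nodes
    "core_hours": 50_000,  # "regular" 128GB nodes
    "tb_hours": 1000,      # "large" 3TB / 12TB nodes
}

def within_startup_caps(request: dict) -> bool:
    """Return True if every requested amount is within its individual cap."""
    return all(request.get(kind, 0) <= cap for kind, cap in STARTUP_CAPS.items())

print(within_startup_caps({"gpu_hours": 2000, "core_hours": 40_000}))  # True
print(within_startup_caps({"tb_hours": 1500}))                         # False
```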


Get added to an existing allocation

You can be added to an existing allocation by the PI.  You must have an XSEDE Portal account (create one here if you don't already have one); once you do, the PI can submit a request through the Portal to add you to the allocation.  The request must specify all of the resources you should be given access to.

Every Bridges user needs access to Pylon storage, but it is not granted automatically; it must be specifically requested in the Add User process.  

In addition, the PI must specify any other Bridges resources granted to the allocation that you should have access to.  These can include RM nodes, GPU nodes and/or LSM nodes.

Choose your tools

Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data. It is designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. Its richly-connected set of interacting systems offers exceptional flexibility for data analytics, simulation, workflows and gateways, leveraging interactivity, parallel computing, Spark and Hadoop.

A sampling of the Bridges tools that will enable your work includes:

  • Compute nodes with hardware-supported shared memory ranging from 128GB  to 12TB per node to support genomics, machine learning, graph analytics and other fields where partitioning data is impractical 
  • GPU nodes to accelerate applications as diverse as machine learning, image processing and materials science
  • Database nodes to drive gateways and workflows and to support fusion, analytics, integration and data management
  • Webserver nodes to host gateways and provide access to community datasets
  • Data transfer nodes with 10 GigE connections to enable data movement between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure


Get more information

If the following FAQ doesn't answer your questions about what Bridges is and how you can use this unique resource in your research, or if you would like to hear more about Bridges' capabilities, call us at 412-268-4960 or email us.

Frequently asked questions


  • Is Bridges a good fit for my research?

    Bridges was built to facilitate research ranging from traditional HPC areas like astronomy and physics, through emerging fields like genomics, to decision science, natural language processing and the digital humanities.  Bridges could be a good fit for you if:

    • You want to scale your research beyond the limits of your laptop, using familiar software and user environments.
    • You want to collaborate with other researchers with complementary expertise.
    • Your research can take advantage of any of the following:

    • Rich data collections - Rapid access to data collections supports their use by individuals, collaborations and communities.
    • Cross-domain analyses - Concurrent access to datasets from different sources, along with tools for their integration and fusion, enables new kinds of questions.
    • Gateways and workflows - Web portals provide intuitive access to complex workflows that run "behind the scenes".
    • Large coherent memory - Bridges' 3TB and 12TB nodes are ideal for memory-intensive applications, such as genomics and machine learning.
    • In-memory databases - Bridges' large-memory nodes are valuable for in-memory databases, which are important due to their performance advantages.
    • Graph analytics - Bridges' hardware-enabled shared memory nodes execute algorithms for large, nonpartitionable graphs and complex data very efficiently.
    • Optimization and parameter sweeps - Bridges is designed to run large numbers of small to moderate jobs extremely well, making it ideal for large-scale optimization problems.
    • Rich software environments - Robust collections of applications and tools, for example in statistics, machine learning and natural language processing, allow researchers to focus on analysis rather than coding.
    • Data-intensive workflows - Bridges' filesystems and high bandwidth provide strong support for applications that are typically I/O bandwidth-bound.  One example is an analysis that runs best with steps expressed in different programming models, such as data cleaning and summarization with Hadoop-based tools, followed by graph algorithms that run more efficiently with shared memory.
    • Contemporary applications - Applications written in Java, Python, R, MATLAB, SQL, C++, C, Fortran, MPI, OpenACC, CUDA and other popular languages run naturally on Bridges.
  • What is Bridges' hardware architecture?

    View the Bridges virtual tour
    The Bridges virtual tour depicts Bridges' architecture and illustrates features that can be used in various research models.

    Bridges comprises 4 classes of compute nodes, with additional dedicated nodes for databases, webservers and data transfer.  Several types of filesystems with different functions are available.  Bridges' components are interconnected by the Intel Omni-Path Fabric.

    Bridges has 4 classes of compute nodes: 

    • 4 Extreme Shared Memory (ESM) nodes, HP Integrity Superdome X servers with 16 Intel Xeon EX-series CPUs and 12TB of RAM
    • Several tens of Large Shared Memory (LSM) nodes, HP DL580 servers with 4 Intel Xeon EX-series CPUs and 3TB of RAM
    • Many hundreds of Regular Shared Memory (RSM) nodes, each with 2 Intel Xeon EP-series CPUs and 128GB of RAM
    • Several tens of RSM nodes with GPUs (RSM-GPU), each with 2 Intel Xeon EP-series CPUs, 128GB of RAM and either NVIDIA Tesla K80 GPUs (4 GPUs per node) or NVIDIA Tesla P100 GPUs (2 GPUs per node).

    Bridges' database nodes are dual-socket Xeon servers with 128GB of RAM. Some contain solid-state disks to deliver high IOPS for latency-sensitive workloads; others contain banks of hard disk drives for capacity-oriented workloads.

    Bridges' webserver nodes  are dual-socket Xeon servers with 128GB of RAM and are connected to PSC's wide-area network, including XSEDE and commodity Internet. They are implemented in virtual machines to provide security, allow maximum use of Bridges' resources and grant project-specific customization of the web server configuration.

    Bridges' data transfer nodes are dual-socket Xeon servers with 128GB of RAM and 10 GigE connections to PSC's wide-area network, enabling high-performance data transfers between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure.

    Bridges supports a shared parallel filesystem for persistent storage, node-local filesystems and memory filesystems.

    The shared parallel filesystem, named Pylon, is a high-bandwidth, high-capacity, centralized parallel filesystem cross-mounted across Bridges' nodes.  Pylon is modeled on other PSC production filesystems, including the one on the Data Exacell.  It is entirely disk-based, with high-level RAID providing data safety.  Pylon has approximately 10PB of storage and approximately 180GB/s of bandwidth to the system.

    Node-local filesystems are available on each compute node. They provide natural support for Hadoop and other software layers that need local storage; for applications and frameworks that distribute data or are written to shard; and for applications that benefit from local "scratch" storage.  Node-local filesystems also improve bandwidth to Pylon and make its performance more consistent.

    Memory filesystems are supported on Bridges' compute nodes, especially the ESM and LSM nodes.  They provide maximum IOPS and bandwidth to improve the performance of applications such as pipelined workflows, genome sequence assembly and in-memory instantiations of otherwise disk-based databases.
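As a minimal illustration of why memory filesystems benefit pipelined workflows, the sketch below stages an intermediate file on a memory-backed mount. It assumes the standard Linux tmpfs path /dev/shm; the actual memory-filesystem paths on Bridges depend on the system configuration and may differ:

```python
import os
import tempfile

# /dev/shm is the standard Linux tmpfs mount; fall back to ordinary
# temp space if it is not present. Treat the path as illustrative.
scratch_root = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

# Write an intermediate result; on tmpfs this write lands in RAM, not on
# disk, which is what gives memory filesystems their IOPS advantage.
with tempfile.NamedTemporaryFile(dir=scratch_root, delete=False) as f:
    f.write(b"intermediate pipeline data")
    path = f.name

with open(path, "rb") as f:
    data = f.read()

# Memory filesystems are volatile: clean up, and copy anything that must
# persist to Pylon before the job ends.
os.unlink(path)
```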

    Bridges' components are interconnected by the Intel Omni-Path Fabric, which delivers 100Gbps line speed, low latency, excellent scalability and improved tolerance to data errors. A unique two-level "island" topology, designed by PSC, maximizes performance for the intended workloads.  Compute islands provide full bisection bandwidth to applications spanning up to 42 nodes.  Storage islands take advantage of the Intel Omni-Path Fabric to implement multiple paths and provide optimal bandwidth to the Pylon filesystem.  Storage switches are cross-linked to all other storage switches and connect management nodes, database nodes, web server nodes and data transfer nodes.

  • How can I apply?

    If you are faculty at Carnegie Mellon University or the University of Pittsburgh, you can get access to Bridges through the Pittsburgh Research Computing Initiative.  

    If your research is proprietary, you can get access to Bridges through the Corporate Affiliates program.

    If you are a researcher or educator at a US academic or non-profit research institution, you can apply for Bridges access through the NSF's XSEDE program.  To apply through XSEDE, request an allocation on Bridges through the XSEDE User Portal. If you don't already have an XSEDE portal account, you will need to create one before you can apply.

  • When can I apply?

    Applications for time through the Pittsburgh Research Computing Initiative and the Corporate Affiliates program are accepted at any time.

    Requests for Startup XSEDE allocations are accepted at any time.  They are the easiest way to get started with XSEDE resources and are recommended for all new XSEDE users.  See the allocations page for more information.

    Requests for Research XSEDE allocations are accepted four times a year:

    • Mar 15 - Apr 15
    • Jun 15 - Jul 15
    • Sep 15 - Oct 15
    • Dec 15 - Jan 15

    See the allocations page for more information.

    More information on the types of allocations available can be found on the allocations page.

  • How do I prepare an XSEDE allocation request?

    There are detailed instructions on the XSEDE User Portal explaining how to prepare an allocation request.  These resources may be helpful:

    • A video on writing and submitting a successful XSEDE allocation proposal
    • A sample resource request for Bridges
    • Examples of successful allocation requests for other XSEDE resources

  • What do I ask for in an XSEDE allocation request?

    At a minimum, you will request computing time and a storage allocation on Pylon, Bridges' persistent storage system. If you want to use GPU nodes, you must request a GPU allocation.

    See an example of an XSEDE resource request for Bridges.  The "Resource Justification" section may be particularly helpful in quantifying your request.

    Computing time:  You will request computing time on Bridges "Regular memory" nodes, Bridges "Large memory" nodes, Bridges GPU nodes, or any combination, depending on what your application needs. 

    Bridges "Regular memory" nodes are appropriate for applications needing up to 128GB of cache-coherent shared memory. "Large memory" nodes can accommodate applications requiring up to 12TB of cache-coherent shared memory.

    Computing time allocations are given in terms of Service Units (SU).

    If you will use Bridges' "Large Memory" nodes, SUs are defined in terms of memory-hours:

    1TB-hour = 1 SU

    For Bridges' "Regular Memory" nodes, SUs are defined in terms of core-hours:

    1 core-hour = 1 SU

    For Bridges' GPU nodes, SUs are defined in terms of GPU-hours.  Because of the performance difference between the K80 and P100 nodes, the two node types are charged at different rates.

    For Bridges' K80 GPU nodes, 1 GPU-hour = 1 SU.  

    For Bridges' P100 GPU nodes, 1 GPU-hour = 2.5 SU.  
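Putting the charging rules above together, an SU request for a mix of node types is simple arithmetic. The rates come directly from the text; the function itself is only an illustrative sketch:

```python
# SU charge rates per unit of use (from the charging rules above).
SU_RATES = {
    "regular":  1.0,   # 1 core-hour = 1 SU
    "large":    1.0,   # 1 TB-hour   = 1 SU
    "gpu_k80":  1.0,   # 1 GPU-hour  = 1 SU on K80 nodes
    "gpu_p100": 2.5,   # 1 GPU-hour  = 2.5 SU on P100 nodes
}

def su_charge(node_type: str, hours: float) -> float:
    """SUs charged for `hours` of the relevant unit (core-, TB-, or GPU-hours)."""
    return SU_RATES[node_type] * hours

# e.g. 1000 core-hours on regular nodes plus 200 GPU-hours on P100 nodes:
total = su_charge("regular", 1000) + su_charge("gpu_p100", 200)
print(total)  # 1500.0
```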

    Storage allocation: You must also request a storage allocation on Pylon, a parallel filesystem shared across all of Bridges' nodes. The default Pylon allocation is 512GB.  If you need more than that, you should justify it in your allocation request.

    Other: You must also note in your request if you need big data frameworks like Hadoop, or virtual machines (if your research requires persistent databases or webservers, e.g., for gateways or community datasets).

  • How should I estimate the resources I will need for an XSEDE allocation request?

    If you need experience with large-memory HPC to help you estimate the resources your research requires, you can request a Start-up allocation on Bridges for benchmarking.  You can also ask for help from XSEDE's Extended Collaborative Support Service.

    If you have questions about resources for persistent databases, gateways or other types of distributed applications, please contact PSC User Services.  

    See an example of an XSEDE resource request for Bridges.  The "Resource Justification" section may be particularly helpful in quantifying your request.

  • Do I need to make any special requests in addition to computing time on Bridges?

    If your research requires any of the following, be sure to specifically ask for (and justify) them in your allocation request:

    • Big data frameworks like Spark and Hadoop
    • Virtual machines
    • More than 512GB of storage
  • Where can I get more information?

    If you would like to hear more about Bridges' capabilities or discuss how you can take advantage of Bridges in your research, call us at 412-268-4960 or email us.


