Bridges FAQ

If you can't find the answer to your question here, please check the Bridges User Guide or email PSC User Services.


How to apply


  • Is Bridges a good fit for my research?

    Bridges is being built to facilitate research ranging from traditional HPC areas like astronomy and physics, through emerging fields like genomics, to decision science, natural language processing and digital humanities. Bridges could be a good fit for you if:

    You want to scale your research beyond the limits of your laptop, using familiar software and user environments.

    You want to collaborate with other researchers who have complementary expertise.

    Your research can take advantage of any of the following:

    • Rich data collections - Rapid access to data collections will support their use by individuals, collaborations and communities.
    • Cross-domain analyses - Concurrent access to datasets from different sources, along with tools for their integration and fusion, will enable new kinds of questions.
    • Gateways and workflows - Web portals will provide intuitive access to complex workflows that run "behind the scenes". 
    • Large coherent memory - Bridges' 3TB and 12TB nodes will be ideal for memory-intensive applications, such as genomics and machine learning.
    • In-memory databases  - Bridges' large-memory nodes will be valuable for in-memory databases, which are important due to their performance advantages.
    • Graph analytics - Bridges' hardware-enabled shared memory nodes will execute algorithms for large, nonpartitionable graphs and complex data very efficiently.
    • Optimization and parameter sweeps - Bridges is designed to run large numbers of small to moderate jobs extremely well, making it ideal for large-scale optimization problems.
    • Rich software environments - Robust collections of applications and tools, for example in statistics, machine learning and natural language processing, will allow researchers to focus on analysis rather than coding. 
    • Data-intensive workflows - Bridges' filesystems and high bandwidth will provide strong support for applications that are typically I/O bandwidth-bound.  One example is an analysis that runs best with steps expressed in different programming models, such as data cleaning and summarization with Hadoop-based tools, followed by graph algorithms that run more efficiently with shared memory. 
    • Contemporary applications - Applications written in Java, Python, R, MATLAB, SQL, C++, C, Fortran, MPI, OpenACC, CUDA and other popular languages will run naturally on Bridges.
  • What is Bridges' hardware architecture?

    View the Bridges virtual tour
    The Bridges virtual tour depicts Bridges' architecture and illustrates features that can be used in various research models.

    Bridges comprises 4 classes of compute nodes, with additional dedicated nodes for databases, web servers and data transfer. Several types of filesystems with different functions will be available. Bridges' components will be interconnected by the Intel Omni-Path Fabric.

    Bridges has 4 classes of compute nodes: 

    • 4 Extreme Shared Memory (ESM) nodes, HP Integrity Superdome X servers with 16 Intel Xeon EX-series CPUs and 12TB of RAM
    • Several tens of Large Shared Memory (LSM) nodes, HP DL580 servers with 4 Intel Xeon EX-series CPUs and 3TB of RAM
    • Many hundreds of Regular Shared Memory (RSM) nodes, each with 2 Intel Xeon EP-series CPUs and 128GB of RAM
    • Several tens of RSM nodes with GPUs (RSM-GPU), each with 2 Intel Xeon EP-series CPUs, 128GB of RAM and either NVIDIA Tesla K80 GPUs (4 GPUs per node) or NVIDIA Tesla P100 GPUs (2 GPUs per node).

    Bridges' database nodes will be dual-socket Xeon servers with 128GB of RAM. Some will contain solid-state disks to deliver high IOPS for latency-sensitive workloads, and others will contain banks of hard disk drives for capacity-oriented workloads.

    Bridges' webserver nodes will be dual-socket Xeon servers with 128GB of RAM, connected to PSC's wide-area network, including XSEDE and the commodity Internet. They will be implemented as virtual machines to provide security, allow maximum use of Bridges' resources and permit project-specific customization of the web server configuration.

    Bridges' data transfer nodes will be dual-socket Xeon servers with 128GB of RAM and 10GigE connections to PSC's wide-area network, enabling high-performance data transfers between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure.

    Bridges will support a shared parallel filesystem for persistent storage, node-local filesystems and memory filesystems.

    The shared parallel filesystem, named Pylon, will be a high-bandwidth, high-capacity, centralized parallel filesystem cross-mounted across Bridges' nodes. Pylon is modeled on other PSC production filesystems, including the one on the Data Exacell. It is entirely disk-based, with high-level RAID providing data safety. Pylon will have approximately 10PB of storage and approximately 180GB/s of bandwidth to the system.

    Node-local filesystems will be available on each compute node. They will provide natural support for Hadoop and the software layers that need it; for applications and frameworks that distribute data or are written to shard; and for applications that benefit from local "scratch" storage. By offloading I/O, node-local filesystems will also improve bandwidth and performance consistency for Pylon.

    Memory filesystems will be supported on Bridges' compute nodes, especially the ESM and LSM nodes. They will provide maximum IOPS and bandwidth to improve the performance of applications such as pipelined workflows, genome sequence assembly and in-memory instantiations of otherwise disk-based databases.

    Bridges' components will be interconnected by the Intel Omni-Path Fabric, which delivers 100Gbps line speed, low latency, excellent scalability and improved tolerance to data errors. A unique two-level "island" topology, designed by PSC, will maximize performance for the intended workloads. Compute islands will provide full bisection-bandwidth communication performance to applications spanning up to 42 nodes. Storage islands will take advantage of the Intel Omni-Path Fabric to implement multiple paths and provide optimal bandwidth to the Pylon filesystem. Storage switches will be cross-linked to all other storage switches and will connect management nodes, database nodes, web server nodes and data transfer nodes.

  • When can I apply?

    Requests for Startup allocations are accepted at any time. They are the easiest way to get started with XSEDE resources and are recommended for all new XSEDE users. See the XSEDE User Portal for more information.

    Requests for Research allocations are accepted four times a year:

    • Mar 15 - Apr 15
    • Jun 15 - Jul 15
    • Sep 15 - Oct 15
    • Dec 15 - Jan 15

    See the XSEDE User Portal for more information.

  • How can I apply?

    Request an allocation on Bridges through the XSEDE User Portal. If you don't already have an XSEDE portal account, you will need to create one before you can apply.

  • How do I prepare an allocation request?

    There are detailed instructions on the XSEDE User Portal explaining how to prepare an allocation request. These resources may be helpful:

    • A video on writing and submitting a successful XSEDE allocation proposal
    • A sample resource request for Bridges
    • Examples of successful allocation requests for other XSEDE resources

  • What do I ask for in an allocation request?

    At a minimum, you will request computing time and a storage allocation on Pylon, Bridges' persistent storage system.  If you want to use GPU nodes, you must request a GPU allocation.

    See an example of an XSEDE resource request for Bridges. The "Resource Justification" section may be particularly helpful in quantifying your request.

    Computing time: You will request computing time on Bridges' "Regular memory" nodes, Bridges' "Large memory" nodes, Bridges' GPU nodes, or any combination, depending on what your application needs.

    Bridges "Regular memory" nodes are appropriate for applications needing up to 128GB of cache-coherent shared memory. "Large memory" nodes can accommodate applications requiring up to 12TB of cache-coherent shared memory.

    Computing time allocations are given in terms of Service Units (SU).

    If you will use Bridges' "Large Memory" nodes, SUs are defined in terms of memory-hours:

    1GB-hour = 1 SU

    For Bridges' "Regular Memory" nodes, SUs are defined in terms of core-hours:

    1 core-hour = 1 SU

    For Bridges' GPU nodes, SUs are defined in terms of GPU-hours. Because of the performance difference between the K80 and P100 nodes, the charges for the two types of nodes differ.

    For Bridges' K80 GPU nodes, 1 GPU-hour = 1 SU.  

    For Bridges' P100 GPU nodes, 1 GPU-hour = 2.5 SU.  
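
    To make the accounting concrete, here is a minimal Python sketch of the charges described above. The function names are hypothetical; only the rates (1 SU per GB-hour, 1 SU per core-hour, and 1 or 2.5 SUs per GPU-hour for K80 and P100) come from this FAQ:

        # SU rates as stated in this FAQ; function names are illustrative.
        GPU_RATES = {"K80": 1.0, "P100": 2.5}  # SUs per GPU-hour

        def large_memory_sus(memory_gb: float, hours: float) -> float:
            """Large Memory nodes: 1 GB-hour = 1 SU."""
            return memory_gb * hours

        def regular_memory_sus(cores: int, hours: float) -> float:
            """Regular Memory nodes: 1 core-hour = 1 SU."""
            return cores * hours

        def gpu_sus(gpu_type: str, gpus: int, hours: float) -> float:
            """GPU nodes: charged per GPU-hour at a type-specific rate."""
            return GPU_RATES[gpu_type] * gpus * hours

        # A 3TB (3000GB) Large Memory job running for 10 hours:
        print(large_memory_sus(3000, 10))  # 30000.0 SUs
        # 4 K80 GPUs vs. 2 P100 GPUs, each for 24 hours:
        print(gpu_sus("K80", 4, 24))       # 96.0 SUs
        print(gpu_sus("P100", 2, 24))      # 120.0 SUs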

    Storage allocation: You must also request a storage allocation on Pylon, a parallel filesystem shared across all of Bridges' nodes. The default Pylon allocation is 512GB.  If you need more than that, you should justify it in your allocation request.

    Other: You must also note in your request if you need big data frameworks like Hadoop, or virtual machines (e.g., if your research requires persistent databases or web servers for gateways or community datasets).

  • How should I estimate the resources I will need?

    If you need experience with large-memory HPC to help you estimate the resources your research requires, you can request a Startup allocation for benchmarking. You can also ask for help from XSEDE's Extended Collaborative Support Service.

    If you have questions about resources for persistent databases, gateways or other types of distributed applications, please contact PSC User Services.  

    See an example of an XSEDE resource request for Bridges. The "Resource Justification" section may be particularly helpful in quantifying your request.
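
    As a purely hypothetical illustration of quantifying a request, the Python sketch below combines made-up run counts, sizes and times with the SU rates given above; every input value is a placeholder, not a recommendation:

        # Back-of-the-envelope allocation estimate; all inputs hypothetical.
        runs = 200               # planned Regular Memory production runs
        cores_per_run = 32       # cores per run
        hours_per_run = 8        # wall time per run

        benchmark_runs = 10      # Large Memory benchmarking jobs
        memory_gb = 3000         # a 3TB node, in GB
        benchmark_hours = 2      # wall time per benchmark

        regular_sus = runs * cores_per_run * hours_per_run        # 1 SU per core-hour
        large_sus = benchmark_runs * memory_gb * benchmark_hours  # 1 SU per GB-hour

        print(f"Regular Memory request: {regular_sus} SUs")  # 51200
        print(f"Large Memory request:   {large_sus} SUs")    # 60000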

  • Do I need to make any special requests in addition to computing time on Bridges?

    If your research requires any of the following, be sure to specifically ask for (and justify) them in your allocation request:

    • Big data frameworks like Spark and Hadoop
    • Virtual machines
    • More than 512GB of storage
  • Where can I get more information?

    If you would like to hear more about Bridges' capabilities or discuss how you can take advantage of Bridges in your research, call us at 412-268-4960 or email PSC User Services.


Account administration


  • How do I add users to my grant?

    Adding users to (or removing them from) an active XSEDE account is done through the XSEDE User Portal. Note that it may take up to 48 hours for a user to be added to your Bridges account after you have submitted a request through XSEDE.
     See instructions here

  • How do I move SUs between Bridges Large, Bridges Regular and Bridges GPU?

    Request a transfer of SUs through the XSEDE User Portal.
     See instructions here

  • How do I request more SUs or more disk space? (In other words, how do I request a Supplement?)

    Submit a request for supplemental Service Units through the XSEDE User Portal.
     See instructions here


