PSC Receives NSF Award for Bridges Supercomputer

$9.65-Million Supercomputer Will Enable Analysis of Vast Data Sets, Ease Entry for New Research Communities into High Performance Computing

Read the NSF announcement here.

Monday, Nov. 24, 2014

The Pittsburgh Supercomputing Center (PSC) has received a National Science Foundation (NSF) award to create a uniquely capable supercomputer designed to empower new research communities, bring desktop convenience to supercomputing, expand campus access and help researchers who need to tackle vast data work more intuitively. Called Bridges, the new supercomputer will consist of three tiered, memory-intensive resources to serve a wide variety of scientists, including those new to supercomputing and without specialized programming skills.

Bridges will offer new computational capabilities to researchers working in diverse, data-intensive fields such as genomics, the social sciences and the humanities. A $9.65-million NSF grant will fund the acquisition, to begin in October 2015, with a target production date of January 2016. The system will be delivered by HP® based on an architecture designed by PSC, and will feature advanced technology from Intel® and NVIDIA®.

“The name Bridges stems from three computational needs the system will fill for the research community,” says Nick Nystrom, PSC director of strategic applications and principal investigator on the project. “Foremost, Bridges will bring supercomputing to nontraditional users and research communities. Second, its data-intensive architecture will allow high performance computing to be applied effectively to big data. Third, it will bridge supercomputing to university campuses to ease access and provide burst capability.”

“Bridges represents a new approach to supercomputing that helps keep PSC and Carnegie Mellon University at the forefront of high performance computing,” says Subra Suresh, president, Carnegie Mellon University. “It will help researchers tackle and master the new emphasis on data that is now driving many fields.”

“The ease of use planned for Bridges promises to be a game-changer,” says Patrick D. Gallagher, chancellor, University of Pittsburgh. “Among many other applications, we look forward to it helping biomedical scientists here at Pitt and at other universities unravel and understand the vast volume of genomic data currently being generated.”

Bridging to today’s vast data

Bridges represents a new approach to supercomputing, emphasizing research problems that are limited by data movement and analysis in addition to computational performance as measured by floating-point operations per second (“flops”). This shift in emphasis will allow Bridges to better serve new communities of scientists.

“Bridges represents a technological departure that offers users speed when they need it, but powerful ways to handle large data as well,” says Michael Levine, PSC scientific director and professor of physics, Carnegie Mellon University. “In addition to serving traditional supercomputing researchers requiring large memory, Bridges will offer the ability to solve scientific problems that hinge more on analyzing vast amounts of data than on solving differential equations.”

In what could be called “traditional” supercomputing, in fields such as physics, fluid dynamics and cosmology, arithmetic speed is paramount, and tightly coupled calculations span many thousands of computational cores. Bridges targets problems that on other computers are constrained by processors’ limited ability to draw from large amounts of data. Bridges’ large memory will allow those problems to be expressed efficiently, using applications and familiar, high-productivity programming languages that researchers are already using on their desktops.

“Bridges will help expand the capabilities of the NSF-supported computational infrastructure, pushing the frontiers of science forward in biology, the social sciences and other emerging computational fields by exploiting interactive and cloud-based computing paradigms,” says Irene Qualters, division director for advanced cyberinfrastructure at NSF.

Bridges will feature multiple nodes with as much as 12 terabytes each of shared memory, equivalent to unifying the RAM in 1,536 high-end notebook computers. This will enable it to handle the largest memory-intensive problems in important research areas such as genome sequence assembly, machine learning and cybersecurity.
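The 1,536-notebook comparison can be verified with quick arithmetic, assuming (our assumption; the article does not state the per-notebook figure) a high-end notebook of the era carrying 8 GiB of RAM:

```python
# Back-of-the-envelope check: how many notebooks' worth of RAM is 12 TiB
# of shared memory? The 8 GiB-per-notebook figure is an assumption.
NOTEBOOK_RAM_GIB = 8
NODE_MEMORY_TIB = 12

node_memory_gib = NODE_MEMORY_TIB * 1024      # 12 TiB = 12,288 GiB
notebooks = node_memory_gib // NOTEBOOK_RAM_GIB
print(notebooks)  # → 1536, matching the figure in the article
```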

“First and foremost, Bridges is about enabling researchers who’ve outgrown their own computers and campus computing clusters to graduate to supercomputing with a minimum of additional effort,” says Ralph Roskies, PSC scientific director and professor of physics, University of Pittsburgh. “We expect it to empower researchers to focus on their science more than the computing.”

Bridging to new research communities

To help research communities that have not traditionally used supercomputers, Bridges has also been designed for ease of use by scientists without a supercomputing background.

  • Interactivity will provide substantial capacity for interactive, on-demand access of the kind users are accustomed to on their personal computers. Providing interactive access is a disruptive change from the traditional way of using a supercomputer, which entails logging in, manually transferring data, submitting jobs to a batch queuing system, waiting for the jobs’ turn to run and eventually retrieving the results. With interactivity, researchers can test their hypotheses immediately.
  • Gateways and tools for gateway building will provide easy-to-use access to Bridges’ high performance computing and data resources, allowing users to launch jobs, orchestrate complex workflows and manage data from their web browsers—all without having to learn to program supercomputers.
  • Virtualization will allow users who have developed or acquired environments for solving their computational problems on their local computers to import their entire environments into Bridges, as if they were still using their own systems.

Together, Nystrom adds, these innovations will enable researchers to use the system at their own levels of computing expertise, ranging from new users who wish to have a PC-like experience without having to learn parallel programming to supercomputing experts wanting to tailor specific applications to their needs.

Bridges will be a part of the NSF’s XSEDE nationwide network of supercomputing resources, a facet of NSF’s Advanced Cyberinfrastructure (ACI). Bridges will complement other resources in the ACI ecosystem to provide effective high performance computing (HPC) resources for all fields of science.

Bridging to university campuses

The Bridges project will include a concerted effort to address surges of computational need at universities. While many universities have their own computational resources, including “clusters” that bank many commodity processors to achieve high levels of capacity, user demand on these systems can vary greatly day to day and hour to hour. At peak times, users often swamp those resources.

A pilot project with Temple University will connect that campus with Bridges to give its researchers additional computing capacity at times of unusually high demand. Thanks to hardware and software innovations developed at PSC and by its industrial partners (see “Bridging to new technologies,” below), Bridges will be able to supply Temple with additional computational capacity on the fly, as demand requires.

Bridging to new technologies

Bridges’ major components are:

  • Several nodes with 12 terabytes of shared memory each for the largest-memory applications in, for example, genomics, machine learning and graph analytics.
  • Tens of nodes with 3 terabytes of shared memory each for many other memory-intensive applications, virtualization and interactivity, including large-scale visualization.
  • Hundreds of nodes with 128 gigabytes of shared memory apiece for a variety of uses such as executing components of workflows, additional interactivity, Hadoop and capacity computing.
  • Dedicated, specialized nodes for persistent databases to support sophisticated data management and data-driven workflows, along with Web server nodes to support distributed applications.
  • NVIDIA GPUs that will enable Bridges to exploit accelerated applications and libraries for computationally intensive processing.
  • A shared, flash-based array to accelerate Hadoop-based applications and databases.
  • A shared, parallel filesystem, together with distributed, node-local storage, to provide maximum flexibility and performance for data-intensive applications.

“Bridges will achieve unprecedented flexibility through innovative scheduling of its data-intensive components,” says J. Ray Scott, PSC director of systems and operations and co-principal investigator. “Together with its large memory, these innovations will allow us to accommodate interactivity and heterogeneous workflows as well as regular HPC jobs.”

“We are excited that PSC has chosen the latest Intel Xeon® processors and the Intel Omni-Path® Architecture fabric to power their Bridges supercomputer,” says Charles Wuischpard, vice president of Intel’s Data Center Group and general manager of HPC and Workstations. “Their innovative system architecture and usage model will help democratize high performance computing by bringing the benefits of supercomputing to new disciplines in traditional and social sciences.”

Bridges’ capabilities stem from a number of technological innovations, developed at PSC and elsewhere, that will see some of their first applications in the Bridges system:

  • Hardware and software “building blocks” developed at PSC through its Data Exacell pilot project, funded by NSF’s Data Infrastructure Building Blocks (DIBBs) program, will enable convenient, high performance data movement between Bridges and users, campuses and instruments.
  • Bridges will be composed of four types of HP servers integrated into a high performance compute cluster:
    • HP Apollo® 6000 Servers (some with integrated GPGPUs), providing scalable performance for interactivity and capacity computing.
    • HP ProLiant® DL580 Systems, which will enable memory-intensive applications, virtualization, and interactivity, including large-scale visualization.
    • HP DragonHawk® mission-critical shared-memory systems, which will provide maximum internal bandwidth and capacity for the most memory-intensive applications.
    • HP Storage Servers will support the PSC Data Exacell data movement.

“While the research demands for high performance computing resources are growing, they are also expanding to a mix of compute-centric, data-centric, and interaction-centric workloads,” said Scott Misage, general manager, High Performance Computing, HP. “HP has the HPC leadership and experience, breadth of portfolio, services and support to partner with PSC in delivering the high performance computing solution that will empower its varied research communities to achieve new scientific breakthroughs.”

  • The Intel Omni-Path Architecture fabric will provide Bridges with the highest-bandwidth internal network and valuable optimizations for MPI and other communications, and will give NSF users early access to this important new technology on Intel Xeon-based servers. “The Intel Omni-Path Architecture will help PSC and the Bridges system provide a new level of performance and flexibility for Xeon-based solutions,” says Barry Davis, Fabrics GM at Intel’s Technical Computing Group.
  • Next generation NVIDIA Tesla® GPUs will accelerate a wide range of research through a variety of existing accelerated applications, drop-in libraries, easy-to-use OpenACC directives and the CUDA parallel programming model. “NVIDIA GPU accelerators enable innovation and discovery across a broad range of scientific domains,” says Sumit Gupta, general manager of Accelerated Computing at NVIDIA. “Providing breakthrough performance, large, ultra-fast memory and high memory bandwidth, they enable researchers to quickly crunch through massive volumes of data generated by their scientific applications.”

PSC will hold a launch event for Bridges in January 2016. Check this page for more information and updates on the project.