Red Storm Comes to Pittsburgh

LeMieux, PSC’s terascale computing system, has been an impressive performer for three years, but PSC will soon have a Red Storm with even more impressive computing credentials.

PSC WILL ACQUIRE AND INSTALL THE LATEST, MOST ADVANCED CRAY SYSTEM

On September 29, NSF announced a $9.7 million award to PSC to acquire and install the newest high-performance system from Cray Inc. Called Red Storm by Cray, the new system has processors twice as powerful as LeMieux's, along with a state-of-the-art internal network that allows the processors to communicate and share data more than 10 times faster than any similar system.

[Photo: Jim Kasdorf, PSC director of special projects]

PSC’s Red Storm will comprise 2,000 AMD Opteron processors and have a peak performance of nearly 10 teraflops — 10 trillion calculations per second. It will be the first prototype of this highly capable system to be made available to NSF scientists and engineers. PSC will evaluate this innovative architecture on representative applications, which will include blood-flow modeling, protein simulations, storm forecasting, global climate modeling, and simulations of earthquake ground vibration.
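How do 2,000 processors yield that figure? As a rough check, assume for illustration 2.4 GHz Opterons that each complete two floating-point operations per clock cycle (the announcement gives only the totals, so the per-processor numbers here are our assumption):

    2,000 processors × 2 flops/cycle × 2.4 billion cycles/second ≈ 9.6 trillion flops

which matches the quoted "nearly 10 teraflops" of peak performance.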

Because the network is an integral design feature of the system, Red Storm will occupy much less floor space: about as much as a spacious living room (12 x 28 feet), compared to the basketball-court-sized area LeMieux requires.

PSC’s Red Storm will employ many of the same technologies as a larger, 40 teraflop Red Storm now being installed at Sandia National Laboratories in Albuquerque. Although that system has complex features required for classified research — unnecessary in PSC’s open research environment — PSC and Sandia will pool their knowledge and experience with Red Storm to maximize its productivity as a scientific resource.

People are Talking About Red Storm

[Image: Artist's rendering of PSC's Red Storm system]

“The Red Storm system in Pittsburgh will enable researchers to explore the limits of high-performance computing and to demonstrate the potential of this architecture for a wide range of scientific applications,” said Peter Freeman, head of NSF’s Computer and Information Science and Engineering directorate. “The system will complement other systems already provided by NSF to the national community and will strengthen the growing high-end computing partnership between NSF and the Department of Energy.”

“We’re extremely gratified to be able to introduce Red Storm for the NSF,” said PSC scientific directors Michael Levine and Ralph Roskies in a joint statement. “PSC has unmatched experience in deploying new systems for the national research community. Going back to the CRAY Y-MP in 1990, we have installed over a half-dozen first and early systems of diverse architectures.”

“Cray is very pleased to partner with PSC and the NSF to deliver a system, built from the ground up for high-end computing, to the broader academic research community,” said Peter Ungaro, vice president of sales and marketing for Cray Inc. “Bringing the Red Storm system to PSC will provide researchers with incredibly high bandwidth and usability while leveraging the best in microprocessor technology and price/performance. We are excited to imagine how this Cray technology will be used to push the bounds of science.”

Red Storm & LeMieux: Star "Scaling" Performers

[Graph: LeMieux scaling]

Because of its superior interprocessor communication, Red Storm will provide a powerful platform for research applications designed for efficient "scaling": using hundreds or thousands of processors simultaneously on the same problem. It will succeed LeMieux as the prime NSF resource for the most complex, demanding projects in computational science and engineering. But Red Storm has a tough act to follow. LeMieux, NSF's first terascale system, has fulfilled this role extremely well. For 2003, it provided more than 60 percent of the computing time used for NSF science and engineering research.
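To make "scaling" concrete, here is a minimal sketch in C using MPI, the message-passing library standard on systems of this class. The program is ours, for illustration only, not PSC code; it estimates pi by splitting one fixed-size numerical integration across however many processors a job is given:

    /* scale_pi.c: a minimal illustration of scaling with MPI.
     * Each processor (rank) integrates its share of 4/(1+x^2) on [0,1];
     * one collective reduction combines the partial sums on rank 0.
     * Build: mpicc scale_pi.c -o scale_pi
     * Run:   mpirun -np 1024 ./scale_pi   (same problem, more processors)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const long n = 100000000L;  /* total intervals: the problem size stays fixed */
        int rank, size;
        long i;
        double h, x, local = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        h = 1.0 / (double)n;
        /* Each of the 'size' processors takes every size-th interval, so
         * doubling the processor count roughly halves each one's work. */
        for (i = rank; i < n; i += size) {
            x = h * ((double)i + 0.5);
            local += 4.0 / (1.0 + x * x);
        }
        local *= h;

        /* The partial sums meet in one collective call; on machines like
         * Red Storm the cost of steps like this is set by the internal
         * network, which is why interprocessor communication matters. */
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi ~= %.12f using %d processors\n", pi, size);

        MPI_Finalize();
        return 0;
    }

How well the time to completion drops as the processor count grows is exactly what the scaling statistics below measure.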

Usage of LeMieux also reflects that PSC training and workshops on "scaling" have had a solid payoff. Good scaling requires careful programming, and PSC's workshops "Scaling to New Heights" and "Terascale Code Development" have trained researchers in these techniques.

LeMieux has spectacular scaling credentials, as the graph above shows. For the past year, more than 50 percent of LeMieux's computing hours have gone to jobs using more than 512 processors, and from May through August more than 50 percent went to jobs using more than 1,024 processors. This shows that many PSC researchers have learned the tools of scaling. It is the most impressive use of massive parallelism among U.S. supercomputing centers.

TeraGrid Goes Live

Like LeMieux, PSC’s Red Storm will be integrated into the TeraGrid, a multi-year NSF effort to build and deploy the world’s largest, most comprehensive distributed infrastructure for open scientific research. In January the first phase of the TeraGrid entered production, and in October the full system came online, making 40 teraflops of distributed computing power available to scientists across the country.

WITH THE GRID, SCIENCE HAS BECOME GLOBAL

“We are pleased to see scientific research being conducted on the initial production TeraGrid system,” said Peter Freeman, head of NSF’s Computer and Information Science and Engineering directorate. “Leading-edge supercomputing capabilities are essential to the emerging cyberinfrastructure, and the TeraGrid represents NSF’s commitment to providing high-end, innovative resources.”

The TeraGrid offers storage, visualization, database, and data-collection capabilities. Hardware at multiple sites is networked through a 40-gigabit-per-second backplane, the fastest research network on the planet. This Chicago-Los Angeles backplane links with Pittsburgh via a 30-gigabit-per-second light pipeline.
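To give a sense of that bandwidth, here is a back-of-the-envelope figure of ours (not from the TeraGrid announcements): at 30 gigabits per second, moving a one-terabyte dataset between Pittsburgh and the backplane takes roughly

    (8 × 10^12 bits) ÷ (30 × 10^9 bits/second) ≈ 270 seconds

about four and a half minutes, fast enough to treat machines spread across the country as parts of a single resource.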

One of the most impressive feats of Grid computing to date has been the TeraGyroid project (see "Ketchup on the Grid with Joysticks"), which linked Grid resources on two continents and relied heavily on LeMieux as well as other TeraGrid sites.

“The TeraGyroid Project exemplifies what’s possible with Grid technologies,” said Rick Stevens of Argonne National Laboratory and the University of Chicago, TeraGrid project director. “It’s a major success for the NSF vision of integrated national cyberinfrastructure, and it helps us to appreciate that — just as the economy is global — with the Grid, science too has become global.”
