Science, the XT3 and TeraGrid:

An Interview with PSC Scientific Directors Michael Levine and Ralph Roskies
This interview originally appeared in the June 9 and June 16, 2006, issues of HPCwire. Reprinted with permission.

The Pittsburgh Supercomputing Center (PSC) is well-known for its cutting-edge research and its ability to transform new technologies into useful scientific tools. Within the past year, PSC’s new Cray XT3 supercomputer has been used for some exciting new work and has proved to be one of the most powerful computational resources on the TeraGrid.

HPCwire recently got the opportunity to talk with the two PSC scientific co-directors, Michael Levine and Ralph Roskies, and ask them about new developments at PSC and about what’s in store for the center’s future. In part one of this two-part interview, Roskies and Levine discuss the significance of PSC’s Cray XT3 supercomputer.

PHOTO: PSC scientific directors Michael Levine (l) and Ralph Roskies

hpcwire
PSC’s 10-teraflop Cray XT3, which became a production resource on the TeraGrid last October, was the first Cray XT3 anywhere and is the only one available to NSF researchers. What led you to decide on this system and what advantages does it have as a resource for computational science?
roskies

We have discovered in the past that if we can bring a substantially new technical capability into production we can open up new fields of science. In particular, we seek systems that when used as a whole make it possible to tackle problems that were previously infeasible.

One particular technical strength of the XT3 that attracted us is its interconnect. Like LeMieux, our HP terascale system that preceded it, the XT3 is a tightly coupled system with a very strong interconnect. The XT3 interconnect is a significant advance in interconnect technology since LeMieux, and it’s substantially better than competing systems.

The superior interconnect is a large advantage for projects that demand hundreds or thousands of processors working together. Because of the advanced interconnect, the processors share information much more quickly than they otherwise would, and this makes a very meaningful difference for many of the most demanding kinds of science that can be attacked with supercomputing.

The other feature that attracted us to the XT3 was the excellent balance between processor speed and memory bandwidth that the Opterons display. To realize a larger fraction of peak performance on real scientific applications, one has to be sure that the processors can be supplied with enough operands to keep them busy.
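To see why that balance matters, consider a bandwidth-bound kernel such as the STREAM-style triad sketched below. The code is purely illustrative and is not drawn from any PSC benchmark; the array size is arbitrary, and the 24-bytes-per-2-flops accounting is spelled out in the comments. Because each pair of floating-point operations requires 24 bytes of memory traffic, sustained performance on loops like this is set by memory bandwidth, not by the processor’s peak flop rate.

/* Illustrative only: a STREAM-style triad showing why memory bandwidth,
   not peak flop rate, limits many scientific kernels. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* ~16.8 million doubles, ~134 MB per array: far larger than cache */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* Triad: 2 floating-point operations per iteration, but 24 bytes of
       memory traffic (read b[i], read c[i], write a[i]).  That ratio is
       why sustained speed here tracks memory bandwidth, not peak flops. */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;

    printf("triad: %.2f GB/s, %.2f Gflop/s (spot check a[1] = %.1f)\n",
           24.0 * N / sec / 1e9, 2.0 * N / sec / 1e9, a[1]);

    free(a); free(b); free(c);
    return 0;
}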

hpcwire
On a processor-clock basis, the XT3 is 2.4 times faster than LeMieux, your six-teraflop system, yet reports are that the XT3 boosts performance more than ten-fold on some applications. How is this accomplished?
levine

We’ve run dozens of codes on the XT3 over the past year, and sometimes we’re seeing performance increases of an order of magnitude and more. There are several factors involved in this. First is the interconnect. As Ralph pointed out, the XT3 interconnect is a substantial improvement over LeMieux’s. That factor alone represents about an order of magnitude for large-scale parallel applications. This is over and above the speedup from faster processors.

The XT3 also has better memory bandwidth than LeMieux. The interconnect provides the means for each processor to communicate with other processors. Memory bandwidth is the ability of each processor to communicate with its own local memory. Even correcting for the faster processor speed, the memory bandwidth of the XT3 is 33 percent better than LeMieux’s.

A third factor is the software. LeMieux’s operating system is more intrusive. The operating system in LeMieux and in most clusters resides in each processor and is meant to support that processor as an independent entity. This is good if the objective is for each processor to operate independently, but that’s not our objective.

The features that allow the processors to be independently supported get in the way when you have a large number of processors working together. They take up space in memory. They also make requests to the processor at inopportune times. Depending on the application, the operating system can severely reduce efficiency.

The operating system of the XT3 is designed to avoid both of these problems. It has only what it needs to run calculations. It doesn’t have to support the system as a whole. That’s done by a small fraction of the processors that support the entire machine.
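The effect Levine describes is often called operating-system noise, or jitter. The fragment below is a generic way one might observe it, not PSC’s own methodology or anything specific to the XT3’s system software: it times many repetitions of a fixed quantum of arithmetic and reports the spread. On a node running a full operating system, daemons and interrupts show up as occasional slow quanta; on a lightweight compute kernel the timings stay nearly uniform. The repetition counts are arbitrary.

/* Illustrative sketch only: a crude "fixed work quantum" timing loop of
   the kind often used to observe operating-system noise. */
#include <stdio.h>
#include <time.h>

#define REPS 2000      /* number of timed work quanta (arbitrary) */
#define WORK 100000    /* arithmetic iterations per quantum (arbitrary) */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    double min = 1e9, max = 0.0, sum = 0.0;
    volatile double x = 1.0;   /* volatile so the work isn't optimized away */

    for (int r = 0; r < REPS; r++) {
        double t0 = now_sec();
        for (int i = 0; i < WORK; i++)
            x = x * 1.0000001 + 0.0000001;
        double dt = now_sec() - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
        sum += dt;
    }

    /* A large max/min ratio suggests the quantum was interrupted by the OS. */
    printf("min %.1f us  mean %.1f us  max %.1f us\n",
           min * 1e6, (sum / REPS) * 1e6, max * 1e6);
    return 0;
}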

hpcwire
How does the XT3 as a system differ from Linux clusters, which, at least nominally, offer more capacity per dollar?
roskies

Essentially, it’s what we just talked about. The network between processors performs at a much higher level, and the operating system is better designed to facilitate large-scale parallelism. The fact that there isn’t a full operating system on each node gives you much more reliability. The XT3 is easier to manage as a unified system because it’s designed to operate that way — as opposed to hundreds or thousands of stand-alone processors connected without careful attention to how they interact.

In terms of raw capacity per dollar, it’s certainly true that clusters are less expensive. But you don’t have the advanced interconnect, the manageability or the reliability.

There are projects that fit well with clusters, projects that are loosely coupled, or what can be called “pleasantly parallel” — in effect, a task that breaks down into many individual jobs running independently of each other, without the need for interprocessor communication. For jobs like that, the XT3 isn’t a cost-effective use of resources.

For many important large-scale scientific applications, however, the parallelized parts of the overall task need to closely coordinate and communicate with each other as the computation proceeds. Many of the major projects we’ve worked on at Pittsburgh are of this tightly coupled nature. The storm forecasting work of Kelvin Droegemeier, for example. Earthquake simulation. Molecular dynamics, especially the NAMD application developed in Klaus Schulten’s group, which scales well to a large number of processors, and other kinds of molecular modeling.

Basically, any kind of application that scales efficiently to hundreds or thousands of processors and that requires a high degree of inter-processor communication is going to perform more efficiently on the XT3 by a large factor.
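As a rough illustration of that distinction, the MPI fragment below contrasts the two patterns. The one-dimensional decomposition and the update rule are hypothetical, not drawn from any of the applications mentioned above. In the first loop each process works only on its own data, the “pleasantly parallel” case a commodity cluster handles well; in the second, every step exchanges boundary values with neighboring processes before updating, and it is this pattern whose scaling depends directly on interconnect latency and bandwidth.

/* Hedged sketch: contrasting a "pleasantly parallel" loop with a tightly
   coupled halo exchange.  The 1-D decomposition and update rule are
   hypothetical illustrations, not any of the applications named above. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1024   /* points owned by each rank */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* u[0] and u[LOCAL_N+1] are "halo" cells holding neighbors' boundary values. */
    double u[LOCAL_N + 2], unew[LOCAL_N + 2];
    for (int i = 0; i < LOCAL_N + 2; i++) u[i] = rank;

    /* (1) Pleasantly parallel: each rank transforms only its own data.
           No communication at all, so a cluster with a modest network
           handles this pattern perfectly well. */
    for (int i = 1; i <= LOCAL_N; i++)
        u[i] = u[i] * u[i];

    /* (2) Tightly coupled: every step exchanges boundary values with the
           left and right neighbors before updating.  Here interconnect
           latency and bandwidth directly limit scaling. */
    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    for (int step = 0; step < 100; step++) {
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        for (int i = 1; i <= LOCAL_N; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
        for (int i = 1; i <= LOCAL_N; i++)
            u[i] = unew[i];
    }

    if (rank == 0) printf("done: u[1] = %f on rank 0\n", u[1]);
    MPI_Finalize();
    return 0;
}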

hpcwire
What particular strengths does the XT3 add to the TeraGrid repertoire of resources?
levine

It brings all the advantages we’ve already talked about in terms of interconnect, memory bandwidth, reliability, and operating system and provides a resource for applications that can exploit these advantages. Although the compute portion of the XT3 is specialized for computation in the ways we’ve talked about, the XT3 also has a “public face” carried by Linux nodes that allow us to smoothly integrate the XT3 with other components of the TeraGrid.

The predecessor system, LeMieux, a less evolved version of this tightly coupled architecture, has been a major production resource for NSF and the TeraGrid since 2001. LeMieux demonstrated excellent scaling: the ability to add a large number of processors without substantially degrading per-processor performance. For this reason, researchers with large-scale parallel projects fairly quickly caught on to the advantages of this system, and most of the computing time was devoted to jobs using at least 512 processors, with many using 1,024 processors and more.

The XT3 represents newer, better technology than LeMieux, and it succeeds LeMieux as the best TeraGrid resource for the most demanding highly parallel projects — the kind of projects often referred to as “capability computing.” The ones that stretch the envelope of what can be done in scientific computing.

hpcwire
What scientific results to date have come from the availability of the XT3?
roskies

One of our early successes with the XT3 is Paul Woodward’s work on turbulence. He and his colleagues in Minnesota turned to the XT3 specifically because of its superior interconnect, which for them has enabled interactive steering of turbulent flow simulations in real time. Nobody has done this before.

They want to be able to represent the large-scale effects of small-scale turbulence, a problem that comes up in many kinds of flow, from pipes to internal-combustion engines to atmospheric weather patterns. Their focus is turbulent convection in giant stars. From small-scale runs they can define parameters they can then use with large-scale models.

They demonstrated their ability to do smaller-scale, interactive runs with the XT3 twice last year — at iGrid in San Diego and at SC|05. They relied not only on the XT3’s very fast interconnect, but also on software, called PDIO, that our staff developed. PDIO expands on the basic I/O capabilities of the XT3, making it possible for an application to route data from the XT3 compute sector in real time to remote users on the wide-area network. This makes it possible for Paul and his team to visualize the data live at the other end of the TeraGrid pipe and adjust parameters on the fly to see how they affect the simulation.
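The general shape of that kind of real-time steering loop can be sketched as follows; this is not PDIO, whose actual interfaces are not described here, and the TCP address, port, array size, and “viscosity” parameter are all placeholders. Each timestep the simulation streams its data toward a remote viewer and then checks, without blocking, for an updated parameter sent back, so a change made at the far end takes effect on the next step.

/* Hedged illustration only -- NOT PDIO.  A simulation streams each
   timestep's data to a remote viewer over a plain TCP socket and checks,
   without blocking, for a steering parameter sent back. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define N 4096   /* values streamed per timestep; size is arbitrary */

int main(void)
{
    /* Connect to a hypothetical remote visualization host.  The address
       and port are placeholders, not anything PDIO actually uses. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    double field[N], viscosity = 1.0;   /* "viscosity" is a stand-in steering parameter */
    memset(field, 0, sizeof field);

    for (int step = 0; step < 1000; step++) {
        /* ... advance the simulation one step using the current parameter ... */
        for (int i = 0; i < N; i++)
            field[i] += viscosity * 1e-3;

        /* Stream this step's data toward the remote viewer.
           (Partial-write and error handling omitted for brevity.) */
        (void)write(fd, field, sizeof field);

        /* Non-blocking check for a steering update sent back by the viewer;
           if one arrived, it takes effect on the next timestep. */
        double new_value;
        if (recv(fd, &new_value, sizeof new_value, MSG_DONTWAIT) == (ssize_t)sizeof new_value)
            viscosity = new_value;
    }
    close(fd);
    return 0;
}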

Some of our PSC scientists have also used the XT3 to good effect. Yang Wang, a physicist here, has deployed software he helped develop called LSMS, which performs astonishingly well on the XT3. It sustains more than 8 teraflops on 2,048 processors — 82 percent of theoretical peak. Yang used LSMS for an ab initio quantum calculation of the magnetic and electronic structure of an iron nanoparticle of more than 4,400 atoms. This size of nanoparticle hasn’t been modeled before at the quantum level, and the XT3 makes this possible. Being able to do these calculations at this particle size and larger is going to be important in developing next-generation data-storage technologies.

A couple of our scientists, Troy Wymore and Shawn Brown, used the XT3 for quantum mechanical/molecular mechanics simulation of aldehyde dehydrogenase, a major family of enzymes. They used 900 processors with software called Dynamo, and they looked at proton tunneling effects in the enzyme’s active site. These interactions are involved in a couple of metabolic diseases, and they affect how well chemotherapy drugs work to fight cancer.

There’s also been substantial work in Michael Klein’s group at the University of Pennsylvania, which does molecular modeling using classical and quantum molecular dynamics codes. Their codes need high bandwidth, both interprocessor and to memory, and they’ve found that the XT3 is the best machine available for a large proportion of their work. It has stretched scalability of their codes — by a factor of two with NAMD and also with Car-Parrinello molecular dynamics — and dramatically increased productivity.

hpcwire
Cray, Inc. has undergone personnel changes within the past year. Has that affected their ability to support the XT3 or changed their relation with PSC?
levine

We have been working on integrating the XT3 into the scientific community and into the operational environment at PSC for over two years. We’ve benefited a great deal from cooperation with Cray and also with Sandia National Labs.

The XT3 — as many people know — is the product version of Red Storm, a machine commissioned from Cray by Sandia. Red Storm has some features that are important in a classified environment but that aren’t important to us and aren’t part of the XT3; otherwise it’s the same machine. Apart from those differences, this machine was fundamentally architected by people at Sandia, with much of the detailed design done by Cray.

The XT3 is off to a strong start in the HPC market, with large-scale installations in addition to ours either installed or due soon at other major sites, including Oak Ridge National Laboratory, the United Kingdom’s AWE plc, the Swiss National Supercomputer Center, the Japan Advanced Institute of Science and Technology, the Japan Science and Technology Agency and the Western Australia Supercomputing Program.

The personnel changes at Cray respond to this marketplace success and support a stronger focus on the XT3 and its follow-on products. This improves Cray’s ability to maintain strong relations with all their customers, including PSC, and to support the XT3, and it improves our ability to interact with them. This is quite important because, as with any fundamentally new machine, there’s a great deal of close work that has to go on between the vendor and the early adopters, a role we have played with many systems.

hpcwire
LeMieux, your 3,000-processor, six-teraflop HP system, which came into service in 2001, was the first NSF terascale system and for several years was the most powerful system available to NSF researchers. At soon to be five years old, it’s still one of the most used TeraGrid resources. What are your plans for this system and how much longer can it be useful?
levine

It can be useful for a very long time. It’s a question of how long it will continue to be cost effective. If Moore’s Law holds, the amount of computing you can get per dollar of initial capitalization keeps improving. On a monthly basis, it’s a matter of maintenance costs, and likewise of the amount of computing per watt; power is a large cost factor.

roskies

At some point, it will no longer be cost effective and we will by then have transitioned the users to the XT3. No one will be left hanging.

levine

Technically, LeMieux turned out to be, and continues to be, a very good and very useful machine; it’s not at a breaking point in any serious sense.

hpcwire
PSC has gained a reputation for its ability to take the leap with new technologies and transform them quickly into productive research tools. Going back to the CRAY Y-MP through half-a-dozen systems up to the XT3, you’ve received early, if not the first, models of new systems. What are the advantages of this approach? Are there disadvantages?
roskies

The advantage is the payoff to the scientific community. New machines will be sunsetted soon enough, as determined by the pace of technological development, so if you can get a machine early in its cycle, you can use it longer. The earlier you get it, the more science you can get done in the useful lifetime of that machine.

levine

Also, you bring that capability to the scientific community earlier. You could, of course, wait to introduce any new system into the open research community until it’s more mature. But we can get productive use out of this early period, which means it’s producing science that much sooner. And it allows us to have more influence with the vendors on the course of development of the system and its application to the NSF research community. This has certainly been the case for our involvement with the XT3 at the Sandia stage.

roskies

The disadvantage is that it’s more work for our systems staff than if we simply waited until the bugs were worked out. The machine would be better understood, and it would take less effort to make it available. Of course, we’re a major force in making it better understood, so not only are we improving things for our own users, we’re improving them for everybody else’s XT3 users. Somebody has to discover these bugs; you can’t avoid them.

A benefit to PSC is the cumulative knowledge and experience that our staff gain in the process of birthing new systems, over and over, with various vendors and architectures.

levine

This is a large amount of work, but there’s a learning curve. We’ve learned a great deal about what to look for and how to go about this process, and we have designed our ancillary and support systems to be able to deal with a variety of machines.

For example, the scheduling software that users interact with to send jobs to the XT3 has features we need to make effective use of the machine. Essentially we’re re-using technology that we developed for earlier machines. We have to make it work with this architecture, but the effort of getting it going in the first place was substantial. The fact that we’ve done this before, that we’ve made the investment in educating and training ourselves, puts us in a position where this work makes good sense for us. Doing the shakedown work with a new architecture and introducing it to the scientific community is an efficient use of human resources.

hpcwire
Is there a “PSC” way of being a supercomputing center, and if so, what is it?
roskies

To work with the TeraGrid, the scientific community, and the vendors to maximize scientific output. What we’ve discovered over the years is that making this happen effectively entails working very closely with users, the researchers themselves. An emphasis at PSC that has grown and evolved over time is maintaining a strong staff of people who work closely with our users to coordinate and optimize leading-edge applications, and who also contribute to improving the system’s behavior.

hpcwire
TeraGrid last year embarked on its second five-year round of funding, this time — unlike the first — with the PSC as a major player. What is PSC’s contribution?
roskies

We are actively participating in the TeraGrid in a number of ways. First, we participate directly in its overall direction. I’m on the executive steering committee for the Grid Infrastructure Group, the GIG. Michael is the principal contact for PSC as one of the eight resource-provider sites.

We also have a number of our key staff people committed heavily to TeraGrid work. Sergiu Sanielevici, our Director for Scientific Applications and User Support, is one of our most experienced computational scientists. He is now the TeraGrid area director for User Support, overseeing everything that has to do with helping users who have TeraGrid allocations get their work done on the TeraGrid. He also coordinates the TeraGrid’s ASTA program — Advanced Support for TeraGrid Applications.

Jim Marsteller, who heads our security efforts, leads the TeraGrid effort in security and oversees security policy and implementation. One of our other staffers, Laura McGinnis, has made significant contributions to the TeraGrid accounting system, which — as you can imagine — is quite an undertaking, to keep track of allocations at eight sites with many different systems.

levine

Beyond this, I would add that with the TeraGrid there’s, first, the computational science value to the research community of having a variety of resources and the facility to move between them. Second, there’s the value of the individual components. In the first category, PSC’s contribution is the things Ralph mentioned. In the second, PSC fulfills a unique function in providing two tightly coupled capability machines, LeMieux and the XT3, which between them have provided about 40 percent of overall TeraGrid usage over the last half-year or so.

hpcwire
What do you see as significant challenges in the years ahead for TeraGrid?
levine

Many technical challenges are involved in creating value beyond what the resources offer taken as separate, individual pieces. With all those technical challenges, however, the biggest challenge is not to become overly absorbed in them for their own sake, but to keep our eyes on the prize — that is, to keep the focus clearly on making it easier for scientists to do science.

We also want to expand the provider community, both in size and in the nature of services provided, to include, for example, greater data-handling and archiving capability. And we need to continue the effort to broaden the community of users who can benefit from the TeraGrid, as we’re doing with Science Gateways, creating interfaces that provide easy entry for research communities within a field or discipline that share related scientific goals.

hpcwire
What’s ahead for PSC?
roskies

It’s a very exciting time for computational science. We’re pleased and lucky to be, in a sense, voyeurs — to be able to see wonderful and important scientific accomplishments made possible and happening because the technology is progressing at an astounding pace. To be a computational scientist is to live in very interesting times.

We’re pleased that NSF is pushing ahead toward petascale computing, and we plan to contribute toward making this happen.

Dr. Michael Levine is Professor of Physics at Carnegie Mellon University, specializing in theoretical particle physics. He is also a founder and Co-Scientific Director of the Pittsburgh Supercomputing Center (PSC). He is the author of numerous papers in computational, theoretical, and particle physics. His physics research over the last few years has been in high-order quantum electrodynamics. His earlier work in physics includes a series of papers, written in collaboration with Professor Ralph Roskies, applying symbolic computation methods and computational systems of his own devising to fundamental problems in electrodynamics. Professor Levine initiated Carnegie Mellon’s degree program in Computational Physics and continues to teach courses in that program. In 1984, together with Ralph Roskies and James Kasdorf of Westinghouse Electric Company, he wrote the proposal to the National Science Foundation for what was eventually to become the PSC. As Scientific Director at PSC, he continues to oversee operations, plan its future course, and concern himself with its scientific impact. He also serves as the Associate Provost for Scientific Computing for Carnegie Mellon University.


Dr. Ralph Roskies is Professor of Physics at the University of Pittsburgh and a founder and Co-Scientific Director of the Pittsburgh Supercomputing Center (PSC). He is the author of over 60 papers in theoretical elementary particle physics. In 1984, together with Professor Michael Levine of Carnegie Mellon University and James Kasdorf from Westinghouse, he developed the proposal to the National Science Foundation for what became the PSC. As Scientific Director, Roskies oversees operations, plans the center’s future course, and concerns himself with its scientific impact. The PSC has been a national leader in providing the highest-capability computing to the US national research community. It has pioneered developments in file systems, heterogeneous computing, parallel algorithms and scientific visualization. It currently fields the Terascale Computing System and the first Cray XT3, two of the world’s most powerful academically based computing facilities dedicated to open scientific research. Roskies’ pivotal role in developing and implementing the NSF allocation process has given him a very broad overview of leading computational science and close ties to its most prominent practitioners. He has served as an advisor to and a reviewer of a large number of U.S. and international supercomputing centers.