First Phase of TeraGrid Goes into Production
PITTSBURGH, January 27, 2004 - The first computing systems of the National Science Foundation’s TeraGrid project are in production mode, making 4.5 teraflops of distributed computing power available to scientists across the country who are conducting research in a wide range of disciplines, from astrophysics to environmental science.
The TeraGrid is a multi-year effort to build and deploy the world’s largest, most comprehensive distributed infrastructure for open scientific research. The TeraGrid also offers storage, visualization, database, and data collection capabilities. Hardware at multiple sites across the country is networked through a 40-gigabit per second backplane — the fastest research network on the planet.
The systems currently in production represent the first of two deployments, with the completed TeraGrid scheduled to provide over 20 teraflops of capability. The phase two hardware, which will add more than 11 teraflops of capacity, was installed in December 2003 and is scheduled to be available to the research community this spring.
“We are pleased to see scientific research being conducted on the initial production TeraGrid system,” said Peter Freeman, head of NSF’s Computer and Information Sciences and Engineering directorate. “Leading-edge supercomputing capabilities are essential to the emerging cyberinfrastructure, and the TeraGrid represents NSF’s commitment to providing high-end, innovative resources.”
The TeraGrid sites are: Argonne National Laboratory; the Center for Advanced Computing Research (CACR) at the California Institute of Technology; Indiana University; the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign; Oak Ridge National Laboratory; the Pittsburgh Supercomputing Center (PSC); Purdue University; the San Diego Supercomputer Center (SDSC) at the University of California, San Diego; and the Texas Advanced Computing Center at The University of Texas at Austin.
“This is an exciting milestone for scientific computing — the TeraGrid is a new concept and there has never been a distributed computing system of its size and scope,” said NCSA interim director Rob Pennington, the TeraGrid site lead for NCSA. “In addition to its immediate value in enabling new science, the TeraGrid project is a tool for the development of a national cyberinfrastructure, and the cooperative relationships forged through this effort provide a framework for future innovation and collaboration.”
“The TeraGrid partners have worked extremely hard during the two-year construction phase of this project and are delighted that this initial phase of what will be an unprecedented level of computing and data resources is now online for the nation’s researchers to use,” said Fran Berman, SDSC director and co-principal investigator of the TeraGrid project. “The TeraGrid is one of the foundations of cyberinfrastructure that will provide even more computing resources later this year.”
“The TeraGrid interoperates across diverse platforms,” said PSC scientific directors Michael Levine and Ralph Roskies in a joint statement, “and linking these platforms has already enabled researchers to do record-breaking calculations that could not be done as effectively at a single site.”
The computing systems that entered production this month consist of more than 800 IBM Itanium-family processors running Linux. NCSA maintains a 2.7-teraflop cluster, installed in spring 2003, and SDSC has a 1.3-teraflop cluster. The 6-teraflop, 3,000-processor HP AlphaServer SC Terascale Computing System (TCS) at PSC is also a component of the TeraGrid infrastructure.
“The launch of the National Science Foundation’s TeraGrid project provides scientists and researchers across the nation with access to unprecedented computational power,” said David Turek, vice president of Deep Computing with IBM. “Working with the NSF, IBM is committed to the continued development of breakthrough Grid technologies that benefit our scientific/technical and commercial customers.”
Allocations for use of the TeraGrid were awarded by the NSF’s Partnerships for Advanced Computational Infrastructure (PACI) last October. Among the first wave of researchers to use the TeraGrid are scientists studying the evolution of the universe, modeling the cleanup of contaminated groundwater, simulating seismic events, and analyzing biomolecular dynamics.
“With the TeraGrid, we can solve a much bigger problem,” said Minsker, whose group studies groundwater cleanup. “It enables us to look at real-world problems that no one has been able to solve before.”
To learn more about the TeraGrid, go to www.teragrid.org.
In the News:
TeraGrid phase one running, Pittsburgh Post-Gazette
Pittsburgh supercomputing center joining powerful national grid, Pittsburgh Post-Gazette
Supercomputer linking with four sites, Pittsburgh Tribune-Review
Pittsburgh Connects to Ultrafast Grid, Pittsburgh Supercomputing Center
The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh together with the Westinghouse Electric Company. It was established in 1986 and is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry.
Sarah Emery Bunn
© Pittsburgh Supercomputing Center.