Putting the Gulf Stream on Course with MPP

For years, the sticking point for modeling the North Atlantic has been Cape Hatteras, the bump on the U.S. east coast between North and South Carolina. The Gulf Stream hugs the coast as it comes up from Florida until at Cape Hatteras it veers off on a line, east-to-northeast, toward the open Atlantic. In computerized oceans, however, the Gulf Stream has preferred turning left past Cape Hatteras, clinging to the shoreline rather than taking to the open sea.

"People have suspected grid-size is the problem," explains Bleck, referring to the mesh-like grid that divides the ocean into segments to compute the variables of interest -- temperature, salinity, current direction and speed, etc. With a finer grid -- smaller segments -- the chances improve that the results mirror reality. Until now, however, modeling the entire Atlantic at high enough resolution to get the currents right required a prohibitive amount of computing.

Availability of the CRAY T3D at Pittsburgh provided an opportunity. O'Keefe and Sawdey set themselves the task of adapting Bleck's model, the Miami Isopycnic Coordinate Ocean Model (MICOM), to massively parallel processing (MPP). They used MICOM's grid structure as a natural way to divide the modeling into discrete, independent chunks -- each of which could be parceled out to a separate T3D processor. "This approach requires communication between areas close together," says O'Keefe, "and the T3D mesh network is so fast that data transfer is not a factor."
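The decomposition strategy described above can be sketched in a few lines. This is an illustrative toy, not MICOM's actual code: a 2D grid is split into tiles, one per processor, and each tile needs only the boundary ("halo") values of its immediate neighbors -- the nearest-neighbor communication pattern that maps well onto the T3D's mesh network. The function names and tile layout here are hypothetical.

```python
# Toy sketch of grid decomposition with nearest-neighbor halo exchange.
import numpy as np

def decompose(grid, py, px):
    """Split a 2D field into py x px tiles (dimensions must divide evenly),
    as if parceling the ocean grid out to py*px processors."""
    ny, nx = grid.shape
    ty, tx = ny // py, nx // px
    return [[grid[j*ty:(j+1)*ty, i*tx:(i+1)*tx] for i in range(px)]
            for j in range(py)]

def halo_exchange(tiles, j, i):
    """Collect the boundary rows/columns tile (j, i) needs from its
    north/south/east/west neighbors; basin edges get no halo."""
    py, px = len(tiles), len(tiles[0])
    halos = {}
    if j > 0:      halos["north"] = tiles[j-1][i][-1, :]  # neighbor's bottom row
    if j < py - 1: halos["south"] = tiles[j+1][i][0, :]   # neighbor's top row
    if i > 0:      halos["west"]  = tiles[j][i-1][:, -1]  # neighbor's right column
    if i < px - 1: halos["east"]  = tiles[j][i+1][:, 0]   # neighbor's left column
    return halos

grid = np.arange(64.0).reshape(8, 8)   # toy "ocean" field
tiles = decompose(grid, 2, 2)          # 4 tiles, as if for 4 processors
halos = halo_exchange(tiles, 0, 0)     # corner tile: only south and east neighbors
```

On a real MPP machine each tile lives on its own processor and the halo exchange becomes message passing between neighboring nodes; since only thin boundary strips move, communication stays small relative to computation.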

In October 1994, PSC provided computing time for a dedicated run on 256 of the T3D's 512 processors. With grid-size at 0.08 degrees longitude (roughly six kilometers at mid-latitude), and with the model covering the North Atlantic and extending south of the equator, MICOM ran for 10 days, carrying out 10 months of simulation. The results, captured in a video animation, show that the Gulf Stream immediately begins establishing itself on the right path.
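The quoted grid spacing checks out with a quick back-of-the-envelope conversion. Assuming the standard figure of roughly 111 km per degree at the equator and taking 45 degrees as a representative mid-latitude (neither assumption is stated in the article):

```python
import math

KM_PER_DEG_EQUATOR = 111.32            # approx. length of one degree of longitude at the equator
lat = 45.0                             # a representative mid-latitude
km_per_deg = KM_PER_DEG_EQUATOR * math.cos(math.radians(lat))
spacing_km = 0.08 * km_per_deg         # the model's 0.08-degree grid spacing
# spacing_km comes out near 6.3 km, consistent with "roughly six kilometers"
```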

"This model was an attractive test for MPP," says O'Keefe, "and the T3D has done well." Performance data suggest MICOM will hum along at about 9.6 billion calculations a second (9.6 Gflops) on a fully configured T3D (1024 processors), compared to 4.4 Gflops on a 16-processor CRAY C90, the current top-of-the-line vector system. The availability of a large amount of memory (16 gigabytes) on the T3D, says O'Keefe, is a big factor in achieving this performance: "We've crossed a threshold. This kind of code actually runs faster on MPP than on current vector machines."
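The cited figures imply roughly a 2.2x overall speedup, achieved with many slow processors rather than a few fast ones -- a simple calculation from the numbers above:

```python
t3d_gflops, t3d_pes = 9.6, 1024        # projected full-machine T3D performance
c90_gflops, c90_procs = 4.4, 16        # 16-processor CRAY C90 performance

speedup = t3d_gflops / c90_gflops              # about 2.2x overall
t3d_per_pe = t3d_gflops * 1000 / t3d_pes       # ~9.4 Mflops per T3D processor
c90_per_proc = c90_gflops * 1000 / c90_procs   # 275 Mflops per C90 processor
```

Each C90 vector processor delivers far more than each T3D node; the MPP advantage comes from scaling across many processors with fast interconnect and ample memory.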
