This spring, for the first time, forecasts correctly predicted the details of thunderstorms 24 hours in advance

If anything is certain in 2005 — not counting death and taxes — it’s that we’re at the mercy of forces we don’t control. Despite incredible advances in understanding nature — leading to amazing technologies our forebears couldn’t imagine — our planet still unleashes furious energies that devastate communities and lives.

Even before Katrina, the U.S. loss due to extreme weather — hurricanes, floods, winter storms, tornadoes — averaged $13 billion annually. The human cost — nearly 1,000 people each year — is incalculable. Would better forecasting make a difference? No doubt. More to the point, is better forecasting possible?

PHOTO: Kelvin Droegemeier

Kelvin Droegemeier, University of Oklahoma, Norman. “For nearly 20 years,” says Droegemeier, “PSC has provided outstanding personalized support to large and challenging projects such as this spring program.”

You bet, says Kelvin Droegemeier, who directs the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma, Norman. Take thunderstorms, the nasty ones, with rotating updrafts — called supercells. They surge across the Great Plains each spring with the potential to spawn deadly tornadoes. How much would it be worth to know six hours in advance — instead of, as with current forecasting, a half-hour to an hour — that one of these storms is headed your way? And to have not just an ambiguous “storm warning” but precise information about when and where it will hit, how severe it is, how long it will last?

“We want to be able to say,” says Droegemeier, “that over Pittsburgh this afternoon at 3:30 there’ll be a thunderstorm with 30 mile-per-hour wind, golfball-sized hail, two-and-a-half inches of rain, and it will last 10 minutes, and to give you that forecast six hours in advance.”

Since the early 1990s, CAPS has taken several strides toward demonstrating that, with sufficient resources in data-gathering and computing, it’s possible to do this. This spring, they took another stride. In a major, one-of-a-kind collaboration with NOAA (the National Oceanic and Atmospheric Administration), CAPS used resources of the NSF TeraGrid, in particular LeMieux, PSC’s terascale system, to produce the highest-resolution storm forecasts yet attempted. On several occasions, CAPS predicted the occurrence of storms within 20 miles and 30 minutes of where and when they actually happened, and they did it 24 hours in advance.

“That type of result,” says Droegemeier, “pretty much sets conventional thinking on its ear.”

Are Thunderstorms Predictable?

In contrast to the daily weather reports on TV, which are generated from large-scale models that predict atmospheric structure over the continental United States, storm-scale forecasting involves a tighter focus — at the scale of a county or city. It requires observational data — temperature, pressure, humidity, wind speed and direction, and other variables — at a correspondingly finer spatial resolution, and it demands the most powerful computing available, and then some, to run the models.

When CAPS began in 1988, the prevailing view of storm-scale forecasting was skeptical. Numerical weather prediction itself was not in question: computers programmed with equations that represent the atmosphere and initialized with observational data had proven, since the 1970s, to be by far the best way to forecast weather. The question was more fundamental. Are thunderstorms predictable?

“From a modeling, computational and communications perspective, this has never been done before”

“The challenge we set ourselves to,” says Droegemeier, “was, if you take the concept of computer forecast technology and apply it at this smaller scale, does the atmosphere possess any intrinsic fundamental predictability, or is it all turbulence? We had hopes, but we didn’t know. With big help from the Pittsburgh Supercomputing Center, we resolved that question.”

CAPS developed groundbreaking new techniques to gather atmospheric data from Doppler radar and to assimilate this data with other meteorological information. And they developed a computational model that uses this data to predict weather at thunderstorm scale.

“It all starts with observations,” says Droegemeier, “because to predict we need to know what’s going on right now.” Data to feed weather models comes from many sources — upper-air balloons, the national Doppler radar network, satellites, sensing systems on commercial airplanes. From these sources, a huge amount of information, computationally processed and spread across a 3D grid representing the atmosphere, becomes the initial conditions for National Weather Service forecasts. With grid spacing at 10 to 30 kilometers, the NWS operational models do well at showing high and low pressure areas and the storm fronts that develop from them — weather that happens, roughly speaking, on the scale of states. Individual thunderstorms originate at smaller scales, and forecasting them, says Droegemeier, requires much finer spacing, down to one to two kilometers.
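The CAPS assimilation system is far more sophisticated than any short example, but the basic step of spreading scattered observations onto a regular 3D grid can be sketched simply. The Python snippet below uses inverse-distance weighting, a deliberately crude stand-in for real assimilation methods; the function name, grid dimensions, and data are illustrative assumptions, not part of the CAPS software.

import numpy as np

# Toy illustration: spread scattered observations (e.g., temperature)
# onto a regular 3D grid with inverse-distance weighting (IDW).
def idw_to_grid(obs_xyz, obs_vals, grid_x, grid_y, grid_z, power=2.0):
    """obs_xyz: (N, 3) observation locations; obs_vals: (N,) values.
    grid_x/y/z: 1D coordinate axes of the regular grid.
    Returns an (nx, ny, nz) array of interpolated values."""
    X, Y, Z = np.meshgrid(grid_x, grid_y, grid_z, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)  # (M, 3)
    # Distance from every grid point to every observation: shape (M, N)
    d = np.linalg.norm(pts[:, None, :] - obs_xyz[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)        # avoid division by zero at an obs site
    w = 1.0 / d**power             # closer observations count more
    return ((w @ obs_vals) / w.sum(axis=1)).reshape(X.shape)

# Example: five hypothetical soundings gridded onto a 10 x 10 x 5 box
rng = np.random.default_rng(0)
obs = rng.uniform(0, 100, size=(5, 3))   # locations in km
temps = rng.uniform(280, 300, size=5)    # temperatures in kelvin
axis = np.linspace(0, 100, 10)           # horizontal axes, km
levels = np.linspace(0, 15, 5)           # height levels, km
field = idw_to_grid(obs, temps, axis, axis, levels)
print(field.shape)                       # (10, 10, 5)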

The foundation of a storm-forecast model is a set of 15 to 20 non-linear differential equations. They represent the physical phenomena of the atmosphere and how it interacts with the surface of the Earth. Making a forecast involves feeding the 3D grid with initializing data, solving these equations at each position on the grid, and then doing it over again for the next time step, every five to ten seconds for 24 hours of weather. For a single forecast, this means solving trillions of equations. Halving the grid spacing doubles the number of grid points in each of the three dimensions, requiring eight times more computing. Halve the time step as well, to capture correspondingly finer detail in time, and it becomes a 16-fold increase. For this reason and others, storm-scale forecasting poses an enormous computational challenge.
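To make that arithmetic concrete: for a fixed domain and forecast length, the cost of such a time-stepping calculation scales roughly as the number of grid points times the number of time steps,

$$\mathrm{cost} \;\propto\; \frac{1}{\Delta x}\cdot\frac{1}{\Delta y}\cdot\frac{1}{\Delta z}\cdot\frac{1}{\Delta t},$$

where $\Delta x$, $\Delta y$, $\Delta z$ are the grid spacings and $\Delta t$ is the time step. Halving all three grid spacings multiplies the cost by $2^3 = 8$; halving the time step as well gives $2^3 \times 2 = 16$. This back-of-the-envelope scaling ignores constants and the details of the numerical scheme, but it shows why each refinement in resolution is so expensive.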

IMAGES: CAPS forecast, June 6, 2005; radar, June 6, 2005
Predicting Storms 24 Hours in Advance

The CAPS computer forecast (above, top) for the night of June 4 & 5, 2005 (run at PSC on June 3) predicted a storm line moving from Kansas into Missouri extending into Iowa. This forecast for June 4, 10 p.m. CDT, compares well in structure and placement of the developing storms with the subsequent radar image (above). Both graphics show radar reflectivity (increasing from blue through red), which is proportional to precipitation intensity. The smaller frames (below) show forecast vs. radar over a time span from 2 p.m. June 4 until 1 a.m. June 5. All the images shown here represent only a sub-domain of the complete Rockies-to-Appalachians computational domain.



CAPS Radar Sequence

Since 1993, CAPS has run forecasting experiments during spring storm season. In 1995 and ’96, using PSC’s CRAY T3D — a leading-edge system at the time — for a limited region of the Great Plains, they successfully forecast location, structure and timing of individual storms six hours in advance — a forecasting milestone. For this accomplishment, CAPS and PSC won the 1997 Computerworld-Smithsonian award for science, and CAPS garnered a 1997 Discover Magazine award for technological innovation.

If any doubts linger that the question of storm-scale forecasting has shifted from scientific and technological feasibility to national policy (whether sufficient resources can be made available, and when), this spring’s storm forecast experiment should erase them.

Watershed Forecasts

As they have during many storm seasons over the past dozen years, CAPS and PSC this spring collaborated to produce real-time storm forecasts. The difference this year was that the forecasts covered two-thirds of the continental United States — from the Rockies east to the Appalachians. Using LeMieux, they successfully produced an on-time, daily forecast from mid-April through early June. “This was an unprecedented experiment,” says Droegemeier, “that meteorologists could only dream of several years ago.”

Conducted in collaboration with NOAA, the program included about 60 weather researchers and forecasters from several NOAA organizations: the Storm Prediction Center (SPC) and the National Severe Storms Laboratory (NSSL), both in Norman, and the Environmental Modeling Center (EMC) in Maryland. Also taking part, along with CAPS, was the NSF-sponsored National Center for Atmospheric Research (NCAR) in Boulder, Colorado.

This experiment offered an unprecedented chance for forecasters, as well as researchers, to work with advanced technology on a daily basis, technology that, says Droegemeier, may be five years from being incorporated into daily forecast operations at the resolutions used. Each evening, meteorologists in Norman transmitted new atmospheric conditions to Pittsburgh. By the next morning, LeMieux had produced a forecast covering the next 30 hours and transmitted it back to SPC and NSSL in Norman, where researchers turned the model output data into images corresponding to what they see on radar. These model runs were conducted daily with virtually no problems.
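A hypothetical sketch of that nightly cycle appears below; every host name, path, and command in it is invented for the illustration, not taken from the actual CAPS/PSC production scripts.

import subprocess
from datetime import date

def nightly_cycle(run_date: date) -> None:
    """Illustrative driver for one evening-to-morning forecast cycle."""
    tag = run_date.strftime("%Y%m%d")
    # 1. Receive the evening's atmospheric initial conditions from Norman.
    subprocess.run(["scp", f"norman:/data/init_{tag}.nc", "/scratch/init.nc"],
                   check=True)
    # 2. Launch the 30-hour forecast on the parallel system.
    subprocess.run(["mpirun", "-np", "1228", "./wrf_forecast",
                    "--init", "/scratch/init.nc", "--hours", "30",
                    "--out", f"/scratch/fcst_{tag}.nc"], check=True)
    # 3. Ship the model output back to Norman for rendering as
    #    radar-like reflectivity images.
    subprocess.run(["scp", f"/scratch/fcst_{tag}.nc",
                    "norman:/data/incoming/"], check=True)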

Using several different versions of the Weather Research and Forecasting (WRF) model, an advanced model designed for research as well as operational use, the partners generated forecasts three times daily. EMC and NCAR used grid spacings of four to 4.5 kilometers. With LeMieux at its disposal, running on 1,228 processors, CAPS went a step further: with grid spacing of two kilometers, more than five times finer than the most sophisticated NWS operational model, and requiring 300 times more raw computing power, their forecasts are the highest-resolution storm forecasts to date.
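To see roughly where a factor like 300 can come from, consider a back-of-the-envelope estimate; the 12-kilometer operational spacing used here is an assumption for illustration, not a figure from the experiment. Refining from 12 kilometers to two is a factor of six in each horizontal direction, and numerical stability (the CFL condition) forces the time step to shrink roughly in proportion:

$$\underbrace{6 \times 6}_{\text{horizontal}} \times \underbrace{6}_{\text{time step}} = 216,$$

with added vertical resolution and more detailed model physics plausibly accounting for the remainder of the 300-fold figure.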

“Our daily WRF-model forecasts,” said Droegemeier, “had twice the horizontal resolution and nearly 50-percent greater vertical resolution than the other two experimental products.” This higher resolution meant that the forecasts were able to capture individual thunderstorms, including their rotation. On several occasions, when the 24-hour forecast showed development of thunderstorms, it proved to be accurate within 20 miles and 30 minutes.

Just as importantly, the computer model produced images that matched well in structure with what forecasters saw later on radar. “The computer forecasts looked very similar to what we see on radar,” said Steven Weiss, SPC science and operations officer. “The structure you see on the screen is important in judging whether the storm is likely to produce tornadoes, hail or dangerous wind. These results were an eye-opener in many respects.”

“Real-time daily forecasts over such a large area and with such high spatial resolution,” says Droegemeier, “have never been attempted before, and these results suggest that the atmosphere may be fundamentally more predictable at the scale of individual storms, and especially organized storm systems, than previously thought.” Such results, he adds, could potentially lead to a revision of the classical predictability theory put forth by Edward Lorenz, the now-retired MIT professor whose pioneering research led to chaos theory. The forecasting community is still absorbing the findings, but spring 2005 may mark a watershed in the understanding of atmospheric predictability.
