Conversations: Jane Lubchenco

NOAA's tsunami warning system

Wednesday, March 3, 2010

Jane Lubchenco, a marine ecologist and environmental scientist, became the ninth administrator of the National Oceanic and Atmospheric Administration a year ago. She taught at Harvard University and Oregon State University and also served as president of the American Association for the Advancement of Science. NOAA's responsibilities include detecting and then warning of tsunamis such as the one created last weekend by the 8.8-magnitude earthquake in Chile. Lubchenco talked to The Post on Tuesday.

-- David Brown

Q. How well did the tsunami detection work, in your opinion?

I think that the tsunami detection and warning operation that we saw over the weekend worked spectacularly well. Especially in comparison to 2004, when the Indian Ocean tsunami hit, there's a world of difference. The Indian Ocean tsunami was a real wake-up. It provided significant motivation for NOAA and for members of Congress to get to a point where we could protect lives and property by having much more effective detection, modeling and warning systems, coupled with communities being prepared so that they knew what to do.

The Pacific Tsunami Warning Center issued a limited-area warning 12 minutes after the earthquake was detected. Four hours later, the warning was expanded to the majority of the Pacific basin. Within five hours of the earthquake, the West Coast and Alaska Tsunami Warning Center issued a tsunami advisory to the West Coast. The fact that it was an advisory, not a warning, was right on target and saved an awful lot of unnecessary evacuation.

The prediction for Hawaii was for a six-foot wave, and it actually came in as a three-foot wave. Was that a measuring error, modeling error or an intentional erring on the high side?

Predicting any particular tsunami is a challenge; they are obviously very complex. The predictions were made with the best available information, but it was not complete. The tsunami that hit Hawaii was fortunately less than was anticipated. The currents that were in the water were complex. They could have been very, very dangerous had the warnings not been issued and had people not responded as they did.

Nevertheless, the predicted rise was off by 100 percent in magnitude. Are you at all fearful that the next time that people get a warning they will assume it is exaggerated?

Decades ago, if an earthquake happened, it might have been natural to have everybody evacuate everyplace. We are now in a much better position so that we were able to issue only an advisory in some places. That was helpful.

A key piece of technology in predicting tsunamis is the system known as Deep-ocean Assessment and Reporting of Tsunami. How many DART stations are there, and how do they work?

We have 39 -- 32 in the Pacific, and seven in the Atlantic.

The station itself has a "bottom pressure recorder" and a surface buoy. The bottom pressure recorder is on the ocean floor, and it measures the pressure of the tsunami wave as it passes over. That pressure measurement is transmitted up to the surface buoy, which in turn transmits it to the tsunami warning centers. It is also critical to have real-time information about the earthquake itself. In 2004, only 80 percent of the global seismographic network data were transmitted in real time. Today, it is 100 percent. Another critical piece is the water-level stations. In 2004, we only had a very limited number; today we have 164. They verify the presence of the tsunami and provide information that enables us to update the models as the tsunami is traveling across the ocean.
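The physical principle behind the bottom pressure recorder is hydrostatic: a passing tsunami of height η adds roughly ρgη of extra water-column pressure to the seafloor. The following is a minimal sketch of that conversion, not NOAA's actual processing (the function name and numbers are illustrative; real DART processing also filters out tides and seismic noise):

```python
# Hydrostatic sketch of how a seafloor pressure anomaly can be read
# as tsunami height: a wave of height eta adds about rho * g * eta
# of pressure at the bottom. Illustrative only.

RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density
G = 9.81               # m/s^2, gravitational acceleration

def height_from_pressure_anomaly(delta_p_pa: float) -> float:
    """Convert a bottom-pressure anomaly (pascals) to wave height (meters)."""
    return delta_p_pa / (RHO_SEAWATER * G)

# A ~1 kPa anomaly corresponds to roughly a 10 cm wave at the surface.
print(round(height_from_pressure_anomaly(1005.5), 3))
```

This is why deep-ocean sensors can detect a tsunami only centimeters high long before it reaches shore: even a small surface wave produces a measurable, sustained pressure signal at depth.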

Detection of movement of water is only half the game. The rest is putting information into models for predicting what is actually going to happen when the water reaches the shore. How good is the modeling?

Much of the behavior of a tsunami depends on the angle at which it hits the coast, the steepness of the shore and sea bottom topography, as well as the shape of the coast -- whether it is a bay or a linear coast. You want the model to tell you what is going to happen at a particular place and not just at a generic coast.

If we look at the earthquake of last Saturday, the models were highly reliable and very accurate for the West Coast. If we look at Hawaii, the timing was pretty close. The tsunami appeared about 10 minutes after the predicted time, but the height was about half of what was predicted.
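One well-known piece of the shore-dependence Lubchenco describes is shoaling: as a tsunami moves into shallower water it slows down and grows taller. Green's law, which scales amplitude with depth to the negative one-fourth power, is a standard first approximation; the sketch below applies it with illustrative numbers (not figures from the Chile event), whereas real forecast models solve the shallow-water equations over measured bathymetry:

```python
# Green's law sketch: tsunami amplitude scales with water depth as
# h^(-1/4), so a wave amplifies as it shoals toward the coast.
# First-order approximation only; actual forecasts depend on local
# bathymetry and coastline shape, as described in the interview.

def shoaled_amplitude(a_deep: float, h_deep: float, h_shallow: float) -> float:
    """Amplitude after traveling from depth h_deep to h_shallow (meters)."""
    return a_deep * (h_deep / h_shallow) ** 0.25

# A 0.5 m wave in 4000 m of water grows to roughly 2.2 m in 10 m of water.
print(round(shoaled_amplitude(0.5, 4000.0, 10.0), 2))
```

The strong sensitivity to nearshore depth is one reason a basin-wide forecast can be accurate at one coastline and off by a factor of two at another.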

What are some of the variables that have proved hardest to get?

Because tsunamis are relatively rare, it is a challenge to make the models better and better. As unfortunate as it was that there was serious damage and loss of life in Chile, the information from that tsunami will be fed into the models. I just cannot overstate how important that is in making the models better.

© 2010 The Washington Post Company