
Posted at 11:06 AM ET, 11/23/2011

Why are snowstorm forecasts sometimes so wrong? Part one

Almost every year, at least one snow forecast ends up busting in our region. Many readers probably remember last year’s December 26 bust (when we called for 3-6” of snow, and little fell). The fallout elicited remarks like “weather forecasting is the only job where you can be wrong 90 percent of the time and still keep your job.” While that’s a huge overstatement about the state of weather forecasting, it certainly captures the frustration that many feel when a forecast fails.

A number of factors can contribute to a poor forecast: 1) many of the physical processes that govern the atmosphere act non-linearly, 2) there is uncertainty about the initial state of the atmosphere, 3) certain parts of a model’s physics have to be approximated, 4) there is often more than one stream of flow that the models must handle correctly, 5) we live close to a huge heat and energy source (the ocean), and 6) forecasters can make poor decisions.

Any one of these factors can contribute to a poor forecast and a perceived bust. In the following discussion I’ll attempt to explain how the first three factors can sometimes negatively impact a forecast and how meteorologists try to mitigate them. Next week, I’ll tackle the last three factors in part two of this series.

The non-linear nature of weather

The non-linear nature of the atmosphere causes forecasting problems in several ways.

Forecasters cannot simply extrapolate features as they move eastward, expecting them to change in a steady, linear manner; weather systems rarely evolve that way.

Imagine a series of numbers representing the development of a storm system. A linear extrapolation of the development of such a system would be 2, 4, 6, 8: a steady increase in the system’s strength.

However, a non-linear change is represented by a sequence like 2, 5, 15, 60. Weather systems can and do change and develop rapidly. These non-linear changes affect not only the strength of the system but also how it tracks. That’s why computer models are of such value: they can often anticipate the rapid changes.
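The contrast between the two number sequences above can be sketched in a few lines of code. This is purely illustrative; the function names and growth factor are invented, and real storm intensification follows no such simple rule.

```python
# Hypothetical illustration: linear extrapolation vs. non-linear growth
# of a storm's "strength" (arbitrary units). Names and numbers invented.

def linear_extrapolation(start, step, n):
    """Strength grows by a fixed amount each step: 2, 4, 6, 8, ..."""
    return [start + step * i for i in range(n)]

def nonlinear_growth(start, factor, n):
    """Strength multiplies each step, roughly like the 2, 5, 15, 60
    sequence above."""
    seq = [start]
    for _ in range(n - 1):
        seq.append(seq[-1] * factor)
    return seq

print(linear_extrapolation(2, 2, 4))  # [2, 4, 6, 8]
print(nonlinear_growth(2, 3, 4))      # [2, 6, 18, 54]
```

After only a few steps the multiplicative sequence dwarfs the linear one, which is why extrapolating a system's recent behavior forward in a straight line can fail so badly.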

Without physically based computer models, it is doubtful that forecasters would have been able to predict the massive October storm that hit the Northeast. The non-linear nature of weather is what makes monster snowstorms possible, but it is also part of the reason forecasting them is so difficult. Because atmospheric responses are non-linear, errors in a model can sometimes grow quickly.

Uncertain initial conditions

MIT scientist Edward Lorenz published two seminal papers in the 1960s showing that small differences between two model simulations of the initial state of the atmosphere can grow non-linearly when projected forward, producing two diametrically opposed solutions. Steve Tracton has previously written about Lorenz’s work and its implications for forecasting.
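Lorenz’s point can be demonstrated with any chaotic system. The toy below uses the logistic map, a textbook chaotic equation (it is not a weather model): two starting values that differ by one part in a million, a far smaller error than any real observation network achieves, eventually produce completely different answers.

```python
# Toy demonstration of sensitive dependence on initial conditions,
# using the logistic map (a standard chaotic system, NOT the atmosphere).

def logistic(x, steps, r=4.0):
    """Iterate the chaotic logistic map 'steps' times from x."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.400000, 50)
b = logistic(0.400001, 50)  # a "measurement error" of one millionth
print(abs(a - b))  # the two "forecasts" no longer resemble each other
```

Early in the run the two trajectories are nearly identical; a few dozen iterations later the initial error has grown to the size of the signal itself, which is the essence of Lorenz’s result.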

Unfortunately, there is no way to measure atmospheric variables (temperature, winds, moisture, etc.) accurately at every point on the globe. Furthermore, atmospheric measurements from various sources (balloons, satellites, radar, ships, planes) are imperfect. So models never have a 100% accurate representation of the actual atmosphere.

The incomplete set of imperfect observations has to be brought into a model in a way that minimizes errors that might later grow and contaminate a simulation. This quality control and assimilation process somewhat smooths the data. Therefore, the initial state of the atmosphere is always somewhat uncertain, and that uncertainty can and does sometimes lead to major forecast problems.
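The core idea of assimilation, blending an imperfect observation with the model’s own prior guess according to how much each is trusted, can be sketched for a single variable. The function, numbers, and variable names below are invented; operational assimilation systems do this for millions of variables at once.

```python
# Minimal sketch of the idea behind data assimilation: blend an
# imperfect observation with the model's prior guess ("background"),
# weighting each by its error variance. Numbers here are invented.

def assimilate(background, observation, bg_var, obs_var):
    """Optimal-interpolation-style update for one scalar variable.

    The gain favors whichever estimate has the smaller error variance,
    so the analysis lands between background and observation.
    """
    gain = bg_var / (bg_var + obs_var)
    return background + gain * (observation - background)

# Model first guess: 5.0 C; balloon reports 3.0 C with equal error.
analysis = assimilate(background=5.0, observation=3.0,
                      bg_var=1.0, obs_var=1.0)
print(analysis)  # 4.0 -- halfway, since both errors are equal here
```

Because the result is always a compromise between imperfect pieces of information, the analysis that starts a model run is itself smoothed and uncertain, exactly the kind of small error that Lorenz showed can grow.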

The two forecasts below are from the exact same model with identical physics but slightly different initial fields (sets of data), differences probably not discernible to the naked eye. Note that one has a strong low (left-hand panel) located north of D.C., implying a rain storm, while the other has a much weaker low farther to the south, suggesting the storm would either miss us to the south or produce snow.


Two models forecasting for the same time period show very different results.
The non-linear nature of the physics makes such model differences possible. It can affect not only the intensity of a weather system but also its track.

Any errors in the initial fields grow faster in some patterns than in others. That is the basis for ensemble forecasting systems. The National Centers for Environmental Prediction (NCEP) runs a number of simulations four times each day in which the initial conditions are perturbed (tweaked slightly) to get an idea of the probabilities associated with any storm system. The resulting array of solutions can be used to assess the probability of getting a snowstorm. However, even if every ensemble member forecasts a snowstorm at the day-5 projection, that is no guarantee a snowstorm will occur. Occasionally, the truth lies outside the entire spread of solutions.
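The ensemble idea can be sketched with the same kind of toy chaotic model: run it many times from slightly perturbed starting points and count how many members cross some threshold. Everything here is invented for illustration (the logistic map stands in for a weather model, and the “snowstorm” threshold is arbitrary); NCEP’s real ensembles perturb full three-dimensional atmospheric states.

```python
# Sketch of ensemble forecasting: run one toy model from slightly
# perturbed initial conditions and read off a probability. The logistic
# map stands in for a weather model; thresholds are invented.
import random

def run_model(x, steps, r=3.9):
    """Iterate a chaotic toy 'model' forward 'steps' time steps."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

random.seed(1)
control = 0.5                                        # best-guess start
members = [control + random.uniform(-1e-4, 1e-4)    # tweaked copies
           for _ in range(20)]
outcomes = [run_model(x, 40) for x in members]

# Fraction of members ending above an arbitrary "snowstorm" threshold:
prob = sum(o > 0.5 for o in outcomes) / len(outcomes)
print(f"ensemble probability: {prob:.0%}")
```

Note what the last line does and does not say: a 70% ensemble probability is useful guidance, but as the article points out, even 100% agreement among members is no guarantee, because the truth can lie outside the whole spread.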

Approximations of some physical processes

Another source of error is that certain atmospheric processes (convection, clouds, radiation, boundary layer processes, etc.) are too small to be represented in the model, not well understood, or too computationally expensive to simulate directly.

Probably the most problematic process to deal with is convection. Convection occurs on a scale too small for models to simulate and must be parameterized, a procedure for representing it on a scale that the model resolves. Parameterization requires approximations, which can lead to forecast problems.
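A parameterization is essentially a rule that estimates the bulk effect of unresolved processes from the variables the model does resolve. The toy below is an invented caricature (the trigger thresholds and rain rate are made up), but it shows the structure: a grid-scale input, a trigger, and an approximate bulk response.

```python
# Toy sketch of a parameterization: a model cannot simulate individual
# convective clouds, so a rule estimates their bulk effect from
# grid-scale variables. Trigger values and rates here are invented.

def convection_scheme(temperature_c, humidity_pct):
    """Return a parameterized convective rain rate (mm/hr) for one
    grid cell.

    Trigger: warm and moist enough. The linear rate below is a crude
    approximation; real schemes are far more elaborate, and their
    approximations are one source of forecast error.
    """
    if temperature_c > 20.0 and humidity_pct > 70.0:
        return 0.1 * (humidity_pct - 70.0)  # crude bulk rain rate
    return 0.0

print(convection_scheme(25.0, 90.0))  # 2.0 mm/hr in this toy scheme
print(convection_scheme(15.0, 90.0))  # 0.0 -- too cold to trigger
```

Because the rule is only an approximation of what real clouds would do, every grid cell where it fires injects a small error, and in a non-linear system those small errors can grow.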

The uncertainty of the initial conditions, the possible errors introduced by approximations of the physics, and the non-linearity of the physical processes form a dynamic mix. Together, they are the factors that lead models to jump from solution to solution in the runs leading up to a storm. At longer ranges, the differences between solutions can be quite large. At shorter ranges, the differences are not as large, but our location near the ocean makes small differences in a storm’s track and intensity crucial to getting a snow forecast right.

The differences between two operational models, the GFS and NAM, prior to the October 29 storm are a case in point. The NAM suggested that the D.C. area would see accumulating snow, while at least one run of the GFS suggested almost all the precipitation in the area would fall as rain. Because there is always some uncertainty in any forecast, meteorologists are moving toward issuing probability-based forecasts.

Stay tuned to Part II where I’ll discuss additional factors that can mess up a winter weather forecast.

UPDATE: Here it is: Part II on the difficulties in snowstorm forecasting.


    © 2011 The Washington Post Company