The Presidents’ Day Snowstorm of Feb. 18-19, 1979. (Washington Weather)

Winter storm forecasting has changed significantly since the 1970s, for the better. Technology and a better fundamental understanding of the atmosphere have been the main drivers behind the improvement, but there’s another reason your favorite meteorologist — on TV or on the Web — can nail a winter storm forecast five days out: the Internet. With the click of a mouse, anyone can access the same weather models that in the past were only available to government forecasters.

With all of these advances, a winter storm forecast three days out in 2017 is arguably as accurate as a one-day forecast was in the 1980s.

Meteorologists anywhere can now access models that once reached only government employees and TV stations equipped with special weather data fax machines, and the models themselves have advanced light-years in the past two decades.

In the 1970s and 1980s, weather models were coarse. Comparing them with today’s models is like comparing television resolution in 1960 to a 4K flat screen. The two primary models at the time, the Limited Fine-Mesh Model (LFM) and the Primitive Equation (PE) model, had only seven vertical layers to describe 40,000 to 50,000 feet of atmosphere. Horizontally, the smallest disturbance these models could “see” was 500 miles across.

Let’s compare that to their replacements. Today’s North American Mesoscale Forecast System (NAM) and Weather Research and Forecasting (WRF) models have 60 vertical layers and can “see” a disturbance as small as 10 miles across. They can forecast some of those pesky small-scale bands of heavy snow that the old LFM or PE models had no hopes of ever seeing.

The lack of vertical and horizontal resolution not only made small-scale features hard to forecast, it also made low-level cold air very difficult to predict. The atmosphere is like a cake with different temperature layers. With only seven vertical layers, there was no way for the models to see some of the individual layers or important features, even though we know they exist.

Before personal computers were available, forecast fields were received either by a fax machine or a plotter. Once we received the model data, forecasting would commence. Our forecasts were made on acetates with grease pencils, then traced to a paper copy and transmitted by fax.

Most meteorologists would start the forecast process with a hand analysis of the available weather observations to gauge whether the model fields were handling the various waves in the atmosphere correctly. For example, if a dip in the jet stream looked quite a bit sharper on the hand analysis than on the model analysis, it meant the model’s simulation of the storm was probably too weak. Occasionally, these differences would lead a forecaster to modify the model forecast.

A forecaster would also note where the pressure was falling fastest compared with the model simulation to get a feel for whether the model was handling the movement of a storm correctly. These pressure falls often pointed to the direction the storm would move in the short term.

Forecasts were often grounded not just in the models but also in pattern recognition and rules of thumb. For example, Rosenbloom’s Rule held that model forecasts of rapidly intensifying storms were almost always too far to the east. A forecaster therefore had to adjust not only the storm’s forecast track but also realign where the model was predicting features such as the rain-snow line and the heaviest snowfall. Of course, if the model had the storm track wrong, your heavy snow forecast might go down in flames.

One of the trickiest winter weather problems was and still is where to forecast the rain-snow line. In the 1970s and early 1980s, there was no way to look at the vertical structure of the atmosphere in enough detail to parse whether there was a warm layer located somewhere above the ground.

Forecasters relied on model forecasts of the depth between two pressure levels: 1,000 millibars, a level near the ground, and 500 millibars, which sits at around 18,000 feet. The depth, or “thickness,” between those two levels is not a fixed 18,000 feet; it varies with the average temperature of the layer between them. When the layer is cold, the distance between the levels shrinks, and when it is warm, it expands. For snow to fall in the D.C. area, the thickness of this layer was thought to need to be 5,400 meters (about 17,700 feet) or less.

But that rule didn’t always work. In the middle of winter, when low-level temperatures are really cold, it can snow when the layer’s thickness is 5,460 meters. Or, when it’s warm near the ground, it can rain with a thickness of less than 5,340 meters. Given these limitations, forecasters also started to look at the thickness between other layers, which worked better but was still much less accurate than using a top-to-bottom temperature profile of the atmosphere, which is available today.
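To make the rule of thumb concrete, here is a minimal sketch, in Python, of how those thickness cutoffs could be applied. The breakpoints mirror the 5,340-, 5,400- and 5,460-meter values above, but the function and the wording of its categories are our own illustration, not code any forecast office actually used.

```python
def precip_type_from_thickness(thickness_m):
    """Rough 1970s-style rule of thumb for the D.C. area, based only on the
    1,000-500 millibar thickness in meters. Purely illustrative: real forecasts
    also weigh surface temperatures and any elevated warm layers."""
    if thickness_m <= 5340:
        return "snow likely"
    elif thickness_m <= 5400:
        return "snow favored, but check surface temperatures"
    elif thickness_m <= 5460:
        return "mix or rain, unless the low-level air is unusually cold"
    else:
        return "rain likely"

for thickness in (5310, 5380, 5430, 5500):
    print(thickness, "meters ->", precip_type_from_thickness(thickness))
```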

Satellite image of the Presidents’ Day Storm of 1979. (NOAA)

The lack of model resolution played a role in the LFM’s under-prediction of the infamous Presidents’ Day Snowstorm of 1979, which buried Washington and Baltimore in more than 20 inches of snow. The 36-hour LFM and PE forecasts drastically underplayed the strength of the upper-level disturbance approaching the coast as well as the surface low pressure center off the North Carolina coast.

The lack of vertical resolution in model forecasts also wreaked havoc when trying to predict how far south an Arctic air mass might push. Both the LFM and, particularly, the NGM (Nested Grid Model), introduced in 1987, consistently held Arctic cold fronts too far north. The 36-hour LFM forecast below (left panel), from Feb. 3, 1989, is a case in point.

Forecast surface weather map on left, actual weather map on right.

Note how the model (left) erroneously has a low pressure center and front crossing central Illinois, while observations show the front (annotated blue line on the right) crossing Arkansas and Kentucky at the same time. Forecasters had to learn to correct for these errors.

The LFM’s rather crude horizontal and vertical resolution also led to problems resolving small-scale weather features. For example, it produced notoriously poor forecasts of the position of high pressure systems over the Northeast United States.

In developing winter storms, the exact location of high pressure is critical for precipitation-type forecasts.

If high pressure is over the land, it supplies cold air that spills southward east of the Appalachians in a process known as cold air damming, and precipitation in the Mid-Atlantic region frequently falls as snow or ice. But if the high pressure ends up positioned over the Atlantic Ocean, it directs mild air inland — which favors rain.

In a January 1988 case, the LFM model predicted a surface high pressure system would slide off the Northeast U.S. coast quickly, and snow in Washington would change to rain.


In reality, the high pressure system ended up parked over New England in an ideal location for cold-air damming. Washington was pasted by more than six inches of snow.

Forecasters tended to use the LFM for short-term prediction, and the PE model for forecasts two to five days into the future. But correct model predictions of major storms beyond 72 hours were seldom seen.

In a rare moment of glory, the PE nailed the forecast for the Feb. 7, 1978, Boston blizzard in an 84-hour forecast. Usually, though, storms would track much farther west than the PE model predicted. That led one forecaster to proclaim “all lows go to Chicago.”

Today, forecasts for three or more days out are, on average, better than one-day forecasts from the 1970s and 1980s. Forecasts now routinely extend to seven days, and today’s seven-day forecasts are better on average than five-day forecasts from the late 1980s.

By the 1990s, workstations and personal computers had become commonplace and started revolutionizing how we looked at model data. Model resolution was increasing, and forecasters could now look both for the large-scale features that often play a role in storm development and for the small-scale features that can lead to heavy snow.

Forecasters were increasingly confident in their abilities by the start of the new millennium. The National Weather Service proclaimed that the introduction of a new supercomputer “puts us closer to reaching our goal of becoming America’s no-surprise weather service.” But one week later, the surprise snowstorm of January 2000 dumped 10 inches on Washington when a dusting was predicted.

In post-storm analyses, forecasters realized that if they had rerun their models of the January 2000 storm with small tweaks to the initial conditions, a major snowstorm would have emerged as a possibility. That helped pave the way for the development of what are known as ensemble forecasts — in which forecasters examine not just one simulation from a model but an entire group of simulations — which offer a fuller range of forecast possibilities.
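The arithmetic behind an ensemble probability is simple enough to sketch. In the illustrative Python snippet below, the snowfall totals are made-up numbers standing in for individual ensemble members; the chance of a big storm is just the fraction of members that reach a given threshold, which is the basic idea behind exceedance-probability maps like the one shown farther down.

```python
# Hypothetical snowfall totals, in inches, from 20 equally weighted ensemble members.
member_totals = [2, 4, 5, 7, 8, 9, 10, 11, 12, 13,
                 14, 15, 16, 17, 18, 19, 20, 22, 24, 28]

def exceedance_probability(totals, threshold_inches):
    """Fraction of ensemble members predicting at least the given amount."""
    hits = sum(1 for total in totals if total >= threshold_inches)
    return hits / len(totals)

for threshold in (6, 12, 18):
    probability = exceedance_probability(member_totals, threshold)
    print(f"Chance of at least {threshold} inches: {probability:.0%}")
```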

Last winter, ensemble forecasts allowed meteorologists to start crowing about the potential of a major snowstorm five days in advance of the Jan. 22-23 blizzard, known as Snowzilla. By early on Jan. 19, the European model ensemble forecast system gave portions of the D.C. area a greater than 90 percent probability of having 12 inches of snow on the ground by 7 a.m. Jan. 24. The American (Global Forecast System) model also produced an excellent long-range prediction for this storm.

European model ensemble probability of 12 inches of snow 132 hours before the Jan. 22-23, 2016 snowstorm. (WeatherBell.com)

Winter weather forecasting has come a long way and is much more grounded in science than it was in the 1970s and 1980s. But models are still not perfect and forecast busts still happen.

While models are much better at forecasting the rain-snow line now than they were in the past, problems remain. The models have a hard time determining where highly localized bands of heavy snow will develop within storms. They tend to erode cold air too quickly during cold-air damming situations. And they still have difficulty simulating weather along the edge of a storm, which has important consequences for where significant snow and ice start and stop.

And, of course, we know model forecasts for winter storms beyond three or four days in the future are very changeable and uncertain.

In other words, there is still plenty of room for gains in forecasting accuracy. It is exciting to think about how much better our forecasts could be 30 years from now.

(Jason Samenow and Angela Fritz contributed to this post.)