We hear it every winter: “The American model says …” or “Isn’t the European model the most accurate?” But what’s behind the computer weather models that guide a forecast?
Computer models are, by far, the most important tools weather forecasters use for making predictions. They can process vastly more information than the human brain in a fraction of the time, and they keep improving.
Because of progress in computer modeling, weather forecasts have gained about one day of lead time per decade. In other words, a five-day forecast today is about as accurate as a three-day forecast was in the 1990s.
Forecasters generally rely on two primary types of models: those that cover the globe and high-resolution models that key in on smaller areas to capture more detail.
Global models, as the name suggests, simulate the atmosphere over the entire planet. They’re the models that capture the sprawling weather systems that can stretch across a continent, such as cold fronts and massive storms. To accurately forecast systems of the largest scale, a model has to go big: so big that it spans the globe.
The two global models you hear the most about are likely the American and European models. Each has its own strengths and weaknesses.
The American model is officially known as the Global Forecast System model, or GFS. It is developed and run by the U.S. National Weather Service. It runs four times a day and produces predictions up to 16 days into the future.
The computing power behind the American model has grown tenfold in the past four years, with the model now able to process eight quadrillion calculations per second. The supercomputer running it is one of the 30 fastest in the world, according to the National Oceanic and Atmospheric Administration.
The European model is officially known as the European Centre for Medium-Range Weather Forecasts model, or ECMWF. It is named after its operating agency, a partnership of 34 nations with a shared need for weather modeling.
The European model is more computationally powerful than the American and is generally regarded as an all-around better model. That’s due to the way data is organized and processed by the model’s “under-the-hood” math and physics, in addition to the raw power of the supercomputer running it.
The European model earned particular fame in 2012, when it accurately predicted that Hurricane Sandy would make a hard turn into the northeast coast of the United States before the American model did.
In 1979, the first ECMWF forecast rolled off a “supercomputer” about a tenth as powerful as a modern-day smartphone, according to the center, and the current computing array is about as powerful as a stack of smartphones more than 20 miles tall.
Instead of running 16 days into the future like the American model, the European model makes predictions only 10 days out. The nine- to 10-day range has been shown to be the “practical limit” of accurate weather forecasts. Model forecasts are most accurate one or two days into the future, moderately accurate three to five days out, and increasingly unreliable beyond that.
Other global models that forecasters frequently review include Canada’s Global Environmental Multiscale Model (GEM), which runs out to 16 days, and the U.K. Met Office model, which runs out to a week. On occasion, the German ICON model and Australia’s model enter the conversation as well.
Both the American and European models have shown substantial forecast improvement over the years, although — evaluated objectively — the European has consistently demonstrated somewhat superior performance.
While the European is, on average, the more accurate model, the American sometimes produces the better forecast. Skilled meteorologists review the output of both models, along with others, know their strengths and weaknesses, and understand in which circumstances to place more or less weight on a specific prediction.
When the models disagree, meteorologists can look at even more simulations of the weather, known as ensemble forecasts, to gain further insight into the range of possibilities. These are additional runs of the European and American modeling systems, each started with small tweaks to the data that fed the primary model run.
When these ensemble forecasts differ by a lot, this tells forecasters that there is high uncertainty in the model predictions. In such a situation, forecasters are wise to project low confidence and communicate a range of possibilities.
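The logic of ensemble spread can be sketched with a toy model. This is purely illustrative, not how any real ensemble system works: the twenty hypothetical members, the starting tweaks of half a degree, and the exponential error-growth factor are all assumptions chosen to show why small differences at the start fan out into large differences at longer lead times.

```python
import statistics

GROWTH = 1.5  # assumed daily error-growth factor in this toy model


def member_forecast(base_temp, perturbation, days):
    """One toy ensemble member: a tiny tweak to the starting temperature
    grows roughly exponentially with lead time, mimicking chaotic error growth."""
    return base_temp + perturbation * GROWTH ** days


def ensemble(base_temp, days, perturbations):
    """Run every member out to `days` and summarize the ensemble as a
    mean forecast plus a spread (standard deviation) across members."""
    members = [member_forecast(base_temp, p, days) for p in perturbations]
    return statistics.mean(members), statistics.stdev(members)


# Twenty members, each starting within +/- 0.5 degrees of the same analysis.
tweaks = [(i - 9.5) / 19.0 for i in range(20)]  # evenly spaced in [-0.5, 0.5]

mean2, spread2 = ensemble(30.0, days=2, perturbations=tweaks)
mean10, spread10 = ensemble(30.0, days=10, perturbations=tweaks)
# The day-10 spread is much larger than the day-2 spread: wider spread,
# lower confidence in any single deterministic forecast.
```

The takeaway matches the paragraph above: when members agree (small spread), confidence is high; when they fan out (large spread), forecasters should communicate a range of possibilities rather than a single outcome.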
Sometimes, you might see more detailed models such as the high-resolution NAM (North American Mesoscale model) or the HRRR (High-Resolution Rapid Refresh model). These are “convective-allowing models.”
Think thunderstorms. They’re pretty small. Too small, oftentimes, to be adequately resolved in the global models’ six-to-10-mile-wide grid boxes. A thunderstorm updraft may be only two or three miles across.
As such, high-resolution models focus on smaller, more intricate processes over smaller time scales and finer distances. While they’re useful in forecasting the structure of thunderstorms and the specific hazards associated with them, they can also lend a hand in predicting areas of snowfall enhancement in sprawling winter storms and small features such as snow showers and squalls.
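The scale mismatch can be sketched with a rule-of-thumb calculation. The “several grid boxes per feature” factor below is an assumption for illustration, not an official specification of any model:

```python
def smallest_resolvable_feature(grid_spacing_miles, boxes_per_feature=4):
    """Rule of thumb (assumed here): a model needs several grid boxes
    to represent a weather feature, so the smallest feature it can
    resolve is roughly boxes_per_feature times the grid spacing."""
    return grid_spacing_miles * boxes_per_feature


# A global model with ~8-mile grid boxes (midpoint of the 6-to-10-mile range):
global_limit = smallest_resolvable_feature(8)

# A hypothetical high-resolution model with ~2-mile grid boxes:
hires_limit = smallest_resolvable_feature(2)

updraft_width = 3  # miles across, per the article
# The 3-mile updraft falls well below the global model's ~32-mile
# effective resolution, but sits near the high-resolution model's ~8 miles.
```

Under this assumption, a thunderstorm updraft is simply invisible to a global model, while a convective-allowing model can at least begin to represent it.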
The bottom line
While computer models keep improving and are the state-of-the-art tools for weather forecasting, none of them is perfect.
This winter, you might see a specific model forecast posted on social media many days into the future, when these predictions aren’t that reliable.
As a general rule, wait until at least a few days before a winter storm to make decisions based on specific predictions and find a trusted meteorologist to help you interpret model forecasts.