In a model, the atmosphere is divided into a three-dimensional grid and each grid point is given the assimilated data. These are called initial conditions. Then at each grid point, the mathematical equations are applied and stepped forward in time. The outputs over many time steps specify future weather at all grid points.
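The grid-and-time-step idea above can be sketched in a few lines of code. This is a hypothetical, drastically simplified one-dimensional illustration (real models solve much more complex equations on a three-dimensional grid), but the shape is the same: assimilated data becomes the initial conditions, and an equation is applied at every grid point, over and over, to step the state forward in time.

```python
# Toy sketch: initial conditions on a grid, stepped forward in time.
# This simple "upwind advection" scheme is a stand-in for the real
# equations of atmospheric motion; names and values are illustrative.

def step(temps, wind=1.0, dx=1.0, dt=0.1):
    """Advance a 1-D temperature field one time step.

    Applies dT/dt = -wind * dT/dx at each grid point. Python's
    negative indexing at i=0 gives a periodic (wraparound) boundary.
    """
    return [temps[i] - wind * dt / dx * (temps[i] - temps[i - 1])
            for i in range(len(temps))]

# "Initial conditions": assimilated temperatures at each grid point,
# with a warm bump in the middle.
field = [15.0, 15.0, 20.0, 15.0, 15.0]

# Step the model forward many times; the warm bump drifts downwind.
for _ in range(10):
    field = step(field)

print([round(t, 1) for t in field])
```

The output over many time steps is the forecast: the warm anomaly has moved downwind across the grid, just as the article describes weather evolving from its starting state.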
The two most well-known weather models are the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Weather Service's Global Forecast System (GFS) model. They are more commonly known as the European and the American models, respectively. Both are global models and can provide predictions anywhere in the world.
Then there are mesoscale (fine-scale) models, which home in on more specific regions and tend to forecast small weather features, such as thunderstorms, better than the global models. The two most popular U.S. mesoscale models are the North American Mesoscale Forecast System (NAM) and the High-Resolution Rapid Refresh (HRRR) model.
Why do these models predict different outcomes? Each model assimilates data differently and uses different equations. Additionally, each model handles weather processes that occur in between grid cells, such as turbulence and small-scale cloud growth, differently. If you put good initial data in, you are more likely to get a good forecast out.
The models also use different interpretations of the fundamental equations, and apply different assumptions, which can result in errors. Meteorologists still have more to learn about the physics of the atmosphere.
Geographic area is another factor. Unfortunately, meteorologists don't know what the weather is like over the entire planet: surface data is absent from areas such as oceans and large uninhabited regions, including rain forests and deserts. And computer models still struggle with various terrains, particularly mountains. In those cases, forecasters must estimate what isn't known, which can introduce errors.
Among the global models, the European model has long produced the most accurate forecasts in the world, on average. Famously, in 2012 it was the first to correctly predict that Hurricane Sandy would make a hard turn into the Northeast United States rather than head out to sea.
After Sandy, Congress appropriated money to the National Weather Service to improve the American model, which caught on to Sandy’s track later than the European. The Weather Service received additional funding to improve the American model following the 2017 Atlantic hurricane season.
But the European model is not the best model in every situation, and the American model has outperformed it in some significant cases. In the blizzard of January 2015, the European model predicted that New York City would get hit with two feet of snow. But the storm moved east of the city, as the American model predicted, and the city ended up getting around eight inches.
To get a sense of the uncertainty in a forecast, meteorologists are increasingly relying on what are known as model ensemble systems. These run many simulations of the same model, each with slightly tweaked initial conditions, to develop a family of alternative predictions. The spread among those predictions helps forecasters gauge the range of possible outcomes in a given situation.
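The ensemble idea can be sketched with a toy chaotic system. In this hypothetical example, the logistic map stands in for a real weather model (which it is not); the point is only the method: run the same model many times from slightly perturbed initial conditions and look at how far apart the answers land.

```python
# Minimal sketch of an ensemble: same model, slightly different starts.
# The logistic map is an illustrative stand-in for a weather model; it
# is chaotic, so tiny differences "in" grow into large differences "out".

import random

def model(x, steps=30, r=3.9):
    """A stand-in 'model': iterate the chaotic logistic map."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(42)  # reproducible perturbations
best_guess = 0.500  # our single best estimate of the starting state

# Build the ensemble by tweaking the initial conditions slightly.
ensemble = [model(best_guess + random.uniform(-0.001, 0.001))
            for _ in range(20)]

# A wide spread signals an uncertain forecast; a narrow one, confidence.
spread = max(ensemble) - min(ensemble)
print(f"ensemble spread after 30 steps: {spread:.3f}")
```

Even though every ensemble member started within a hair of the best guess, the outcomes fan out, which is exactly the range of possibilities forecasters read off a real ensemble.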
Each model has pros and cons. Smart forecasters look at the entire universe of models together, take their strengths and limitations into account when making predictions, and communicate uncertainty when models disagree. As computer technology and scientific knowledge improve, models will become more sophisticated, leading to better forecasts.
Samantha Durbin is a Capital Weather Gang intern, studying meteorology at the University of Maryland Baltimore County.