It’s tempting for meteorologists to gloat about how superior weather forecasts are in light of Tuesday’s outcome. And they are. Meteorologists can predict the next day’s weather with about 85 to 90 percent accuracy. A short-term weather forecast bust of the magnitude of Tuesday’s election is extraordinarily rare.
But while weather and political forecasting share some similarities, comparing them directly isn’t appropriate. Political forecasts are much more difficult to make because they involve people. Whereas the weather is governed by physical laws, human behavior follows no such hard and fast rules.
With all due respect, huge difference between predicting the wx and predicting behavior. The age of false equivalence needs to end. — Ian Livingston (@islivingston) November 9, 2016
When we make a weather forecast, we have thousands of observations of temperature, wind and moisture at different levels of the atmosphere that feed into our models. These are objective data points. Unlike political data, they’re not subject to distortion from current events.
“You know almost exactly what the state of the atmosphere is at time zero,” explained Ryan Maue, a meteorologist with WeatherBell Analytics. “Pollsters are starting with initial conditions, which already have a lot of error, and extrapolating that out.”
Consider also that the physics of the atmosphere simulated by models is more or less fixed, while ever-shifting demographic and socioeconomic trends increase the potential for the output of political models to be skewed.
“If the physics of the atmosphere changed every few years, what would that do to our [weather] models?” said Matt Lanza, an operational forecast meteorologist for the energy industry and occasional contributor at Nate Silver’s fivethirtyeight.com. “I don’t envy political pollsters for the job that they have. We deal with very different realms of prediction.”
Weather models have benefited from decades of forecast verification, which has steadily made them better, said Roger Pielke Jr., a professor of environmental studies at the University of Colorado who has published scholarly articles on prediction in science and policy.
“There are hundreds of millions of unique forecasts that have been made for weather that gives an enormous body of statistics that we can evaluate,” Pielke Jr. said. “These forecasts you can explore, quantify, and examine and get robust scientific knowledge on the predictions themselves.”
The same cannot be said for political predictions, because human voter dynamics are always changing.
Political predictions can even alter the outcome of an election they are attempting to forecast, something that can’t happen with weather predictions, Pielke Jr. noted. This adds another layer of complexity to such prognostications. “Your prediction of the weather isn’t going to change the weather,” he said. “A fair question to ask is: Did all of the predictions of a landslide Clinton victory lead to the depressed Democratic turnout compared to 2012?”
What weather and political forecasts do share are enormous challenges in characterizing and communicating uncertainty.
Forecasters in both disciplines use ensemble prediction methods to evaluate uncertainty. That is, they analyze a range of models with different inputs and assumptions to see how well they agree or disagree. If the model simulations are all very similar, that suggests confidence in their output. But if they diverge, that signals large uncertainty.
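The idea can be sketched with a toy ensemble. The numbers below are made up for illustration, not real model output; the point is only that the spread of the members, not just their average, carries the forecast’s confidence:

```python
import statistics

def ensemble_summary(members):
    """Summarize an ensemble of forecasts: the mean and the spread (std. dev.).

    A small spread means the members agree (higher confidence);
    a large spread means they diverge (large uncertainty).
    """
    return statistics.mean(members), statistics.stdev(members)

# Hypothetical next-day high-temperature forecasts (deg F) from ten model runs
tight = [71, 72, 72, 73, 71, 72, 73, 72, 71, 72]   # members agree
loose = [62, 75, 68, 80, 59, 73, 66, 78, 61, 70]   # members diverge

print(ensemble_summary(tight))  # nearly identical mean, but small spread
print(ensemble_summary(loose))  # similar mean, much larger spread
```

Note that both ensembles can report a similar mean; it is the spread that distinguishes a confident forecast from an uncertain one — which is also why agreement alone, as the next paragraph notes, can mislead.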
A dangerous trap for forecasters relying on ensemble prediction is accepting a result because all the models agree, when all of them are wrong in the same way. In both weather forecasting and political forecasting, a tiny error in modeling — largely opaque to forecasters — can prove hugely consequential. This is essentially what happened in Tuesday’s election.
“Many folks have been calling these scenarios black swans,” said WeatherBell Analytics’ Maue. “You have a false sense of confidence that you’re on the right track.”
Short-term weather forecasting has reached the point where such “black swan” scenarios are mostly a thing of the past. (Note, however, that meteorologists still have difficulty forecasting and communicating the range of effects along the edge of storms, and there have been notable misses recently. And, at longer range, we see cases where model ensembles are wrong.)
But political forecasting, a less mature and arguably more complex discipline, has just proved that a horribly bad prediction is possible within hours of the outcome. Political scientists will surely study Tuesday’s election for years to learn how to avoid a repeat in the future.