I kept a spreadsheet for three cities that compared AccuWeather’s 45-day forecast to the actual observations for each day during that time period, and the results were unsurprising: their forecasts were, by and large, off the mark. Most of the time they weren’t even “somewhat accurate” according to my criteria.
The discipline of meteorology is far from an exact science. For the most part, the field is about the art of understanding a fluid atmosphere and using its past movement to predict what it will do in the future. Predicting the future is surprisingly hard, regardless of how easy Miss Cleo made it look.
Weather models do a relatively good job at helping meteorologists predict what the weather will do three to five days in advance, but anything beyond seven days is stretching the limits of accuracy and science. As CWG meteorologist Dr. Steve Tracton was quoted in Jason’s post back in August, AccuWeather’s hyper-extended forecasts “[undermine] the credibility of the science of meteorology. There cannot be skill at those ranges – it goes back to chaos theory.”
AccuWeather, for its part, argued in an August 7, 2013 blog post that the long-range forecasts aren’t intended as strict guides, but rather as a reference point to judge if the weather will exhibit trends of warmth, coolness, or if there will be an extended period of rainy weather. However, their Web site conveys the exact opposite intention by publishing very specific forecasts. For example, AccuWeather’s forecast for Washington, D.C. on November 8 (41 days out as of September 30, 2013) is “Cloudy; a shower or thunderstorm in spots in the evening, then late-night showers and thunderstorms.”
That’s not a trend, that’s a guess masquerading as a scientific forecast.
Since AccuWeather appears to be presenting these “trends” as actual forecasts, I decided I’d put them to the test.
Evaluation of AccuWeather 45-day forecasts: method
When I took weather forecasting courses while completing my meteorology minor at the University of South Alabama in Mobile, the meteorology department used a grading rubric to grade how well (or poorly) your forecasts performed when compared to the actual observations in the city for which you developed a forecast.
I adopted a very loose version of the meteorology department’s grading rubric to judge AccuWeather’s hyper-extended forecasts. It’s rather simple:
- AccuWeather gets 1 point for every 1 degree Fahrenheit they’re off on the high and low temperatures. If they predict a high of 70, and the actual high is 68 (or 72), they get 2 points.
- AccuWeather gets 1 point for every 10 percentage points they’re off on the rain chances, treating a day with rain as 100% and a dry day as 0%. If they predict a 20% chance of rain and it actually rains, they’re off by 80 percentage points and get 8 points. Conversely, if they predict a 70% chance of rain and it stays dry, they get 7 points.
I chose these two variables (temperature and rain chances) because that’s usually what most people care about when they check the weather. A score of 0 points means that AccuWeather got the high temp, low temp, and rain chances spot on. The more points they get on a certain day, the more inaccurate their forecast was.
By my criteria, a score of 15 indicates an inaccurate forecast, 25 an extremely inaccurate one, and 40 an embarrassingly inaccurate one. A score between 5 and 15 falls into a gray area between somewhat accurate and inaccurate.
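For readers who want to try this at home, the rubric above can be sketched as a short Python function. The function name and inputs are my own invention for illustration; AccuWeather publishes nothing of the sort.

```python
def score_forecast(pred_high, pred_low, pred_rain_pct,
                   actual_high, actual_low, rained):
    """Return the error score for one day's forecast (0 = spot on)."""
    # 1 point for every degree Fahrenheit the high and low temps are off.
    points = abs(pred_high - actual_high) + abs(pred_low - actual_low)
    # Treat a rainy day as a 100% chance and a dry day as 0%, then
    # score 1 point for every 10 percentage points the forecast missed by.
    actual_rain_pct = 100 if rained else 0
    points += abs(pred_rain_pct - actual_rain_pct) // 10
    return points

# The examples from the rubric (temperatures here are made up):
print(score_forecast(70, 55, 0, 68, 55, False))   # high off by 2 -> 2 points
print(score_forecast(85, 70, 20, 85, 70, True))   # 20% chance, it rained -> 8 points
print(score_forecast(85, 70, 70, 85, 70, False))  # 70% chance, stayed dry -> 7 points
```

Run over each day of the 45-day period, scores like these are what the accuracy labels above are applied to.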
Starting on August 7, I recorded their 45-day forecast for three cities in the United States, covering the period through September 20. The three cities I chose were Mobile, Alabama (KMOB); Denver, Colorado (KDEN); and San Francisco, California (KSFO). I chose Mobile and Denver because both cities have notoriously fickle weather and can present a challenge to forecasters. The third city, San Francisco, was chosen because of its reputation for extremely stable and predictable weather most of the year.
It’s worth noting that this was not by any means a scientific study, but rather a direct observation of AccuWeather’s forecasts compared to the actual weather recorded for each day. The project was done for illustrative purposes. In the cases I reviewed, AccuWeather’s long-term forecasts were not only less than accurate, but also regularly failed to capture general trends in weather.
Evaluation of AccuWeather 45-day forecasts: results
Starting with Mobile, Alabama, the forecasts were generally less than “somewhat accurate” – but not at the “embarrassing” level – throughout the 45-day period.
Showers and thunderstorms on the northern Gulf Coast are almost a daily occurrence from May until October. The observations at Mobile Regional Airport reflected the hit-or-miss quality of the storms in the area, recording at least a trace of rain on 21 of the 45 days in the period. AccuWeather did what most forecasters would do by playing it safe on rain chances, usually predicting a 40-60% chance of rain each day through the period. Given that it rained on nearly half of the 45 days in the forecast period, they were in the ballpark about half of the time.
They missed the mark on the temperatures, though. Towards the end of August and through the middle of September, Mobile experienced a warm spell. Even as temperatures topped out above average each day during the first half of September, AccuWeather predicted the exact opposite.
The extremely unusual weather in the Mile High City over the period wreaked havoc with AccuWeather’s 45-day forecast. The deluge experienced in eastern Colorado in the middle of September ran completely against AccuWeather’s forecast of a dry spell during the same time frame. The forecasters were so confident in Denver not seeing a drop of rain that they issued a 0% or 1% chance of rain almost the entire week that the region saw historic flooding.
Denver experienced a heat wave for a week and a half before the “biblical” rains struck, again almost diametrically opposing AccuWeather’s forecast of a general cool spell followed by average temperatures.
Also, the company’s forecast of 73 degrees on August 20 was blown away by the actual high of 99 degrees, a 26-degree difference. Similarly, they predicted 72 degrees on September 3, falling 22 degrees short of the actual high of 94.
Overall, after the first week of the 45-day period, the forecast performance hovered in the extremely inaccurate range and, at times, reached embarrassing levels.
San Francisco is famous for having mild weather all year with little variation from climatological means. The city saw virtually no rain during the period, which AccuWeather predicted fairly well, getting it wrong only on the two days it actually rained (it had predicted just a 4% and a 2% chance of rain on those days, respectively).
The temperature forecasts were mixed.
They were generally in the ballpark when it came to high temperatures, predicting trends relatively well when one excludes the abnormally toasty high of 88 degrees on September 7. The low temperature predictions, on the other hand, were mostly wrong. The company predicted below-average temperatures through almost the entire period, which was the opposite of what actually occurred.
Evaluation of AccuWeather 45-day forecasts: conclusions
Overall, AccuWeather’s 45-day forecasts were inconsistent: occasionally right, often in the gray area between somewhat right and wrong, and occasionally spectacularly wrong. They missed key trends as often as they picked up on them.
In my limited sample, the forecasts did not get worse with time but suffered similar deficiencies whether it was day 5 or day 45. The company might want to spin this as evidence that their 45-day forecasts somewhat resemble science, but, in my view, it’s poor forecasting consistently performing poorly.
AccuWeather is a for-profit company and they have every right to pass off less-than-accurate forecasts as they wish, but the public deserves to know that these 45-day forecasts are not rooted in any science currently available to meteorologists and have not demonstrated value. Caveat emptor.