Now that we’re into meteorological fall, I thought it timely to review why the National Weather Service prefers the meteorological seasons to the astronomical ones.
Growing up in the 1950s, I don’t recall the “meteorological seasons” being mentioned at all, so I had thought that the concept was instituted by the National Climatic Data Center (NCDC) during the last few decades. However, that’s not exactly the case.
Derek Arndt, of the National Climatic Data Center (NCDC), tells me that usage of the meteorological seasons “has been around since the early-to-mid 20th century, when it really took root in the applied weather and climate communities.” Arndt explains that although some NCDC products (the weekly drought monitor, for example) are issued with greater frequency, “dealing with whole-month chunks of data rather than fractions of months was more economical and made more sense – and still does, in many ways. We organize our lives more around months than astronomical seasons, so our information follows suit.”
Today, although the public may question the practice, the meteorological seasons are quite useful to meteorologists and climatologists when comparing individual seasons to one another.
Before the meteorological seasons came into use, it had always been difficult for the National Weather Service to make exact seasonal comparisons. This is because the traditional astronomical seasons begin on varying dates during the third week of March, June, September, and December.
But some time ago, meteorologists decided that, with the civil calendar in mind, they would define the seasons in a much simpler way: spring is March, April, and May; summer is June, July, and August; fall is September, October, and November; and winter is December, January, and February. In other words, except in leap years, when winter gains an extra day, the meteorological seasons always have 90 days of winter, 92 days of spring, 92 days of summer, and 91 days of fall. Under the astronomical definitions, these numbers can vary from 89 to 93 days, depending on the year.
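For readers who like to see where those day counts come from, here’s a minimal Python sketch that tallies them for a normal year and a leap year. The `SEASONS` groupings and the `season_length` helper are just my own illustrative names, not anything from the NWS or NCDC:

```python
import calendar

# Month groupings for the Northern Hemisphere meteorological seasons.
SEASONS = {
    "winter": (12, 1, 2),   # December of the *prior* year, then Jan, Feb
    "spring": (3, 4, 5),
    "summer": (6, 7, 8),
    "fall":   (9, 10, 11),
}

def season_length(season, year):
    """Days in a meteorological season; `year` is the year the season
    ends in, so the 2010-11 winter is season_length("winter", 2011)."""
    total = 0
    for month in SEASONS[season]:
        # Winter's December belongs to the previous calendar year.
        y = year - 1 if season == "winter" and month == 12 else year
        total += calendar.monthrange(y, month)[1]  # days in that month
    return total

for season in SEASONS:
    print(season, season_length(season, 2011), season_length(season, 2012))
# winter: 90 vs. 91 (2012 is a leap year); spring: 92; summer: 92; fall: 91
```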
As we’ve talked about before, beyond its statistical value, the other major benefit of the meteorological definition is that “it portrays a more accurate reflection of the seasons, since the 90 coldest and 90 hottest days of the year usually, but not always, fall closer to the meteorological seasons than the astronomical ones.”
So how do seasonal temperature averages really compare when calculated both astronomically and meteorologically? There can be a significant difference.
I calculated the differences for the winter of 2010-2011. As you can see in that year’s post, all of the local reporting stations showed below-average temperatures for the (meteorological) winter as a whole: December and January ran a few degrees below average, more than offsetting an above-average February.
However, when the 2010-11 winter is calculated astronomically, the tables are turned, with the season as a whole averaging somewhat above normal.
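For anyone curious how such a comparison works mechanically, here’s a hedged Python sketch. The two date windows are the actual 2010-11 boundaries (the astronomical winter runs from the December 21, 2010 solstice through the day before the March 20, 2011 equinox), but `obs`, the mapping of dates to daily temperatures, is a purely hypothetical stand-in for a real station record:

```python
from datetime import date, timedelta

def mean_temp(daily_temps, start, end):
    """Average daily temperatures over an inclusive date window.
    daily_temps maps datetime.date -> temperature (deg F)."""
    n_days = (end - start).days + 1
    temps = [daily_temps[start + timedelta(days=i)] for i in range(n_days)]
    return sum(temps) / len(temps)

met_window = (date(2010, 12, 1), date(2011, 2, 28))   # meteorological winter
ast_window = (date(2010, 12, 21), date(2011, 3, 19))  # astronomical winter:
                                                      # solstice to the day
                                                      # before the equinox

# Made-up placeholder data (a flat 30 deg F), just to show the calls run;
# a real comparison would load actual daily observations here.
obs = {date(2010, 12, 1) + timedelta(days=i): 30.0 for i in range(120)}
print(mean_temp(obs, *met_window))
print(mean_temp(obs, *ast_window))
```

With real observations, the two calls can differ noticeably, because the astronomical window drops the first three weeks of December and picks up the first three weeks of March instead.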
In case you thought that “astronomical” and “meteorological” were the only seasonal definitions used worldwide, think again. As discussed in the earlier post, many other systems exist elsewhere, including the “traditional reckoning” system, in which solar insolation determines each season, and the Celtic calendar system.
(For the purists, is there an error in this NCDC diagram above? The first day of spring in the Northern Hemisphere is sometimes March 20, as it was this year, and yet the diagram overlooks that possibility.)