“The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t” by Nate Silver
By John Allen Paulos
Nate Silver is best known as a statistician and election analyst (psephologist) who correctly predicted the winner in 49 of the 50 states during the 2008 presidential race and called all 50 states correctly this past week. He quietly persevered in his election analyses despite a torrent of criticism and invective from a variety of commentators who called the race a tossup or even insisted that Mitt Romney would win handily.
Notwithstanding his track record, however, his book “The Signal and the Noise” is a much more general tome about predictions good, bad and ugly, whose basic outline is straightforward. In the first half, he examines predictions by experts in the fields of finance, baseball, politics, health, weather and the economy. In the second half, he discusses ways in which these predictions might be improved and how they might help clarify issues such as global warming, terrorism and market bubbles.
The strength of the book lies in the abundance of relevant detail Silver provides about each field and his analysis of why predictions are generally much better in some fields than others. He interviews a wide variety of knowledgeable people; an especially prominent source is psychology professor Philip Tetlock, who has amassed considerable evidence that most prognostication by professors, journalists and government officials is close to worthless, a conclusion with which Silver clearly agrees (as do I).
Even in the political domain, however, there are some bright spots. Silver and Tetlock mention essayist Isaiah Berlin’s reference to a poem by the ancient Greek poet Archilochus, who wrote that “the fox knows many things, but the hedgehog knows one big thing.” The predictions of pundits who are more fox-like, who stick to little facts, telling observations and small-bore issues, are usually somewhat more accurate than those of pundits whose approach is more like that of a hedgehog. The latter tend to try to fit everything they hear into the same tidy, overarching narrative, a tendency Silver notes is particularly prevalent on pundit-laden shows such as “The McLaughlin Group.” Silver’s book and his deservedly popular and impressively accurate FiveThirtyEight.com election blog for the New York Times reveal him to be a confirmed fox.
The ideological approach is related to other characteristics of poor predictions: ignorance of probability and, especially, overconfidence. In fact, in all the fields discussed, overconfidence is associated with underperformance and, I would add, with another common personality type: the extreme hedgehog, otherwise known as the hot dog.
Not surprisingly, data-rich fields lend themselves to better predictions. Baseball is one, as Silver well knows. His PECOTA system, which analyzed and predicted the career development of major league players, proved quite successful. In discussing statistician Bill James, general manager Billy Beane and others, he states that their successes were possible in large part because they analyzed reams of data, supplemented them with scouting reports, and were not loath to revise their criteria when they found some that worked better and were more predictive.
Weather predictions, including hurricane trajectories and flood levels, have also improved because of the data available, a more impressive feat because, unlike baseball pundits, weathermen must contend with dynamic, rapidly changing systems. Silver tells the well-known story of meteorologist Edward Lorenz, whose 1960 toy model of the atmosphere behaved very strangely. Having accidentally inserted a number into his model that differed by a minuscule amount from the one previously entered, Lorenz discovered that the two resulting predictions began diverging immediately and, after a brief time, bore almost no discernible relation to each other. Even today, when short-term weather predictions have improved markedly because of better models and more powerful computers, this sensitive dependence on initial conditions — the proverbial butterfly flapping its wings in Brazil leading to a tornado a month later in Texas — makes longer-term forecasts not much better than chance.
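Lorenz’s discovery is easy to reproduce. Here is a sketch of my own (not an example from the book) using the chaotic logistic map — a system far simpler than Lorenz’s atmospheric model, but one exhibiting the same sensitive dependence on initial conditions:

```python
# My illustration, not Lorenz's model: the logistic map
# x_{n+1} = 4 * x * (1 - x) is chaotic and shows the same
# sensitive dependence on initial conditions Lorenz found.

def logistic_orbit(x0, steps):
    """Iterate the logistic map from x0 and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-9, 50)  # differs in the ninth decimal place

# For the first few steps the orbits agree almost perfectly;
# by step 50 the tiny discrepancy has grown until the two
# trajectories bear no discernible relation to each other.
print(abs(a[5] - b[5]))    # still minuscule
print(abs(a[50] - b[50]))  # comparable to the values themselves
```

The perturbation roughly doubles at each step, so a billionth-sized error swamps the forecast within a few dozen iterations — the arithmetic behind the butterfly effect.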
Sprinkled through the book are anecdotes and asides that leaven the sometimes overly earnest narrative. There is, for example, the “wet bias” of many weathermen who forecast a higher probability of rain than is warranted because most people prefer expected rain that doesn’t materialize to an expected sunny day that is marred by rain. Silver also suggests that Derek Jeter appears to be a better defensive shortstop than Ozzie Smith only because Jeter’s fielding and diving range is less than that of Smith, who makes the same plays more effortlessly.
Keen to emphasize the gradual improvements in forecasting that have occurred, Silver links these incremental changes to 18th-century English minister Thomas Bayes’s theorem, a result in probability theory that allows one to continually update the probability of some event when new information becomes available. The theorem is extremely useful, but it is controversial in areas where the initial estimate of probability can vary considerably. Oddly, given the prominent role that Bayes’s theorem plays in the book, Silver doesn’t really elucidate it but simply gives an algebraic formula for it. He shies away as well from developing other germane notions, such as expected value, statistical independence and simulation. This contrasts with the deeper analysis he provides on the mortgage crisis, epidemics, the economy and even poker.
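For readers wondering what the formula actually does, here is a minimal sketch of Bayesian updating — my own illustration with invented numbers, not an example from Silver’s book:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(hypothesis | evidence) via Bayes's theorem."""
    numer = prior * p_evidence_if_true
    denom = numer + (1.0 - prior) * p_evidence_if_false
    return numer / denom

# Start at 50/50 on some hypothesis, then observe evidence that is
# three times likelier if the hypothesis is true (0.75 vs. 0.25).
p = 0.5
p = bayes_update(p, 0.75, 0.25)  # posterior rises to 0.75
p = bayes_update(p, 0.75, 0.25)  # a second such observation: 0.9
print(p)
```

Each new observation nudges the probability rather than settling the question — which is exactly the incremental, self-correcting spirit of forecasting that Silver champions.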
Helping to unify the book are the similarities that Silver points out between forecasts in seemingly disparate fields. Earthquakes and terrorist attacks are a good example. Although predicting individual earthquakes is still impossible, the fact that the intensity of quakes follows what’s called a power law does enable us to say something significant: Quakes of bigger and bigger magnitude occur less and less frequently in a quite regular sort of way, so we can gauge how often to expect titanic quakes based on the steadily decreasing frequency of larger quakes. Computer scientist Aaron Clauset pointed out to Silver that a similar conclusion holds for terrorist attacks. Since the magnitude of terrorist attacks also follows a power law, we can predict roughly how often to expect mammothly destructive terrorist attacks based on the quite regularly decreasing frequency of bigger and bigger attacks. It follows that the probability of such a catastrophic attack is extremely small, but, alas, not zero.
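The extrapolation in both cases rests on simple arithmetic. A sketch with invented numbers (my own illustration; the actual rates come from historical data, not from this toy calculation):

```python
# Toy power-law extrapolation in the Gutenberg-Richter spirit:
# suppose each extra unit of magnitude makes quakes about ten
# times rarer, and a region averages 100 magnitude-5 quakes a
# year. The regularity lets us estimate the rate of quakes far
# larger than any in the recent record. All numbers are invented.

def expected_per_year(rate_at_m5, magnitude, factor=10.0):
    """Expected annual count at a given magnitude, assuming the
    count falls by `factor` for each unit of magnitude above 5."""
    return rate_at_m5 / factor ** (magnitude - 5)

print(expected_per_year(100, 7))  # about one per year
print(expected_per_year(100, 9))  # about one per century
```

The same logic, applied to the sizes of terrorist attacks, is what lets Clauset and Silver put a small but nonzero probability on a catastrophic one.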
All in all, “The Signal and the Noise” provides an appealing and instructive compendium of the state of predictability in different domains, and, based on my personal sample of one, I predict readers in all 50 states will agree.
John Allen Paulos is a professor of mathematics at Temple University and the author of “Innumeracy,” “A Mathematician Reads the Newspaper” and, most recently, “Irreligion.”
THE SIGNAL AND THE NOISE Why So Many Predictions Fail — But Some Don’t By Nate Silver Penguin Press. 534 pp. $27.95