Since the Redskins moved to Washington in 1937, there have been 19 presidential elections. And over time, people began to notice a funny, and surprisingly strong, pattern. If the Redskins won their last home game before the election, the incumbent party wins the presidential election.
This "rule" held in 17 of the 19 elections -- a better record than almost any other ongoing political indicator can claim. So there you have it. The data show the Redskins can predict presidential elections.
But of course they can't. The "Redskins rule" isn't a genius method of predicting elections. It's a parable about the dangers of believing data when no compelling theory or story explains them. Matt Yglesias puts it well:
The entire Reinhart-Rogoff fiasco reminded me of my post from last year on how empirical evidence is overrated at times by a chart-happy blogosphere. Reading Nate Silver's book gave me a better way of putting this. In his chapter on climate change, he makes the point that one reason climate change skepticism is so tenacious is that the statistical data about climate patterns really is a bit on the noisy and ambiguous side. The reason you can know that the skeptics are wrong isn't so much because the data is so overwhelmingly persuasive, it's that the data is overwhelmingly persuasive in light of the underlying science of how greenhouse gas emissions would cause climate change. Absent the causal theory about the greenhouse effect, simply looking at a chart of world temperatures and the correlation with CO2 emissions wouldn't prove very much. The empirical data is important because it's in line with the predictions of a persuasive theoretical account.
The R&R situation is basically the opposite. In the absence of a plausible account of why a high debt:GDP ratio would cause slow real growth even in the absence of high interest rates, you would want to see overwhelming empirical evidence for the existence of such an effect before you believed it. And they just didn't have the goods.
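One way to see why a theory-free 17-of-19 record is less impressive than it sounds is to ask how often chance alone would produce one. The sketch below uses a simplified null model in which each "match" is a fair coin flip, and the figure of 1,000 candidate indicators is purely an assumption for illustration (sports outcomes, hemlines, Halloween mask sales, and so on):

```python
from math import comb

# Probability that one coin-flip "indicator" matches the election
# outcome in at least 17 of 19 trials, assuming each match is an
# independent 50/50 event (a simplifying assumption).
n, k = 19, 17
p_single = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(>=17/19 matches for one indicator): {p_single:.5f}")  # ~0.00036

# Any single indicator is very unlikely to do that well by luck.
# But scan enough candidate indicators and the chance that *some*
# indicator compiles a 17-of-19 record grows quickly.
candidates = 1000  # hypothetical number of indicators people might check
p_any = 1 - (1 - p_single) ** candidates
print(f"P(some 'rule' among {candidates} candidates): {p_any:.2f}")  # ~0.31
```

Under those assumptions, any one indicator beats 17-of-19 by luck only about 0.04 percent of the time, yet with a thousand candidates there's roughly a 3-in-10 chance that at least one such "rule" emerges by accident. That's why the empirical record alone, without a causal story, settles so little.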
In a world where no one is forced to present real data, good writers and debaters can make all manner of theories sound persuasive. Wonkblog tries to push against this tendency by testing a lot of rhetorical arguments against available data.
But the problem can also run in reverse: The trappings of data and charts can be used to make bad arguments sound persuasive. The fact that an argument comes with numbers, or with a graph, doesn't mean it's true. Or, to put it another way, permit me to turn the mic over to XKCD:
Update: 19 elections, not 21. Theories are good, data are good, and so is counting correctly.