Earlier this week, I reported that the Romney campaign’s internal polls actually suggested that they would lose.  But there were some within the campaign who clung to hope based on poor predictors like how the Romney rallies “felt.”  (Journalists were not immune from this either.)  The problem wasn’t bad data so much as bad thinking.

Political scientists Ryan Enos and Eitan Hersh have found that this is a chronic problem.  Enos sent me the graph above and the following explanation:

Eitan and I surveyed almost 4,000 campaign workers from Democratic campaigns, from the Obama campaign all the way down to local races, with Congressional, gubernatorial, and state house races in between.  One question we asked them was about their prediction of the final vote outcome in their race.
The results from 127 down-ballot races are included in the attached graphic.  This graph plots the actual vote share for the Democratic candidate (horizontal axis) against the vote share predicted by the campaign workers (vertical axis).   The blue dots are non-incumbent campaigns and the red dots are the incumbent campaigns.  Dots in the top right quadrant represent campaigns that predicted a win and won; the top left are campaigns that predicted a win and lost; bottom left are campaigns that predicted a loss and lost; and bottom right are campaigns that predicted a loss and won (no campaigns we surveyed actually did this).  A dot on the 45 degree line means a perfect prediction.  Dots above the line are over-confident.  Dots below the line are under-confident.
Two things stand out from this graphic.  First, campaigns in general are over-confident: most of the dots are above the line.  Second, challenger campaigns are especially over-confident, with 93% over-predicting their final outcome and most predicting a win when they eventually lost.

And note that this was an anonymous survey.  The campaign workers had no reason to say what they thought the campaign would want to hear. You can read more in Enos and Hersh’s paper (pdf), including about the factors that make these estimates more or less accurate.

Here’s why I think this finding is important.  Typically, I discount what campaigns say about their chances of winning because I assume that (a) they will selectively release polls to manipulate commentators and reporters or (b) they’ll lie if the data don’t look good.

What the Romney example and Enos and Hersh’s research show is that campaigns may simply fail to make accurate judgments, even when they have good data and information at their fingertips.