I am about to commit an act of blasphemy, for which I hope I can gain your eventual forgiveness.
Here we go: Analysis of who won or lost a debate is often disconnected from how the evaluated candidates actually fare in the polls.
Before you bother tweeting me "lol of course omg u r dum" or "you are wrong and also u r dum," I will note that I have numbers to back me up. And then I will quickly note that the numbers are diaphanous, because this is a very hard thing to judge.
Think about it. Our Chris Cillizza does the yeoman's work of picking winners and losers for the debates, as he did with Thursday night's. So let's say he said Marco Rubio won (which is what he said). What do we expect to happen? Clearly, if Rubio had a good debate, we figure his campaign would get a boost. And therefore, that his poll numbers should rise.
Now, the best means of tracking poll numbers is to look at polling averages over time. If Rubio won, we should see his poll numbers increase, right? Except polls, of course, reflect a lot of things involved in a campaign: TV ads, mail, word-of-mouth, whims. It's essentially impossible to extract how much of a polling rise is due to any one factor. Polling averages, which are generally a better indicator than individual polls because they aggregate findings, only update when there's a new poll. So we need to wait for a poll to be conducted, from Friday on, and then be released, and then worked into the average.
And that's depending on how we define "won"! Is the "winner" of a debate the one who rose the most in the polls at some to-be-determined distance from the debate? Is a winner anyone who saw any increase? You tell me.
You can see why this is a tricky thing to assess. Particularly because Cillizza, bless him, calls the winners "winners." Most pundits prefer the more weaselly "had a good debate."
This is what's known in the business as "throat-clearing," my effort to outline all of the gray areas that surround my argument. The argument, again, being that pundits are often wrong -- or at least can never be proven right. Which is to their advantage.
I went back and looked at 2012 Republican primary debates and compared Real Clear Politics polling averages day-of to 14 days later (enough time to field a poll and see its results in an average). Then, I looked at what Cillizza and other pundits said about the winners -- relying heavily on the useful aggregations of punditry that Taegan Goddard has compiled for years. (I used debates up until the January 7, 2012 debate in which at least seven candidates participated.) If the pundit winner matched the person who saw the biggest gains in the polls, that was counted as a correct call.
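The matching rule described above is simple enough to sketch out. Here's a rough version in Python; the candidate names and polling figures below are made-up placeholders, not the actual 2012 averages, and the helper names are my own invention.

```python
# Sketch of the "correct call" standard: a pundit pick counts only if it
# matches the single biggest gainer in the polling average over 14 days.
# All names and numbers here are hypothetical.

def poll_changes(day_of, day_14):
    """Point change for each candidate between the two polling snapshots."""
    return {name: day_14[name] - day_of[name] for name in day_of}

def pundit_called_it(pundit_picks, day_of, day_14):
    """Return True if any pundit pick matches the biggest polling gainer."""
    changes = poll_changes(day_of, day_14)
    biggest_gainer = max(changes, key=changes.get)
    return biggest_gainer in pundit_picks

# Hypothetical snapshots, day-of vs. 14 days later:
day_of = {"Candidate A": 24.0, "Candidate B": 12.0, "Candidate C": 5.0}
day_14 = {"Candidate A": 25.0, "Candidate B": 17.0, "Candidate C": 6.0}

print(pundit_called_it({"Candidate A"}, day_of, day_14))  # A gained 1, B gained 5 -> False
```

Under this strict standard, a pundit who picked the front-runner gets no credit if a lower-polling candidate posted the bigger jump.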
Giving me this graph.
But but but. That includes instances in which pundits (Cillizza among them) picked multiple winners -- the non-winners were counted as misses. Only fair.
Let's say, though, that you want to be more generous in your assessment of what counts as a win. If a "winner" needs only to gain in the polls, even if it's not the biggest gain, the numbers shift.
For an example of how this works, let's look at the Sept. 22, 2011 debate, in which nine candidates participated. By our most objective standard, the winner was Herman Cain, who saw a 9-point jump in the polls over the next two weeks. Precisely zero pundits said he was the winner of the debate. Below, a comparison of how many pundits picked each candidate and how each candidate did in the polls over two weeks.
Romney, the most-picked winner, came in third in polling gains.
Sure, you say, but wasn't Romney already winning? Maybe he won but didn't have much more room to grow.
Maybe. But as a percentage of existing support, Romney was fourth, with Cain more than doubling his support and Gingrich shooting up 36 percent. Even poor Jon Huntsman gained more support as a percentage than Romney.
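That "room to grow" comparison is just arithmetic: the point gain expressed as a share of the candidate's existing support. A quick sketch, using hypothetical baselines rather than the actual 2011 averages:

```python
# Gain as a percentage of existing support. The starting averages here
# are made up for illustration, not the real 2011 polling numbers.

def pct_gain(before, after):
    """Point gain expressed as a percentage of the starting average."""
    return (after - before) / before * 100

# A front-runner adding 3 points to a 24-point average grows 12.5 percent;
# a low-polling candidate adding 9 points to a 7-point average more than doubles.
print(round(pct_gain(24.0, 27.0), 1))  # 12.5
print(round(pct_gain(7.0, 16.0), 1))   # 128.6
```

Which is why, on this measure, a modest jump for a long-shot can outrank a bigger absolute gain for the leader.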
So what does "winning" mean, then? It means what we all really know it means: We think that person did pretty well. It does not mean that the majority of voters -- or the plurality of voters -- or any voters -- agree.