Sasha Issenberg is a reporter for Slate and the author of The Victory Lab: The Secret Science of Winning Campaigns, which is about the rise of political and social science and statistical modeling within campaigns. We talked on the phone Friday; a lightly edited (for clarity and space) transcript follows.
Dylan Matthews: Your book is about a revolution in the way campaigns work that took place over the last decade. How did that get started?
Sasha Issenberg: Two political scientists at Yale, Donald Green and Alan Gerber, went out and did a field experiment, which was a big deal at the time because political science lagged behind other social sciences in using field experiments to measure cause and effect in elections.
The first experiment was that they created a local GOTV [get out the vote] drive in New Haven and had voters get a reminder from a postcard, a canvasser, or paid callers, and then had a control group, who got nothing. They found that the phone-call group showed no increase in voting, mail produced a small increase, and in-person contact produced a big boost. It was hard to get the paper published because it made no theoretical contributions to debates going on in the discipline. It was almost embarrassingly practical. But it was the first time that anyone had developed a method for assessing the effectiveness of anything the campaigns spent money on.
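The design described above can be sketched in a few lines of code. The effect sizes and turnout rates below are invented for illustration; they only mirror the qualitative ordering in the experiment (canvassing beats mail, which beats phone calls), not Gerber and Green's actual numbers.

```python
# Schematic of a randomized GOTV field experiment: voters are randomly
# assigned to a treatment or a control group, and turnout is compared.
# All rates and lifts here are made up for illustration.
import random

random.seed(0)

def run_experiment(n_per_group, lift_by_treatment, base_rate=0.40):
    """Simulate turnout for each randomly assigned group."""
    turnout = {}
    for treatment, lift in lift_by_treatment.items():
        votes = sum(random.random() < base_rate + lift
                    for _ in range(n_per_group))
        turnout[treatment] = votes / n_per_group
    return turnout

# Assumed lifts reflecting the finding: in-person > mail > phone (no effect).
rates = run_experiment(
    n_per_group=10_000,
    lift_by_treatment={"control": 0.0, "phone": 0.0,
                       "mail": 0.01, "canvass": 0.08},
)
for treatment, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{treatment}: {rate:.1%} turnout")
```

Because assignment is random, any difference between a treatment group and the control group can be attributed to the contact itself, which is what made this design so useful to campaigns.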
The campaigns went out and did a bunch more of these comparative effectiveness studies, as opposed to mass media, where it's really hard to isolate voters and implement controls. When you're measuring turnout and registration rates, it's very easy to select some people to get your mail. In that case, the dependent variable is whether they voted or registered, which is an easy thing to track. In the last few years the work has moved a lot toward a behavioral-psychology-informed bent, trying to demonstrate in the field things that have been demonstrated in lab experiments about how to change motivations around voting.
In-person doorstep contact is more effective at mobilizing voters than phone calls. Volunteer phone calls are better than paid phone calls. Voters are able to sense the difference. We know that what people in politics now call "chatty scripts," where the caller or canvasser is encouraged to have an open-ended back and forth, are much more effective than robotic scripts.
Now there's been a whole body of work on which types of language are better at getting people to vote. Almost all of them have to do with changing the dynamic around voting. The best messages often don't have much to do with the candidates or issues but with mobilizing voting and getting people excited for the election. Referring to people as voters has been shown to increase somebody's likelihood of voting. We have a whole sort of body of research about the contact and the quality of contact that we didn't 15 years ago.
DM: You argued in Slate the other day that Democrats have a big advantage when it comes to using this research in the field. How did that happen? Some of the most important research initially was done with Rick Perry in 2006, who's no lefty.
SI: I think the biggest gains in the beginning were coming out of commercial marketing, and came from people in politics looking at the corporate world and saying, "They do a good job of identifying and tracking their consumers. What can we learn from how they collect and manage that data?" The biggest gains there were realized in politics by 2006. People figured out how to link up their databases with big databases from credit report agencies [and] direct mail marketers. They learned statistical modeling techniques that came up with prediction techniques for voters, much like how credit scores predict individual borrower behavior.
The biggest gains since have come into politics from the social sciences. Initially they were just measuring turnout and mobilization tactics, but the methods are increasingly being used to measure the effectiveness of persuasion messages, and to empirically challenge the conventional wisdom about what it means to be persuadable. We have good reason to believe that the voters our shorthand has long pointed to, the people who say they're undecided or show up in the center of an ideological or partisan spectrum, are not necessarily the most likely to move.
What randomized experiments allow you to do is distribute messages in the real world and then see who moves in response to them. What statisticians call "heterogeneous effects modeling" allows people in the most advanced campaigns to identify which voters have actually moved in response to a particular message. This is something the Obama campaign has done this year, and it's way outside the area of traditional polling.
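The core idea of a heterogeneous-effects analysis is to break the experimental sample into subgroups and estimate the treatment effect within each one, rather than reporting a single average. Here is a minimal sketch with invented data and invented subgroup labels; real campaign models fit this with many covariates at once.

```python
# Toy heterogeneous-effects analysis: after a randomized message test,
# compare treated vs. control outcomes within each voter subgroup to see
# which kinds of voters actually moved. Records are invented.
from collections import defaultdict

# (subgroup, was treated?, supported the candidate afterward? 1/0)
records = [
    ("undecided", True, 1), ("undecided", False, 1),      # no movement
    ("undecided", True, 0), ("undecided", False, 0),
    ("weak_partisan", True, 1), ("weak_partisan", False, 0),  # moved
    ("weak_partisan", True, 1), ("weak_partisan", False, 0),
]

def subgroup_effects(records):
    """Mean outcome difference (treated minus control) per subgroup."""
    tallies = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for group, treated, outcome in records:
        tallies[group][treated][0] += outcome   # sum of outcomes
        tallies[group][treated][1] += 1         # group size
    return {g: t[True][0] / t[True][1] - t[False][0] / t[False][1]
            for g, t in tallies.items()}

effects = subgroup_effects(records)
print(effects)
```

In this made-up data the "undecided" voters show no treatment effect while the "weak_partisan" group moves, which is the kind of result that challenges the usual shorthand about who is persuadable.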
The left has been way better than the right at engaging the political scientists and economists who use these techniques to measure real-world cause and effect. You just have dozens of professors and graduate students who want to work with Democratic campaigns, women's groups and labor groups, and very little of that on the right.
The reason Perry developed that partnership is that he made them an unusual offer, which is that they could publish their work. Most campaigns want to keep it proprietary, so the academics who are willing to work with them are often people who are aligned with their political goals, and not necessarily in it for research purposes.
DM: This research seems to cast doubt on conventional polling, which relies heavily on "likely voter screens" that ask people whether they plan on voting.
SI: Yes. Earlier in the year I wrote a piece for Slate called "The Likely Voter Lie." The study it describes found that the polling firm Greenberg Quinlan Rosner went back at the end of 2008 and looked at who in their samples actually voted. It turned out that something like 87% of people who said they were likely to vote ended up voting, and 70% of those who said they [were] pretty likely voted. But 55% of people who said they were unlikely to vote, and got kicked off polls because of that, ended up voting.
They could have been lying to the pollsters to get off the call, or they could be really bad predictors of their future behavior. We have good reason to be suspicious of people's ability to predict their behavior, especially months in advance, especially when they can't predict what campaigns will try to do to motivate that behavior.
Campaigns are much more trusting of previous vote history as a predictor in their internal polling. They compute your individual propensity to vote in this election, a score that assesses the likelihood that you will cast a ballot. The most influential variables in that model are whether you voted in the past, and whether you seem like someone who votes frequently.
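A propensity score of this kind is typically a weighted sum of behavioral variables squashed into a probability. The sketch below is purely illustrative: the weights, variables, and threshold are invented, and real campaign models are fit on far more inputs, but it shows how past behavior can dominate the prediction.

```python
# Minimal sketch of a turnout-propensity score: a logistic model over
# past vote history. All weights and variables are invented examples.
import math

def vote_propensity(voted_last_general, voted_last_primary,
                    elections_voted_of_last_5):
    """Estimate the probability of voting, driven by past behavior."""
    score = (-1.5
             + 2.0 * voted_last_general         # strongest single predictor
             + 1.0 * voted_last_primary
             + 0.5 * elections_voted_of_last_5)  # frequent-voter signal
    return 1 / (1 + math.exp(-score))            # logistic link

print(vote_propensity(1, 1, 5))  # habitual voter: high propensity
print(vote_propensity(0, 0, 0))  # no vote history: low propensity
```

Unlike a likely-voter screen, this never asks respondents to predict their own behavior; it relies on what the voter file says they have actually done.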
A huge gap has opened up between internal polls and public polls because the internal polls use these statistical models. The public polls use screens, but most campaign polls now are able to make an a priori judgment of what they think the profile of the voter will be and just randomly dial people who are in that universe. I think the differences we're seeing among the public polls now come down to different assumptions about what the electorate will look like. In internal polls there's a lot more stability, because they don't simply ask in the moment what you'll do.
DM: Obama has a lot more field offices than Romney does. There are some studies I've seen that suggest that it helped in 2008, but that could just be because McCain ran a bad field operation, and perhaps Romney will do better.
SI: I don't think field offices themselves are predictive of vote performance. Campaigns are opening field offices because they're anticipating voting activity or see a need for particular capacity. In the Obama campaign, they come up with very granular vote goals and assign offices to locations based on the amount of work that needs to be done there.
There was a big gap in sophistication between how the two sides did this in 2008, one that goes way beyond the scope or scale of a field operation. I think Democrats made far better, more aggressive use of data, what people call microtargeting, which guides you to target your contacts effectively, and had far better synthesized the field research from political science about what actually works. I think Romney might be running a more competent campaign and may have planned better, but there's still a real distance between how smart these two sides are in deciding whom they should engage, when, and how.
DM: What is it going to take for Republicans to embrace this kind of thing? The Nate Silver backlash among conservatives doesn't augur well for it.
SI: Right, but there was plenty of resistance in certain parts of the Democratic political class a decade ago when experiments were coming out and saying, "Hey, robocalls don't work for turnout." There were businesses invested in selling robocalls for GOTV to campaigns, there were strategists who relied on them, and there were very few people running campaigns on the left who were open to hearing about what worked from a couple of political scientists who had never spent time inside campaigns. The resistance is not uniquely Republican.
A couple things happened on the left. The small group of people who were initially receptive to thinking in empirical terms happened to be in influential if low-profile positions, mostly in the prominent institutions of the left: labor, women's groups, and environmental groups. They were very interested in being smarter about how they do politics because they wanted to use their money more effectively.
The other thing that happened is they lost in 2004 and they attributed that largely to technique. In many cases Democrats did not really understand what Republicans had done, but they began to impute these almost magic powers to Karl Rove and "microtargeting" and the "72 hour plan," things that had been sketchily and sometimes inaccurately described in press clippings. It inspired Democrats who might have been skeptical of new ways of running campaigns to put aside their business interests and parochial rivalries in the interest of building institutions that could make their campaigns smarter every year.
The fact that Republicans lost so overwhelmingly in 2008, I think, delayed an awareness of the technical gap between the two sides, and they imputed Obama's win to much broader conditions in the country. For the sake of innovation on the Republican side, the best thing that could happen to them is that they lose narrowly on Tuesday, that the story becomes how Obama and his allies ran a mechanically superior campaign, and that Republican donors, party leaders, and consultants face the existential predicament that Democrats did at the end of 2004, which is, "We're going to lose forever unless we figure out how to make our campaigns better."
That's the first step. The second step is finding social scientists who want anything to do with the Republican party in the 21st century, and that probably won't be solved on Tuesday one way or the other. That's a bigger cultural problem.