Ruy Teixeira is a senior fellow at the Center for American Progress. His most recent book is “The Optimistic Leftist: Why the 21st Century Will Be Better Than You Think.”


A line of voters stretches down a Philadelphia block in November 2016. Many polls failed to predict Donald Trump’s victory. (Jahi Chikwendiu/The Washington Post)

What are we to make of modern polling? Negative takes are common. Many people are skeptical that polls accurately measure public sentiment, pointing to their failure to predict Donald Trump’s victory in 2016. Critics charge that pollsters craft biased questions and that politicians and advocacy groups use the results to manipulate the public. And, the skeptics add, polls can’t possibly be representative these days, since the share of the public with the time or interest to respond to them keeps shrinking.



That’s a stinging indictment. Anthony Salvanto, the director of elections and surveys at CBS News, is out to correct the record in his new book, “Where Did You Get This Number?” Besides stoutly defending the utility and accuracy of modern polling, he provides a basic primer on how polling is done these days. He is perhaps more successful in the latter than the former.

Salvanto starts with election night in 2016, when he was on the “Decision Desk” at CBS and finally called the election for Trump, at roughly the same time as the other networks. He walks us through how he got there and the basic mechanics of how exit polling works, both generally and in the specific context of election night. This is very informative.

He also describes the hits and misses of polling in the 2016 election cycle. He notes, correctly, that the polls got the national popular-vote margin for Hillary Clinton about right (she won it by 2.1 percentage points). In voting at the state level — where the Trump surprise occurred — Salvanto attributes the misses to some combination of undecideds making up their minds very late and difficulties in determining who was, in fact, a likely voter. This is reasonable, though an important additional factor was the failure of many state polls to weight by education, which underestimated the influence of white non-college-educated voters and, therefore, Trump’s support.
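The education-weighting fix mentioned here amounts to simple post-stratification: if a sample under-represents white non-college voters relative to a benchmark, each such respondent receives a weight greater than 1 so the group's weighted share matches the benchmark. A minimal sketch, borrowing the 34 percent and 44 percent figures from the exit-poll example discussed below purely for illustration:

```python
# Post-stratification by education: scale each group so its weighted share
# matches a population benchmark. The shares below are illustrative only,
# not any pollster's actual numbers.
sample_share = {"white_noncollege": 0.34, "other": 0.66}
target_share = {"white_noncollege": 0.44, "other": 0.56}

weights = {g: target_share[g] / sample_share[g] for g in sample_share}
print({g: round(w, 2) for g, w in weights.items()})  # white_noncollege -> ~1.29
```

State polls that skipped this step effectively left the weight at 1.0, shrinking a group that leaned heavily toward Trump.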

Salvanto also does not discuss the evolving controversy about the exit polls’ sample composition and whether it accurately reflects the demographic makeup of real-world voters. This does not matter so much on election night, when the exit polls are weighted and reweighted by incoming vote results and provide a useful tool for calling elections. But the exit polls’ sample composition is important for understanding who is in the American electorate. To give the most egregious example, independent analyses based on census data, other large-scale surveys and vote results estimate that white noncollege voters made up about 44 percent of voters nationally in 2016. The exit polls put that figure at 34 percent. That’s a really big difference.

Salvanto tends not to engage with some of the thornier polling issues. My sense is that he does this to avoid burdening the reader with too many arcane methodological discussions. This is understandable but may be unsatisfying for readers who have some awareness of polling controversies.

Instead, Salvanto spends a good amount of time on what we might call the FAQs of polling. How come I never get called? (You are not special, so your opinion is not needed to find the average public view on a question.) How can you talk to just 1,000 people in a country of 325 million and get an accurate measure of public opinion? (Just as you don’t need to eat an entire pot of soup to judge how it tastes, you can estimate the average view of the population from a small, randomly selected sample.) How can you be sure people are telling you the truth? (Lying on a survey is harder than telling the truth, and studies suggest that people generally give accurate answers.) By and large, Salvanto answers these basic questions in a nontechnical, folksy manner. He doesn’t get into the statistical theory that explains why survey research “works,” but that’s okay in a book of this nature.
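The statistical theory Salvanto skips is, at its core, a one-line formula: for a simple random sample, the 95 percent margin of error depends on the sample size, not the population size. A quick sketch of the standard calculation:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person sample yields roughly a 3-point margin either way,
# whether the population is 325 million or 325 thousand.
moe = margin_of_error(0.5, 1000)
print(round(100 * moe, 1))  # -> 3.1
```

This is why 1,000 respondents suffice for the whole country: the population total never enters the formula, only the sample size does.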

I am less satisfied with his discussion of the survey nonresponse problem. This includes not just the fact that fewer and fewer people can be reached and interviewed for a survey — response rates are now below 10 percent, down from the mid-30-percent range in the late 1990s — but also the problem of differential nonresponse. That is, depending on events, people with different views may become more or less interested in being surveyed. For instance, when an event suddenly boosts your candidate’s chances, you may become more interested in being interviewed, while a supporter of the other candidate may become less interested. This is very difficult for pollsters to control for with standard demographic weighting techniques, and it can affect poll results, especially in terms of showing movement in polls that does not correspond to underlying changes in public sentiment.

A related question is how polls decide who is a likely voter. This is very tricky, and Salvanto does a good job of explaining how he approaches the challenge. At CBS News, rather than screening out unlikely voters, every respondent is given a turnout (likelihood of voting) score, which is then used to weight the entire sample. However, as he also points out, pollsters use a wide variety of techniques to identify likely voters, and we learn relatively little about the strengths and weaknesses of these approaches.
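The CBS approach Salvanto describes, scoring every respondent rather than screening some out, can be sketched roughly as follows. All names and numbers here are hypothetical, not CBS's actual model:

```python
# Illustrative turnout-score weighting: instead of dropping "unlikely"
# voters, each respondent's existing demographic weight is multiplied by
# an estimated probability of voting. Data below are made up.
respondents = [
    {"choice": "A", "weight": 1.0, "turnout": 0.90},
    {"choice": "B", "weight": 1.2, "turnout": 0.40},
    {"choice": "A", "weight": 0.8, "turnout": 0.70},
    {"choice": "B", "weight": 1.0, "turnout": 0.95},
]

def weighted_share(sample, candidate):
    """Candidate's share of the turnout-weighted sample."""
    total = sum(r["weight"] * r["turnout"] for r in sample)
    votes = sum(r["weight"] * r["turnout"] for r in sample
                if r["choice"] == candidate)
    return votes / total

print(round(weighted_share(respondents, "A"), 3))  # -> 0.505
```

The appeal of this design is that a marginal voter still counts, just less, whereas a hard likely-voter screen makes an all-or-nothing bet on each respondent.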

One of the strongest parts of Salvanto’s book is his treatment of how pollsters measure (or at least should measure) public opinion on policy and social issues. He carefully explains the ways in which questions can incorporate various biases in their wording and underlying assumptions, and how he seeks to avoid these problems. He illustrates his points with discussions of specific issues like the Keystone pipeline, gun control and player protests during the national anthem at National Football League games. For instance, he explains how a pollster might not just measure views on, say, gun-control proposals but ask further questions to uncover feelings about guns and their function in society and individual lives. This allows the pollster, as he says, to tell a “story” about why people feel the way they do about guns, rather than just report levels of support for a policy proposal.

Salvanto also spends some time discussing poll aggregation as practiced by a variety of sites. The theory behind such aggregation is that it provides a more accurate picture of political races and politician ratings than any single poll. Salvanto does not really dispute this, though he criticizes the process of aggregation for including polls that are methodologically suspect. He also looks askance at the emerging practice of using aggregated results, in conjunction with other information, to forecast the results of elections, an inherently uncertain enterprise. He also argues, reasonably, that such aggregation cannot do what a good, single poll can do, which is explain the story behind topline poll results.

I’m not sure his rejoinder to these increasingly common practices is adequate. The popularity of aggregation and forecasting shows that demand is high, and it seems likely that we’ll see more of both, not less, as data availability and analytical techniques continue to improve.

Overall, “Where Did You Get This Number?” is a very useful and easy-to-read primer on the basics of modern polling. However, those seeking detailed discussion of contemporary polling issues and controversies will probably have to look elsewhere. Fortunately, this gap can be filled by consulting sources such as FiveThirtyEight.com, the Pew Research Center and Nate Cohn’s posts on the Upshot. We do at least live in the Golden Age of polling information.

Where Did You Get This Number?
A Pollster's Guide to Making Sense of the World

By Anthony Salvanto

Simon & Schuster. 244 pp. $26