The following is a guest post from Princeton University political science PhD candidates Andrew Shaver and Yang-Yang Zhou.


Surveys are now a common, though largely unacknowledged, counterinsurgency tool on the contemporary battlefield. The Afghanistan Nationwide Quarterly Assessment Research (ANQAR), Foghorn, and BINNA Household surveys have played an important role in the U.S.-led campaign in Afghanistan, where coalition forces have used survey responses to gauge civilian attitudes on a variety of topics, often as a means of assessing the effectiveness of coalition strategies.

Similar efforts were carried out in Iraq. Shortly after toppling the Baathist regime, the U.S. military contracted with a local Iraqi firm to run a major public opinion survey in Baghdad. The effort was massive: in-person interviews nearly every month over a five-year period across the city’s nine urban and rural municipalities. In total, some 200,000 residents were randomly sampled and interviewed. Respondents were asked about everything from their level of support for insurgent attacks against the coalition and Iraqi government military forces to their degree of satisfaction with a variety of public goods and services to their past perceptions and future expectations regarding the abilities of Iraqi security forces to fight terrorism.

Data collected during this period reveal a strong, statistically significant positive relationship between support for attacks against coalition forces and the number of insurgent attacks actually perpetrated against them. This basic relationship, plotted below, might lead analysts to ask whether members of the public switched allegiance after observing a rise in effective attacks on coalition forces. Or did popular support for such attacks increase as citizens began to blame the invading forces for the concurrent rise in sectarian violence? Perhaps, instead, the causality runs the other way: a shift in public support stemming from some other factor may have created conditions more conducive to launching attacks on coalition forces.

Unfortunately, rather than exploring these hypotheses using the survey data, analysts must confront the more fundamental and less exciting question of whether the survey responses accurately reflect the attitudes of the citizens they are designed to capture. This concern arises because respondents were asked direct questions.

Certain problems associated with administering surveys in war zones are well known and largely unavoidable. Analysts at the RAND Corporation, for instance, have observed that conflict can prevent enumerators from accessing the most dangerous – and often the most crucial – areas of a country. Access to potential respondents may also be impeded by local leaders and other community gatekeepers.

Yet a far more insidious problem arises when respondents simply provide inaccurate responses. Social science research shows that asking respondents directly about topics they consider sensitive leads many either to give false answers or to refuse to respond altogether. This dynamic is clearly present in Afghanistan, where the refusal rate for the ANQAR survey in 2011 exceeded 50 percent.

Fortunately, problems associated with asking direct questions are not new, and academic research shows that they can largely be mitigated with simple survey techniques. Social scientists have long sought to study attitudes, beliefs, and behaviors considered sensitive. Some are socially undesirable (racial prejudice); others are highly private (sexual preference) or outright illegal (drug use, vote-buying). In other cases, the attitude or behavior is not itself sensitive, but respondents fear repercussions for answering truthfully (reporting corruption).

To elicit honest responses on such topics, methodologists have devised a set of techniques that rely on indirect questioning. The list experiment is one such approach. It works by randomly separating respondents into treatment and control groups: members of the control group are shown a list of non-sensitive items, while those in the treatment group are given the same list with the sensitive item added. Respondents are asked only to report the total number of items they agree with, without specifying which ones. Because assignment is random, the difference in mean counts between the two groups estimates the proportion of respondents who endorse the sensitive item.
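The arithmetic behind the list experiment is easy to see in a short simulation. The sketch below (in Python, with made-up endorsement probabilities chosen purely for illustration) generates counts for control and treatment groups and recovers support for the sensitive item as a difference in means:

```python
import random

random.seed(0)

def simulate_list_experiment(n=100_000, p_sensitive=0.30,
                             p_innocuous=(0.5, 0.4, 0.6)):
    """Simulate a list experiment with three innocuous items.

    Control respondents count how many of the three innocuous items
    they agree with; treatment respondents count the same three plus
    a sensitive item. Only the counts are recorded, never which items.
    """
    control, treatment = [], []
    for _ in range(n):
        control.append(sum(random.random() < p for p in p_innocuous))
    for _ in range(n):
        count = sum(random.random() < p for p in p_innocuous)
        count += random.random() < p_sensitive  # sensitive item, treatment only
        treatment.append(count)
    return control, treatment

control, treatment = simulate_list_experiment()
# Difference in mean counts estimates support for the sensitive item.
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(round(estimate, 3))  # close to the true value of 0.30
```

No individual respondent's answer to the sensitive item is ever revealed; only the group-level difference identifies the quantity of interest.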

The endorsement experiment measures support for a divisive actor or policy. As in the list experiment, respondents are randomly divided into control and treatment groups. Treated individuals are asked to rate their support for an uncontroversial policy that is endorsed by the actor of interest (or for an uncontroversial actor who endorses a controversial policy of interest). Those in the control group rate the same policy (or actor) without the endorsement. Any difference in support between the endorsed and non-endorsed versions is attributed to the controversial actor (or policy).
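The same difference-in-means logic applies here, only on a rating scale rather than a count. A minimal sketch, with an entirely hypothetical baseline rating and endorsement effect:

```python
import random

random.seed(1)

def simulate_endorsement(n=50_000, baseline=3.5, endorsement_effect=-0.6):
    """Simulate support ratings for a policy on a continuous scale.

    The treatment group rates the policy with an endorsement attached;
    here the (hypothetical) endorser is unpopular, shifting ratings
    down by 0.6 points on average.
    """
    control = [random.gauss(baseline, 1.0) for _ in range(n)]
    treatment = [random.gauss(baseline + endorsement_effect, 1.0)
                 for _ in range(n)]
    return control, treatment

control, treatment = simulate_endorsement()
# The gap between the two groups' mean ratings is attributed to the endorser.
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(round(estimate, 2))  # close to the true effect of -0.6
```

Because no respondent is ever asked directly about the controversial actor, the design sidesteps the social pressure that distorts direct questions.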

Finally, randomized response techniques conceal individuals’ responses and protect their privacy by introducing random noise. Respondents use a randomization device such as a coin flip or die roll, which determines whether they answer the sensitive question or an innocuous one, or whether they give a predetermined response (e.g., “yes”) to a “yes”/“no” question or answer honestly. Importantly, the result of the randomization device (a “heads” on a coin flip, a “2” on a die roll, etc.) is unobserved by the enumerator, giving respondents confidence that their true responses are known to no one but themselves.
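One common variant is the forced-response design. The sketch below (Python, with an illustrative true prevalence of 25 percent) simulates it: a private die roll forces a "yes" on a 1, forces a "no" on a 6, and otherwise instructs an honest answer. Since the enumerator never sees the roll, any individual "yes" is uninformative, yet the population prevalence can be recovered from the known probabilities:

```python
import random

random.seed(2)

def simulate_forced_response(n=200_000, true_prevalence=0.25):
    """Forced-response design: each respondent privately rolls a die.

    Roll of 1   -> answer "yes" regardless of the truth.
    Roll of 6   -> answer "no" regardless of the truth.
    Roll of 2-5 -> answer the sensitive question honestly.
    The enumerator records only the answer, never the roll.
    """
    yes = 0
    for _ in range(n):
        roll = random.randint(1, 6)
        if roll == 1:
            yes += 1                                   # forced "yes"
        elif roll != 6:                                # rolls 2-5: honest answer
            yes += random.random() < true_prevalence
    return yes / n

p_yes = simulate_forced_response()
# P(yes) = 1/6 + (4/6) * prevalence, so invert to recover the estimate:
estimate = (p_yes - 1/6) / (4/6)
print(round(estimate, 3))  # close to the true prevalence of 0.25
```

The price of this privacy is statistical noise: the added randomness inflates the variance of the estimate, so randomized response designs typically need larger samples than direct questioning.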

Recent research shows that while all three approaches are superior to direct questioning, randomized response techniques come closest to capturing respondents’ actual beliefs. To reach this conclusion, Princeton researchers asked constituents in Mississippi’s 2011 general election whether they had voted for or against the “Personhood amendment,” a controversial anti-abortion referendum stating that life begins at conception. Though the amendment had been expected to pass handily, it was defeated, suggesting that voters were reluctant to disclose their opposition openly. Using the three indirect questioning techniques described above alongside direct questioning, the researchers compared the accuracy of each approach’s estimates against the official election outcome.

Successful counterinsurgency campaigns require public support. Even as the international coalition begins to withdraw from Afghanistan, the United States remains engaged in a number of conflict settings, from combating ISIS in Iraq and Syria to quelling violent extremism in Pakistan. To achieve its long-term objective of winning public support away from insurgents, the U.S. will likely need better measures of what wins – and what loses – hearts and minds than circumspect answers to blunt questions. Fortunately, all three of the survey techniques described here have been used effectively to survey citizens exposed to violent conflict – in Nigeria, Pakistan, and Afghanistan – and offer policymakers a new set of tools for gauging public sentiment.