Public opinion survey research in the Middle East has come a long way. It still faces many very real challenges, however. I have been polling Mideast opinion since 1985, for the U.S. government and as an independent scholar. A recent article by Justin Gengler raised some important issues, but along the way it made unwarranted assumptions about polls that I recently conducted. The points where we agree and where we diverge on currently realistic best practices in Arab survey research are instructive.
Some years ago, Arab opinion polls suffered from the paucity of reliable population statistics, neighborhood maps and other standard tools of survey research. Many of those technical problems are more manageable today. But the continuing social and political taboos in many of those societies, including gender issues, difficulty of household access and sincere concerns about government surveillance (not to mention outright, severe government controls), still make Western-style public opinion polls a difficult endeavor.
Transparency takes multiple forms
There is a commendable, emerging academic norm of transparency and data sharing, standards embraced by the Monkey Cage and The Washington Post, which “require the disclosure of standard methodological details when polls are presented.”
I fully embrace and support these standards of data transparency. Indeed, in my monographs “The Arab Street: Public Opinion in the Arab World” and “Slippery Polls: Uses and Abuses of Survey Research in Arab States,” I challenged my polling colleagues precisely on those grounds, because widely cited surveys in my view failed to meet methodological standards. I noted issues well beyond the typical technical ones, such as the widespread use of loaded questions like “What do you admire the most about al-Qaeda, if anything?” or “How important to you is the Palestinian cause?”
While I did not include exhaustive methodological details when publishing my brief “policy alert” about key results of the Qatar survey, I did declare my willingness to share such data at the official release of the findings. At my Oct. 24 public launch event, three days before Gengler falsely branded this survey “unscientific” and “weaponized,” I clearly reiterated my readiness to make full methodological and other details, including all the data files, available to any interested party. Many scholars and journalists have responded positively, and I encourage others to take me up on that invitation.
The advantage of commercial survey firms
In that spirit, allow me to provide some methodological detail. The teams that conducted these surveys are primarily private-sector and commercial in orientation, rather than officially or semiofficially affiliated with their host countries’ governments. I have worked with these teams on comparable Arab surveys since 1990. They are licensed and accredited by the top professional associations, including Esomar, and have successfully conducted other surveys for leading international institutions, including Pew, the World Bank and the U.S. government.
I prefer such independent, unofficial, apolitical pollsters. That is the best way to avoid censorship, self-censorship, intimidation and bias. In most cases it is still impossible for government-affiliated or officially supervised polling organizations in these countries to ask the most timely and controversial political questions — such as attitudes toward the Muslim Brotherhood, sectarian conflicts or other Arab countries.
Professional, purely commercial market research firms can ask such questions because the government (or a government-affiliated organization) does not help construct or censor the survey instrument. The firm conducting the fieldwork neither shies away from tough questions nor pursues any particular political goal. Equally important, my experience has shown that respondents feel relatively relaxed about honestly answering what is obviously an unofficial, primarily consumer-product or other commercial survey, with just a few political or social questions attached.
Sampling and weighting
These surveys all employed a standard, multistage geographical probability sampling method, with primary sampling units and blocks/households allocated proportionally to population size, based on the latest available statistics. Interviews were conducted in Arabic by trained local personnel, all in private households.
Starting points and interviewer routes were mapped out following the “right-turn approach” and “every nth household” methods. In neighborhoods of apartment buildings, the interval was counted vertically by floor, with a maximum of two interviews per building. Within each household, one respondent was randomly selected using a Kish grid. If the randomly selected household or respondent was unavailable, up to three callback attempts were made, at different times and on different days of the week. No substitution within a household was allowed. In the case of Qatar, 24 interviewers and six field supervisors yielded 1,000 completed interviews over a period of 23 days, none of them during holidays.
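For readers unfamiliar with these field techniques, the selection logic can be sketched in a few lines of code. This is only an illustration under simplified assumptions (the addresses and names are invented, and a real Kish grid is a printed table of pre-assigned selections keyed to the questionnaire serial number, not a program):

```python
def route_households(addresses, start_index, interval):
    """'Every nth household' selection along a mapped interviewer route."""
    return addresses[start_index::interval]

def kish_select(adults, serial):
    """Simplified Kish-style selection: eligible adults are listed in a
    fixed order (standing in for the age/sex listing on the grid), and
    the questionnaire serial number determines who is interviewed."""
    ordered = sorted(adults)
    return ordered[serial % len(ordered)]

# Hypothetical route of 30 mapped households, interviewing every 5th one
route = route_households([f"house_{i}" for i in range(30)],
                         start_index=2, interval=5)
# Hypothetical household with three eligible adults, questionnaire no. 101
respondent = kish_select({"Amal", "Karim", "Layla"}, serial=101)
```

The point of both devices is the same: the interviewer exercises no discretion over which door is knocked on or which adult answers, which is what makes the sample a probability sample rather than a convenience one.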
The question of weighting results is tricky. I always provide the raw findings, not just statistically weighted or otherwise “adjusted” ones. The raw sample demographics, as Gengler concedes, can be expected to vary somewhat from the ideal, and substantial variations should be noted. But there is no evidence that these variations affected my findings in any substantial ways.
The one exception among my polls where weighting would be required to produce accurate totals is Lebanon, where the Christian subsample is overrepresented enough, and attitudes are polarized enough by sect, to distort the unweighted totals. In some of my other surveys, while the samples are representative overall, there are substantial response variations by sect (Sunni or Shiite). I therefore take great care to present those variations in demographically disaggregated form, rather than as weighted totals. Very few other Arab surveys are willing or able to publish such basic sectarian demographic analyses, because of political constraints or social inhibitions.
In the case of the Qatar survey, Gengler is correct that my higher-educated subsample deviates from published population parameters. But this is less significant than it appears. It deviates by just 11 points or so, within the expected range. Doha city proper is somewhat overrepresented, again because of random variations in the achieved sample from a multistage stratified geographic probability method. But when combined with neighboring al-Rayyan, the sample provides an acceptable representation of this urban agglomeration. Other demographic categories are also practically if not perfectly representative, and their results likewise do not indicate any substantial systematic distortions. For this reason, the totals cited represent a meaningful statistical snapshot of aggregated Qatari attitudes.
In principle, it would certainly be possible to take these raw data and weight them, as pollsters often do, to match the overall demographic distributions more precisely, or to “adjust” the sampling by using quotas. But the latter technique introduces a serious nonrandom element into the procedure, with indeterminate and therefore undesirable, potentially distorting effects. And artificially weighting samples or results to match census data can produce a false impression of precision. Nevertheless, if anyone wants to weight my findings and compare them with the unweighted ones, I would, as noted, be happy to provide the raw data files.
The importance of face-to-face, in-home interviews
In Arab surveys, I strongly prefer face-to-face private interviews over telephone, online or “convenience” polls conducted in public places. That is how I have conducted all of the surveys at issue here.
As Gengler correctly argues, in most Arab societies, most people still feel more comfortable talking to a stranger in person than on the phone. The refusal rates for phone polls are also much higher, a flaw that cannot be adequately corrected by any statistical technique. Online polls suffer from well-known deficiencies of self-selection, uneven social penetration and poor quality control. “Convenience” polls are truly unscientific and must be identified as such. It would be right to call out such surveys, as I have done in the past.
Some hard-earned lessons from experience
Beyond such important technical details, I have a few other words of advice, none of them very original, for my fellow pollsters.
Always strive to word questions as neutrally as possible, and provide ways for respondents to express their own rankings and comparisons among various response items. Ask respondents their views about several countries, not just about the United States.
Give respondents a chance to pick their own priorities, rather than imposing them. Make sure to eliminate, as much as is humanly possible, the insidious effects of sequence bias: that is, asking a series of questions that, while individually seemingly neutral, collectively and subtly lead the respondents in a particular direction.
Word questions as specifically as possible. For example, ask about the Muslim Brotherhood, not just about “religious leaders.” Ask about Hezbollah, not just about “sectarian militias.” Ask about the Arab boycott of Qatar, not just about “the GCC dispute.” In other words, avoid ambiguous questions that inevitably produce ambiguous answers.
Finally, poll each Arab society separately, rather than lumping them together. Artificially creating an “Arab” view by combining Egypt, Jordan and a few other countries — but not Iraq, Algeria, Syria or Sudan — makes about as much sense as creating a “European” view by combining Germany, Luxembourg and Slovenia, but not France, Britain, Italy or Spain.
Moreover, in some highly polarized Arab countries such as Lebanon or Iraq, combining sectarian or ethnic subsamples does more to obscure than to illuminate popular attitudes.
Individual country samples and internal subsamples must each be large enough to yield statistically meaningful results. This means each country should have a sample on the order of 1,000 respondents or more.
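The arithmetic behind that rule of thumb is the familiar margin-of-error formula for a sample proportion. The sketch below assumes simple random sampling; multistage designs like those described above carry an additional design effect that widens these intervals somewhat:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a sample proportion under
    simple random sampling (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of 1,000 gives roughly +/- 3.1 percentage points
moe_1000 = margin_of_error(1000)
# A sectarian subsample of 250 roughly doubles that, to about +/- 6.2
moe_250 = margin_of_error(250)
```

This is why disaggregating by sect, as urged above, demands a large overall sample: a subsample of a few hundred respondents still yields usable estimates, but one of a few dozen does not.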
I encourage others to share additional details of their surveys in the interest of data transparency to advance our common scientific endeavor. There is much progress still to be made in the science of polling in the Middle East.
David Pollock is the Kaufman Fellow at the Washington Institute, and director of its Project Fikra.