Replicate it! A proposal to improve the study of political accountability

May 16
A Liberian man examines his voter identification card while waiting for voting to open during the 2011 General Elections. (photo: Susan Hyde)

Joshua Tucker: The following is a guest post from political scientists Thad Dunning of the University of California, Berkeley, and Susan D. Hyde of Yale University. Both are members of the EGAP Regranting Initiative Committee.

*****

Like many social scientists, we take it almost as an article of faith that scientific methods will advance our knowledge of how the world works. The growing use of strong research designs — for example, randomized controlled experiments or natural experiments — increases the reliability of causal claims in individual studies. Yet building scientific knowledge across studies is much more difficult than many acknowledge. As The Economist recently summarized, if science is based on a principle of “trust but verify,” there is a growing realization that there is too much “trust” and not enough “verify.” Some problems stem from publication bias (see, for example, here and here): studies are more likely to be published when they show statistically significant findings and less likely when they yield null results. Professional incentives, at least in the social sciences, also mean that most scholars benefit more from publishing work that is “innovative” or “groundbreaking” than from publishing work that replicates existing findings. In other words, once the ground is broken, too few scholars are willing to stick around. Important studies are rarely replicated, and the accumulation of knowledge suffers.
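
To see how strongly publication bias can distort a literature, consider a minimal simulation (ours, with invented numbers, not drawn from any cited study). If journals accept only studies that cross the conventional p < 0.05 threshold, published estimates of a small true effect end up badly inflated:

```python
# A sketch of publication bias: many small studies of the same true
# effect are run, but only the "significant" ones get published, so the
# published literature overstates the effect. Numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_per_arm, n_studies = 0.1, 100, 2000

published = []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    estimate = treat.mean() - control.mean()
    if stats.ttest_ind(treat, control).pvalue < 0.05:  # "publishable"
        published.append(estimate)

print(f"true effect:             {true_effect:.2f}")
print(f"share published:         {len(published) / n_studies:.0%}")
print(f"mean published estimate: {np.mean(published):.2f}")  # inflated
```

In runs like this, the average published estimate is roughly three times the true effect, even though every individual study was run honestly.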

What can be done about this problem?

One potential solution is to change the incentives researchers face, in part by funding new research in a manner that requires replication. With better incentives, important studies can be replicated across contexts, and scholars may be willing to build in the additional research time needed to coordinate across studies, so that their work contributes more to the accumulation of knowledge. This is exactly what the Experiments in Governance and Politics (EGAP) network, in conjunction with UC Berkeley’s Center on the Politics of Development (CPD), is attempting to do as it pilots its first research “regranting” round. EGAP, per its mission, supports experimental work on governance and politics, and this particular funding round aims to support field experiments in the substantive area of political accountability. With the backing of a $1.8 million grant, EGAP is now soliciting proposals on a focused question from researchers around the world, with the goal of making four to six awards of $200,000 to $300,000 each.

Specifically, EGAP is calling for proposals that fall broadly within the area of political accountability and that are motivated by the following questions: Why do people elect poor or underperforming politicians in developing countries? Is it because voters lack the information they need to make informed choices? If they are given better information, do they select “better” politicians, or are many vote choices driven instead by loyalty or fear? Or do alternative causes better explain failures of political accountability? Numerous researchers have investigated these questions, but findings about the effects of information provision are mixed. Such conflicting results may indicate that key factors operate differently across contexts — for example, the effect of informing voters about politicians’ malfeasance or corruption may depend on what voters already know or believe. But they might also stem from differences in study design, from interventions and outcomes that vary across contexts, and from the expectation that researchers demonstrate “novel” results in each published study.
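
A stylized example helps show how the same intervention can produce apparently conflicting findings. Suppose, purely hypothetically, that information moves only voters who did not already know about an incumbent’s malfeasance; an identical study then finds a clear effect where voters are poorly informed and a null where they are well informed:

```python
# A sketch of context-dependent effects: the informational treatment
# only moves voters who learn something new. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 400
for informed_share in (0.2, 0.9):  # share of voters who already knew
    learns_something = rng.random(n) > informed_share
    treat = 0.3 * learns_something + rng.normal(0, 1, n)
    control = rng.normal(0, 1, n)
    result = stats.ttest_ind(treat, control)
    print(f"already-informed share {informed_share:.0%}: "
          f"estimate {treat.mean() - control.mean():+.2f}, "
          f"p = {result.pvalue:.3f}")
```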

EGAP’s initiative builds on existing reforms related to the credibility revolution in development research, including transparently randomized research designs and preregistration of experimental studies to guard against publication bias and “fishing.”
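
The case for preregistration is easy to demonstrate. In the hypothetical simulation below, a researcher measures twenty unrelated outcomes in an experiment with no true effect on any of them; reporting whichever comparison happens to be “significant” still yields a positive “finding” most of the time. Committing in advance to a single primary outcome removes that temptation.

```python
# A sketch of "fishing": test many outcomes, report the best p-value.
# There is no treatment effect on any outcome, yet a "significant"
# result appears in most simulated studies. Numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_outcomes, n_sims = 100, 20, 1000

fished_hits = 0
for _ in range(n_sims):
    treated = rng.random(n) < 0.5                 # random assignment
    outcomes = rng.normal(0, 1, (n, n_outcomes))  # no true effects
    pvals = [stats.ttest_ind(outcomes[treated, j],
                             outcomes[~treated, j]).pvalue
             for j in range(n_outcomes)]
    fished_hits += min(pvals) < 0.05

print(f"studies with a 'significant' fished result: {fished_hits / n_sims:.0%}")
# With 20 independent tests, roughly 1 - 0.95**20, or about 64 percent.
```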

The EGAP regranting model requires harmonization of questions, interventions, and outcomes, to some degree, across the four to six studies that we plan to fund. Each study will have at least two treatment “arms,” in addition to a control group. The first arm will be harmonized across all studies and thus will build in replication across contexts: each study will provide voters with credible information about politician performance and assess the effects on electoral behavior. This replication of interventions across studies is critical: too often, major conclusions are drawn from only one study on a given topic. The second arm encourages researchers to develop distinctive interventions while preserving comparability, by attempting to answer the question: If the common informational intervention is not effective, what is? The second arm thus allows for analysis of comparative effectiveness, by asking which type of intervention most powerfully shapes electoral behavior. This structure leaves room for innovation by each research team — which remains critical for social science and public policy — while also requiring replication of results.
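
In skeletal form, the design might look like the following sketch (the arm labels, sample size, and effect sizes are ours, invented purely for illustration): voters are randomized to a control group, the harmonized informational arm, or a study-specific arm, and each treatment arm is compared both to control and to the other arm.

```python
# A sketch of the two-arms-plus-control design. Arm labels, sample
# size, and effect sizes are hypothetical illustrations only.
import numpy as np

rng = np.random.default_rng(3)
n = 9000
arms = np.repeat(["control", "common_info", "study_specific"], n // 3)
rng.shuffle(arms)  # random assignment

# Hypothetical outcome: 1 if the voter votes against a poor performer.
base = 0.30
lift = {"control": 0.00, "common_info": 0.05, "study_specific": 0.10}
outcome = rng.random(n) < base + np.vectorize(lift.get)(arms)

for arm in ("common_info", "study_specific"):  # replication arm first
    diff = outcome[arms == arm].mean() - outcome[arms == "control"].mean()
    print(f"{arm} vs control: {diff:+.3f}")

# Comparative effectiveness: which intervention moves voters more?
gap = (outcome[arms == "study_specific"].mean()
       - outcome[arms == "common_info"].mean())
print(f"study_specific vs common_info: {gap:+.3f}")
```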

In addition, regrantees will be expected to make their data public, giving others the opportunity to replicate the findings within each study before publication. This opportunity for “internal” replication should reveal errors or discrepancies in the data, increasing the chance of reliable findings. This matters because published studies are too often based on data that cannot be internally replicated, meaning that the published results could, in fact, be incorrect. For example, in one 2006 attempt to replicate published research, only 14 (23 percent) of 62 studies could be replicated completely. In another attempt in 2009 to reproduce the results of 18 journal articles, only eight could be replicated. We believe this problem stems not so much from the malfeasance of individual researchers as from the structure in which research is normally produced. EGAP’s pre-publication replication can mitigate this problem and help unearth authoritative answers to the research question. And when results are finalized, EGAP will distill core lessons from the research into publicly available policy briefs and promote them to those in the best position to make a lasting difference.
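
A minimal version of such an internal replication check might look like the sketch below, in which the headline estimate in a draft is re-computed from the posted data file; every file name, variable, and number here is hypothetical.

```python
# A sketch of a pre-publication "internal replication" check: re-compute
# a draft's headline estimate from the posted data and compare. All
# file names, variables, and numbers are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "arm": rng.choice(["control", "common_info"], 1000),
    "vote_incumbent": (rng.random(1000) < 0.5).astype(int),
})

def headline_estimate(d):
    """Difference in incumbent vote share: treated minus control."""
    return (d.loc[d.arm == "common_info", "vote_incumbent"].mean()
            - d.loc[d.arm == "control", "vote_incumbent"].mean())

reported = headline_estimate(df)                       # figure in the draft
df.to_csv("public_replication_data.csv", index=False)  # the posted dataset

# An independent analyst re-runs the analysis from the public file.
replicated = headline_estimate(pd.read_csv("public_replication_data.csv"))
assert abs(replicated - reported) < 1e-9, "internal replication failed"
print(f"re-computed estimate {replicated:+.3f} matches the draft")
```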

Through this new regranting model, we hope a more conclusive answer will emerge to the important policy question of how to enhance political accountability, on which an abundance of international development programming depends. We invite any and all interested researchers to apply. The deadline for submitting proposals is June 16, 2014.

