Can political science studies be replicated? Here, the flags of member nations fly outside the General Assembly building at the United Nations headquarters in New York. (AP Photo/Adam Rountree, File)

As even casual newspaper readers are becoming aware, science–and especially social science–is having a crisis. In the profession, we call it a “replication” crisis, meaning that published results often cannot be reproduced by other scientists.

Even setting aside frauds such as the LaCour and Green study on voter persuasion as well as obvious joke studies such as the 2013 Psychological Science article claiming to find 20 percentage-point swings in women’s vote preferences based on their menstrual cycle, there’s still lots of turmoil.

We’ve seen many studies where there is no agreement on the statistical evidence, or even on what should be considered good evidence. For example, as discussed on The Monkey Cage, Larry Bartels thinks there’s solid experimental evidence that displaying subliminal smiley faces on a computer screen causes big shifts in attitudes toward immigration. I’m not convinced at all. Three respected political scientists published an article claiming that liberals smell better to other liberals than to conservatives. I didn’t find the statistical evidence persuasive. Similar controversies have arisen in psychology, economics, and other fields.

For political science, the next step seems to be coming from an organization called Data Access & Research Transparency. It has organized a joint statement by political science journal editors that mandates the following by January 2016:

  • Require authors to ensure that cited data are available at the time of publication . . .
  • Require authors to delineate clearly the analytic procedures upon which their published claims rely, and where possible to provide access to all relevant analytic materials . . .

The mandate includes some other items on citation policies and style guidelines. But the above two points are the biggies: Make Your Data Available and Describe Exactly What You Did.

This all seems uncontroversial to me–or at least it seemed uncontroversial, until I read Chris Blattman’s skeptical take on the transparency initiative.

Chris doesn’t argue that transparency is a bad thing. By and large he doesn’t seem to disagree about the general virtues of Make Your Data Available and Describe Exactly What You Did. He might draw the line in a slightly different place than I would. For example, he’s concerned not just about proprietary data but also that “people who collect their own expensive data might deserve a year or two to publish a second paper before getting scooped,” whereas I’m inclined to think that “scooping” is a good thing, that it’s research progress, and that any problems with scooping can be resolved via proper assignment of credit. But this is a minor point, whether data would need to be released immediately or after a short delay.

Chris’s larger argument is that openness requirements could deter some research. In particular, qualitative researchers, knowing the difficulties of disclosure, might not even try to publish some of their best work–because they do not feel they can realistically follow all the transparency rules. Qualitative research typically involves in-depth observation, interviewing, and even participation, and exposing individuals is obviously different from exposing data sources.

Or this work could be published, just not in top journals, meaning it gets less respect within the profession. Moreover, researchers using mixed qualitative and quantitative methods will just ditch the qualitative parts, again because of the difficulty of jumping through all the data and analysis hoops. What would be lost here is a sharing of the insights that often can only be gleaned through close observation–that is, qualitative research.

Chris makes an interesting point. My own research is almost entirely quantitative but I see the immense value of qualitative work. In many, maybe most, cases, quantitative research is motivated by qualitatively-obtained insights. We have an idea based on something we’ve seen in the world, then we study it quantitatively.

There’s little doubt in my mind that systematic qualitative research has an important role to play here. If we skip directly from unstructured qualitative ideas to formal quantitative inference, we’re losing an opportunity to refine these ideas. To put it another way, even if your only goal were rigor, falsifiability, and all the rest, and even if you only cared about what could be measured by numbers, you should still have an interest in high-quality, systematic qualitative work. Even something as quantitative as opinion-poll responses relies on earlier qualitative research into what the survey questions mean to the people being interviewed. Other subjects of quantitative interest, such as legislative voting, are complemented by the qualitative research obtained by observing and interviewing politicians and their staff members.

If it’s really true that new transparency guidelines could push people away from mixed methods, that researchers might just throw away their qualitative data and analysis because they can’t figure out how to follow the transparency requirements–and I’ll take Chris’s word that this could happen–then, yes, we should be concerned.

On the other hand, what can happen when qualitative research is uncontrolled and unvetted? Alice Goffman and Sudhir Venkatesh are two sociologists who did high-profile, much-publicized work based on stories with little documentation. Now people don’t know what to believe about their reports. Verification and replicability are important, even in qualitative work.

The solution has got to be to address the concerns of qualitative and mixed-methods researchers while moving forward where we can. Even basic steps–making anonymized data available, releasing the exact text of survey forms along with interview dates and nonresponse rates, sharing statistical code, and so on–should help a lot. I speak as someone who had to essentially retract an award-winning paper because of a data coding error.

I think it’s time to get started, even while we address the concerns raised by Chris Blattman and others.

P.S. In comments, William Kelleher argues against the research transparency initiative on the grounds that it is “neopositivism” that “is already damaging the reputation of the profession” and “is moving to take control over the very definition of political science, and set up an enforcement mechanism to award research that complies with its Physics envy dogma.”

I have two things to say about this.

First, any word that begins in “neo” and ends in “ism” brings to mind “neoconservatism,” and so I think it’s worth pointing out that actually existing neoconservatism is very much not in the spirit of research transparency! In particular, the most notable act of neoconservatism was to start a war based on undocumented claims. The point of the political science transparency initiative is to make data and methods available for all to inspect. If this is “neopositivism,” let me just emphasize that it has nothing to do with its near-namesake.

Second, as I noted in my post above, I have enormous respect for qualitative research (in physics as well as in political science) and would not want our profession to be subject to physics envy. So we’re in agreement here. What I don’t see is why Kelleher seems to think that this move toward transparency in data and methods would be a bad thing.