A recent initiative in political science is intended to do just that. It’s called DA-RT, short for Data Access and Research Transparency. And it’s splitting the discipline.
Even before the recent scandal about fake data, editors from 27 political science journals signed on to a DA-RT transparency statement. It commits each journal to certain principles, such as requiring authors to “ensure that cited data are available at the time of publication through a trusted digital repository” and “where possible to provide access to all relevant analytic materials.” The goal isn’t just to reduce fraud or mistakes, but to raise the clarity and quality of research.
For the kind of statistical, data-driven work I do, the requirements are pretty clear and (to me) make perfect sense. There need to be some exceptions and protections, but my sense is that most quantitative scholars are on board with the idea of sharing their data and code.
In the past few weeks, though, a number of my colleagues doing other kinds of work have said, “Wait a second, what does this mean for us?” Last week, six professors started an online petition asking the profession to hold off on enforcing the DA-RT principles. The basic thrust: “Can we talk about this more?” Hundreds of scholars have signed — including friends and colleagues whose work I enjoy and whose opinions I respect.
One is John Patty at the University of Chicago, a journal editor who signed the transparency statement. He has argued that we’ve debated the DA-RT initiative for five years already, that these journal editors have decided what they think is best (which is their prerogative) and that editors will refine the approach over time to respond to different kinds of research. To him, the petition’s request for a delay looks like foot-dragging and fear-mongering.
These are understandable concerns, and I know a lot of people who share them. But here’s why I differ: So far, DA-RT supporters have put little meat on the bones of their principles. From where I stand, qualitative scholars look to be bearing a lot of risk. Nobody likes uncertainty, especially when your career could be on the line. I’d like to give more time for qualitative standards and norms to be debated and evolve.
In the meantime, let’s move ahead with clear standards for quantitative work. After many years of debate, it’s relatively clear how transparency and data access work for people who rely on quantitative data.
In most cases, it involves making data and statistical code publicly available. The costs of doing so are not that great, and it helps to ensure that we can correct papers that have errors or problems. In addition, other scholars can use the data to answer new questions, especially students or junior faculty members who can’t afford to collect their own data.
There are exceptions, of course. Naturally, proprietary data can’t be posted publicly. And people who collect their own expensive data might deserve a year or two to publish a second paper before getting scooped. Like patents, we want to get the incentives for innovation right. Nevertheless, in my experience most quantitative researchers agree that making data and code publicly available is the right thing to do.
But researchers who rely on qualitative data — political scientists, historians, and anthropologists — don’t have anything like that kind of consensus about transparency and data access. Now some are worried that the best journals will settle on standards that are costly or even wrong. Some also don’t trust centralized systems to make the right decisions. And some feel the profession’s journals should belong to the profession and should change with more care.
I also do a little qualitative work. I run a lot of experimental programs in poor countries, and survey large numbers of people over time. When I do that, I usually hire and train a few local researchers to interview the same 40 or 50 people multiple times over a few years. I find it hugely useful in understanding what’s going on, even if it ends up barely entering my journal articles.
But I’m a bad qualitative researcher. I don’t really know what I’m doing. No one ever properly trained me. I’m sure I’m sloppy in all sorts of ways. And to the extent I use any of this evidence in my papers, it’s not nearly as careful as my statistical work. And no one calls me on it.
This could be the best part of something like DA-RT. My papers would be much better, and science would be so much more “science-y,” if people like me weren’t allowed to be sloppy.
But what would DA-RT mean that I need to do, exactly? I’m not sure. Would it simply mean I have to be more careful about how I present, describe and use my qualitative data, meeting some basic standard we can all agree on? That’s straightforward, and completely consistent with the DA-RT principles.
Or would top journals expect me to scan, anonymize and post all my notes and transcripts? That’s consistent with the principles, too. But unlike with my quantitative data, the marginal cost would be huge, and there are a lot of unanswered questions, from the logistical to the ethical.
Others have similar questions. See the conclusion to this symposium of qualitative researchers debating DA-RT and what it means to be transparent. It shows you there is healthy debate, and broad agreement on some principles. But there is little clarity or agreement on the important details.
It’s clear what dabblers like me will do in the face of this uncertainty: stop talking about our qualitative data in our papers. And if you’re a real qualitative researcher new to the DA-RT conversation? You might be worried. If there are two things we know about behavior, it’s that incentives matter, and people are risk averse.
Not everyone who signed the petition fits into this category. Some are simply mistrustful of a centralized effort across journals and the professional association, the American Political Science Association.
Then there are the people like me, who just realized what the journals are up to and are unclear about what’s going on, despite public discussions. I’ve spent days trying to figure out what’s happening, and I’m still not clear on the details. I’m betting people like me are the majority of the discipline.
Last there are people dead set against any hard rules. Some of these haven’t even signed the petition because they prefer the train to stop rather than slow down. Jeffrey Isaac from Indiana University is one, and I recommend his essay and blog post. Isaac edits one of the main journals in the discipline, and he’s among a group that says data access and transparency should be encouraged where appropriate. But it’s not always appropriate.
I’m sympathetic to his view. The books that I’ve loved the most, that have challenged the way I think about politics, all used evidence like an art and not a science. I do not want to discourage that kind of work, but I worry that these new rules will have a chilling effect on it.
Ultimately I think it’s possible to build a broader consensus and coalition before going ahead. Granted, political scientists are seldom good practitioners of politics. But I think we can do better. I honestly think just a little more time and clarity would bridge any split in the discipline.
From the center of any debate, it always feels like there has been enough deliberation and communication. In my experience, that’s seldom true. More deliberation strikes me as the judicious and respectful thing to do.