The controversy over the Montana field experiment, and the outcry that followed, raises an important question: is it okay to affect real-world outcomes with research?
Is this reaction right?
The day the news broke, I wrote this post. The gist of it: Why can’t academics intervene in the world? If it’s okay for a civil society organization or a citizen to take an action, why can’t a professor? Should we hold researchers to a different standard than nonprofits and citizens? The political scientists quoted by Talking Points Memo weren’t saying that the rich and powerful shouldn’t muck around in elections. They were saying political scientists shouldn’t do so in their research.
Is that right? My research mucks about in the real world all the time. I care about reducing poverty and violence in the poorest and most violent places. I think a lot of aid organizations are doing it wrong, and I try to come up with better ways, often by testing the programs with real people. Partly I’m just asking whether reasonable and common interventions work, which is impartial research. But I would be lying if I said I didn’t have an agenda, or that the programs didn’t entail risks.
Is my research unethical? Or is there something different about research in elections? I’ve been mulling this over for a week.
I think the most persuasive argument against intervening in the world boils down to this: what we do as individual researchers can affect the profession, especially if it provokes outrage. Outrage leads some to question the legitimacy of research, and it can make it harder for other researchers to attract funding.
What if no one gets outraged, and the political scientist next door isn't harmed? Then there's a second, related argument that's also persuasive: why muck around in the world if you don't really have to? You need a good reason.
Both of these arguments boil down to two questions: "Do the benefits exceed the risks?" and "Who decides?"
Now, these strike me as ethical standards that everyone ought to consider when they act in the world. But the answers for researchers might be a bit different, partly because research produces special benefits (the greater good of knowledge) and carries special risks (jeopardizing the knowledge we might gain in the future).
This means the answer to "who decides?" is partly the university where the academic works, or maybe even the profession to which they belong.
I agree. But I still think my original point is important: In a free society, and in a profession where we want to encourage freedom of thought, I’m uncomfortable with putting a heavy burden of regulation on researchers. And I’m completely against a blanket statement that intervening in the world is wrong.
I don’t want to be in a profession that regularly attracts public ire. But I also don’t want to be in a profession (or a university) where the slightest bit of risk gets a project vetoed.
As usual, the best answer is somewhere in between. There are a number of things that free a researcher to take risks. One is consent. Another is taking care that the benefits exceed the risks (or at least monitoring them very closely).
If you take a very conservative view, then even small risks will veto a project. Some people take that view. I don't think we all should.
Medical research on children is a good example. For a long while, universities were very cautious about experimenting on kids, for seemingly obvious reasons. A few decades later, the problem was that doctors had little idea what kept children healthy: children's bodies work and react differently from adults', in ways that were poorly understood. So the norms changed. Today, if you run a medical experiment, you often have to justify why you're not also studying children, because that knowledge is important.
The same could be true of what I do: research in war zones, or with the very poorest or most violent. Very few researchers work with these populations, at least outside the United States. Partly for that reason, I've never seen a more depressing jumble of overpriced and ineffective programs. That needs to change.
So we should act in the world. But when we do, you could argue we need some formal restraints: for example, clearly indicating in a mailer what is research and what is not, or rules about when it is (and isn't) okay to act in close elections.
Recently, Macartan Humphreys offered four guidelines for making field experiments more ethical. They are sensible. Should we give them some bite by putting in place profession-wide formal requirements?
I'm skeptical that the profession could come up with formal guidelines that fit every case, adapt to changes in methods and knowledge, and actually reduce the risk of something terrible happening. In my experience, bureaucratic procedures are better at producing the illusion of control (and legal cover) than at actually improving ethics.
In fact, I think political scientists already have most of what we need to deal with knotty ethical questions, and it’s working well. We have human subjects committees. We have professional norms. We have a free press. The debate this past week has been a healthy one.
I'm arguing for something simple: a profession that gives researchers some leeway to take moderate risks and make mistakes, and that trusts these institutions to correct course as we go, without blanket bans on acting in the world. A profession in which no study ever courted controversy would be a sad one.