Facebook’s controversial study is business as usual for tech companies but corrosive for universities

Jay Rosen
July 3
Jay Rosen has been on the journalism faculty at New York University since 1986; from 1999 to 2005 he served as Department chair. He is the author of PressThink, a journalism blog.

Is this user feeling happy or sad? And is it Facebook’s fault? (Dado Ruvic/Reuters)

When I first heard about the Facebook study that subtly manipulated the news feeds of users to see whether less sad inputs made for less sad outputs (you can read about the study here, and the lessons of it here), I was shocked but not surprised. This is shorthand for: I knew it was likely. Smart people had warned of it. But I never expected to see it play out in this way. As The Awl put it, paraphrasing a common reaction: “Facebook screwed around with users’ emotions just to see what would happen, and because it could.”

I want to isolate what for me is the most troubling part of the swirl of events that brought the “emotional contagion” study to public attention. It involves the legitimacy of my own institution: the American research university and the scholarly enterprise that is based there. (I have a PhD in media studies and I am a journalism professor at NYU.)

My initial reaction when I read about the study: No surprise that Facebook experiments with user mood. This one sounds a little creepy but no different in kind from others we know about … Wait, this is an academic study? It appeared in a real, scholarly journal?

Quite real. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks” was published online and in print by the Proceedings of the National Academy of Sciences. It carries the names of two researchers from Cornell University, professor of Communication and Information Science Jeffrey Hancock and Jamie Guillory, a Cornell Ph.D. student at the time, now at the University of California, San Francisco; plus a researcher from Facebook, Adam D. I. Kramer (“Core Data Science Team, Facebook, Inc.”). The article was edited for publication by Susan Fiske, a Princeton University psychology professor. It relied on experimental data that Facebook itself collected.

Universities have review boards for when their scientists experiment on human subjects. Their purpose is to make sure that human dignity and autonomy are respected and that ethical disasters like the Tuskegee studies are avoided. Facebook had some sort of internal review, but we know little about it, except that it has “come a long way” from where it was in 2012, when the research in question was conducted. That’s according to Adam Kramer of the data science team at Facebook.

In a statement this week Cornell said:

Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any data at any time – Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Not our data. No review required by us. That’s very different from: this research meets our seal of approval. So whose seal did it have? “I was concerned,” the editor for the scholarly journal, Susan Fiske, told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it.” Her impression was incorrect. The Cornell board had approved its own faculty member’s participation, but only because he had not collected the data. Therefore the ethical burden fell elsewhere.

Meanwhile, Adam Kramer, the researcher from Facebook, expressed doubts about the study:

The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

Notice how the worrisome development is an unexpected flood of “anxiety.” A Facebook experiment — never meant “to upset anyone” — did make some people upset. Problem! The worrisome thing is not: “This study evaded the system of controls that universities have put in place for when human beings are the ones experimented upon, but somehow still grabbed the legitimacy points that publication in a peer-reviewed scholarly journal confers on a work of research….”

The problem is not: When we experiment on people in this way, how do we know we’re being respectful of their full humanity? It’s: “We never meant to upset anyone…” (And underneath that, of course: will this tarnish the brand?)

When it comes to experimenting on human beings, we should distinguish between “thick” and “thin” forms of legitimacy. Research universities — including my own institution — must be especially attentive to this distinction. Their thing is “thick” legitimacy. Anything that takes them away from it undermines the institution.

Because it’s an Internet company, Facebook experiments on human beings non-stop. Doing so is part of its business model. One name for this practice is A/B testing. An Internet company with a consistent traffic base can easily test which headline results in more clicks. Or in the case of Facebook’s happy/sad study: which newsfeed inputs result in “less sad” user outputs. This is experimental research, of a kind. It’s not the same as peer-reviewed scholarly studies, but the data generated is real enough and it cannot be dismissed.
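To make the mechanics concrete, here is a minimal sketch of the kind of A/B test described above: two headlines, a split of incoming traffic, and a check on whether the difference in click rates is larger than chance would explain. The headline counts and the two-proportion z-test are my own illustration, not anything drawn from the Facebook study or its methods.

```python
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is headline B's click rate different from A's?"""
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis that the headlines perform the same.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se
    return rate_a, rate_b, z

# Hypothetical traffic split: half the visitors see headline A, half see headline B.
rate_a, rate_b, z = ab_test(clicks_a=480, views_a=10_000,
                            clicks_b=560, views_b=10_000)
print(f"Headline A: {rate_a:.2%}  Headline B: {rate_b:.2%}  z = {z:.2f}")
# By convention, |z| > 1.96 counts as significant at the 5 percent level.
```

The point of the sketch is only that this kind of testing is routine product measurement; nothing about it passes through the review machinery that governs academic research on human subjects.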

Thin legitimacy is when experiments conducted on human beings are fully legal and completely normal, as in common practice across the industry, but there is no way to know whether they are minimally ethical, because companies are under no duty to think such matters through or to share their methods with us.

Thick legitimacy is when experiments conducted on human beings are not only legal under U.S. law and common in practice but also attuned to the dark history of abuse in experimental situations, and thus able to meet certain standards for transparency and ethical conduct, like, say, the American Psychological Association’s “informed consent” provision.

For purposes of establishing at least some legitimacy, Facebook relies on its “terms of service,” which is 9,000 words of legalese that users have no choice but to accept. That’s thin. Meaning: not nothing, but there is not a lot of choice there. Universities like Cornell, academic journals like the Proceedings of the National Academy of Sciences, editors who are professors of psychology at Princeton: they have to do better than this thinned-out, lawyers-in-charge standard. That’s what I’m calling “thick” legitimacy, the protection of which is their primary job.

Maybe when Facebook is a research partner, the universities involved will have to enforce a standard of “thick” legitimacy on the project. Or maybe Facebook will one day decide that “thick” legitimacy is the better way to go. Questions of this kind did not arise in the case of the “emotional contagion” study. But they should have.

When the study’s methods became controversial in the public square, in the press, and in online conversation, that should have been a moment for the university to shine. Our strengths include: Academic freedom. Knowing what you’re talking about. “Yes, we thought of that.” To the press, or to anyone else with questions, we should be happy to explain our research, including ethical issues as they arise. As academics, we pride ourselves on thinking these things through. And we have procedures! If you experiment on human beings, you have to follow them. Academic research is not some free-for-all. It has to meet certain standards. When those standards become controversial in the public square, we are happy to explain them. Because we know what we’re doing—

Except when we don’t. Reached by The Atlantic, one of the academic researchers on the Facebook study chose silence rather than “let me explain our research design.”

Author Jamie Guillory responded but declined to talk, citing Facebook’s request to handle reporters’ questions directly. Early Sunday morning, a Facebook spokesman emailed me with this statement: “We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.”

To me that’s… thin. Company boilerplate signifies; a Ph.D. and expert in the field defers. More serious is that we haven’t heard a word from the senior scholar on the study, Cornell Professor of Communication and Information Science Jeffrey Hancock. But we did hear this: “One of the study’s authors told The Atlantic on Monday that he’s been advised by the university not to speak to reporters.”

That’s not how “thick legitimacy” institutions operate. Neither is: our guy didn’t collect the data, so we’re in the clear. Or: “I’d love to comment but we agreed to let Facebook handle the questions…” Thin, thin, thin. My fellow academics: this is not our niche!
