Did Facebook overstep its bounds when it ran a secret psychological experiment on a fraction of its users two years ago? That's the question at the heart of an Internet firestorm of the past few days. The consensus is that Facebook probably did something wrong. But what, exactly? To say this is one more example of Facebook prioritizing power over privacy is to vastly oversimplify what's going on here. The reality is that people are objecting for a lot of reasons. Whatever your gut feelings about Facebook, don't give in to them. Yet.
If you're just coming to the story: For a week in 2012, Facebook took a slice of 689,000 English-speaking accounts across its userbase. Then for a random portion of those users, it tweaked the newsfeed in different ways to change what they saw. For instance, some newsfeeds were made to be "happier" when Facebook made negative-sounding posts less likely to appear. Other newsfeeds were made "sadder" when Facebook reduced the incidence of positive-sounding posts.
The apparent goal was to find out whether emotions were contagious on Facebook — whether happy (or sad) newsfeeds made users more likely to write more happy posts (or sad posts) themselves. The results were enlightening: The researchers found evidence to suggest that "emotional contagion" is in fact a thing. But Facebook probably didn't anticipate the backlash that followed. Adam Kramer, one of the lead researchers and a Facebook data scientist, penned a Facebook post to address the criticism, saying the study was partly motivated by a desire to understand what would keep people from leaving Facebook.
That hasn't stopped a vigorous — and healthy — debate from taking place about the convergence of business and academic research, and whether Facebook acted irresponsibly or unethically with its users' data. Facebook does a lot of questionable things, but its research on Facebook users probably shouldn't rank highly on that list. To understand why, let's unpack some of the charges being lobbed at the social network. Call it a taxonomy of Facebook critiques.
It used people's data for an academic study. The test involved a vast number of accounts, even if that amounted to, as Kramer put it, a mere 0.04 percent of Facebook's users. There was no opportunity for those people to give their informed consent to the specific study being conducted, despite the fact that by using Facebook, they had technically given Facebook their broad permission to use data for "internal operations, including troubleshooting, data analysis, testing, research and service improvement." Should that be considered enough consent? Arguably not, at least if you're using academic best practice as your standard. But as we'll see, that may not be the relevant standard.
It manipulated people's newsfeeds to make them happy or sad. This isn't quite right. Facebook wasn't trying to see if it could make people sad just because it could. Rather, it was testing a legitimate hypothesis amid a wider body of academic literature about what Facebook may be doing to us emotionally, culturally and socially. This is a valuable line of academic inquiry, because it has potential implications for the way we engage with Facebook (as Facebook itself understands). If there's one thing that's problematic about Facebook's methodology, it's probably that outside researchers can't replicate the study to test Facebook's results themselves.
Is it problematic that Facebook was the one to run the study, and for commercial purposes? If you're feeling exploited — either by Facebook's collection of data or by the way the company used it — then you have a bigger issue with 21st-century enterprises more generally. Whether disclosed publicly or not, the use of data for controlled, randomized experiments is how businesses operate, argues Brian Keegan, a social science researcher at Northeastern University:
These tests are pervasive, active and on-going across every conceivable online and offline environment from couponing to product recommendations. Creating experiences that are “pleasing,” “intuitive,” “exciting,” “overwhelming” or “surprising” reflects the fundamentally psychological nature of this work: every A/B test is a psych experiment.
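Keegan's point, that every A/B test is at bottom a randomized experiment, can be made concrete with a sketch. The following is a minimal, hypothetical illustration of how such tests typically assign users to a "treatment" or "control" group; the names and parameters are illustrative, not Facebook's actual code.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'treatment' or 'control' by
    hashing the user ID together with the experiment name, so the same
    user always sees the same variant of a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a fraction in [0, 1) and compare
    # it to the share of users who should get the treatment.
    fraction = int(digest[:8], 16) / 0x100000000
    return "treatment" if fraction < treatment_share else "control"

# Simulate assigning 10,000 users: roughly half land in each group.
buckets = [assign_bucket(str(uid), "happier_feed") for uid in range(10_000)]
```

Once users are split this way, the company shows each group a different experience (a tweaked newsfeed, a different button color) and compares the groups' behavior, which is exactly the structure of the emotional-contagion study.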
The study made it past an institutional review board. How? Facebook got the approval of an IRB, a panel designed to assess the ethics of human-subject research. The IRB looked at the results of Facebook's data analysis and gave the study the green light, but evidently didn't consider how Facebook acquired the data in the first place. Was that an ethical lapse?
If Facebook were an arm of the government or a federally funded academic institution, then yes. Research conducted on human subjects in those environments requires an IRB's approval. But as a private entity, Facebook isn't legally bound by those requirements, nor, apparently, was the study itself. It probably thought that getting an IRB's seal of approval would help boost the study's legitimacy — though PNAS, where the research was published, is already considered one of the world's leading scientific journals.
The distinction between Facebook and the ivory tower isn't, if you'll excuse the pun, merely academic. All kinds of businesses and organizations perform A/B testing on people without getting an IRB's approval, or without the customer's knowledge. Advertisers test and craft their messages to engender a specific emotional response to brands and products without consumers realizing it. Political campaigns use the e-mail addresses they've gathered or bought from other companies to create more positive responses and drive grass-roots actions. They even test the color of the buttons on their Web sites to increase donation rates. Target famously mined customer data to find out which of its customers were pregnant — much to the surprise of one father.
"We know that companies study their clients all the time," writes Danish researcher Thomas Leeper, in a measured blog post. "There is nothing out of the ordinary here in terms of business practice."
Facebook's problem is that it presented this research as science. So we've applied a scientific rubric to a commercial analysis that many companies would simply run internally, without telling the rest of us that it happened (much less disclose the results).
People should've been given the opportunity to opt in or out. Or at the very least, they should've been told that their behavior was being watched for this specific study. But Facebook seemingly changes its algorithm, layout or privacy settings every six months; the fact that it already manipulates what you see and how you behave on the site should be self-evident. Nobody knows this better than Upworthy, whose traffic numbers tanked after Facebook tweaked its newsfeed to feature Upworthy posts less frequently. Because of Facebook's secret sauce, your experience of Facebook is not like my experience of Facebook. That's intentional.
It's creepy. Beyond all the aforementioned criticisms, there's still something deeply unsettling about Facebook's experiment. I suspect the episode says more about us and our growing relationship with data than about anything unique to Facebook's actions this time. We're only just beginning to grasp — through reports on Big Data or congressional hearings about data brokers — how pervasive (and potentially invasive) companies can be when they're armed with data.
If nothing else, the study actually benefits consumers in an important way: It arms us with better information about Facebook itself and how we interact with it — which helps us evaluate whether to use the service at all. And if Facebook were really serious about studying emotional contagion, perhaps its next project should analyze the outrage on Facebook about Facebook.