Facebook knew all of those things because they were findings from its own internal research teams. But it didn’t tell anyone. In some cases, its executives even made public statements at odds with the findings.
This week, each of those revelations was the subject of a story in the Wall Street Journal, part of an ongoing investigative series that it’s calling the Facebook Files. The reporting is based on internal Facebook documents, some of which were turned over to the Journal by a person seeking federal whistleblower protection, and interviews with current and former employees, most of whom have remained anonymous.
While the stories are noteworthy in themselves, their provenance points to a deeper issue at Facebook: the world’s largest social network employs teams of people to study its own ugly underbelly, only to ignore, downplay and suppress the results of their research when they prove awkward or troubling. Why it would do such a thing is a question whose answer lies at least partly in the company’s culture and organizational structure.
Like other major internet platforms, Facebook weighs concerns about its impacts on users and society alongside traditional business imperatives such as growth, profit and marketing. Unlike some rivals, however, Facebook routes weighty decisions about content policy through some of the same executives tasked with government lobbying and public relations — an arrangement that critics say creates a conflict of interest. Often, those executives seem to prioritize public perception over transparency.
Facebook did not respond to a request for comment Thursday. In the past, it has responded to criticism over the role of its organizational structure by downplaying the role of any given executive and explaining that big decisions at the company receive input from multiple teams.
From its early days, Facebook has employed data scientists across various teams to study the effects of its products, and taken their findings seriously at the highest levels. In 2008, for instance, CEO Mark Zuckerberg signed off on the introduction of a “like” button only after its data scientists found in a test that it made users more likely to interact with one another’s posts, a story recounted by longtime Facebook executive Andrew Bosworth in a 2010 Quora post. In 2015, members of the company’s news feed ranking team explained to me how they rely on a dizzying array of surveys, focus groups and A/B tests to measure the impacts of any proposed change to the algorithm along multiple dimensions. Most of those findings were never publicized, but they factored heavily in the company’s decisions about which changes to implement.
More recently, Facebook has tasked its data scientists and multiple integrity and safety teams across the company with investigating questions about its products’ influence on things like global affairs, the flow of political information and users’ well-being. In at least a few cases, their findings have informed key product decisions. The 2018 Facebook news feed change around “meaningful interactions,” for one, was justified partly by appeal to research that found interacting with friends on social media was better for people’s mental health than passively watching videos.
Yet a pattern has emerged in which findings that implicate core Facebook features or systems, or which would require costly or politically dicey interventions, are reportedly brushed aside by top executives, and come out only when leaked to the media by frustrated employees or former employees.
For instance, the New York Times reported in 2018 that Facebook’s security team had uncovered evidence of Russian interference ahead of the 2016 U.S. election, but that Chief Operating Officer Sheryl Sandberg and Vice President of Global Public Policy Joel Kaplan had opted to keep it secret for fear of the political fallout. In February 2020, The Washington Post reported that an internal investigation following the 2016 election, called “Project P,” had identified a slew of accounts that had peddled viral fake news stories in the run-up to Donald Trump’s victory, but only a few were disabled after Kaplan warned of conservative backlash.
In September 2020, BuzzFeed obtained a memo written by former Facebook data scientist Sophie Zhang, making the case that the company habitually ignored or delayed action on fake accounts interfering in elections around the world. In July 2021, MIT Technology Review detailed how the company pulled the plug on efforts by its artificial intelligence team to address misinformation, out of concern that they would hurt user engagement and growth. Just last month, the company admitted that it had shelved a planned transparency report showing that its most shared link over a three-month period was an article casting doubt on the safety of coronavirus vaccines.
Kaplan, a former Republican operative, is a recurring figure in many of these accounts. His current and former bosses, Nick Clegg and Elliot Schrage, respectively, also surface at times, albeit less often. They, in turn, report to Sandberg, who is Zuckerberg’s right hand.
Part of the issue, insiders say, may be the scope of these executives’ roles. As policy chief, Kaplan has input into decisions about how to apply Facebook’s rules, while also overseeing its relations with political leaders in D.C. — a mandate that all but ensures political considerations shape the platform’s policy choices. Clegg, meanwhile, oversees both policy and communications, weighing not only politics but PR concerns in evaluating which policies to pursue.
In contrast, Twitter’s then-vice president of global communications, Brandon Borrman, told me in 2020 that his company sends decisions about content enforcement, trust and safety, such as the call to fact-check one of Trump’s tweets for the first time, up a chain of command that is separate from its political and public relations divisions. Borrman said that he and the company’s top government relations executive were briefed on the decision only after CEO Jack Dorsey had accepted the trust and safety team’s recommendation.
Alex Stamos, Facebook’s former chief security officer who struggled to publicize his team’s findings on Russian election interference, has argued Facebook’s organizational structure helps to explain why all kinds of well-intentioned internal studies and projects at the company never see daylight. (Stamos now researches cybersecurity at the Stanford Internet Observatory.)
“I keep talking about how organizational design is a huge problem at Facebook,” Stamos tweeted Wednesday, after the third report in the Journal’s Facebook Files series. “In these cases, the unified product policy/government affairs structure and the isolation of people who care in dedicated Integrity teams are the problem. And Zuck.”
The last line of that tweet is a reference, of course, to Zuckerberg, who emerges in Sheera Frenkel and Cecilia Kang’s recent book “An Ugly Truth: Inside Facebook’s Battle for Domination” as the driving force behind a company culture that has long prioritized growth and dominance over concerns of societal harms.
Sandberg, for her part, is portrayed in the same book as averse to confrontation and unable or unwilling to stand up to Zuckerberg and Kaplan on pivotal decisions. Her private conference room at Facebook’s headquarters long bore a sign that said “Only good news,” according to numerous reports — a credo that may go a long way toward explaining why uncomfortable internal research findings struggle to find an audience.
Robyn Caplan, a researcher at the nonprofit Data & Society, said she has repeatedly found over the years that online platforms’ struggles and inconsistencies with content moderation have their roots in corporate organizational dynamics that emerged from start-up culture. In many cases, Facebook will consult with internal and external researchers on how to address a problem, only for their advice to be overridden by more influential internal stakeholders, such as the leaders of its product, engineering or business divisions.
Caplan said the report that Facebook applied different tiers of content moderation practices to influential and ordinary users is symptomatic of a widespread practice in social media — one that prioritizes avoiding bad press over treating users fairly. “That’s an instance in creating a set of policies or processes designed to make the most (potentially) vocal critics happy, while undercutting needs of other users and groups,” she said. Caplan investigated a similar system at YouTube in a 2020 study that she co-authored with Tarleton Gillespie, a principal researcher at Microsoft Research.
For an approach that’s intended to avoid bad press, Facebook’s penchant for suppressing inconvenient internal research has itself generated a remarkable amount of, well, bad press.
As the latest batch of leaked internal critiques continues to trickle out, the company faces a choice. It could rethink its philosophy, realign its internal structure to separate policy from politics, and begin to pay greater heed to the trust and safety researchers and data scientists in its ranks. Alternatively, it could decide that such research creates more headaches than it’s worth, and limit the amount of self-critical projects it undertakes in the first place. Or it could carry on with the status quo, weathering this round of bad publicity and regulatory pressure as it has all the others before it — and likely continuing to rake in the enormous profits that have remained a constant through it all.
Correction: An earlier version of this story mischaracterized a peer-reviewed journal article by Robyn Caplan and Tarleton Gillespie as a white paper.