Facebook doesn’t seem ready for that. A New York Times report earlier this month about the company’s strategy of denial, delay and deflection shows the company has been preoccupied with the “unfair” public criticism it has received — despite periodic declarations that Facebook recognizes it has maybe been doing some things wrong and plans to fix them. Zuckerberg offered some wan assurances in an interview with CNN last week, pledging that his company is reviewing “thousands of apps” with the goal of “making sure that developers like Aleksandr Kogan [author of the app used by Cambridge Analytica], who got access to a lot of information and then improperly used it, just don’t get access to as much information going forward.” Not as much access to information? That’s weak sauce, and it doesn’t sound at all like a comprehensive review of Facebook’s strategic choices about how to handle our data.
Still, I get it. Like many other Facebook fans, I believe some significant percentage of the criticism leveled against the social network (that it’s addictive, for example, or that it puts us in “filter bubbles,” or that it makes us depressed) is unfair and even silly. That’s why I have been predisposed to think the company’s missteps are just ordinary new-medium growing pains.
But consider the reports that Facebook’s strategic response to the criticism has included opposition research about the U.S. senators who would be quizzing Zuckerberg at his appearances before Congress this year. Facebook’s departing public-policy chief Elliot Schrage more or less admitted as much in a blog post Wednesday, characterizing the work of Definers Public Affairs and other firms Facebook has worked with as “useful to help respond to unfair claims where Facebook has been singled out for criticism, and to positively distinguish us from competitors.” That’s an awfully positive spin on efforts to spread the blame for Russian disinformation to tech rivals such as Google. (There was even a brain-dead effort to blame liberal philanthropist George Soros, a bête noire in the eyes of right-wing politicians, for the nonprofit organizations that have condemned some of the company’s practices.) This stuff happened, Schrage hints, because Facebook’s relationship with Definers had become “less centrally managed.”
Facebook has been making life particularly hard for those of us who have defended the company. I still see Facebook’s major accomplishment in connecting more than 2 billion people as mostly positive. This doesn’t mean I’m naively dismissing how the platform has been misused, mischievously and maliciously. I’m just not surprised by this misuse — which every new mass medium has experienced, starting with the printing press.
But the Times report, along with follow-up coverage in the Wall Street Journal, makes me more skeptical about Facebook’s willingness to recognize its need to change. Is Facebook’s use of Definers Public Affairs really just a recent lapse, given that the company engaged in a similar “defining” effort in 2011, when its leadership sought to blunt Google’s efforts to start a competing social network? Facebook ultimately won out there, even though its rank tactic was exposed by tech bloggers.
Still, Zuckerberg and Facebook absolutely do have the capacity to turn this crisis around, provided the company takes some aggressively forward-looking, positive steps.
Facebook needs to stop treating critics — even the meanest, most unfair and most intractable ones — as combatants. That’s an unforced error that is centrally responsible for how bad the company looked after the Times article was published. Facebook is never going to win the hearts and minds of every critic, but the company may win over enough if it shows that it is listening to them and then makes significant changes in how it treats user data and speech.
Zuckerberg also should embrace the model of Facebook as an “information fiduciary” — an enterprise that has the same kinds of ethical obligations to users that lawyers have to clients or that doctors have to patients. That would mean not merely that Facebook is being more humane to its users — it also would help the company step up to the role of user advocate. There’s a lot of good scholarship on this issue, notably from my colleague Jack Balkin at Yale Law School, and it boils down to adopting fiduciary duties such as the duty of care, the duty of confidentiality and the duty of loyalty. In nonlawyer language, this means not being negligent with users’ information, keeping it private, and not using users’ information to serve the company’s interest at the expense of users’ well-being. This includes not leveraging data with the aim of controlling users or allowing third parties to do so. (This could have an ancillary effect of reducing “fake news,” which is designed to play on readers’ prejudices and fears.) Being an information fiduciary means making a long-term unbreakable commitment. It’s a set of obligations that sits on top of, and governs, the company’s terms-of-service agreements with users.
Finally, the company ought to cease hostilities with its occasional rivals such as Apple and Google — even when they’re critical of what Facebook has done. If the social network embraces the “information fiduciary” model, either it will have a competitive advantage against companies that don’t embrace it, or the other companies will embrace it, too. An industry-wide adoption of fiduciary standards is a good answer to those critics, such as European Union regulators, who think American tech companies don’t honor privacy enough.
There are other advantages to adopting this stronger ethical framework in dealing with user information. Facebook may not yet have committed itself to being an advocate of user privacy against governments, but Google and Apple certainly have. The company definitely cares about telling users, to the extent possible, how it handles government requests and demands. But status as an information fiduciary would give Facebook stronger standing to resist turning over user information in response to government demands — both from the U.S. government and from other governments that may have fewer due-process constraints. It also would put Facebook in the position of being an advocate or even a tribune for user interests — in speech as well as privacy — if it wants to be. There’s even some solid Supreme Court precedent on this point.
The main thing for Zuckerberg and Facebook to remember is that, if they’re going to assume a “wartime” footing to respond to a current crisis, they need to have a plan for the peace. That plan can’t excuse any lapses of their earlier responsibilities toward users and toward society generally. Instead, Facebook needs to be more ambitious and promulgate new, higher standards of fiduciary duties to all of us. Other tech companies need to do this, too. That’s what “wartime” CEO Mark Zuckerberg should aim for, and it’s what a lasting peace requires.