EVERY EXPLOSIVE report on Facebook’s data-dealing in recent months is really part of the same story. Facebook wanted to connect the world, and it also wanted to make money. To do both, it decided to connect itself to the rest of the Web — by sharing user information from firm to firm. At some point, Facebook discovered that users did not want their data shared so widely and that regulators also objected. The company changed its policies. But its response was halfhearted and uneven. To recover trust now, it must do more than come clean and apologize.
The latest investigation from the New York Times chronicling Facebook’s privacy missteps describes partnerships Facebook formed with other technology companies. The agreements range from deals with device-makers integrating Facebook with their systems, to an “instant personalization” experiment that let sites tailor their displays to users’ public profiles, to arrangements that allowed companies such as Netflix and Spotify to read some private messages.
The deals helped Facebook extend its reach across the Internet, and data the company gained in return could have helped it improve its then-fledgling targeted-advertising system — which has since become a golden goose. The partners benefited, too, from shiny features and from information that helped them better understand their own audiences.
Facebook says services that received private data without explicit consent could use that information only to “recreate the Facebook experience” — making them simple extensions of its own social network. This argument is plausible for some partnerships, such as with device-makers. For others, it is less persuasive. And that is only one type of agreement described in the Times article; Netflix and Spotify, for example, had broader permissions to handle data but also provided more notice. The bottom line, however, is clear: Consumers often did not know what information Facebook was giving away, to whom or for how long.
Facebook evidently thought in its earlier days that it could share whatever data it wanted without anyone protesting. The company eventually learned otherwise, including from a Federal Trade Commission investigation that resulted in a 2011 consent decree. And over time, it rolled back many of these privacy-violating features, though messily. Most troubling, Facebook continued to give some companies more access to data than its public pronouncements suggested.
D.C.’s attorney general announced Wednesday that the District would sue Facebook, marking the first U.S. regulatory action in response to the Cambridge Analytica scandal. It probably won’t be the last. But Facebook’s future will depend on more than the outcome of any court case. The company faces a trust deficit that grows with every story of apparent negligence.
Facebook has to come clean about exactly what it has shared in the past and what it is sharing now. But that’s no longer sufficient. The best way to regain trust now would be to endorse a federal privacy law — a real one. That requires more than pushing for mushy principles that every tech company seems to say it supports, and more than advocating the loosest possible framework to preempt California’s stricter regulations. It requires accepting and supporting a future in which users really control their own data.