Sandy Parakilas was an operations manager on the platform team at Facebook in 2011 and 2012. He is an adviser to the Center for Humane Technology.
Last week, Facebook suspended data firm Cambridge Analytica and an associated researcher, Aleksandr Kogan, for violating its policies on data protection. This followed reports that Cambridge Analytica acquired data on more than 50 million users without their consent and used that data to help Donald Trump win the U.S. presidential election in 2016. (Cambridge Analytica denies any wrongdoing.)
In 2011 and 2012, I led the team responsible for handling data policy violations on Facebook's App Platform, which included well-known games such as FarmVille and Candy Crush, among many others. The company cared little about protecting users' data then, and the Cambridge Analytica story shows that hasn't changed.
Before we discuss how Facebook treats issues such as these, some background on how the platform works may be helpful. Whenever you authorize an app that connects to Facebook, you see a dialog box asking you to grant the app access to data from your Facebook account. The app might ask only for the information listed on your public profile, such as your name, or it might ask for more personal things such as your friend list, your "likes," your photos or other data that isn't publicly available.
At the time that Kogan was gathering data on behalf of Cambridge Analytica, Facebook also allowed developers to access your friends' data, even though those friends had never agreed to connect to the app. This let developers quickly build giant data sets of many users' information. There was a way to turn off this friend-based access deep in Facebook's settings, and some language in the user terms of service that Facebook claims gave it the right to share data this way. But few users were aware of either.
Critically, once the data passed from Facebook’s servers to the developer, Facebook lost all insight into or control over how the data was used. To prevent abuse, Facebook created a set of platform policies that forbade certain kinds of activity, such as selling the data or passing it to an ad network or data broker such as Cambridge Analytica. However, Facebook had very few ways to discover abuse or act on it once discovered.
Here’s an example of how an investigation into one such issue played out during my time at the company: In late 2011, it was revealed that an app called Klout was creating “ghost” profiles of children. These public profiles were not created or authorized by the children and were reportedly based on friend data from adults who had authorized the Klout app. As the lead for platform data protection issues, I had to call the leadership of Klout and ask whether it was violating any Facebook policies, because we couldn’t see what it was actually doing with the data. The leadership swore it was not in violation. I reiterated the importance of following the policies, and that was the end of our call. Facebook took no further action, and Klout continued to access Facebook data, though it turned off the ghost profiles feature.
While Klout was an unusual case because the alleged violation was publicly visible, other less visible data protection issues happened regularly during my tenure. Facebook had the following tools to deal with these cases: It could call the developer and demand answers; it could demand an audit of the developer’s application and associated data storage, a right granted in the platform policies; it could ban the developer from the platform; it could sue the developer for breach of the policies, or it could do some combination of the above. During my 16 months at Facebook, I called many developers and demanded compliance, but I don’t recall the company conducting a single audit of a developer where the company inspected the developer’s data storage. Lawsuits and outright bans were also very rare. I believe the reason for lax enforcement was simple: Facebook didn’t want to make the public aware of huge weaknesses in its data security.
Concerned about the lack of protection for users, in 2012 I created a PowerPoint presentation that outlined the ways that data vulnerabilities on Facebook Platform exposed people to harm, and the various ways the company was trying to protect that data. There were many gaps that left users exposed. I also called out potential bad actors, including data brokers and foreign state actors. I sent the document to senior executives at the company but got little to no response. I had no dedicated engineers assigned to help resolve known issues, and no budget for external vendors. Facebook’s users were being protected by whatever external partnerships I was able to strike without having to pay those partners. The only time my team got any attention was when negative articles appeared in the press.
Facebook will argue that things have changed since 2012 and that the company has much better processes in place now. If that were true, Cambridge Analytica would be a small side note, a developer that Facebook shut down and sued out of existence in December 2015 when word first got out that it had violated Facebook's policies to acquire the data of millions. Instead, it appears Facebook used the same playbook that I saw in 2012. It took the developer's word rather than conducting an audit, and it ignored press reports about Cambridge Analytica using Facebook data in violation of its terms during the election. It appears the company took no further action until Friday, when whistleblowers and news stories forced it to finally suspend Cambridge Analytica and Kogan from the platform.
In the wake of this catastrophic violation, Mark Zuckerberg must be forced to testify before Congress and should be held accountable for the negligence of his company. Facebook has systematically failed to enforce its own policies. The only solution is external oversight.