Google kept quiet for more than six months about its discovery of a bug that put at risk the personal data of hundreds of thousands of Google+ users, the company said Monday, a delay that could spark a new round of regulatory and political scrutiny.
The decision to not immediately report the software bug — in a process that included briefing chief executive Sundar Pichai — was discussed in an internal document that expressed concerns about the company’s reputation and the possibility of increased scrutiny from regulators, said a person familiar with internal deliberations at Google.
Google said Monday that it did not immediately announce the data leak because it was unsure which users were affected or whether the data had been misused. The company declined to comment on whether concerns about regulators or its reputation affected its decisions.
The person close to the situation, who spoke on the condition of anonymity to describe sensitive matters, said the document was not part of the official decision-making process at Google.
Google found and repaired the software bug in March, according to a company blog post Monday. But the delay until October in revealing the incident could reignite long-standing complaints from federal and state officials that tech giants such as Google are reckless with user privacy and not forthcoming enough when breaches and other security incidents happen.
Google discovered the Google+ security bug in the same month that Silicon Valley rival Facebook was facing massive scrutiny over its role in allowing people affiliated with political consultancy Cambridge Analytica to collect data on 87 million users — an incident that led to demands that Facebook chief executive Mark Zuckerberg testify on Capitol Hill. He did so in April.
Google announced in its blog post Monday that it will mostly discontinue Google+, its failing social media offering, limiting it to only business and other enterprise customers. The company also announced new curbs on the information, such as call logs and contact lists, that outside developers can gather on Android, the Google operating system used by most of the world’s smartphones. And it will impose new limits on the data shared about users of its popular email service, Gmail.
The post did not make direct reference to the internal document expressing worries about Google’s reputation, the existence of which was first reported by the Wall Street Journal. It also first reported that Pichai had knowledge of the incident and the decision not to make an immediate announcement.
The revelation is likely to heighten the stakes of his coming appearance to testify before Congress, amid allegations that technology companies are squelching conservative voices online. A date for that hearing has not been set.
Reports of the Google+ security bug reopened complaints about how Google handles personal data. Privacy advocate Jeff Chester of the Center for Digital Democracy called the delay in revealing the software bug “a digital coverup” and said, “Google has demonstrated that it cannot be relied on to protect privacy.”
The Google+ incident is different in several key ways from Facebook’s scandal with Cambridge Analytica, which triggered federal investigations from multiple agencies. An internal Google company review, called Project Strobe, discovered the bug allowing outside software developers to potentially gain access to personally identifiable information on users, including names, email addresses, ages, occupations and relationship status.
But the company has said that other information, such as phone numbers and social media posts, was not put at risk and that it has no evidence that any of the data was improperly collected by outsiders. A review of two weeks of data in March, the company said, showed that information on as many as 500,000 people may have been exposed to the developers of 438 software applications.
“This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps,” said the company blog post. “The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.”
In the blog post, Google said it wasn’t immediately sure whether the data had been misused or what affected users could do to protect themselves. The decision not to disclose the incident was made by a standing company committee, the Privacy & Data Protection Office, before being reviewed by company executives.
Other disclosures of data mishandling have attracted the attention of the Securities and Exchange Commission, which is putting increasing pressure on corporations to disclose data security incidents and has reopened its cybersecurity unit. This year, the SEC fined the company formerly known as Yahoo $35 million for failing to tell investors about a massive cyber-breach for two years — the first time the regulator has punished a company for such conduct.
“This is the kind of disclosure situation that the SEC will absolutely investigate,” said John Reed Stark, who spent nearly 20 years in the SEC’s enforcement division and now runs a cybersecurity consulting firm. “The SEC enforcement staff are likely scouring Google’s public filings and other statements to review all relevant disclosures.”
Even if a third party did not exploit the security vulnerability identified by Google, the SEC probably would be interested in whether investors were properly notified about the risks and the incident, Stark said.
The potential for new investigations goes beyond the SEC. The Federal Trade Commission has repeatedly investigated privacy incidents at Google and other leading technology companies. Google signed a consent decree with the FTC in 2011 to settle allegations that an earlier social media platform, Google Buzz, mishandled user data.
As part of that settlement, the company agreed to 20 years of privacy audits and to not misrepresent its privacy policies in the future. Google later agreed to a then-record FTC fine of $22.5 million in 2012 after allegations that the company worked around privacy settings on Apple’s Safari browser to track users.
David C. Vladeck, former director of the FTC’s Bureau of Consumer Protection and now a Georgetown Law professor, said the new Google+ incident is “obviously a problem for Google.”
“If Google hadn’t obtained consent from the users of Google+ to share their information with the software developers, then Google could well have problems with the FTC,” Vladeck said. “Even if the problem was an unanticipated bug, what is Google’s defense for concealing that bug for six months, especially if users could have taken steps to mitigate the … sharing of their data?”
There is also a risk of increased pressure in Congress. Democrats have promised to ratchet up regulation of the technology industry if they retake the House in the midterm elections. Last week, Rep. Ro Khanna (D-Calif.), who represents Silicon Valley, introduced a list of privacy principles he dubbed an “Internet Bill of Rights,” including the right to be informed of the scope of data use.
“These types of occurrences are why we need an Internet Bill of Rights,” Khanna said in a statement. “I hope many tech leaders will rally around the approach and advocate for well-crafted regulation.”