On Tuesday, Stamos plans to announce the creation of an initiative to do that, in some of his first public remarks since leaving Facebook and joining Stanford University as an adjunct professor and Hoover fellow.
“There aren’t processes to thoughtfully think through these trade-offs,” he said in an interview ahead of his talk at the university’s Center for International Security and Cooperation. “You end up with these for-profit, very powerful organizations that are not democratically accountable, making decisions that are in their best and often short-term interest … without there being a much more open and democratic discussion of what these issues are.”
He hopes the new initiative, called the Stanford Internet Observatory, will help unite “sometimes warring factions” of academia, tech companies and Washington policymakers to work together to help solve “the negative impacts technology can have on society,” he said.
The Observatory will aim to assist technology companies in their investigations — “a bridge between multiple platforms fighting the same problems,” he said — by sharing data and providing more transparency and accountability to their security challenges.
The viral spread of phony news and disinformation during the 2016 election season prompted growing scrutiny of the technology industry and also rising awareness within key companies that their platforms can fuel extremism and could even tilt the outcome of elections — all far from the original vision of social media.
Silicon Valley is attempting to grapple with these issues in the absence of legal requirements, clear precedents or even widely accepted principles on how to act. Stamos was in the middle of these often-contentious debates, within Facebook and as its representative in meetings with government officials and outside researchers.
Stamos’s planned remarks, titled the “Battle for the Soul of the Internet,” amount to something of a public debut for the former executive, who has stayed largely under the radar since leaving Facebook following internal friction, including disagreements about how best to combat the Russian disinformation campaign and other emerging threats such as junk news. At Facebook, he played a key role in combating the 2016 Russian influence campaign, which reached nearly 90 million U.S. Facebook users. He is among the half-dozen senior executives who have departed the social network this year.
Silicon Valley has made strides in battling disinformation since the Russian interference, Stamos said. But foreign hackers could still alter the outcome of an American election, including the coming congressional midterm vote.
The top risk comes from the type of cyberespionage that Russia’s military intelligence agency used two years ago to steal emails from the Democratic National Committee and the campaign of Hillary Clinton, and then selectively leak embarrassing documents. Stamos called the tactic “hack and leak” and said it sparked controversy and news coverage that undermined Clinton’s campaign in the final days of a closely fought election.
“That’s a fundamental weakness of an open society,” Stamos said. If Russians hacked a major politician’s emails and leaked them to an organization like WikiLeaks today, “nothing concrete would change,” he said. “You can’t prevent the hack-and-leaks, so you’re going to have to deter it.”
The nation remains vulnerable to hack-and-leak tactics, he told The Washington Post, because such tactics rely on a variety of soft targets — such as individual email accounts and campaign computer systems — along with public eagerness to consume and share purportedly secret information. Given these vulnerabilities, Stamos said, foreign adversaries may already have damaging political information in hand and may be waiting for the most disruptive time to release it online.
By comparison, he said, Facebook and other tech companies have grown more sophisticated at detecting fake accounts and more aggressive about shutting them down, especially when such accounts act in coordinated ways to spread disinformation. The companies have also moved toward greater transparency in online advertising, an effort that will help deter and catch bad actors, he said.
That marks a shift since Facebook and other Silicon Valley companies came under withering attack from Capitol Hill a year ago for allowing Russia’s Internet Research Agency to target American voters with social media posts and advertising — much of it bought in rubles — seeking to exacerbate racial, religious and other American political fault lines.
Since revelations about the Russian election interference, Facebook has made huge investments in security, hiring 20,000 experts and moderators to review content and ads. Google, whose systems were also targeted by Russian operatives, has hired over 10,000 reviewers to monitor its systems. Twitter has also purged fake and malicious accounts, including over 3,000 accounts run by the Kremlin-linked Internet Research Agency and 50,000 Russian bots on its service.
Stamos said there is evidence that the Russians and others have grown more sophisticated in their tactics, in what appear to be efforts to evade the growing scrutiny of American technology companies. But this added “friction” has made it more difficult to spread disinformation in the manner the Internet Research Agency did in 2016.
Even so, “we’ve gone after one half of the equation,” he said, because the hack-and-leak tactics remain largely unaddressed.
Stamos expressed some regret about Facebook’s handling of the 2016 disinformation campaign, acknowledging that the company was better equipped to battle hacks of corporate systems than to counter the use of its tools to influence voters. “I wish we had had a propaganda-focused intel team back then, instead of just focusing on traditional cybersecurity,” he said.
He also said there was some risk to Silicon Valley as it grows more aggressive in combating disinformation online, especially when companies suspend the accounts of Americans voicing their views — as opposed to foreigners using fake accounts to pretend to be U.S. political activists, as the Internet Research Agency did.
Conservatives have sharply criticized Facebook, Twitter and other companies for allegedly muzzling their ability to spread their viewpoints. The fact that companies typically do not publicly detail the offenses that result in the removal of a post or an account has fueled a backlash accusing social media companies of abusing their power as censors, Stamos said.
“We need way more transparency from the companies,” Stamos said.
Public faith in formal legal systems is built on the clarity of rules, the rights of the accused to contest evidence and the public nature of decision-making, Stamos said.
“None of us would be okay with a legal system where the decisions are made in black boxes and there’s no rights of appeal and there’s no understanding of why decisions were made,” he said.