Such is the consensus among lawmakers, tech company officials and independent experts who study hate speech and related disinformation: Even as Silicon Valley has become more aggressive in battling foreign efforts to influence U.S. politics, it is losing innumerable cat-and-mouse games with Americans who are eagerly deploying the same techniques used by the Russians in 2016.
“Everyone’s witnessed the playbook playing out,” said Clint Watts, a fellow at the Foreign Policy Research Institute. “Now they don’t need Russia so much. They’ve learned that the tactic is devastatingly effective.”
Accounts controlled by Russians probably helped amplify such misleading narratives, experts say, but the evidence so far suggests the narratives originated with American political activists, who are increasingly adept at online manipulation techniques and who enjoy broad free-speech protections that tech companies have been reluctant to challenge.
The past few weeks in particular, amid deepening political polarization ahead of Tuesday’s vote, have illustrated the power of social media to spread hate speech and disinformation. The man accused of shooting and killing 11 people at a Pittsburgh synagogue less than two weeks ago had expressed his hostility toward Jewish people on Gab.ai, a social media site that caters to the far right, while sharing allegations about liberal philanthropist George Soros that have been widely criticized as anti-Semitic.
Cesar Sayoc, the man charged with sending pipe bombs to CNN and prominent Democrats last month, showed signs of radicalization on Facebook and Twitter in the months before his arrest. Accounts in the name of Sayoc — or slight variations — railed against Soros while peddling conspiracy theories related to former secretary of state Hillary Clinton.
“I think the scale and scope of domestic disinformation is much larger than any foreign influence operation,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab. “And I also think that as a society, as a nation, it’s much harder for us to have coherent self-reflection on that, and you see that play out in real time.”
On Monday, Facebook announced it had removed 30 accounts from the site — and another 85 on Instagram, which it owns — that “may be engaged in coordinated inauthentic behavior.” Facebook did not attribute the accounts to any particular foreign government but noted that many Facebook pages tied to the accounts were in Russian or French, while many Instagram accounts used English. Facebook did say, however, that U.S. law enforcement had alerted it to accounts that “may be linked to foreign entities.”
But Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said in a visit to Washington last month that the bigger challenge remains detecting and policing the many Americans spreading disinformation.
“If you’re talking about volume, the majority of the volume we see is domestic. And that doesn’t just mean in the United States; it means around the world,” Gleicher said. “It makes perfect sense when you think about it, because in order to run an information operation, the most important thing is that you understand the culture that you’re targeting. And there are always going to be more people inside a culture that can do that than outside.”
This observation is borne out by new research from Harvard’s Shorenstein Center on Media, Politics and Public Policy and Oxford University’s Computational Propaganda Project. The Harvard researchers said Monday that they had seen major spikes in outright fabrication and misleading information proliferating online over the past six months, with people using warlike rhetoric in social media posts to spread anti-immigrant sentiment. A “significant portion” of the disinformation appeared to come from Americans, not foreigners, the Harvard researchers said.
The Oxford researchers, meanwhile, reported last week that misleading and polarizing news reports were spreading more widely on social media in the 2018 election season than two years ago. The disinformation also is reaching a broader audience, outpacing the spread of authentic reports from mainstream, “professional” news organizations, the researchers said.
Tech companies “have done a good job tracking and blocking foreign origin stuff — content that originates from Russia, Iran or [the Islamic State],” Oxford researcher Phil Howard said. “They have done much less to combat homegrown English language misinformation.”
Since the 2016 election, Facebook, Google and Twitter have hired more staff and coded more powerful algorithms to thwart disinformation spread by foreign or domestic actors online. The tech giants also have been more aggressive at taking down fake accounts, while tightening policies against the kind of content — including hate speech and efforts at voter suppression — that they’re willing to tolerate on their platforms.
Facebook declined to comment, while Google did not immediately respond to requests for comment.
Twitter emphasized its work to combat networks of automated accounts, known as bots, and tweets that seek to mislead users about how, when and where to vote. “Attempts to game our systems or to spread deliberately malicious election content will be removed from Twitter — whether they are foreign or domestic in origin,” Carlos Monje, the company’s director of public policy and philanthropy in the United States and Canada, said in a statement.
Domestic disinformation and hate speech are hardly new, but concerns about Russian interference have so dominated the political debate since the 2016 vote that the role of Americans has drawn less attention in scholarly reports, congressional hearings and news coverage.
Foreign influence operations in 2018 have appeared more sophisticated as Facebook, Twitter and other social media platforms have discovered and purged accounts operating from Russia and Iran.
Facebook shut down more than 800 apparently domestic accounts and publishers in October for violating its policies prohibiting spam. But that action generated yet another round of allegations that Silicon Valley was curbing the free speech rights of conservatives, an issue that has become a political problem for social media companies.
The First Amendment protects Americans from government censorship, not corporate decisions about what content to allow on technology platforms. Yet there have been bipartisan demands that Silicon Valley act as a protector of political speech in all but the most egregious cases.
There is little consensus, however, on what qualifies as egregious beyond hate speech, child pornography and calls for violence. Tech companies have vigorously resisted being cast as “arbiters of truth,” even as they have tried to crack down on the worst cases of disinformation by establishing partnerships with third-party fact-checkers and hiring thousands of new content moderators. Facebook has heavily publicized a “war room” at its Menlo Park, Calif., headquarters to signal its seriousness in dealing with such issues.
Yet now, with the arrival of Election Day, there is consensus among outside experts that technology companies are still losing the fight against politically charged hate speech and disinformation. Lawmakers are expressing, yet again, frustration that the tech companies haven’t done more to protect Americans from the untruths, conspiracy theories and hateful language that spread so efficiently on social media.
“I think there is a growing sense that a lack of any moral or legal responsibility about [spreading] hate or violence just doesn’t cut it,” said Sen. Mark R. Warner (Va.), the top Democrat on the Senate Intelligence Committee.
Social media researcher Jonathan Albright has published a series of essays on Medium in recent days detailing the extent of the problem, calling disinformation in 2018 more widespread and serious than in 2016.
“We’re even more behind,” Albright said in an interview. “The number of people trying to game the system has increased.”
And most of them, he added, are Americans.
Elizabeth Dwoskin and Andrew Ba Tran contributed to this report.