But even as the disinformation campaign from two years ago finally came into focus, it was far from clear how to prevent future bids to distort American politics.
U.S. intelligence agencies warned this week that the federal government remains ill equipped to combat Russian disinformation even as crucial midterm congressional elections loom this fall. And technology companies, while cooperating with federal investigators, acknowledge that they still struggle to detect and thwart foreign propaganda without impinging on the free-speech rights of Americans.
One expert who has repeatedly warned Congress about these problems, former FBI agent Clinton Watts, said, “There’s no one in charge of this. No one is tasked to do this. . . . The FBI doesn’t do thought policing on social media in advance. It’s not going to be able to detect this in advance.”
Some technology companies have been reluctant to submit to greater government scrutiny or hinder the ability of users to speak anonymously and, if they so desire, with the help of automation tools that can dramatically amplify certain voices. But scrutiny of the power of their platforms — and how the Russians used them — has forced the companies into discussions with lawmakers and regulators.
The consequences of these business choices were made apparent again in recent days, as a dashboard monitoring Russian disinformation campaigns over Twitter detected a surge of conversation related to Wednesday’s school shooting in Florida, along with apparent efforts to further polarize the online debate around gun control.
The same dashboard — Hamilton 68, hosted by the Alliance for Securing Democracy in Washington — recorded sustained Russian efforts in January to push the “#ReleaseTheMemo” online campaign. It sought the publication, ultimately successfully, of a Republican memo alleging misdeeds by the FBI in conducting surveillance of a Trump campaign associate.
“As we heard this week from the nation’s top intelligence officials, Russia is still using social media to attack our democratic institutions and sow division amongst Americans,” Sen. Mark R. Warner (Va.), the top Democrat on the Senate Intelligence Committee, said in a statement after Friday’s indictment. Warner helped lead a Senate investigation into Russian manipulation on Facebook, Twitter and Google.
Social media researchers are particularly critical of Twitter because users can register under fake names and, within certain guidelines, use automation tools to post at a pace hard for humans alone to match. Twitter declined to comment on Friday’s indictment.
“The research is really unequivocal. We know that these features on Twitter get used more often to attack and to divide than they get used to bring people together,” said Samuel C. Woolley, research director of the Digital Intelligence Lab at the Institute for the Future, a Palo Alto, Calif.-based think tank. “It’s time for the social media companies to design for democracy.”
But most researchers consider Facebook significantly more influential. Its requirement that users submit their real names when creating accounts didn’t keep the Internet Research Agency from creating fake accounts, buying 3,000 ads targeting American voters and creating free posts that reached 126 million people. Facebook also owns Instagram, which does not require authentic names of users and was also targeted by the Russian disinformation campaign.
The Internet Research Agency, a private firm in St. Petersburg often described as Russia’s leading online troll farm, relied on simple identity theft against several Americans, using their dates of birth, home addresses and Social Security numbers to create Facebook and PayPal accounts in their names, according to the indictment.
Facebook has been cooperating with congressional investigators and special counsel Robert S. Mueller III, but the company has been among those saying that more needs to be done to prepare for the congressional elections.
“We know we have more to do to prevent against future attacks,” Joel Kaplan, the company’s vice president of global policy, said in a statement Friday. “We’re committed to staying ahead of this kind of deceptive and malevolent activity going forward.”
Twitter said in a statement Friday evening, “Tech companies cannot defeat this novel, shared threat alone. The best approach is to share information and ideas to increase our collective knowledge, with the full weight of government and law enforcement leading the charge against threats to our democracy.”
This divide between what’s knowable about the past and preventable in the future is exacerbated, Watts and other experts say, by President Trump’s reluctance to endorse what his intelligence agencies have long said about the Russian disinformation campaign and marshal government resources against it.
“Who has the ball?” said Watts, a fellow with the Foreign Policy Research Institute in Philadelphia. “No one can have the ball because the president won’t acknowledge that it even occurred.”
Outside experts question how much technology companies and U.S. officials can do against a determined adversary skilled at exploiting traditional American strengths — technology tools, free-speech protections and the rule of law.
Michael Carpenter, a former Pentagon and White House official who worked on Russia policy for the Obama administration, said better education about the threat may be the answer. He noted that during France’s presidential election in 2017, the release of supposedly hacked emails aimed at damaging candidate Emmanuel Macron did not keep him from winning.
“Just about everybody in France understood this was a Russian operation and discounted it heavily,” said Carpenter, senior director of the Penn Biden Center for Diplomacy and Global Engagement.
Whether the United States is capable of withstanding Russian disinformation — or schemes by other countries — in coming elections remains unclear. Efforts to understand what happened in 2016 are advancing quickly. Efforts to stop a repeat occurrence are not, experts say.
“We’re going to have to figure out how to make Internet conversation more robust in the face of these forms of institutionalized, professional trolling,” said Peter Eckersley, chief computer scientist for the Electronic Frontier Foundation, a civil liberties group based in San Francisco.
“The answer can’t be censorship; it should instead be better user interfaces for labeling, analyzing and ranking the veracity of news and the manipulativeness of arguments. It’s going to be a huge struggle for tech companies, the media and civil society to build vibrant and constructive spaces for public conversation in response to such tactics, but if we fail, democracy might fail with us.”
Ellen Nakashima contributed to this report.