Social media companies have found no magic bullet to fight disinformation, or “fake news,” since the 2016 U.S. elections. If anything, America’s adversaries, including Russia and China, have become “more adept at using social media to alter how we think, behave and decide,” according to the U.S. intelligence community’s threat assessment, and they have branched out beyond politics to other hot-button issues including the coronavirus pandemic. Nor is disinformation only a foreign threat; it’s becoming a more common tactic within American politics.

1. How is disinformation affecting the campaign?

U.S. intelligence agencies warned that countries including Russia, China and Iran see the 2020 election as another opportunity to promote their strategic interests. On Aug. 7, those agencies reported that China and Iran would like to sway U.S. voters against President Donald Trump while Russia is using tools including social media and Russian television to work against his Democratic challenger, former Vice President Joe Biden. Facebook and Twitter say the Russian group that interfered in the 2016 presidential election, the Kremlin-connected Internet Research Agency, is active once again, using fake accounts to undermine Biden’s candidacy. Facebook and Twitter said they were warned about the renewed Russian efforts by the Federal Bureau of Investigation.

2. Didn’t we solve this?

It seems not. Tactics used by so-called trolls, or online provocateurs, are becoming stealthier. A March report from New York University’s Brennan Center for Justice found that Russian trolls had “gotten better at impersonating candidates and parties,” and imposter accounts are surfacing on social media. Russia’s Internet Research Agency now avoids telltale misspellings and poor grammar by lifting divisive messages directly from U.S. sites or publications. One Russian influence operation identified by Facebook was a news site that recruited freelance American journalists to write about domestic politics. (The site, called PeaceData, denied being a Russian front.) While disinformation campaigns in 2016 relied primarily on regular images and text, officials warn that they may now also include deepfakes: convincing audio or video that has been altered or fabricated using machine-learning technologies.

3. How is disinformation no longer just a foreign threat?

Though Trump is quick to accuse American newspapers and networks of “fake news,” his own Twitter feed has been a major purveyor of “conspiracy-driven propaganda, fakery and hate,” as the New York Times put it. The Washington Post said false claims and disinformation have been “the central feature of Trump’s presidency.” The Atlantic exposed what it said were plans for a “billion dollar disinformation campaign” to re-elect Trump -- Trump’s campaign called the story itself disinformation -- while the Times reported on a group of Democratic tech experts who imitated Russian social media tactics as an experiment in the 2018 Alabama Senate race. Google has pulled dozens of ads this year from Trump’s re-election campaign -- and three from Democrats seeking to run against him -- for unspecified violations of a policy that includes rules against false claims and deepfakes. Twitter suspended about 70 accounts tweeting identical messages in support of Michael Bloomberg’s short-lived presidential candidacy, describing them as violating its rules against “manipulation and spam.” (Bloomberg is the owner of Bloomberg LP, the parent company of Bloomberg News.)

4. Did what happened in 2016 change votes?

A bipartisan report released in 2019 by a Senate committee didn’t find evidence that Russia’s efforts -- which also included hacking and releasing Democratic Party emails -- changed the election outcome. But researchers at Ohio State University, using post-election surveys, said that the impact of fake news on voters in key battleground states “may have been sufficient to deprive Hillary Clinton of a victory.” The website BuzzFeed found that of the 20 top-performing false election stories in the last three months of the campaign, 17 were pro-Trump or anti-Clinton.

5. What is the government doing?

In 2018, the White House eased rules on “offensive cyber operations” aimed at “defending the integrity of our electoral process.” The effort reportedly included sending direct messages to individual Russians behind disinformation operations letting them know that they had been identified, and an attack that temporarily took the Internet Research Agency offline. The Department of Homeland Security said there was no evidence of actual foreign interference in voting that year, although the influence campaigns continued. In 2018, then-Special Counsel Robert Mueller charged 13 Russians and three companies with a conspiracy to meddle in the 2016 election, a case that was expanded last year.

6. What are companies doing?

Facebook says it will block campaigns from submitting new ads in the week leading up to the Nov. 3 election, preventing candidates from posting uncontested messages ahead of the vote. It removes dozens of accounts and pages each month for what it calls “coordinated inauthentic behavior,” including on its Instagram platform, and has begun removing (or labeling) deepfakes and other “deceptively altered or fabricated” media. But while Facebook has a fact-checking program aimed at labeling fake news, it has declined to subject political advertising to such scrutiny, on free-speech grounds. Microsoft Corp. is releasing new technology to fight “deepfakes” by analyzing videos and photos and providing a score indicating the chance that they’re manipulated. As for Twitter, it introduced a new policy earlier this year on “synthetic and manipulated media” and now flags content it believes to be “significantly and deceptively altered or fabricated.” Facebook and Twitter have created various tools to help protect candidates from impostors, such as badges that can be added to their verified accounts.

7. Will it be enough?

Some researchers warn that while the social media companies have spent billions of dollars globally since 2016 preparing, their methods may be no match for today’s problem. For example, one study found that adding a “disputed” label to a post could be counterproductive, as some readers took the absence of a label on other posts to mean, by default, that those stories were true. Others say the companies have failed to account for the way disinformation tactics have evolved and spread. A 2019 report by New York University’s Stern Center for Business and Human Rights found examples of homegrown disinformation about the 2020 candidates across social media, including fake sex scandals and a race-based smear campaign.


©2020 Bloomberg L.P.