1. Didn’t we solve this?
It seems not. U.S. intelligence agencies have warned that countries including Russia, China and Iran see the 2020 election as another opportunity to promote their strategic interests. Senator Bernie Sanders was briefed early this year about what were described as Russian efforts to covertly aid his campaign for the Democratic nomination, ostensibly to help President Donald Trump. Meanwhile, the U.S. Justice Department and the European Union have said they are tracking disinformation campaigns by Russia and China aimed at sowing societal and political divisions in the West over the coronavirus crisis.
2. Why not?
The tactics used by so-called trolls, or online provocateurs, are evolving, becoming stealthier to avoid being detected and evicted from social media platforms. The Kremlin-connected Internet Research Agency, the troll farm linked to the 2016 election meddling, now avoids telltale posts with misspellings and poor grammar by lifting divisive messages directly from U.S. sites or publications, and may also be paying Americans to post on its behalf, according to a New York Times report. While disinformation campaigns in 2016 relied primarily on ordinary images and text, officials warn that they may now also include deepfakes -- convincing audio or video that has been altered or fabricated using machine-learning technologies. Some domestic political operators have learned the same tricks and are pushing the boundaries.
3. How so?
The Times has reported on a group of Democratic tech experts who conducted a “small experiment” in the 2018 Alabama Senate race, by imitating Russian social media tactics. The Atlantic exposed what it said were plans for a “billion dollar disinformation campaign” to re-elect the president; Trump’s campaign called the story itself disinformation. Google has pulled dozens of ads this year from Trump’s re-election campaign -- and three for Democrats seeking to run against him -- for unspecified policy violations (the policy includes rules against false claims and deepfakes). Twitter suspended about 70 accounts tweeting identical messages in support of Michael Bloomberg’s short-lived presidential candidacy, describing them as violating its rules against “manipulation and spam.” (Bloomberg is the owner of Bloomberg LP, the parent company of Bloomberg News.)
4. Did what happened in 2016 change votes?
A bipartisan report released in 2019 by a Senate committee didn’t find evidence that Russia’s efforts -- which also included hacking and releasing Democratic Party emails -- changed the election outcome. But researchers at Ohio State University, using post-election surveys, said that the impact of fake news on voters in key battleground states “may have been sufficient to deprive Hillary Clinton of a victory.” The website BuzzFeed found that of the 20 top-performing false election stories in the last three months of the campaign, 17 were pro-Trump or anti-Clinton.
5. What is the government doing?
In 2018, the White House eased rules on “offensive cyber operations” aimed at “defending the integrity of our electoral process.” The effort reportedly included sending direct messages to individual Russians behind disinformation operations letting them know that they had been identified, and an attack that temporarily took the Internet Research Agency offline. The Department of Homeland Security said there was no evidence of actual foreign interference in voting that year, although the influence campaigns continued. In 2018, then-Special Counsel Robert Mueller charged 13 Russians and three companies with a conspiracy to meddle in the 2016 election, a case that was expanded last year.
6. What are companies doing?
Facebook removes dozens of accounts and pages each month for what it calls “coordinated inauthentic behavior,” including on its Instagram platform. One Russia-directed network yanked in March was targeting the U.S., while two of eight pulled in April originated in the U.S. and also focused domestically. Facebook has expanded a fact-checking program aimed at labeling fake news, but has refused to subject political advertising to such scrutiny on free-speech grounds. “People should be able to hear from those who wish to lead them, warts and all,” the company said. It did implement stronger verification rules for political advertisers, including requiring proof of a domestic address to place ads in the U.S. It also has closed hundreds of accounts and has begun removing (or labeling) deepfakes and other “deceptively altered or fabricated” media -- though it has said it will remove only deepfake videos intended to mislead, not those edited in traditional ways.
7. Will it be enough?
Some researchers warn that while the social media companies have spent billions of dollars globally since 2016 on preparations, their methods may be no match for today’s problem. For example, one study found that labeling a post “disputed” could be counterproductive: some readers took the absence of a label on other posts to mean those stories were true. Others say the companies have failed to account for the way disinformation tactics have evolved and spread. In a September report, New York University’s Stern Center for Business and Human Rights found examples of homegrown disinformation about the 2020 candidates across social media, including fake sex scandals and a race-based smear campaign. Meanwhile, Twitter and Facebook were both flooded by disinformation regarding the coronavirus, prompting yet another round of countermeasures. Asked why Facebook was able to act so aggressively around the disease, but less so around politics, Chief Executive Officer Mark Zuckerberg said the health data is more “black and white.”
©2020 Bloomberg L.P.