Twitter’s disclosure confirms much of what we already knew about efforts to influence U.S. elections and adds some extra details. Russian trolls, who were far more effective than their Iranian counterparts, began their online activity by manipulating citizens of their own country over the invasion of Ukraine and anti-corruption, before moving on to U.S. targets. Iranian accounts, by contrast, directed users to pro-regime websites.
The data also proved a point often lost in the panic over Russian interference: Moscow appears to be relatively nonpartisan. Trolls went after the American right and then, after President Trump’s victory, diversified to focus on fissures among the left, all the while using people’s engagement with tweets on hot-button issues as a gateway into posting more polarizing, often false, content. One especially troubling takeaway from Twitter’s release is that the accounts regenerated under new names after being shut down.
It matters that Americans know about these efforts — not just that they exist, but exactly what they look like. After all, the country can only counter disinformation collectively. Social media sites must update their policies and adjust their algorithms, but if everyday citizens lack the literacy to spot nefarious content when they see it, and if the media does not guard against giving false campaigns even more oxygen, these operations will continue to succeed.
It would be wrong to interpret Twitter’s data as a sign that propaganda-peddling adversaries swung the 2016 election. In fact, domestic disinformation probably poses more of a threat going forward, and domestic copycats have borrowed the tactics of their Russian predecessors. By confronting attacks from abroad, platforms are finally starting to articulate their responsibility not to facilitate manipulation, no matter who pulls the strings.
Other companies should take a tip from Twitter and make the data they collect on disinformation more widely available. Though Twitter’s structure gives it a leg up over peers such as Facebook, sites across the board should disclose as much about influence campaigns as liability concerns allow. It is heartening to see platforms scrubbing themselves of malicious content. But the rest of the country cannot help with the cleanup if it does not know what the mess looks like.