John Kelly is the chief executive of Graphika, a network analysis firm, and an affiliate at the Berkman Klein Center for Internet & Society at Harvard University.
If nothing else, the attorney general’s summary of the Mueller report serves as a potent reminder of the ongoing problems the country faces with disinformation on social media. Foreign powers using the Internet to intervene in U.S. politics is a genuine problem — and there’s little to suggest that it will go away any time soon.
Yet as Democrats push for disclosure of the full Mueller report and for legislation that would bolster election security to protect against foreign interference, there remains more to the story than the focus on Russian election influence would suggest. Some self-reflection is needed if we are to truly save ourselves.
Our online political discourse is being warped more by American manipulators than by foreign ones. Domestic online “astroturfing” by paid consultants and technically sophisticated volunteers predates social media, and the tools, techniques and ranks of practitioners have only multiplied in recent years.
This realization should not serve as an excuse for dismissing foreign interference. The weaponization of online political divides actually makes it easier for foreign influencers to inject their own agenda into the fray. Foreign sock puppets are easy to insert into large, semi-automated communities of formulaic partisans, where they can simply mimic domestic accounts until they are ready to inject their own signal into the stream. Recent accounts of Americans “taking a page” from the 2016 Russian playbook don’t get it quite right: The foreign players arrived at a thriving casino.
This discouraging state of affairs is a far cry from the optimistic view of many, myself included, a decade ago: that the democratization of public voice would enhance democracy. It turned out that organized manipulators have captured a larger share of the conversation than normal citizens, and that the proliferation of social platforms is resulting in a proliferation of opportunities to manipulate. And it must be said clearly — all parts of the American political spectrum are being affected: #Resist is inorganically boosted alongside #MAGA.
Most of the dominant platforms take political manipulation seriously, nor do they underestimate their responsibilities in the face of democratic governments increasingly ready to bring down the regulatory gavel. I think they get it. But given the First Amendment and critical values of user privacy, this is very tricky terrain for them to navigate.
Online manipulation is not confined to politics but also affects commerce, including the advertising models at the heart of industry revenue. Online fakery is responsible for billions in misdirected online ad spending and costly uncertainty about the validity of metrics underpinning both traditional online advertising and the lucrative frontier of influencer marketing.
The platforms’ interest in curbing coordinated manipulation of online behavioral signals, commercial and political, points in a useful direction. The promise of the Internet we felt years ago relies on true signals emerging from real people. On most platforms, around most kinds of interests, these signals are authentic, and platforms are providing real benefits to their users. We don’t want to throw the baby out with the bathwater, nor can the baby long tolerate such toxic bathwater.
The leading edge of data science tells us something important here: It is far easier (for sophisticated algorithms) to identify that something is being manipulated online than it is to attribute that manipulation to a specific actor. Without curtailing the vital protections that online anonymity provides to vulnerable, good-faith actors, a social, political and industry commitment to authenticity in online behavior, and the measurement thereof, could cut through many of our problems like a sword through the Gordian knot.
Facebook, a focal point for criticism around political manipulation, has proactively disclosed its internal discoveries of foreign information operations and developed policies to combat “Coordinated Inauthentic Behavior” (CIB) on its platform, backed by a serious investment in talent and technology to identify and eliminate it. Twitter has taken a valuable step by publicly releasing a trove of data on Russian, Iranian and other foreign state information operations for external researchers to examine, and is making it harder to register and operate fake accounts at scale.
The rest of the industry should follow Facebook’s and Twitter’s lead. My team’s collaboration with a group at Oxford University to analyze Russia’s 2016 online influence operations for the Senate Select Committee on Intelligence showed clearly how the various social media platforms are being leveraged together, and against one another. The platforms therefore must cooperate closely with one another and with researchers if they are to prevent this manipulation. Without such cooperation, even the biggest platforms will not be as effective as they could be, and smaller platforms will be utterly defenseless.
A serious effort to prevent any manipulation of America’s online political conversation, period, would drastically reduce opportunities for foreign influence of our political discussions in the process. It would also make for a better democracy. Americans on the right and the left have been vigorously targeted by foreign actors. They are also covertly manipulated by partisan domestic political organizations using the same kinds of deceptive tactics. Neither side should for a moment feel as though these facts in any way diminish the integrity or validity of their ideas and values — but both sides should want to stop it, cold.