MOST AMERICANS likely recall President Trump’s tweeted threat to North Korea last year that he has a “Nuclear Button.” They may also remember when he called Omarosa Manigault Newman, a former aide, “that dog.” Now picture these proclamations covered with a gray box: This content violates Twitter’s rules against abusive behavior, the text explains, but remains online because the platform has deemed it in the public interest.
This will be the new normal under a policy Twitter announced last week to begin labeling tweets from national political figures with significant followings, tweets the company would usually remove for breaching its terms of service. To see those missives, users will need to click through a warning screen. An algorithm tweak will also ensure offending posts appear less often in search results and timelines.
There is a strong argument that the rules governing everyone else’s ability to harass or spew hate should apply equally to those in power, whose harassing behavior is most likely to silence critics or cause other harm. But there’s also an argument that private companies such as Twitter have the least business meddling with the public conversation when elected or would-be elected officials are involved. Doing so could have a dramatic impact on the democratic process, and citizens deserve to know what the people who represent them are doing and saying, perhaps especially when their comportment is appalling.
These competing concerns put Twitter in a quandary. Shifting toward transparency, with a set of criteria to determine when a tweet is a matter of public interest and how to weigh the implications of removal vs. labeling, is a sensible solution. The narrow change also offers a chance for Twitter and other sites to learn lessons that could lead to broader reform in their treatment of sensitive content, from the violent but newsworthy to the misleading to anything else that skirts the line of acceptability.
Just as important, Twitter’s move adds useful nuance to the fight over online speech by moving the focus away from takedowns alone to make room for content moderation mechanisms. Such mechanisms may range from fact checks to reducing posts’ circulation to stripping them of advertising revenue. The idea is to focus not only on limiting speech but on limiting reach, too.
Implementing this revised strategy will not be easy for Twitter. The company is already getting flak from conservatives who claim the policy is another example of Big Tech censorship, and executives should prepare themselves for the same tricky calls and high-profile fights that have plagued peers including Facebook and YouTube in recent months. It’s encouraging, though, that Twitter has mustered the gumption to have those fights at all.