The Washington Post — Democracy Dies in Darkness

Opinion Do Facebook’s failings really prove we need government more involved in online speech?

Facebook CEO Mark Zuckerberg in Washington on April 10, 2018. (Alex Brandon/AP)

Facebook doesn’t want this much power — or so the company has been insisting in a public affairs blitz that has it begging governments for updated and tougher regulation. Yet as much as the recent deluge of reporting shows the firm needs to be checked, the revelations also point to the risks that may arise from an overzealous state.

A new swath of stories based on the leaked “Facebook Papers” tells us more about what we’ve already begun to understand: The technology juggernaut has made a habit of prioritizing growth and engagement over safety and responsibility, and Chief Executive Mark Zuckerberg has made a habit of playing down the harms of this reality in public. The gravitational pull toward the false, sensational and insidious is summed up best by an internal experiment involving a dummy account for a fake user named Carol, which was set to follow then-President Donald Trump and a handful of conservative publishers. Within five days, the platform was encouraging her to join QAnon groups.

The trouble lies at the very core of Facebook’s design. Tinkering around a site’s edges by punishing certain types of content can achieve only so much when the essential features privilege engagement above all else. Facebook today plays a game of whack-a-mole in which misinformation, hate and more pop up everywhere in an algorithmically amplified frenzy, no matter how motivated the firm is to take it down. The idea that the government should start playing that same game instead, however, could introduce new problems without solving those we already have.

Look at Vietnam, where Facebook last year buckled to demands from the ruling Communist Party to stifle dissident speech. Or Russia, immersed in a clampdown on Internet freedom that recently resulted in Apple and Google agreeing to remove a strategic voting app developed by the imprisoned opposition leader Alexei Navalny. Look at India, Turkey and others, writing similar strictures that give whoever’s in political power say over these vital mediums for communication, too.

All this may seem far away from the United States not only geographically but also philosophically — yet it was not so long ago that the then-occupant of the Oval Office was threatening to “close down” social media sites after Twitter added a fact-check label to his tweets. Even the best-case scenario of regulation that tells sites what content they can and can’t allow merely replaces Silicon Valley executives with Washington officials without fixing what’s broken.

What would work, then? Legislators could start not with regulating speech but with issues such as privacy. The incentive to sell ever more personalized ads based on ever more personal information helps explain why sites insist on grabbing users’ attention at all costs. Or lawmakers could impose constraints that apply to everyone, and to everything everyone says: restricting the ability of all posts to go viral, say, or at least requiring review beforehand. These ideas come with trade-offs, too. They would, indeed, block the boundless and endless flow of information that defines the digital age. Yet boundaries may be exactly what is needed — instead of more of the same from a different, and potentially dangerous, source.