Elon Musk’s vision for Twitter is a public town square with few restrictions on what people can say on the Internet.
Twitter, Facebook and other social networks have spent billions of dollars and employed armies of people to create and enforce policies to reduce hate speech, misinformation and other toxic communication that degrades public discourse. In doing so, they’ve provoked the ire not only of politicians on the right, who claim these actions amount to censorship, but also of people on the left, who say tech companies’ enforcement is both too limited and biased.
“What Musk seemingly fails to recognize is that to truly have free speech today, you need moderation,” said Katie Harbath, a former Facebook public policy director and CEO of consultancy Anchor Change. “Otherwise, just those who bully and harass will be left as they will drive others away.”
She added that content moderation and responsible platform design done right can actually allow for more speech.
Jack Dorsey, the former Twitter CEO who co-founded the social media company 16 years ago, said in a tweet about Musk’s potential takeover bid: “I don’t believe any individual or institutions should own social media, or more generally media companies. It should be an open and verifiable protocol. Everything is a step toward that.”
Twitter declined to comment. Musk didn’t immediately reply to a request for comment.
Musk, a prolific Twitter user himself with more than 80 million followers, touted the benefits of free speech in the lead-up to his hostile takeover bid, unveiled in a Securities and Exchange Commission filing last week. Following the disclosure, he conducted a Twitter poll asking whether the decision to take the company private at $54.20 a share should be up to shareholders rather than the board. During a TED conference in Vancouver on Thursday, he again extolled the merits of free speech on the Internet.
“I think it’s very important for there to be an inclusive arena for free speech,” Musk said during the TED interview. “Twitter has become kind of the de facto town square, so it’s just really important that people have the, both the reality and the perception that they are able to speak freely within the bounds of the law.”
Musk, who has previously referred to himself as a free speech maximalist, also said he hoped to make the company’s algorithm public, helping people understand how content surfaces on the platform. He also said platforms should police speech in accordance with U.S. law, a comment widely interpreted as advocating for limited content moderation: short of direct calls for violence, speech in the United States is largely protected by the First Amendment.
And he said his bid wasn’t about making money.
“My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization,” he said.
Researchers have found that some networks billing themselves as pro-free speech have become havens for white supremacists and others who wished to harm society.
Tech executives argue that Musk’s ideals arose out of a time when the Internet served a different purpose, when concerns about government repression and about news organizations acting as gatekeepers for speech led early social media pioneers, including Twitter’s own founders, to believe free expression was paramount.
Early Internet pioneers of Musk’s generation, Dorsey among them, long subscribed to the ideal that more speech is the best antidote to harmful or bad speech. These CEOs were shaped by experiences such as the Arab Spring, during which everyday activists used social media services to share their experiences even as governments tried to repress them.
They came of age at a time when a government’s megaphone could drown out the public far more easily than it can today. The belief was so strong that a former executive at both Google and Twitter used to refer to Twitter as “the free speech wing of the free speech party.”
At the same time, the people who built the Internet, coming up during a period many in Silicon Valley refer to as Web 1.0, also took on hardcore free speech postures to fight back against religious conservatives and opponents of the Internet itself, some of whom argued that the Internet should be restricted because it would become a haven for “porn and sometimes first person shooters,” tweeted Yishan Wong, a former CEO of the Internet platform Reddit.
“To [many of the older tech leaders], the Internet represented freedom, a new frontier, a flowering of the human spirit, and a great optimism that technology could birth a new golden age of mankind,” Wong, a Silicon Valley pioneer, said in a widely viewed thread. “It’s not that the principle is no longer valid (it is), it’s that the practical issues around upholding that principle are different, because the world has changed.”
Wong added that the notion that more free speech is the best counter to bad speech is “naive” in today’s world.
Fast-forward to today, and the Internet is indeed a different place. Russian trolls sowed disinformation on social platforms in the 2016 U.S. election, and President Donald Trump used a network of followers as a misinformation megaphone in the lead-up to the last election. Anti-vaccine activists have used social media to spread health conspiracies to millions of people. Even now, amid the conflict in Ukraine, researchers and Facebook have identified disinformation networks trying to tilt public opinion toward Russia.
“The Internet is not a frontier where people can go to be free, it’s where the entire world is now, and every culture war is being fought on it,” Wong tweeted.
In an interview, Wong said early Internet pioneers of Musk’s era had “lived experience of free speech working pretty well, and the enemies of free speech being wholly bad,” and that informed their worldview.
Twitter’s shift from a largely unmoderated platform to one with more robust content moderation took place a year after the 2016 presidential election, when it was revealed that Russian operatives had spread disinformation on social media to try to tilt the election outcome toward Trump. Larger rivals Facebook and YouTube took on similar initiatives in response to the 2016 election.
In late 2017, Twitter began building tools and hiring content moderators to weed out disinformation, fake accounts, spam and other forms of what the company called “inauthentic behavior.” That effort got even bigger in 2018, when the company launched an initiative geared toward “Healthy Conversations” and solicited opinions from more than 200 outside experts on how to keep the service free of harassment and bullying. (Before this, user complaints about harassment and bullying were widely ignored by the company, according to numerous reports at the time.)
In 2019, Twitter also developed labels that would cover up tweets by powerful people and politicians who broke the service’s rules but whose tweets were considered newsworthy. And in 2020, it developed new policies to tackle misinformation during the presidential election and the pandemic.
All of these new measures dramatically changed how speech was policed on the platform and resulted in many more people having their posts and accounts removed.
Today, the teams working on healthy conversations at Twitter comprise dozens of people, some of whom have been among the most concerned about Musk’s potential takeover, according to internal documents obtained by The Washington Post and people familiar with the discussions who spoke on the condition of anonymity to protect their jobs.
Researchers who study social media say Twitter has vastly improved in some areas, even as some rule-breaking is still easy to identify on the service. The company has gotten much better at detecting fake accounts and disinformation, for example, and also was the first social network to penalize Trump for violating its policies. (Trump is now banned from Twitter.)
Advocates for tech accountability say it would be very risky for Twitter or other social networks to remove some of the measures they’ve taken in recent years.
“A platform that allows people to spam misogynist and racist abuse is unsafe for pretty much anyone else and would lose advertisers, corporate partners and sponsors rapidly, leaving it a commercially unviable husk within months,” said Imran Ahmed, founding CEO of the nonprofit group Center for Countering Digital Hate, which researches and promotes accountability for tech companies.
Wong and others predicted that if Musk were to take control of Twitter, he would be in for a “world of pain” because of the challenges of moderation.
“Given the misunderstandings that exist around free speech on platforms I sometimes think it is hard to grasp until you’re on the frontline having to make these decisions to get the gravity & difficulty of the work,” tweeted Esther Crawford, a Twitter executive whose own social network, Squad, was acquired by Twitter. “I’m very pro free speech but there must be limits for the health of a platform and to ensure the safety of people.”