As most of the world condemned last week’s mass shooting in New Zealand, a contrary story line emerged on 8chan, the online message board where the alleged shooter had announced the attack and urged others to continue the slaughter. “Who should i kill?” one anonymous poster wrote. “I have never been this happy,” wrote another. “I am ready. I want to fight.”
To experts in online extremism, the performance echoed another brand of terrorism — that carried out by Islamist militants who have long used the Web to mobilize followers and incite violence. Their tone, tactics and propaganda were eerily similar. The biggest difference lay in their ambitions: a white-supremacist uprising instead of a Muslim caliphate.
As Facebook, YouTube and other tech companies raced to contain the sounds and images of the gruesome shooting, 8chan helped it thrive, providing a no-holds-barred forum that further propelled the extremism and encouraged new attacks.
The persistence of the talk of violence on 8chan has led some experts to call for tougher actions by the world’s governments, with some saying the site increasingly looks like the jihadi forums organized by the Islamic State and al-Qaeda — masters in flexing the Web’s power to spread their ideologies and recruit new terrorists. Critics of 8chan argue that the site, and others like it, may warrant a similar governmental response: close monitoring and, when talk turns to violence, law-enforcement investigation and intervention.
The owner and administrators of 8chan, which is registered as a property of the Nevada-based company N.T. Technology, did not respond to multiple requests for comment sent to email addresses listed for the site, nor to a request relayed through a founder of the site, who said he remains in touch with Jim Watkins, an American based in the Philippines who owns the company.
The 8chan site’s Twitter account said Saturday that it “is responding to law enforcement regarding the recent incident where many websites were used by a criminal to publicize his crime,” and noted that it would not comment further. New Zealand police declined to comment on whether they had contacted 8chan.
But the brazenness of the threats of racist and anti-Muslim violence posted on 8chan poses a striking new challenge to a foundational idea of the Internet: that in all but the most extreme cases, such as child pornography, those hosting sites are not legally or morally responsible for the content others upload to them.
Telecommunications companies in Australia and New Zealand already have taken the rare step of blocking Internet access to 8chan and some other sites. Public pressure is building as well on other companies, including some based in the United States, that provide the technical infrastructure for sites that espouse violence against Muslims, African Americans and Jews.
“This is terrorism. It’s no different than what we see from ISIS,” said Joel Finkelstein, executive director of the Network Contagion Research Institute, which, in partnership with the Anti-Defamation League, studies how hateful ideas spread online. “The platforms are responsible if they are organizing and propagating terror.”
A crackdown would mark an extraordinary step in confronting online extremism. Terrorism experts say U.S. law enforcement and intelligence agencies have been reluctant to treat white supremacists and right-wing groups as terrorist organizations because they typically include Americans among their ranks, creating complex legal and political issues. It’s a thorny issue for tech companies, too: Platforms such as Facebook and Twitter blocked white-supremacist content after the Charlottesville riots in 2017, a watershed moment that sparked a debate about censorship.
Some are also skeptical that any effort to suppress such activity online would be successful, because the Web’s decentralized nature makes targeted takedowns difficult and allows hate groups to quickly retreat underground.
The increasingly hateful tone of 8chan has become a cautionary tale for how corners of the Web can be radicalized. Launched in 2013, the site grew out of an exodus from the lightly moderated message board 4chan and quickly gained an audience as a cauldron for the extreme content few other sites are willing to support. The past week has marked a new low.
“I’d never seen the whole board so happy about what had just happened. Fifty people are dead, and they’re in total ecstasy,” said 8chan’s founder, Fredrick Brennan, who said he stepped down as an administrator in 2016 and stopped working with the site’s ownership in December.
Brennan said he has been stunned to see how little the current administrators have done to curb violent threats, and he voiced remorse over his role in creating a site that now calls itself the “darkest reaches of the Internet.” But he worries there are no true technical solutions short of a total redesign of the Web around identification and moderation — a redesign that could undermine it as a venue for free expression.
“The Internet as a whole is not made to be censored. It was made to be resilient,” Brennan said. “And as long as there’s a contingent of people who like this content, it will never go away.”
A move to silence 8chan would clash with a key tenet of the Internet, enshrined in a landmark 1996 U.S. law, that allows Facebook, YouTube, Twitter and others to operate with minimal government interference. The Communications Decency Act sharply limits the legal liability of platforms for content their users post.
But 8chan’s content in the aftermath of last week’s shooting has renewed debate over whether the Internet’s freewheeling culture has gone too far — and whether sites that harbor talk of white-supremacist violence should face the same depth of government scrutiny that previously seemed reserved for chat rooms frequented by members of Islamist terrorist cells.
Federal authorities in the United States — mindful of constitutional protections for the free-speech rights of Americans and, in some cases, their links to mainstream political actors — have long been reluctant to gather intelligence among potential domestic terrorists in the same intrusive ways they do among foreign terrorist groups, said Clinton Watts, a senior fellow at the Foreign Policy Research Institute and a former FBI counterterrorism expert.
Although the alleged Christchurch shooter last week was an Australian and 8chan is operated from the Philippines, Watts said the site probably attracts Americans, making it part of one of the bureau’s legal blind spots in combating domestic terrorism.
“These domestic extremists are organizing in the same way” as foreign Muslim extremists, using websites to inspire bloodshed, radicalize believers and even plan assaults, he said. There was one key difference in the political and legal dynamics, however: “Domestic terrorists vote. Foreign terrorists don’t.”
It’s unclear just how closely law enforcement is already monitoring sites like 8chan. The FBI said in a statement that, while “individuals often are radicalized by looking at propaganda on social media sites and in some cases may decide to carry out acts of violence … the FBI only investigates matters in which there is a potential threat to national security or a possible violation of federal law.”
Any move to crack down on sites that host conversation, no matter how loathsome, will confront the constitutional protections for free speech and the conviction among many experts that suppressing talk in one portion of the Internet will only prompt its growth elsewhere online.
There is an ever-growing number of technological options for evading government censors, obscuring identities, faking locations and posting identical copies of disfavored content, which makes any quest to crack down on perceived misbehavior daunting for authorities, if not impossible.
The spread of the shooting videos last week was a classic example: Even Facebook and YouTube were overmatched by human users, organized in part on 8chan, and were unable to block the images of mass murder for days. Both companies said afterward that they struggled to control the crush of uploads in the hours after the attack but were taking steps to prevent a recurrence.
“When you shut things down of that nature, another one springs up,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “What we’ve seen on 8chan is just on the surface.”
Yet there’s less disagreement that the New Zealand shootings — two deadly attacks on mosques, including one live-streamed on Facebook — fit classic definitions of terrorism, meaning that the act was calculated to inspire public fear and spread an ideology. The platforms that helped spread videos of the killings, such as 8chan, played a role in that act that went beyond mere exchange of free speech as commonly understood, experts in online extremism said.
Facebook’s former chief security officer, Alex Stamos, said the alleged gunman’s tactics mimicked those of the Islamic State: committing an act of attention-grabbing mass violence, then bolstering and shaping that attention through technological means.
“For all of his hatred of Muslims, he’s copying a Muslim supremacist organization,” Stamos said. “There’s a sad irony there.”
Stamos is wary of government tactics that smack of censorship: He has long argued that any power you give to liberal Western democracies will be used by illiberal authoritarians to block legitimate speech. But he favors more aggressive law-enforcement monitoring of any site where terrorist acts are being planned.
The FBI and other U.S. authorities for years have infiltrated the online sites of foreign terrorist organizations, as designated by the State Department, experts in political extremism said. This has included active monitoring of chats about jihadi themes, using false personas to engage potential terrorists in direct conversation and, in the most serious cases, taking action when violent plans appeared to be forming.
“Thanks to the efforts of the companies and law enforcement, potential ISIS supporters got to the point where they couldn’t trust anybody they met online,” Stamos said. “They discouraged the hobbyists and left only real supporters in some of these online groups.”
An anonymous audience for hate
The anonymity of 8chan is its most critical feature — there are no profiles or post histories for users, who call themselves “anons,” making it difficult to know how many people visit the site, who they are and whether their messages are legitimate threats or merely inflammatory posts intended to shock.
The site portrays itself as a beacon of free speech and says it deletes only posts that clearly violate U.S. law, such as those featuring copyrighted material or child pornography. Its most active forum, the “politically incorrect” board “/pol/,” features more than 12 million posts and runs rampant with images of disturbing violence, white-supremacist memes and far-right hate speech. Brennan estimates that more than 100,000 people visit the site every week.
8chan lists one administrator — Ron Watkins, the son of N.T. Technology owner Jim Watkins — and roughly a dozen programmers and “global volunteers.” Brennan said Jim Watkins owns other Internet businesses and has built a technical fortress to guard 8chan from potential takedowns: He owns nearly every component securing the site to the backbone of the Web, including its servers, which are scattered around the world.
“You can send a complaint, but no one’s going to do anything. He owns the whole operation,” Brennan said. “It’s how he keeps people confused and guessing.”
Watkins did not respond to repeated requests for comment.
The site’s only revenue comes from a small group of donors and advertisers who, Brennan estimates, pay about $100 a month — not enough, he said, to cover the site’s expenses. But Watkins is content to lose money, Brennan said, because he sees it as a pet project: “8chan is like a boat to Jim. It doesn’t matter if it makes money. He just enjoys using it.”
The board has grown increasingly fanatical, Brennan said, as its user base of early trolls and Internet libertarians has ceded ground to the “committed Nazis” who now dominate the site. After previous mass shootings, he said, the board often fueled anti-Semitic conspiracy theories that painted the attacks as faked. The Christchurch shooting, Brennan said, marked the first time most users portrayed an attack as a point of pride and a step toward their goal of a global race war.
Posters have pushed each other to flood the New Zealand police email inboxes with images of gore and pornography, to widely distribute the gunman’s writing, and to spray-paint a neo-Nazi symbol onto “Muslim-run” schools and businesses. Many glorified the gunman as a “hero” and said they would hang posters around their neighborhoods of a meme showing the gunman with his rifle and manifesto in a messianic pose, a halo of sun around his helmet camera. “This guy is the only person I’ve ever truly admired/looked up to in my life,” one poster wrote.
Posters this week shared the names and addresses of religious centers they said they intended to target, as well as tips for future shooters on how to improve their videos for more “amazing kill shots … [and] details many of us are salivating for.” Links and memes of the gunman’s video and manifesto could be found virtually everywhere, as well as threats and eager calls to carry out more violence. “Invaders,” one poster wrote, had 90 days to leave the United States and other countries or “be executed on the spot.”
Some 8chan posters hinted at even more private gathering places online. When one poster who said he was a white nationalist “highly inspired” by the killings asked where the board’s plans were for “accelerating” the gunman’s plan, another poster wrote that “we don’t discuss that here” but at a site on the dark Web available only to those “that prove themselves.”
Brennan said 8chan is only the most visible corner of a vast network of privately organized sites that shelter and fuel extremist thought. And while he believes 8chan and sites like it should enforce stricter moderation for violent messages, he also worries about a broad shift toward censorship that could push people further into the digital shadows: sites on the dark Web, secret chat rooms and decentralized file-sharing networks that are even harder to monitor and shut down.
Brennan expects there will be another shooting because of 8chan, and he said he’s seen nothing from leaders there to suggest they would begin cracking down on incitements of brutality. Some of the people expected to moderate the site, he said, subscribe to extreme beliefs themselves. “It’s like having the lunatics run the asylum,” he said.
‘An extraordinary response’
The enduring extremism on 8chan reveals what experts say has become an existential crisis for the Web: how the empowering freedom of digital connectivity can rally the most dismal and dangerous viewpoints together, often anonymously and consequence-free.
It also highlights how even the biggest improvements from tech giants such as Facebook and YouTube, which have in recent days terminated hundreds of accounts “created to promote or glorify the shooter,” will do little to limit vile speech on a global stage.
The sites’ anonymity can have real-world impact. Public school campuses in Charlottesville closed for two days this week after threats of an “ethnic cleansing” at a high school there surfaced Wednesday on 4chan.
Internet service providers in Australia and New Zealand, which temporarily blocked access to 8chan, 4chan and other forum and video sites that hosted the shooting footage, showed one potential technical remedy. Telstra, Australia’s largest telecommunications company, said it took action following a request from the New Zealand government, which says sharing the content is a criminal offense. Nikos Katinakis, a top Telstra executive, said that while some sites have removed the content and seen their blocks lifted, 8chan remains blocked. “Extraordinary circumstances … required an extraordinary response,” he said in a statement.
8chan, however, is shielded in another way: by the U.S. web-services giant Cloudflare, which helps websites guard against “distributed denial of service,” or DDoS, attacks — a tactic online vigilante groups have used to target 8chan in the past.
Cloudflare says that it helps 8chan and other websites regardless of their content, as long as they don’t violate U.S. laws, and that the company complies with court orders, works with law enforcement and bans terrorist propaganda networks and other groups on official sanction lists. Cloudflare would not discuss specific business or financial details about its relationship with 8chan.
After the Charlottesville riots, Cloudflare stopped working with the neo-Nazi site Daily Stormer, a ban that led Cloudflare chief Matthew Prince to later question whether he had set a dangerous political precedent.
Alissa Starzak, Cloudflare’s head of policy, said the role of policing should be left to the companies, governments or content moderators. She questioned the free-speech ramifications for revoking services from websites hosting content with which the company disagrees. “It’s still going to be on the Internet,” she said. “They might be more open to a DDoS attack, but is that the goal? A vigilante attack?”
Alice Crites and Devlin Barrett contributed to this report.