When Reddit co-founder Steve Huffman returned to become the company’s CEO last summer, he knew one big change he wanted to make: Introduce really clear, really specific content rules for the site, which was in the middle of a massive upheaval at the time over how it moderated objectionable content. Turns out, Huffman said during a panel discussion Wednesday at the Transformers event held at The Washington Post, that was a mistake.
When you draw really clear lines in the sand at a site like Reddit, “there will always be some a—–,” Huffman said, “looking for loopholes.” He eventually came to the conclusion that virtually every other major social site has come to: that content guidelines for online communities work best when they’re “specifically vague” — outlining what sort of content is forbidden while affording those in charge of enforcing the rules some leeway in deciding when, precisely, the rules apply.
While Huffman said that harassment, bullying and illegal activity were banned by Reddit’s rules, “we think of ourselves as a platform where people can express themselves freely even if the things they’re expressing make us very uncomfortable.”
Huffman’s comments come as Reddit has spent the past several months trying to move on from a battle over the state of its own soul. While the site’s evolving moderation practices have been an issue for some Redditors for a while, things really got heated last year when Reddit started cracking down on communities that condoned or encouraged extreme, largely objectionable speech.
“I was watching Reddit go through this very, very difficult time,” Huffman said of his return to the site, noting that at the time he was concerned for the survival of the site that calls itself the “front page of the Internet.” It was in that context that Huffman announced that Reddit was “considering” a new set of rules for the site that provided more details on what sort of behavior was and wasn’t allowed, and promised to do more to enforce the rules that already existed.
Reddit is one of many sites in the middle of a larger Internet culture war, largely centered on the evolving anti-abuse measures that platforms like Reddit have introduced in response to bad behavior. Reddit’s response to its abuse problem caused some of its users to revolt, while others asked Reddit to do much more to systemically address abuse and harassment. Those who opposed the idea of Reddit doing more, or even anything at all — including some members of said objectionable communities — accused Reddit of censoring their speech. Meanwhile, anti-harassment advocates have argued that Reddit and other sites like it aren’t doing nearly enough to stop the abuse of their platforms, driving users from the site who no longer feel safe speaking freely there.
The role and reach of online moderation at sites like Reddit have become an increasingly controversial issue lately: These rapidly growing platforms now play an important role in hosting public speech and encourage their users to see them in this way. But the specific content-moderation process at many of them remains opaque, meaning that it’s hard to know exactly how these sites are interpreting and enforcing their own rules.
Reddit’s moderation process is kind of old-school: While the site employs a small moderation staff, it relies on the volunteer moderators of individual subreddits to enforce site-wide rules and set behavioral guidelines for their communities. Other sites, like Twitter and Facebook, employ large numbers of full-time moderators, who are tasked with addressing reports of rule-breaking, spam and abuse, and taking action if needed.
Reddit does take site-wide action from time to time against people and communities it would prefer not to have representing it to the wider world. Recently, Reddit has banned some particularly objectionable subreddits and tried to quarantine others from the rest of the site, in order to minimize behavior that the company believes is primarily directed at abusing, harassing, or hurting other people, without banning speech that is objectionable but not actually in violation of its guidelines.
“We believe there is value in letting all views exist, even if we find some of them abhorrent, as long as they don’t pollute people’s enjoyment of the site,” Huffman wrote in July, shortly after his ascension to CEO.
On Wednesday, Huffman was speaking with Twitch founder Emmett Shear, at a panel moderated by the Intersect’s Caitlin Dewey. Shear noted that Twitch, too, avoids defining some of its rules too specifically — in its case with pornography. (Twitch is owned by Amazon, whose CEO is Jeff Bezos, owner of The Washington Post.)
If Twitch tried to specifically define what constituted pornography, it would end up “either banning things you don’t want to be [banning], or you end up allowing pornography,” he said. Twitch has a round-the-clock, paid moderator staff enforcing and interpreting its site-wide guidelines. But like Reddit, it also relies on volunteer moderators to monitor the video streaming site’s real-time chat rooms, and take action as needed.
Twitter, for instance, has radically overhauled its abuse enforcement mechanisms over the past couple of years in response to multiple high-profile instances of abuse on the site. Twitter’s rules would fit the “specifically vague” characterization that Huffman and Shear discussed: While the rules are certainly longer now than they were at the site’s founding, it’s still difficult for many users to anticipate what will — or won’t — be deemed in violation of them.
But the rules themselves have always been only part of the issue. Mixed in with stories of extreme abuse and harassment on platforms like Reddit is frustration with inconsistent enforcement of the rules that already exist, leaving it unclear what, say, Twitter believes would constitute a threat in violation of its rules.
To that, Huffman said, “These policies are only as good as our ability to enforce them. Which is where we’re spending a lot of our time now.”