In 2009, Jennifer Pozner was dealing with a persistent problem on Twitter. As executive director of the women’s advocacy group Women in Media and News, Pozner was used to fielding insults and criticism online. But this case was different: a man who made new Twitter accounts every day to harass her.
“He would sometimes put my name in his new Twitter handle,” Pozner said. “The worst one was he put my name in his Twitter handle — ‘JennPoznerFan’ — and he would steal my pictures from Flickr, or [on his feed], there would be my face photoshopped onto porn images of women being humiliated.” Other times, she said, her harasser would send out messages saying “vicious, horrible things” — all on a Twitter handle associated with her name.
Eventually Pozner and her abuser both moved on. But, she said, she felt that Twitter had left her twisting in the wind with very little support to deal with the situation.
Hate mail, of course, is nothing new. Neither is cyberbullying. But the advent of social media has made it easier than ever for malcontents to find and harass targets. And the pace of Twitter, combined with its highly personal yet very public nature, has made it a hotbed for hate speech.
Twitter said in a statement Wednesday that it is looking to change its harassment policies after two users sent a barrage of offensive messages to Zelda Williams, daughter of the late comedian Robin Williams, that prompted the actress, 25, to abandon several of her social media accounts, including her Twitter feed. But Williams’s experience is just a small sample of the abuse that many Twitter users, such as Pozner, have experienced for years.
“Zelda has become this poster child, but what that overlooks is that Twitter, in particular, has become a place for abuse, and for women and people of color in particular. The company knows it and has done precious little” about it, Pozner said. In her own research into these issues, she said, she’s become accustomed to seeing rape threats, threats against people’s families and racial slurs leveled against Twitter users — particularly women and people of color. Women who are also minorities, she said, have it the worst.
All social media networks face some of these issues. Facebook last year faced pressure from activists and advertisers that prompted it to review its community pages for graphic content before placing ads on those pages. Twitter, activists say, hasn’t moved as quickly to address these problems.
Technologists and activists say they have offered Twitter some simple ideas that would make the social network feel safer. For example, “Block Together,” a program created by Jacob Hoffman-Andrews — a former Twitter employee and technologist at the Electronic Frontier Foundation — lets users block new Twitter accounts or Twitter users with fewer than a certain number of followers.
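The filtering idea behind a tool like Block Together is simple enough to sketch. The snippet below is an illustrative approximation, not the tool’s actual code: it assumes a hypothetical `Account` record with an age and a follower count, and auto-blocks accounts that look like throwaways under thresholds chosen here for demonstration.

```python
# Illustrative sketch of follower-threshold blocking, in the spirit of
# tools like Block Together. The Account fields and threshold values
# are hypothetical, not taken from the real tool.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Account:
    handle: str
    created_at: datetime
    followers: int


def should_block(account: Account,
                 max_age_days: int = 7,
                 min_followers: int = 15) -> bool:
    """Flag accounts that are very new or have very few followers --
    two common traits of disposable harassment accounts."""
    age = datetime.now() - account.created_at
    return age < timedelta(days=max_age_days) or account.followers < min_followers


# A day-old account with 3 followers is flagged; a long-established
# account with a normal following is not.
fresh_troll = Account("new_throwaway", datetime.now() - timedelta(days=1), 3)
established = Account("longtime_user", datetime.now() - timedelta(days=2000), 500)
print(should_block(fresh_troll))   # True
print(should_block(established))   # False
```

The trade-off such heuristics accept is obvious: legitimate new users get caught too, which is one reason opt-in, user-controlled filters are easier to offer than platform-wide defaults.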
Twitter does have policies for reporting abuse on its network; the company reviews reported threats and often removes offending accounts or even informs law enforcement.
“Our Trust and Safety team works hard to keep Twitter safe for all users, while respecting basic principles of free expression,” Twitter said in a statement. “When content is reported to us that violates our rules, which include a ban on targeted abuse and direct violent threats, we will suspend those accounts.”
But those measures don’t always work. Imani Gandy, a journalist who documented her own difficulty in getting Twitter to address abusive comments leveled at her on the site, said she has also heard of an instance in which a message containing a rape threat was deemed not to violate Twitter’s policies — even though the company says threats of violence do violate them. Take Back the Tech, a global campaign to raise awareness about violence against women, gave the company an “F” on a report card assessing its policies.
But there are reasons that some of these tech companies are hesitant about filtering any content on their sites. Social networks, particularly Twitter, have drawn a clear line in the sand when it comes to protecting free expression. The importance of social networks like Facebook and Twitter came into sharp focus during the Arab Spring protests of 2010 and 2011; even more recently, Twitter has played a role in the protests in Ferguson, Mo.
All of that leads these firms — and Twitter in particular — to tread lightly when setting policies that limit speech.
“We evaluate and refine our policies and safety precautions based on input from users and technical limitations, while working with outside organizations to ensure that we have industry best practices in place,” the company said in a statement.
When Facebook was under pressure last year, it also pointed to the difficulty of filtering speech on its network. Evaluating controversial material “requires us to make difficult decisions and balance concerns about free expression and community respect,” Facebook said in a blog post. “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial.”
Legally, services that rely on user-generated content — the videos, photos and the short posts that make up the bulk of the stuff on social networks — aren’t held liable if people post pirated content or hate speech using their tools.
What’s less clear is how much responsibility they should take themselves for the good of their users. Williams, for instance, stopped using Twitter after being hounded with messages that blamed her for her father’s suicide by hanging — and users sent her pictures of her father altered to show bruises around his neck.
“I’m sorry,” Williams wrote in a Twitter message Tuesday. “I should’ve risen above. Deleting this from my devices for a good long time, maybe forever. Time will tell. Goodbye.”
Too often, activists say, the everyday people who face this kind of abuse also fade away — with far less fanfare, for fear of sharing their own stories.
“That’s what happens when these platforms ignore or reject reports,” said Sara Baker, campaign coordinator for Take Back the Tech. “People feel defeated and think there is no point in reporting to, or sharing their story with, anyone.”