No one knows exactly how to prevent social media from further devolving into a cesspool of harassment, abuse and gender-based violence. But across the pond, one British judge seems to think she’s happened upon a good idea: Throw the harassers and abusers in jail.
The judge recently sentenced one Peter Nunn to 18 weeks in jail for sending and retweeting a series of menacing messages to a member of Parliament.

“You better watch your back, I’m going to rape your arse at 8pm and put the video all over,” one of the retweeted messages read.
“Best way to rape a witch,” read another message, this one written by Nunn himself: “try and drown her first then just when she’s gagging for air that’s when you enter.”
The conviction was a relief to the politician, member of Parliament Stella Creasy, who said it sent a message that “this is an old crime taking a new form online.” It was also a victory for advocates who have campaigned for harsher punishments for digital abusers, both in the U.K. and the U.S. After all, whenever a disturbing abuse case arises — think Anita Sarkeesian, the gaming critic who was recently forced from her home — concerned observers are left wondering: What can be done? Is there no legal recourse?
In many cases, there simply isn’t, says Danielle Citron, a professor of law at the University of Maryland and the author of a new book on online hate crimes.
“We have a much higher standard in the U.S. when it comes to what constitutes threatening speech,” Citron explained. And even if a Twitter threat or a frightening Facebook post clearly meets that standard, she said, “we rarely prosecute these crimes.”
Here’s how Citron breaks it down: The U.S., unlike, say, the U.K., constitutionally protects speech under the First Amendment. That is self-evidently, demonstrably great. But over time, the Supreme Court has carved out a few areas of speech that it doesn’t think are worth protecting. You can’t knowingly slander another person, for instance, and you can’t ask someone to commit a crime for you. You also can’t make what the court calls “true threats.”
What makes something a “true threat”?
This is basically the key point — the critical issue that the whole messy, horrible situation turns on. A true threat has to be specific. It has to be unequivocal. It has to be immediate. It has to cause serious fear of harm — the kind that has the victim constantly looking over his or her shoulder. Some lower courts have even ruled that the threatener has to deliberately intend to cause fear, a tricky issue, since many defendants will claim they were joking after the fact. (Citron expects that the Supreme Court will rule on this soon.)
Either way, this standard — which doesn’t exist in the U.K. — immediately protects many, perhaps even most, of the abusive online messages we see on social networks and other sites.
“I’m going to kill you today.” — true threat.
“If I see you today I’m going to kill you.” — not a true threat, in most circumstances. (Citron notes that everything is contextual — in some cases, depending on how this threat is framed and phrased, and what other material it’s sent with, it could meet the true threat standard.)
“I would really like to kill you.” — likewise not a true threat.
“Best way to rape a witch: try and drown her first” — the exact message Nunn sent to Creasy — probably doesn’t constitute a true threat in the U.S. (After all, it never even mentions Creasy, let alone sets a date or time.) So, had his case been tried here, it’s quite possible that the First Amendment would triumph … and a bored, hateful father of one would go on to troll another day.
What happens if someone makes a true threat online?
Even then, at least in the U.S., there’s little to suggest that police will pursue the case. Citron found that, between 2009 and 2012, only 10 cyberstalking cases were prosecuted in the country. Most related to revenge porn, a similarly fraught corner of Internet law.
“We see this happening for two reasons,” Citron said. “Cops are afraid of the technology … and they’re not well-versed in the laws on this issue. They need better training.”
When victims come to her for advice on getting cops to pursue their cases, Citron goes so far as to suggest that they print the law out and bring it to the station with them. That stands in stark contrast to cases like Nunn’s, where police and prosecutors have not only pursued these convictions, but pursued them even in borderline cases. As Sarah Jeong, a journalist and legal scholar who has written extensively on technology and the law, notes, many of the abusive tweets made in the wake of the Austen campaign clearly met the U.S. standard for true threats.
“So I think the UK government purposefully picked something that wouldn’t meet a higher standard (like the US standard) in order to set the bar there,” she said by e-mail.
Does that mean American Twitter trolls will always get away with murder?
No, not quite. Citron notes that, in the seven years she’s been lecturing on cyberharassment and other forms of digital abuse, public opinion and awareness of the issue have changed wildly: The media now reports on these issues often, she notes, and lawmakers in 14 states have passed laws on digital abuse within the past 13 months alone. None of those laws have satisfied advocates and legal experts, who are looking for the right balance between generous free speech protections and real protection for victims. (In the wake of “The Fappening,” the legal scholar and writer Amanda Levendowski told me that she couldn’t pick a best one — they were all “pretty bad.”) But still, Citron says, they’re something. They’re a start.
For better or worse — and there are valid arguments on both sides — that start probably wouldn’t have jailed Peter Nunn, had he been tried in the U.S. Not that Nunn seems terribly upset by his conviction. A protected account that purportedly belongs to him has tweeted 530 times since the incident with Creasy; the banner image shows a defiant Nunn standing outside the Westminster court where his case was tried: a martyr for hatefulness. Or maybe free speech.