The question of what you can or cannot say online seems to come up, under some pretext, nearly every single day.
There are Internet trolls. Tragic stories of high school and middle-school children, cyberbullied relentlessly — even to their deaths. A host of hateful, and hated, Web sites that traffic in racism, misogyny or other prejudices and seem to flare up whenever related news breaks.
Most recently, the Internet has been seized by a veritable epidemic of high-profile harassment and threats, a menace aimed, overwhelmingly, at women. In fact, given that the year kicked off with Amanda Hess’s seminal cover story on online misogyny — and is just now ending in the throes of Gamergate — you might as well call 2014 the year of the online threat. Or, at the very least, the year when the online threat, and online “speech” more generally, became a subject of enormous mainstream consternation.
It makes sense, then, that the Supreme Court picked this year to take on its first case about social media threats — a case that the court began hearing arguments on today.
It’s called Elonis v. United States, and it revolves around the disturbing murder fantasies that a Pennsylvania man, Anthony Elonis, posted to his Facebook page. But while that might be the starting point for the whole legal drama, the issues it’s considering go way beyond a case of creepy Facebook TMI. Essentially, the court is nailing down the exact line that separates a true online threat (which is not protected under the First Amendment) from disturbing, but non-threatening, online expression (which is). Just as importantly, the court is deciding what prosecutors must prove in order to establish a statement as threatening: Does the person who made the post have to consider it a threat, or does it just have to look that way to a reasonable observer?
In other, broader terms, the court’s considering what you can or cannot say online, and who gets to do the judging. Needless to say, that matters: Not just to guys like Elonis, but to anyone with an Internet connection and a keyboard. Which, I’d imagine, includes you.
Shortly after his soon-to-be-ex-wife left him and took their two children, 27-year-old Anthony Elonis took to Facebook to post a series of violent, disturbing rants and murder fantasies about her.
In one: “Fold up your PFA and put it in your pocket. Is it thick enough to stop a bullet?”
In another: “There’s one way to love you but a thousand ways to kill you. I’m not going to rest until your body is a mess, soaked in blood and dying from all the little cuts.”
Elonis, who framed the posts as “rap lyrics” inspired by the likes of Eminem, was charged with, and later convicted of, making threats. (The actual charge is a little wordier: “transmitting in interstate commerce communications containing threats to injure another person.”) He was sentenced to 44 months in jail, and he’s served much of that time already.
But throughout his prosecution, and in a series of appeals since, Elonis has repeatedly insisted that his posts weren’t threats — they were jokes or art or self-expression, or some combination of the three. In other words, he didn’t mean it.
This issue of “meaning it” — what the law calls intent — is really important, it turns out. In fact, that’s the fulcrum on which this whole thing rests. If Elonis meant the posts as real, credible threats, then yeah — he should, by all accounts, be in jail. But if he was thinking something else at the time he posted them, then they might be the “art” or “poetry” he claims. And we want to protect art and poetry, right?
As any Internet user can attest, it is often really, really difficult to tell what people “mean” online. (This is, incidentally, why sarcasm doesn’t play here.) If you write something out in a letter and mail it to someone, your meaning is pretty clear. If you write some dark “poetry” in your journal and keep it to yourself, your actions would also seem to signal a pretty clear intent. But online, who knows what anybody really means? The Internet is nothing if not a dumping ground for half-formed thoughts shorn of context, tone and other traditional markers of seriousness/sincerity.
In that vacuum of concrete meaning, a sort of lawlessness has flourished — an insistence that many posts, even threatening or offensive ones, don’t really count. They were jokes, they were words on Twitter, they were, in the words of Emily Bazelon, “as unreal as an attack on an avatar in World of Warcraft.” They weren’t meant that way.
The response to all this ambiguity, in some circles/circuits, has been to keep the definition of online threats really narrow. Online speech is only threatening, according to this school of thought, if the speaker knew it would frighten and sent it for that reason (… in addition to some other things; threats have to be specific and unequivocal, for instance). If you’re a prosecutor, to convict somebody like Elonis, you have to establish what he meant at the time he wrote it — not an impossible task by any means, but one that could, per Bazelon and others, discourage police and prosecutors from pursuing these cases.
But wait! Thanks to the lack of clear precedent here, there’s another standard floating around the lower courts, as well. According to that school of thought, it doesn’t matter if you meant what you said or not. What really matters is whether a “reasonable person” would see an alleged threat that way. The journalist and legal scholar Sarah Jeong compares it to the standard for civil tort liability: It doesn’t matter if a corporation meant to cause an oil spill; the oil spill did damage, so the company can still be sued for it.
This, in essence, is the fundamental decision at the heart of this case: Does it matter what you mean when you post something online? Or does it only matter how people read it?
To the non-law-geeks among us, all this probably seems procedural. But its implications are pretty significant. This is basically a question of holding the bar for online threats up here (*hand at eye-level*) or dropping it considerably lower (*waist-level, let’s say*). Since threats have become such a scourge on the Internet — and since they’re prosecuted so very rarely — a lower bar could prove a protection not only for women like Elonis’s wife, but also for public figures like Anita Sarkeesian et al. It could make that type of case easier to prosecute.
Of course, there’s a flip side, too: One that Elonis and his defenders have argued both in their filings and in the press. This sets a precedent for censoring the Internet, they argue. It opens up a door where anything that’s objectionable to a “reasonable person” — and let’s be real, there is plenty on the Internet that qualifies — could be prosecuted, criminalized or otherwise shut down.
“The effect of the decision that does emerge almost certainly would be felt in the very public space of such Internet sites as Facebook,” Lyle Denniston explained over at SCOTUSblog. “For that reason, Elonis is running interference for the Internet as a whole, and especially for those sites where expression is robust, indeed.”
Is that fear realistic? It’s hard to say. After all, even without the whole “intent” standard, our legal definition for “true threats” is very narrow, and our protections for free speech incredibly strong.
But as this case and others like it at lower courts have shown, those definitions and standards have had to evolve with changes to technology and the culture. This may be the first time the court has ruled on free speech and social media. It will not, in all likelihood, be the last.