In this Friday, May 27, 2011, file photo, journalist James Foley responds to questions during an interview with The Associated Press, in Boston. (AP Photo/Steven Senne, File)

The tragic, public airing of James Foley's murder at the hands of the Islamic State is raising sensitive questions about the role that social media companies play in disseminating news online. Was Twitter right to block the gruesome video showing Foley's death? Is there a legitimate news justification for distributing the video, or is it enough simply to talk about it in the abstract?

Social media firms are grappling with issues that have long bedeviled newspapers. And that's no accident; around half of users on Twitter and Facebook say they use those services to get news at least some of the time. But just because social networks fulfill some of the same functions as news organizations doesn't necessarily mean people expect the same things of them.

In fact, there's a lot of disagreement on this point. Some take the position that Twitter's blocking access to the video, and suspending accounts that tweeted the link, amounted to censorship. Others argue that this behavior is simply editing, much as a newspaper would edit what appears on its website. Still others say that blocking content is always wrong, no matter the circumstances. Running underneath these debates is a more troubling concern: What kind of precedent does it set when Twitter decides to "[reverse] a long record of non-intervention"?

I don't mean to pick on Twitter — nor to suggest that Twitter made the wrong call. Determining what to do with sensitive content is something that all social media companies struggle with on a daily basis. The New Republic has reported on the real, and sometimes fallible, human beings who have the unenviable job of deciding what is and isn't hate speech, based on what are often very complicated unwritten cultural norms. You could call it a case-by-case approach, nuanced in its consideration of all the unique circumstances surrounding an incident like the Foley video. At the same time, you could also say that this approach offers less a strategy than a sort of rough-and-tumble scramble that's rarely satisfying to anyone and tends to produce weird inconsistencies.

What's a little surprising about all this is that there hasn't been more of an effort by different tech companies to establish common rules of the road. Mostly, companies draw up their own terms of service or content policies, and then interpret those documents to the best of their ability. The practical outcome is that variations in language produce variations in interpretation, which in turn produce variations in results.

While experts in online safety and privacy say that company officials routinely appear on tech panels together to discuss particular cases or experiences they've had while adjudicating reports of abuse, there aren't the kinds of established best practices or guidelines common to other industries or policy questions.

Maybe it's time to try to develop some. Establishing a universal set of guidelines would benefit social media companies by deflecting the backlash from any content decision onto the agreed-upon framework. Maybe the framework itself could even be open-sourced and developed by people beyond the companies themselves. Who knows? It could be the start of something good.

Some companies, like Twitter, will allow graphic content but place a barrier over it that users must click through — something the industry calls an "interstitial." Other companies, like Facebook, allow the selective sharing of graphic content if the user posting the content is condemning it, but not if the content is being celebrated or glorified. What if some of these practices were shared and extended?

There is some level of sharing going on, according to an employee for one tech company who asked not to be named because he wasn't authorized to discuss other companies' policies publicly. The employee added that some third parties, like the Anti-Defamation League, have convened special working groups to discuss sensitive content where tech companies have gotten involved.

Of course, there are all sorts of reasons why this wouldn't work. Social media companies are all built differently. You can't choose to selectively share something on Twitter as you can with Facebook: Once something is up, all of your followers can see it. There are no laws governing sensitive content that companies can build a coherent industry framework around, unlike with, say, child pornography.

"It might be a fact of life" that these companies will deal with the same sensitive incident differently, said Jennifer Hanley, the policy and legal director at the Family Online Safety Institute, a Washington-based nonprofit. "They're looking at who the content's coming from, the nature of the content, just how graphic it is and what the public value is."

Given that tech companies can't realistically please everyone at once, individual users of these services have emerged as voices of reason — urging people not to share the Foley video or the video of authorities in Ferguson shooting a protester to death.

This is the first time that we've seen people engaging in counterspeech — the act of pushing back against potentially harmful content in the name of decency or another moral motive — outside of online bullying and sexual assault, said Hanley.

"Having users saying, 'Don't share it!' is really powerful," she said. "I'd love it if this is the start of more empathy and understanding of human suffering and emotions overall."

But this is all an ad hoc response to what is really a structural problem for social media companies that isn't going away, particularly as they expand to reach more users internationally. It seems a little early to admit we've been stumped by the question. If we can't agree on what to do with sensitive content, perhaps at least we can work on creating a common vocabulary across companies so that we don't have to keep rehashing the same debates about free speech and censorship.