SAN FRANCISCO — The massive growth of live-streaming everything from Little League games to a giraffe’s birth has developed a sinister edge as murderers, rapists and terrorists have found ways to broadcast video that tech companies such as Facebook are struggling to contain.
One of the most shocking incidents yet came on Easter Sunday, when a man armed with a smartphone and a black handgun took video of himself fatally shooting a bystander on a Cleveland street. The alleged killer, Steve Stephens, posted the video on his Facebook page, then took to the Facebook Live streaming service to confess his actions — in real time. As of Monday evening, Stephens was still at large.
Facebook disabled Stephens’s profile page more than two hours after the initial posting, according to the company, but not before the video of the shooting spread across the social network and to other social media, including YouTube and Instagram. It has been viewed more than 150,000 times.
On Monday, Facebook said it was investigating why it took so long to receive reports of the video and was reviewing its procedures.
Live videos of violent incidents, including suicides, beheadings and torture, have gone viral, with some reaching millions of people. This summer, Facebook faced criticism after a live stream of a disabled young man being tied up, gagged and slashed with a knife stayed up for 30 minutes. Last month, two Chicago teenage boys live-streamed themselves gang-raping a teen girl.
“Bound up with doing all of these terrible things is the possibility of showing thousands, possibly millions, of people that you’re doing it,” said Mary Anne Franks, a University of Miami law professor. She expressed doubt that Facebook could adequately monitor live videos. “When it comes to Facebook Live as a product specifically, I don’t think it’s a solvable problem,” Franks said.
Shortly after Facebook launched live video streaming to the public last year, chief executive Mark Zuckerberg said he wanted a product that would support all the “personal and emotional and raw and visceral” ways that people communicate. The company has encouraged users to “go live” in casual settings, while waiting for baggage at the airport, for example, or eating at a tasty food truck.
On Monday, one day before the company’s annual developer conference, questions arose again about whether Facebook had done enough to contain the video service’s dark side.
“This is a horrific crime and we do not allow this kind of content on Facebook,” company spokeswoman Andrea Saul said in an emailed statement. “We work hard to keep a safe environment on Facebook, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.”
Stephens, 37, who appeared to shoot an elderly man, uploaded three videos between 11:09 and 11:22 a.m. Sunday, Facebook said in a blog post. In the first video, he announced his intent to commit murder. Two minutes later, he uploaded the second video, which showed the shooting. Then, at 11:22, the alleged killer took to his live-video account to broadcast himself confessing. Facebook received its first report of the killing an hour and a half later, at 12:59 p.m. The company said it disabled the account 23 minutes later.
The manhunt continued Monday, with authorities saying they had launched a nationwide search for Stephens. By late afternoon, authorities were offering up to $50,000 for information leading to his arrest.
“Steve, if you’re out there listening, call someone — whether it’s a friend or family member or pastor. Give them a call. They’re waiting for you to call them,” Cleveland Police Chief Calvin Williams pleaded with Stephens at an afternoon news conference.
The incident demonstrates how Facebook is grappling with its increasingly consequential role in world affairs, as well as the fundamental challenges in policing a growing amount of highly charged content, including Islamic State beheadings that potentially inspire terrorists and fake news that has allegedly skewed the national political debate.
While Facebook’s policies permit some graphic content to be posted — for example, the broadcasting of police brutality at a protest would generally be allowed — the company prohibits users from posting images or videos “for sadistic pleasure or to celebrate or glorify violence.”
Facebook’s reluctance to play a greater gatekeeper role has also drawn the ire of critics, who say that the rush to profit off viral content and video is leading to dangerous societal outcomes.
The company raced into live video after observing the explosive popularity of platforms such as Snapchat, Meerkat and YouTube. Zuckerberg was reportedly so moved in a meeting by data showing the staggering amount of time users spent on video that he immediately put 100 of the company’s top engineers on a two-month lockdown and charged them with developing new video products, according to a profile in the Wall Street Journal.
The amount of time consumers spend watching videos online has increased more than fourfold since 2011, according to the research firm eMarketer. That video explosion, much of it being broadcast by everyday citizens and not news organizations with standards for displaying graphic content, is exposing the public to a new level of violent imagery.
“It’s amazing how performative people are in their cruelty,” said Danielle Citron, a University of Maryland law professor who has worked with Facebook and other tech companies to block “revenge porn” and prevent cyberstalking.
Yet the issue is complicated for technology companies, which have wide latitude to host content created by others without being legally responsible for it, Citron said.
In recent interviews and blog posts, Zuckerberg has acknowledged the complexity of the company’s new role in the global spotlight. He hopes to “amplify the good” and “mitigate the bad” effects of the Facebook platform, he wrote earlier this year.
Since Facebook launched live-streaming, first with celebrities in late 2015 and then to the general public, so many suicides have been broadcast live that the company created a set of tools for users to flag them and alert law enforcement — a tacit acknowledgment of its gatekeeper responsibilities.
Yet in July, the company apologized and blamed a technical glitch after live video of the aftermath of the Philando Castile police shooting, which was posted by his girlfriend and caused a national outcry, was temporarily disabled by Facebook software.
In an interview last month at the company’s Menlo Park, Calif., headquarters, Facebook executives said that the majority of live video content posted on the site is positive in nature.
Facebook Live product director Daniel Danker said, “We largely rely on the community to flag live moments that are unsafe or otherwise fall outside of our community standards.”
When users flag such content, it is sent to a global team of reviewers, who respond within 24 hours. But critics say that, within that window, inappropriate videos can be viewed millions of times.
Facebook has declined to reveal how many people and resources it invests in policing content.
In the Washington Post interview, Danker acknowledged there were obstacles to creating a safe space in real time. “It’s particularly challenging of course with live because there’s no time to react,” he said. “It’s happening as you see it.”
Timberg reported from Washington. Lee Powell contributed to this report.