The Sri Lankan government blocked access to Facebook and other social-networking sites Sunday after suicide attacks killed more than 290 people, a move meant to stop misinformation from inciting further violence in a country where online mistruths have fomented deadly ethnic unrest.
Analysts, meanwhile, question whether shutting down social media is effective at defusing strife. The Brussels-based International Federation of Journalists has said there is no “substantive” evidence to show that such bans, which are common in South Asia, can “scale down violence.”
Roshni Fernando moved to Colombo recently from London. After suicide bombers struck churches and hotels Sunday morning, she was frustrated by her inability to reach people back home.
“If I don’t reply to your messages it is because WhatsApp and Facebook appears to have been shutdown in Sri Lanka,” she wrote on Twitter.
“I have had friends in London trying to contact me through both,” she told The Washington Post, “and I can’t see them or message anybody.”
The rapid proliferation of falsehoods online has become a regular consequence of shootings, terrorist attacks and other major news events — one that Facebook, Google, Twitter and other social media sites have struggled to curtail. Within hours of the first bombings Sunday morning, researchers said they saw a spike in false reports about the perpetrators and the number of victims.
In response, Sri Lanka’s Defense Ministry said the government had “taken steps to temporarily block all the social media avenues until the investigations are concluded.” A state-run news service said “false news reports were spreading through social media.”
Facebook said it was “working to support first responders and law enforcement as well as to identify and remove content which violates our standards.” The social media giant said in a statement it was “committed to maintaining our services and helping the community and the country during this tragic time.”
Twitter declined to comment. Representatives for Snapchat and Google-owned YouTube did not respond to requests for comment.
Viber, a messaging app popular in Sri Lanka, did not comment on the ban, but the platform tweeted soon after the attacks, offering support and encouraging users to “be responsible and rely on updates from official and trusted sources.”
NetBlocks, a London-based digital rights organization, said its data show each of those services had been affected. Alp Toker, the group’s executive director, said it appeared that the Sri Lankan government had ordered local Internet providers to implement the blackout. The providers interpreted the order differently, he said, which explained why some social-networking services still seemed to be operable for some users.
Sanjana Hattotuwa, a senior researcher at the Centre for Policy Alternatives in Colombo who monitors social media for fake news, said he saw a significant uptick in false reports after the bombings Sunday.
There was a significant amount of misinformation about the death toll, he said, and unverified claims about the perpetrators were spreading rapidly on Facebook and Twitter. He cited two instances of widely shared unverified information: an Indian media report attributing the attack to Muslim suicide bombers, and a tweet from a Sri Lankan minister about an intelligence report warning of an attack.
No group has claimed responsibility for the attacks. The government made 13 arrests on Sunday, but has not identified the suspects.
Hattotuwa has asked users to flag such content directly to him.
“There are new Twitter accounts popping up putting out unverified information,” he said. “There are Facebook posts which violate the guidelines through either intent or are graphic in nature.”
Hattotuwa was sharing the information with Facebook and Twitter. He said the platforms were on “high alert.”
South Asia saw the highest number of shutdowns globally in 2018, according to the International Federation of Journalists. The organization said authorities justified most of these shutdowns by citing “law and order” imperatives, saying the measures were intended to preempt violence, or were undertaken in response to it.
Governments around the world have expressed deep unease with the spread of misinformation and violence on social media, particularly during shootings and terrorist attacks. Facebook and YouTube, for example, struggled to remove graphic video from the deadly attack on two mosques in New Zealand last month. The New Zealand government has proposed rules that would compel companies to take down such content faster or face penalties, an idea that European regulators also have considered in recent weeks.
Sri Lanka’s government shut down access to social media platforms in March 2018 out of concerns that sites had helped militants foment deadly ethnic unrest in the deeply divided country. Anti-Muslim riots left three dead and prompted officials to declare a state of emergency.
Officials said Facebook and Facebook-owned WhatsApp had “been used to destroy families, lives and private property.” And they accused the tech giant of failing to act swiftly and aggressively enough to take down content that had been deemed a national security risk.
But Hattotuwa said Sri Lanka’s action in 2018 was undertaken too late, after the violence had already broken out.
“While a ban on social media helps to contain the spread of rumors, it also hampers efforts by journalists to push back on them,” he said.