On WhatsApp, which has 1.5 billion users, information can go viral in minutes as individuals forward messages along to their friends or groups, without any way to determine its origin.
Messaging platforms have hosted disinformation campaigns in at least 10 countries this year, according to a report by the Computational Propaganda Project at Oxford University. WhatsApp was the main platform for disinformation in seven of those nations, including Brazil, India, Pakistan, Zimbabwe and Mexico. Other messaging apps that have hosted disinformation include Telegram in Iran, WeChat in China and Line in Thailand.
“In the U.S., the disinformation debate is about the Facebook news feed, but globally, it’s all about closed messaging apps,” said Claire Wardle, executive director of First Draft, a nonprofit news literacy and fact-checking organization affiliated with Harvard University’s John F. Kennedy School of Government.
The closed nature of messaging services complicates the already difficult task of fighting rumors and stamping out lies. Unlike the largely open forums of Facebook and Twitter, WhatsApp hosts private chats among groups of friends. It is encrypted, or mathematically scrambled, so that no one — not even the service’s employees — can read the content of messages that were not intended for them.
“In many countries, messaging services are the main platform to get online,” said Samantha Bradshaw, co-author of the report from the Computational Propaganda Project. “The closed platforms can be more dangerous because the information is spreading in these intimate groups of friends and family — people we tend to trust.”
A group of friends picnicking in southern India this month stopped their car to offer some local children chocolate. It proved to be a deadly mistake — rumors quickly spread on WhatsApp that they were child kidnappers.
A violent mob gathered in response to the messages. In the end, one of the picnickers, a software engineer named Mohammed Azam Ahmed, 32, lay dead.
“They kept pleading, but nobody listened to them,” said the victim’s brother, Mohammed Akram. “My brother was killed by fake news.”
Now WhatsApp, under pressure from political leaders and spurred by new leadership, is taking steps to root out misinformation. Executives held urgent meetings with political leaders in India last week, and the service is building new technology to promote news literacy. Last week, the company announced a major change, limiting the ability to forward messages — a feature that has been blamed for enabling disinformation to go viral.
WhatsApp’s new boss — Chris Daniels, a veteran executive from WhatsApp’s parent company, Facebook — has vowed to prioritize safety. Daniels is playing catch-up with Facebook, which since the 2016 U.S. election has poured immense resources into combating viral fake news and other malicious content promoted by profiteers, ideologues and Russian operatives — an effort with mixed results. The fix at WhatsApp is even harder because the chat app was designed to be a black hole.
The app’s encryption makes it impossible for WhatsApp’s security staff to read messages unless a user specifically reports them as problematic. And because WhatsApp lets people sign up with just a phone number — unlike Facebook, it does not require users to provide an email address or reveal their real name — engineers have limited visibility into users’ friends or into what they have posted in the past, cutting them off from key clues to malicious behavior. WhatsApp says the average group size is six, but it allows groups of up to 256.
Conversations on these platforms are less visible to outsiders, journalists and fact-checkers who often debunk misinformation. In Colombia, Mexico and Brazil, news and fact-checking organizations have recently set up WhatsApp hotlines where people can forward along questionable content to be debunked. The organizations then return the correct story to the person who sent it and hope that person shares it with their groups.
For months leading up to the Mexican election, an edited video with accompanying text circulated on WhatsApp. The grainy footage showed a man being burned alive in the state of Tabasco, while a crowd shouted “Morena” — the political movement of the front-runner, Andrés Manuel López Obrador, who won the election — in the background, implying that the man was being tortured for his political beliefs. Accompanying text blamed Obrador supporters, using a derogatory term for leftists. Many people forwarded it to the Mexican fact-checking hotline, asking whether it was real.
Journalists and fact-checkers who reviewed the video said that the man was attacked because he was stealing a motorcycle, not for his political beliefs, and that the full unedited version showed the crowd shouting the names of different political candidates. The fact-checking group published an article with the facts as they understood them — citing local news reports — and sent it back to all the people who forwarded the original falsehood, asking them to spread it in their groups. They said they recognized the limitations of the strategy.
In Brazil, a nationwide strike by truck drivers in May was organized on WhatsApp, said Daniel Bramatti, head of Abraji, Brazil’s association of investigative journalism, and an organizer of a WhatsApp hotline to vet news that may be fake. False stories have spread about political candidates and about the dangers of vaccines, he said; one about the yellow-fever vaccine reached so many people that the federal government issued an official warning debunking it. Since the hotline was created, 17,000 messages have been forwarded to the group, Bramatti said.
WhatsApp is giving software tools to Brazilian news organizations that will allow them to send a link to a fact-checked story to large numbers of users at once — allowing them to debunk fake news en masse — ahead of the country’s presidential election in October.
Brazil, which has about 120 million WhatsApp users, has shut down WhatsApp three times, most recently in 2016, over fights between the service and the government, which wanted data about malicious actors and criminals.
But the dark side of misinformation on messaging services is most apparent in India, where more than 225 million people use WhatsApp, according to the Indian government — a total quickly gaining on the estimated 240 million who use Facebook. There, the combination of an inexperienced, digitally illiterate user base and WhatsApp’s encryption has proved toxic, leading to fear, misunderstanding and, in some cases, violence.
Beyond the July 13 incident that claimed the life of Ahmed, two dozen others have died in recent weeks from lynch mobs sparked by rumors that spread on WhatsApp of child-kidnapping rings or organ-harvesting gangs, authorities say. The violence has prompted an angry warning from the Indian government. Last week, the government called on WhatsApp to do more to address accountability and “traceability” in the app to stem the tide of fake news — or face legal action.
Many police departments and municipalities have created their own grass-roots response to the crisis, including public education campaigns with street theater, or “town criers” going from village to village with loudspeakers mounted on vans, warning citizens not to believe fake news.
But some Internet experts say that the Indian government has been slow to respond to the growing problem, in part because the country’s political parties are heavy users of the platform and often use it to send out false or misleading information themselves.
In a recent regional election in India that was seen as a prelude to the country’s 2019 national election, WhatsApp researchers found that one political party — which they did not name — used the platform inappropriately, with party loyalists setting up thousands of WhatsApp groups and in some cases successfully spamming users with near-constant political messages.
The company was able to catch and block some of the accounts, but many slipped through.
The misuse of WhatsApp mirrors the way in which other tech tools have been weaponized in recent years, particularly around misinformation. The company was built to collect as little information about its users as possible, and WhatsApp’s founders, Brian Acton and Jan Koum, were libertarians who believed deeply in privacy.
After Facebook acquired WhatsApp for $19 billion in 2014, its largest acquisition ever, the messaging company operated separately from its parent and was divorced from Facebook’s efforts to combat misinformation, such as hiring thousands of moderators and building artificial-intelligence software to spot malicious posts. Researchers said that independence allowed problems to fester, undermining Facebook’s corporate mission to promote democracy around the world.
“Our focus has always been on helping people stay safe and maintaining private communication on WhatsApp,” said WhatsApp spokesman Carl Woog. “We recognize the severe consequences that can come from viral misinformation, and we’re working with others to address this challenge.”
Acton and Koum fought frequently with Facebook over user privacy, access to data and how to make WhatsApp turn a profit, according to two people familiar with the debates. Acton left late last year. Koum announced his resignation this year after a Washington Post report revealed he planned to leave over broad clashes with Facebook.
“They were fierce when it came to data privacy, and they were fiercely independent,” said Kevin Lee, who was a global manager for spam operations at Facebook through 2016.
WhatsApp does not see itself as a social-media service, because content is not posted publicly and algorithms do not spread information virally. But even without algorithms, WhatsApp’s ability to forward messages has turned it into a hybrid. Its leadership was previously opposed to any efforts to intervene in users’ ability to send messages.
The shutdown in Brazil and growing commercial spam problems led to a crisis within WhatsApp, and Facebook began to clamp down, according to the two people familiar with the matter. Facebook sent engineers and pushed WhatsApp to hire policy experts for the first time, doubling the company’s size and moving WhatsApp’s headquarters to Menlo Park, Calif., where Facebook is based.
Last week the company started training Indian nonprofits on how to spot fake news and “to think before you share,” Woog said. WhatsApp also ran full-page newspaper ads in India that included 10 tips on how to recognize false information, including “check information that seems unbelievable” and “use other sources.”
The company is hiring engineers to focus specifically on disinformation in elections, and it is building new technology that will indicate whether a message has been forwarded — a signal that the sender did not actually write the story or produce the content in question.
In his first week on the job in May, Daniels, who declined to be interviewed, assembled WhatsApp’s 300 employees for a town hall. The commitment to privacy would not change, he told them, but from now on the focus of the service would also include safety — preventing misinformation and the harm it can cause, according to an executive who attended the meeting.
Gowen reported from New Delhi. Farheen Fatima in India contributed to this report.