Chinmayi Arun is a fellow at the Berkman Klein Center at Harvard University and was the founding director of the Centre for Communication Governance at National Law University Delhi.
This month, WhatsApp announced that it was blocking numbers flagged by the Election Commission of India for spreading “fake news” and objectionable content. Before this, it had announced the launch of a new fact-checking “tip line” service in India. WhatsApp has also introduced features such as limits on forwarding in an effort to slow down the spread of misinformation. But these changes may all be too little, too late.
Over the past year, India has seen cascades of rumors spread through WhatsApp with the same techniques used to great effect in Brazil: Public links allow people to join political WhatsApp groups. As rumors spread, they transition from political groups to general and personal groups and can even be picked up and amplified by the mass media. Research conducted after the Brazilian election found evidence that bots were used to forward misinformation from group to group. In India, human “volunteers” forward the misinformation.
But there is also the potential for more precise targeting of information on WhatsApp than ever before. To start with, WhatsApp permits any member of a group to harvest the phone numbers of all the other members. Indian law requires mobile numbers to be registered and linked to government identification. This means that if the ruling party sends volunteers to join ideologically aligned groups and leverages its ties to the government to access the databases linking mobile numbers to individuals, it could theoretically identify individual members of WhatsApp groups. With access to this data, the ruling party could target misinformation campaigns — on social media and via text — to receptive audiences.
Even more worrying is the possibility that the government could have acquired access to lists of users’ phone contacts. Last year, while under pressure from the Indian government over the platform’s role in spreading rumors that led to lynchings in India, WhatsApp was asked to share this data. Though WhatsApp announced that it was unable to share the content of user communications with the Indian government, it made no mention of metadata. In the past, there have been reports that WhatsApp has shared such metadata with governments around the world.
The ruling party has already been criticized for collecting metadata from the phones of people who install the prime minister’s “NaMo” app. If it somehow gained access to metadata from WhatsApp, one of the most popular social media apps in the country, it could map networks of voters who are in contact with each other. The potential for misuse of this data is obvious: A party would be able to target propaganda to affinity groups, giving it an unfair advantage.
WhatsApp’s slate of recent announcements shows that it is now taking steps to tackle disinformation. But it is hard to gauge whether the company’s responses will change things for better or for worse. The forwarding limit may slow down campaigns by groups with limited resources, but it would not slow down a political party with hundreds of volunteers willing to forward rumors as many times as necessary.
WhatsApp’s effort to block phone numbers flagged by the Election Commission of India is also a good idea in theory. But there needs to be greater transparency and accountability in the process to ensure the commission’s reporting is neutral, perhaps by publicly reporting how these efforts affect different parties. Regulators should also consider whether similar action needs to be undertaken against misinformation on other platforms, including the NaMo app. It is important that efforts to curtail misinformation do not affect one political party more than others, or they may skew the elections even further.
There are guidelines for the mass media during elections. We need clear guidelines for social media platforms as well. Yet the deeper issue is that the companies’ engagement with the election is entirely self-regulated, and companies and authorities rarely consult experts and stakeholders. Their efforts so far have been knee-jerk reactions. Perhaps this will change over time, and the data being collected on elections and misinformation will lead to a more considered response in the future.
But without any effective check on their behavior, political parties will continue to invent new ways to influence and manipulate elections using the Internet. As millions more Indians gain Internet access each year, one thing is clear: Regulators, companies and other stakeholders need carefully calibrated strategies to cope with online disinformation — or they risk the very foundations of Indian democracy.