TECH
13/11/2018 1:43 PM IST | Updated 13/11/2018 2:26 PM IST

WhatsApp Gives $1 Million To 20 Research Teams To Fight Fake News

Fake news, calls for violence, election-related propaganda and other topics come under the different proposals for which the grants were given.

WhatsApp representational image.
Anadolu Agency via Getty Images

Months after launching research grants to fight fake news in India and around the world, WhatsApp has finally announced that it is awarding the grants to 20 research teams. Each grant has a value of $50,000 (roughly Rs 36 lakh), adding up to $1 million. The company had issued a call for papers in July 2018, and received proposals from over 600 research teams around the world—it sought proposals from social scientists and people in other related disciplines, and not just people studying online interactions.

"WhatsApp cares deeply about the safety of our users and we appreciate the opportunity to learn from these international experts about how we can continue to help address the impact of misinformation," said Mrinalini Rao, lead researcher at WhatsApp. "We recognise this issue presents a long-term challenge that must be met in partnership with others. These studies will help us build upon recent changes we have made within WhatsApp and support broad education campaigns to help keep people safe."


WhatsApp had said in July that elections-related misinformation would be a key focus, and six of the 20 selected proposals fall under this category. The studies span the world, from an analysis of voting behaviour in the 2018 Brazil elections to the use and misuse of WhatsApp among Indonesian campaigners and users. One proposal, "Social media and everyday life in India", by Philippa Williams of Queen Mary University of London (principal investigator) and Lipika Kamra of OP Jindal Global University, examines the role of WhatsApp in everyday political conversations in India. Another, "Misinformation in Diverse Societies, Political Behavior, and Good Governance", by Robert A Johns and Sayan Banerjee of the University of Essex and Srinjoy Bose of the University of New South Wales, uses field experiments with WhatsApp in India and Afghanistan to establish a relationship between misinformation on social networks and public opinion on ethnic relations.

Another proposal, "WhatsApp vigilantes? WhatsApp messages and mob violence in India", by Dr Shakuntala Banaji and Ramnath Bhat of the London School of Economics and Political Science (LSE) and Anushi Agrawal and Nihal Passanha of Maraa, looks at the specific issue of the "WhatsApp lynchings" in India, which led to talk of a WhatsApp ban by the government.


In India, WhatsApp is also partnering with Osama Manzar and the Digital Empowerment Foundation to train community leaders in several states on how to address misinformation. The foundation is also part of a proposal that aims to adapt game-based interventions to "vaccinate" people against misinformation, testing them through field experiments.

These researchers will now use the grants to complete their studies on how misinformation spreads and what steps WhatsApp can take as it builds its products. The researchers are currently being hosted in California, where they are meeting product leaders at the company.

At the same time, WhatsApp has acknowledged the limits of its ability to actually control the platform. Because messages are encrypted between users (and WhatsApp says 90% of communication on the platform still happens between individuals rather than in groups), the company will focus on educating users to tackle abuse. It has already started running public safety campaigns in India, including ads in print, online, and on the radio, and has appointed a public grievances officer, though that post is based out of the company's Menlo Park, California, office, which could make it harder for users in India to raise their concerns.