22/08/2018 9:38 AM IST | Updated 22/08/2018 9:38 AM IST

Facebook Is Rating Users On ‘Trustworthiness’ To Fight Fake News

People often report news as fake if they don’t agree with it, Facebook said


Facebook is rating users on how trustworthy they are, and the score a person receives will affect how credible Facebook considers the links they share, according to reports. The score is one of the measures Facebook has put in place to fight fake news around the world.

The company has been hiring fact checkers to identify and remove fake news shared on the network, but it has provided only limited resources for the initiative. During the month-long Karnataka election campaign in India, which was awash with fake news, the initiative identified only 30 pieces of fake news.

To improve the reliability of these reports, Facebook is now looking to its users. Part of the problem, the company told the Washington Post, which first reported this story, is that user reports about content have become another way of gaming the system. It's "not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they're intentionally trying to target a particular publisher", Tessa Lyons, the product manager in charge of fighting misinformation, told the Washington Post.

By tracking user behavior, Facebook aims to cut down on the disinformation being spread across the network. The credibility rating, however, is not meant to be a single reputation score, but one measurement among others that Facebook tracks.

Responding to the Financial Times, Facebook said it was "plain wrong" to suggest that the company had a "centralised 'reputation' score" for users. "What we're actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible."

Facebook has not released details of the user rating system because it fears the system could be gamed by people looking to propagate disinformation. The company is also monitoring which users have a propensity to flag content published by others as problematic, and which publishers users consider trustworthy, the Washington Post reported.