This article exists as part of the online archive for HuffPost India, which closed in 2020. Some features are no longer enabled. If you have questions or concerns about this article, please contact indiasupport@huffpost.com.

CAA Protests: When the Police Use Your Face Against You

Facial recognition software and simple crowdsourcing of information can be used to identify people at protests.
NEW DELHI, INDIA - DECEMBER 30: Protesters march towards the India Gate during a protest against police brutality during the clashes, following days of violent protests across India against a new citizenship law, in Delhi, India on December 30, 2019. (Photo by Imtiyaz Khan/Anadolu Agency via Getty Images)

BENGALURU, Karnataka—If you’ve been following the conversation around the protests against the Citizenship Amendment Act (CAA), you would have noticed many social media posts asking protesters to wear face masks or paint their faces.

This is because there is a very real possibility that your face may be added to a facial recognition database by the government. On Saturday, The Indian Express reported that the Delhi Police is filming protesters, and then running the footage through its Automated Facial Recognition Software (AFRS) in order to identify alleged “rabble rousers and miscreants”. The police force had adopted AFRS in 2018 to locate missing children.

This is a clear instance of Indian authorities taking a leaf out of China’s playbook from the ongoing protests in Hong Kong, where authorities have used CCTV footage and other recordings of protests to identify and crack down on dissenters.


The Delhi Police, which reports to the Union ministry of home affairs, has been under a cloud since it forcibly entered the Jamia Millia Islamia campus earlier this month and brutally attacked students. It isn’t the only police force in India to use facial recognition software to track protesters. The New Indian Express also reported that police at Osmania University in Hyderabad were seen making videos of the demonstrations. HuffPost India could not independently confirm multiple tweets claiming that policemen at various protests were walking around with Android tablets, identifying protesters through facial recognition.

Writing for architecture and design magazine Dezeen on the Hong Kong protests, historian Owen Hopkins pointed out that facial recognition software poses the biggest threat to “those whose lives are already economically precarious, who can’t afford to lose their jobs, or whose behaviour is somehow different to ‘normal’: subcultures, migrants, those who identify as LGBTQ and other minority groups”.

In India, Prime Minister Narendra Modi himself had made a communal remark hours before the Delhi Police stormed the Jamia campus. Muslims, especially in Uttar Pradesh, have been the worst-affected by the police’s violent crackdown on protests, with even children not being spared detention and torture.

“Rather than make us safer, facial recognition only amplifies existing prejudices and further entrenches existing power structures. It doesn’t matter who wields it – whether state, private company or some seemingly benevolent entity – the technology itself is a fundamental threat to society,” Hopkins wrote.

The police are trying to build a bigger and more widespread facial recognition network that brings together the flawed Crime and Criminal Tracking Network and Systems (CCTNS) with a number of other databases of faces under the National Crime Records Bureau (NCRB), leading experts in the field to raise serious concerns about its misuse.

What can people do about facial recognition?

The crackdown on people through facial recognition is particularly concerning when you take into account the huge amount of documentation of protests over the past couple of weeks. Many Instagram handles, TikTok videos and Facebook and Twitter timelines have shared photos and videos of the protesters. In a spirit of solidarity, they have highlighted acts of protest, appreciated the best posters, and shared videos of the songs and slogans. But this has also meant that a lot of people now have their photos and videos up in public, often without their explicit consent.

This can sometimes have negative consequences for the people involved. When Ladeedah Farzana (who goes by Ladeedah Sakhaloon) was identified as one of the women who stood up against the Delhi Police in Jamia, she was instantly hailed as a hero. And then she was almost trolled off the Internet.

But while others may not be facing the same kind of reactions right now, they are still at risk of having their images scraped off the open Internet and used to populate databases of people who protested against the government. This is a real concern, and it doesn’t necessarily require a high-tech intervention to identify people taking part in protests either. For example, the UP police has been releasing photos and videos of protesters, asking people to identify them. Reportedly, the police printed posters with photos, announcing a reward of Rs 25,000 each for information on three “wanted” people.

A police personnel aims his gun towards protesters during demonstrations against India’s new citizenship law in Kanpur on December 21, 2019. Thousands of people joined fresh rallies against the contentious citizenship law, with 20 killed so far in the unrest. (Photo by STR/AFP via Getty Images)

In order to minimize their risks, people need to start taking steps to beat facial recognition systems, taking a lesson from protesters in Hong Kong, who have sustained their movement for months and brought considerable international pressure to bear on the Chinese government.

The Hong Kong protesters also tried using masks and face paint, but these were soon banned. In response, some protesters started using lasers to ‘blind’ the cameras tracking them, while others toppled lampposts fitted with cameras.

According to a report from The Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, there are some guardrails that people should be demanding when it comes to facial recognition software:

It recommends limits on how long data can be stored; restrictions on data sharing; clear notification when facial capture is taking place; minimum accuracy standards; third-party audits; and minimal collection of collateral information such as metadata.
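To make one of these guardrails concrete, here is a minimal sketch of what a hard retention limit could look like in code. The record schema and the 30-day window are assumptions for illustration, not details from the Brookings report.

```python
# A minimal sketch of one Brookings-style guardrail: a hard limit on how
# long captured face data may be stored. The schema and the 30-day window
# are assumed for illustration, not taken from the report.
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # assumed policy window

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only face captures still inside the retention window."""
    cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["captured_at"] >= cutoff]

# Hypothetical captures: one recent, one long past the window.
records = [
    {"face_id": "a1", "captured_at": datetime.now() - timedelta(days=5)},
    {"face_id": "b2", "captured_at": datetime.now() - timedelta(days=90)},
]
print([r["face_id"] for r in purge_expired(records)])  # ['a1']
```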

Srinivas Kodali, an independent security researcher, wrote a Twitter thread on the uses of facial recognition during the protests.

“The Hyderabad Police again for example uses an app TSCOP, to check if your photo is [in] any crime database. Hyderabad police has been randomly stopping people and checking if they are criminals,” he noted.

In terms of more direct action, Kodali said, “One can use a mask or paint their face. There are different ways to confuse a facial recognition system with asymmetrical face structures by painting their face to resemble an animal or by wearing masks. However, it will be difficult if the AI is equipped to track a citizen’s walking style.”
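To see why occlusion and face paint can work, consider a minimal sketch using OpenCV’s stock Haar-cascade face detector, which matches symmetric light-and-dark patterns around the eyes, nose bridge and mouth. This illustrates the general principle only; it is not the software any Indian police force is known to use, and the image file names below are hypothetical.

```python
# A minimal sketch, assuming OpenCV (pip install opencv-python) and two
# local test images. A Haar-cascade detector matches symmetric patterns of
# light and dark (eyes, nose bridge, mouth); masking or painting over those
# regions removes the features it relies on.
import cv2

# Stock cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(image_path: str) -> int:
    """Return the number of face-like regions the detector finds."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Hypothetical file names, for illustration only.
print(count_faces("face_plain.jpg"))   # typically 1
print(count_faces("face_masked.jpg"))  # often 0: the mask hides key features
```

Modern deep-learning recognizers are more robust to partial occlusion than this classical detector, which is why Kodali’s caveat about gait tracking matters: a system that identifies people by how they walk is not fooled by anything done to the face.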

Even when facial recognition doesn’t work, it’s dangerous

Facial recognition is a dangerous tool in terms of privacy and security, but it can pose a threat even when it doesn’t work properly, as there is little room for oversight and appeal. Bengaluru-based Pranay Prateek, co-founder of SigNozIO, tweeted a thread about his experience with automated policing systems based on computer vision when he was caught jumping a red light. “The traffic policeman had a device with fines registered on my vehicle number. There were some 3-4 fines which I knew nothing about. Interestingly, there was one fine for triple riding,” Prateek said. “This surprised me - as I never triple ride. When I asked the traffic policeman - as to who registered this - he said that it is automatically detected by CCTV camera.”

Prateek, who has worked with computer vision in the past, noted that these algorithms can often be very inaccurate. When challenged, the policeman told Prateek to go to the station, which he chose not to do to save time. But, he added, when our photos are being used to raise fines, it is unacceptable that people cannot access this information either at the time of being challaned or through a public website.
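The failure mode Prateek describes is easy to picture: an automated pipeline that converts every detection above some confidence threshold straight into a fine, with no human review. The sketch below is a made-up illustration of that pattern; the detections, threshold and vehicle numbers are all assumed, not taken from any real system.

```python
# A made-up sketch of an automated challan pipeline with no review step.
# Every detection above a fixed confidence threshold becomes a fine; a
# 58%-confidence guess like "triple riding" is treated the same as a
# near-certain one. All values below are invented for illustration.

detections = [
    {"vehicle": "KA-01-AB-1234", "violation": "red_light",     "confidence": 0.93},
    {"vehicle": "KA-01-AB-1234", "violation": "triple_riding", "confidence": 0.58},
    {"vehicle": "KA-05-CD-5678", "violation": "no_helmet",     "confidence": 0.71},
]

THRESHOLD = 0.5  # a low bar: anything above it is fined automatically

for d in detections:
    if d["confidence"] >= THRESHOLD:
        # With no human check, the rider discovers the fine only when stopped.
        print(f"Fine issued: {d['vehicle']} for {d['violation']} "
              f"(confidence {d['confidence']:.0%})")
```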

The AFRS was brought in by the Delhi Police in March 2018 to trace missing children. The same technology has since seen mission creep and is now being used to identify habitual offenders and “rabble rousers”. Except there’s not much reason to believe it actually works. Just six months after the technology came into use, the Delhi High Court was informed that the system’s accuracy was only 2%, according to the Delhi Police counsel. In August 2019, just before the NCRB floated a tender for a National Automated Facial Recognition System, the Ministry of Women and Child Development told the court that the number of children matched using AFRS was less than 1%, and that the system would even match pictures of boys with girls.
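Even a far more accurate system would mostly flag innocent people when pointed at a crowd, because of the base-rate problem: the people the police are actually looking for are a tiny fraction of those being scanned. The numbers in the sketch below are assumptions chosen to be generous to the technology, not figures from the Delhi Police.

```python
# A back-of-the-envelope sketch of the base-rate problem. All numbers are
# assumed, and deliberately far more generous than the 2% accuracy reported
# for Delhi's AFRS.

crowd_size = 10_000          # faces scanned at a protest (assumed)
watchlisted = 10             # watchlisted people actually present (assumed)
true_positive_rate = 0.90    # chance a watchlisted face is correctly matched
false_positive_rate = 0.01   # chance an innocent face wrongly matches

true_hits = watchlisted * true_positive_rate
false_hits = (crowd_size - watchlisted) * false_positive_rate

precision = true_hits / (true_hits + false_hits)
print(f"True matches:  {true_hits:.0f}")    # 9
print(f"False matches: {false_hits:.0f}")   # ~100
print(f"Chance a flagged person is really on the list: {precision:.1%}")  # ~8%
```

Under these generous assumptions, more than nine out of ten people flagged would be innocent; with a 2%-accurate system, a database of “identified” protesters is close to noise.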

The technology was provided to the Delhi Police by Innefu Labs, a Delhi-based technology company which, apart from facial recognition, also offers social media analytics for law enforcement and is a proponent of predictive policing, which uses videos and images along with social media posts to try to predict where crimes will take place. Social media snooping is already happening in India with little to no oversight, and the proposed Data Protection Bill doesn’t have any safeguards against being spied on by the government.

Predictive policing, on the other hand, has been widely criticized around the world, and many academics have pointed out that it is “simplistic and harmful”.

“Predictive policing algorithms have the potential to increase accuracy and efficiency, but they also threaten to dilute the reasonable suspicion standard and increase unintentional discrimination in a way that existing law is ill-equipped to prevent,” writes Lindsey Barrett in the NYU Review of Law and Social Change. “If predictive policing technology were reliably accurate, the methodology were transparent, and law enforcement officers and judges understood its limitations, then the use of predictive policing at the border might not threaten individual rights. As it stands, the use of an unpredictable and poorly understood technology, in an area of law where a highly malleable evidentiary standard can justify a substantial intrusion on individual rights, poses a colossal problem.”

So, an ineffective system that can’t properly identify people is nevertheless being used by the Delhi Police, and doubtless other police departments around the country, to build a database of protesters. Once you’re “identified” by the database, you are vulnerable to punitive action. As a reminder, the Pegasus surveillance software, which was used to spy on Dalit rights activists and lawyers, was most likely deployed by the Indian government itself.
