
Little Brother Is Watching: India Is Overrun By Cheap Snooping Systems

Facial recognition systems being used in India can’t tell boys from girls. But that could be better than a system that actually works.
Representative image. (Photo by Mike Kemp/In Pictures via Getty Images)

BENGALURU, Karnataka — In April last year, the Delhi Police claimed to have found 3,000 missing children using an experimental facial recognition system. A year later, the Ministry of Women and Child Development told the Delhi High Court that the system was so glitchy that it mistook boys for girls.

The Delhi Police’s experience is a useful reality check at a time when state police forces across India have embraced facial recognition technology, often at the urging of a booming surveillance startup sector.

While the Chinese government has attracted criticism for its centralised, all-encompassing facial recognition dragnet, in India a similar network is slowly falling into place in bits and pieces, as federal and state agencies contract out discrete projects to different companies. The National Crime Records Bureau (NCRB) is pushing for an automated facial recognition system (you can read the details of the RFP here).


The Punjab police has won awards for its deployment of a facial recognition system, PAIS, meant to track organised crime, while the UP police has also won a FICCI Smart Policing award for its app Trinetra, developed by Staqu, the same company that made PAIS.

The end result is a patchwork of systems that have the potential to add up to something as intrusive as China’s much-feared grid. And unlike in China, the gradual expansion of the technology in India has occurred without anyone really noticing.

Will this much-touted grid actually make our cities safer? Or will it simply become one more way for India’s poorly trained, politically addled law enforcement agencies to create a climate of fear and target innocent civilians?

A growing body of technologists and ethicists fear the latter, even as tech startups insist their products are a cheap, effective way to improve policing.

“This is worrying. Facial recognition systems are a fundamental threat to privacy by their very nature. These systems collect data (your face) often without consent, without your knowledge, and mines it to a point of relevance that individuals have no control over,” tweeted Vidushi Marda, a lawyer and research analyst at Carnegie India. “To be clear—perfectly accurate and theoretically fair facial recognition systems are still problematic. Still inconsistent with the right to privacy under the Indian constitution.”

Whose data is it anyway?

State police departments have toyed around with facial recognition systems for years, but the technology really entered the public conversation when the NCRB announced it wanted to build an automated facial recognition system, to be deployed nationally.

Some of the sharpest criticism, ironically, has come from Indian companies that hoped to build it for the government.

Atul Rai, the CEO of Staqu, said the RFP was heavily biased against Indian companies, and could potentially create security issues.

Rai pointed out that the RFP requires a bidder to have an annual turnover of at least Rs 100 crore in each of the last three financial years, and a team of at least 50 coders. “They are not interested in how effective the solution is, but rather how big the company is,” he said.

“The criteria given are very biased, and this is a new field in India, so the most innovative companies are the ones who will not be able to participate,” Rai added.

Rai said the RFP requires compliance with the standards of the US National Institute of Standards and Technology (NIST).

“The whole document is very skewed towards US companies, and if they get the contract then not only are you not promoting Indian companies or helping Indian startups grow, but also, all the data could end up in the US,” Rai said.

“Those companies are also working for the US government and selling products to the intelligence agencies there,” he added. “To ensure that the data is properly secure, NCRB should be looking to work with Indian companies.”

Widespread adoption already happening

While the NCRB proposal has attracted much-deserved criticism, the widespread adoption of CCTV cameras in malls, offices, tiny neighbourhood grocery stores and many public spaces has created a ready market for companies like FaceOrbit, a security company that works with malls and recently installed its system across the outlets of a telecom company.

“Wherever you get video from, we can analyse it and help you take the appropriate action,” said Sanjay Sinha, founder and chief mentor of FaceOrbit. “We can do gesture analysis, face recognition, weapon detection, so if there’s any security issue, it can be flagged automatically.”
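Under the hood, this kind of automated flagging typically reduces to comparing face encodings extracted from each video frame against a watchlist. Below is a minimal sketch of that idea using the open-source face_recognition library; the image filenames are hypothetical, and this is not FaceOrbit's actual code.

```python
# Minimal sketch of watchlist matching against a CCTV frame.
# Not FaceOrbit's code; uses the open-source face_recognition library.
import face_recognition

# Precompute 128-dimensional encodings for a hypothetical watchlist.
watchlist = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["person_of_interest_1.jpg", "person_of_interest_2.jpg"]
]

# Analyse one frame: locate every face and compare it against the watchlist.
frame = face_recognition.load_image_file("cctv_frame.jpg")
for location in face_recognition.face_locations(frame):
    encoding = face_recognition.face_encodings(frame, [location])[0]
    distances = face_recognition.face_distance(watchlist, encoding)
    if distances.min() < 0.6:  # 0.6 is the library's default match threshold
        print(f"Flagged face at {location} (distance {distances.min():.2f})")
```

Gesture analysis and weapon detection follow the same basic pattern: a model scores each frame, and a threshold decides what gets flagged for human attention.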

Gurgaon-based ShepHertz has built a facial recognition-powered video surveillance system for schools. It promises to detect intruders, manage attendance, track staff, and offer a comprehensive analytics dashboard, as shown in an accompanying video.

Siddhartha Chandurkar, founder and CEO of ShepHertz, told Digit that the company came up with the idea for the technology after a seven-year-old student of Ryan International School was found murdered inside the school’s premises in September 2017.

Does the technology even work? Should it?

Thus far, facial recognition has proven to be ineffective across the world. A lack of studies on effectiveness means that testimony to its presumed efficacy comes from the companies themselves, or from clients who have already spent huge amounts of money (often taxpayer money) on these systems.

“There is now a growing body of scholarship warning against its use,” Kritika Bharadwaj, one of the lawyers in the landmark 2017 Right to Privacy case, recently wrote in HuffPost India. “There is mounting evidence in other countries to show that facial recognition systems are less accurate in identifying ethnic minorities and women, leading to a higher possibility of misidentification—and therefore discrimination—against communities that are already more vulnerable.”

“Facial recognition is incredibly inaccurate,” Marda added. “It is particularly inaccurate for women, and people from vulnerable communities because the way in which these systems are trained is problematic. In 2015 Google Photos mistook an African-American person for a gorilla—this is in a relatively controlled laboratory setting—what happens in the real world with movement, shadows, crowded spaces, hats etc?”

“The problem is that we are fed an illusion of objectivity, accuracy, and there is a strong automation bias—when in truth these systems are clunky, and they disproportionately affect individuals who are most vulnerable.”
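To make the disparity concrete: the bias audits that Bharadwaj and Marda allude to typically compare error rates across demographic groups at a single operating threshold. Here is a toy sketch with entirely made-up numbers, showing how one shared threshold can produce very different false-match rates for different groups.

```python
# Toy illustration with made-up numbers of how disparate accuracy is audited:
# the false-match rate is computed per demographic group at one shared threshold.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical similarity scores for impostor pairs (two different people).
# Group B's scores run higher, mimicking a model trained on skewed data.
impostor_scores = {
    "group_a": rng.normal(0.30, 0.10, 10_000),
    "group_b": rng.normal(0.45, 0.10, 10_000),
}
THRESHOLD = 0.6  # one operating point applied to everyone

for group, scores in impostor_scores.items():
    false_match_rate = (scores > THRESHOLD).mean()
    print(f"{group}: false-match rate = {false_match_rate:.2%}")
# Roughly 0.1% for group_a versus about 7% for group_b: the same system,
# but one community is wrongly flagged far more often.
```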

Beyond that, Marda also argued that even when this technology works, that is not a good outcome.

“Even if the AFRS is perfectly accurate, it is still problematic and contrary to our fundamental rights and principles of criminal justice,” she said.

“The AFRS would necessarily have to surveil everyone in order to be effective at picking out ‘criminals’ as mentioned in the tender document. It wouldn’t work unless the system was able to confirm whether or not you look like a particular suspect... This is a grave threat to privacy and autonomy, and should be reconsidered keeping in mind technical limitations and the legal framework in India.”
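Her point has a simple technical correlate: in a one-to-many (1:N) system, "identification" is just a comparison of every captured face against the full suspect gallery, so even clearing an innocent person requires surveilling them first. A hypothetical sketch, with random vectors standing in for face encodings:

```python
# Sketch of why 1:N identification entails scanning everyone. The encodings
# below are random stand-ins, not real biometrics or any agency's actual data.
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the index of the closest gallery entry, or None if no match."""
    distances = np.linalg.norm(gallery - probe, axis=1)
    best = int(distances.argmin())
    return best if distances[best] < threshold else None

gallery = np.random.rand(1_000, 128)  # hypothetical suspect encodings
crowd = np.random.rand(5_000, 128)    # every face captured at, say, a station

# Declaring someone "not a suspect" still requires capturing their face,
# computing its encoding, and comparing it against the entire gallery.
flagged = [i for i, face in enumerate(crowd)
           if identify(face, gallery) is not None]
print(f"{len(flagged)} flagged, but all {len(crowd)} people were scanned")
```

The output line is the point: however few matches the system returns, the number of people it has biometrically processed is everyone who walked past the camera.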
