
Can AI Systems Be Unethical? The Answer Is Yes

Artificial intelligence systems can acquire human biases during their machine learning phase. What we need are frameworks to neutralise these biases.
Representative image: gorodenkoff via Getty Images

Recently, David Heinemeier Hansson, creator of Ruby on Rails, tweeted that he was given an Apple Card credit limit 20 times higher than his wife's, even though they file joint tax returns and their credit histories match up. Apple co-founder Steve Wozniak replied to the same thread, saying he got 10 times the credit limit of his wife even though they have no separate bank accounts, credit cards, or assets.

In his tweets, Hansson noted: “She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like ‘I don’t know why, but I swear we’re not discriminating, it’s just the algorithm’.”

“It’s just the algorithm” is, of course, not a very useful response. Algorithms aren’t just being used for advanced computing; they now determine every aspect of our lives, from whether we can get a loan, to whether we’re eligible for insurance discounts, to whether we’ll be grilled by the police. And they can very easily make mistakes.


This isn’t new either; we’ve seen these issues come up again and again over the years. When Google Photos launched image recognition in 2015, it labelled photos of two Black people as gorillas. Google was quick to put out a press release stating it was taking care of the issue, but the problem remained unresolved as late as 2018, and its fix was simply to remove the word gorilla from the system altogether!

One of the simplest examples of how harmful unconscious design decisions can be is the soap dispenser that only worked for people with light skin. It worked by emitting a small beam of light; when enough of that light was reflected back at a sensor, it released some soap. Dark skin did not reflect enough light back, so no soap came out.
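To see how a single calibration choice can bake in that kind of failure, here is a purely hypothetical sketch of the threshold logic such a sensor might use (the numbers and names below are invented for illustration, not the actual product’s firmware):

```python
# Hypothetical threshold logic for a light-based soap dispenser.
REFLECTANCE_THRESHOLD = 0.6  # implicitly calibrated only on light skin

def should_dispense(reflected_light: float) -> bool:
    """Dispense soap only if enough of the emitted beam bounces back."""
    return reflected_light >= REFLECTANCE_THRESHOLD

print(should_dispense(0.8))  # lighter skin reflects more light -> True, soap comes out
print(should_dispense(0.4))  # darker skin reflects less light  -> False, no soap
```

Nothing in that code is malicious; the harm comes from a threshold that was only ever tested against one kind of hand.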

A soap dispenser failing because of a poorly thought-out algorithm might seem trivial, but we’re using algorithms to determine just about everything now. Facial recognition systems are based on machine-learning computer vision algorithms and are being used to interrogate and detain people, even though, by the government’s own admission, the systems are only 2-3% accurate.

How algorithmic bias leads to unethical situations

We think of these algorithms as impartial, carefully thought-out systems, but that is misleading. Machine-learning systems are not the omniscient intelligences technologists often present them as; a better metaphor is that they are like babies that start off knowing very little.

They then learn the lessons that we teach them, often including things we never intended to teach: a model can end up memorising quirks and noise in its training data instead of the underlying pattern, a problem known as overfitting.
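As a minimal sketch of what that looks like in practice (assuming scikit-learn and NumPy are available; the data here is synthetic), an unconstrained decision tree memorises the noise in its training set and typically scores worse on unseen data than a more constrained one:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)  # true signal plus noise

X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

overfit = DecisionTreeRegressor(max_depth=None).fit(X_train, y_train)  # free to memorise everything
limited = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)     # forced to generalise

# The unconstrained tree looks near-perfect on the data it has seen,
# but that apparent skill usually does not carry over to data it has not.
print("unconstrained:", overfit.score(X_train, y_train), overfit.score(X_test, y_test))
print("depth-limited:", limited.score(X_train, y_train), limited.score(X_test, y_test))
```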

That’s where we come up against the problem of inherent bias in all humans: algorithms can’t be unbiased, because the training data used to build AI systems is itself biased, reflecting the biases of the humans who created it.

Bhuvana M. Koteeswaran, a researcher at the Centre for Internet and Society, explained, “Machine-learning algorithms are created by humans who are themselves biased, and pass this on to the algorithms they create.”

One difference between natural and artificial intelligence is that a person strongly biased against a group of people will usually avoid showing that bias for fear of social censure. An artificially intelligent system feels no such restraint, and because we don’t understand how these systems work, their biased outputs can look like rational, reasonable choices.

This creates situations that are uncomfortable as well as unethical. The systems themselves are not unethical, even if they are biased; the situations they create are.

Koteeswaran, whose research focuses on bridging the gender gap across Indic Wikimedia communities, said, “Women are especially vulnerable when it comes to artificial intelligence because most of them only understand their local language. Government will need to ensure that the systems are localised before they’re implemented.”

Many engineers and data scientists say they do not know how their systems learn or do what they do. The term “unexplainable AI” is often used to describe this situation. It is highly undesirable, especially in sensitive applications where it is essential to be able to explain the system’s decisions.

For example, in the case of artificial intelligence systems used in education, someone must be able to explain how the system is imparting that education in order to refine and improve it.
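As a minimal, hypothetical illustration of what “explainable” can mean in practice (assuming scikit-learn; the feature names and data below are invented), a transparent model such as logistic regression at least lets us read off which inputs drove a decision:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["quiz_score", "hours_watched", "forum_posts"]  # invented feature names
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(300, 3))
y = (X[:, 0] < 0.4).astype(int)  # hypothetical label: student "needs extra tutoring"

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>14}: {weight:+.2f}")  # quiz_score should carry most of the weight
```

With an inscrutable deep model, even this basic level of accountability is much harder to provide.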

A second and more effective method, and the one that should be preferred for avoiding unethical situations, is creating data sets with as little bias as possible. Koteeswaran said, “the bias in training knowledge base can be minimised if we take diverse data in terms of diversity of culture, race, colour, gender, experience, and so on.”

The fresh data a system is exposed to can have biases of its own. To avoid biased outcomes, data sets need to be neutralised in such a way that these biases disappear, or are at least minimised as far as possible.
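As one deliberately crude illustration of what “neutralising” a data set can mean (assuming pandas; the column names and numbers are invented), the sketch below oversamples an under-represented group until every group is equally represented. Real debiasing pipelines are considerably more sophisticated, but the principle is the same:

```python
import pandas as pd

# Invented example: 80 records from one group, only 20 from another.
data = pd.DataFrame({
    "gender":   ["M"] * 80 + ["F"] * 20,
    "approved": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})

target = data["gender"].value_counts().max()
balanced = pd.concat(
    group.sample(target, replace=True, random_state=0)
    for _, group in data.groupby("gender")
)
print(balanced["gender"].value_counts())  # both groups now appear 80 times
```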

Is it possible to prevent unethical situations?

Tony Fish, a visiting lecturer on AI and ethics at LSE, said, “if we teach our kids morality then why not teach the machines? However there is a problem here—we are trying to build the moral philosophy for machines in a couple of weeks when we [humans] have taken almost 50,000 years to do so.”

What this essentially means is that we are entering territory where we have not yet developed a framework or moral philosophy for what we are building. In practice, this means building systems that could become weapons even when the original intent is not malicious.

Every technology is created with the intent of solving human problems. However, it also has the potential to turn into a weapon against those same people. That should not mean we stop creating new technologies; what is needed is a debate on what we want.

As Fish put it, “we need a debate not in terms of right and wrong, but what we want in terms of outcomes.”

Pinaki Laskar, an AI researcher and the founder and CEO of FishEyeBox, a company that helps people compose original music with the help of AI, said, “If businesses want to survive, they will have to do this. If companies can allocate humongous resources to label the private data that maybe they are not even allowed to have, why can’t they allocate resources to ensure that as much of the bias as possible is removed from the training data set?”

Another way of managing ethics is to make public what is contained inside the so-called AI black box. An example is Mozilla’s Common Voice programme. Koteeswaran, who also works on this programme, said, “anyone who wants to use the algorithm has complete access to it. Once the designers know how the algorithm works, they will be better equipped to develop their own unbiased systems over it.”
