Thoughts on AI: Will a bias-free AI even be human-like?

This post was originally published by Samagra Sharma at Medium [AI]

Over the past few weeks, there has been much debate about the biases embedded in our day-to-day lives and how we should tackle them. The reckoning was largely spurred by the ongoing protests against police brutality in the United States.

Tweets and comments everywhere promoted the idea of a bias-free space. The Artificial Intelligence community wasn’t left untouched: someone probed the existing language models for the biases they had picked up. Here is how it turned out:

Well, this certainly looks bad and needs to be tackled. What followed was a series of comments and tweets promoting the importance of bias-free datasets and algorithms for intelligent agents. This chain of events made me think about a subtler question: aren’t these biases part of what makes us human? Think of it this way: when you write computer programs, you tend to create modules, functions, and objects.

All of these constructs are, at heart, forms of abstraction. We came up with them to make programming easier for ourselves; we don’t always need to build solutions bottom-up. Abstractions let us, as programmers, reuse structures we have spent hours creating and build on top of them. Don’t biases play a similar role in human reasoning? We don’t run a Bayesian inference loop for every decision we make; we simply have preferences, and we choose based on them.

Let me invite you to a thought experiment, a slight modification of the famous trolley problem. Imagine you are standing at a railway junction next to a lever that switches the train’s path. In this hypothetical scenario, owing to certain unknown events, on one route lies your beloved parent tied to the track, and on the other lie any five participants of the 1927 Solvay Conference. For me, they are Einstein, Bohr, Heisenberg, Haber, and Madame Curie. You have just a minute to decide whom to save. What will you do?

The trolley problem is a thought experiment in ethics that models a moral dilemma. It is generally seen as a classic clash between two schools of moral thought: utilitarianism and deontological ethics.

I am pretty sure you won’t be doing a full-blown decision-theoretic analysis of the scenario. If you are anything like me, we would now be living in a pre-quantum-mechanical, pre-relativistic world, with all those pioneers dead. That decision would be the product of a combination of biases, such as proximity bias and self-serving bias. A purely rational agent operating the lever, of course, would have saved the five leading scientists, given their expected utility to the world.
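To make the contrast concrete, here is a toy sketch of the two deciders. All utility numbers are invented purely for illustration; real human preferences are obviously not a lookup table.

```python
# Toy model: an impartial expected-utility maximizer vs. a kin-weighting
# (proximity/self-serving bias) chooser in the modified trolley scenario.
# Every number here is made up for illustration only.

def expected_utility(people, utility_per_person):
    """Total (made-up) utility of saving a group of people."""
    return sum(utility_per_person[p] for p in people)

scientists = [f"scientist_{i}" for i in range(5)]

# The impartial agent values every life equally.
impartial = {"parent": 1.0, **{s: 1.0 for s in scientists}}

# The biased human inflates the value of kin by an arbitrary factor.
biased = dict(impartial, parent=10.0)

def choose(utilities):
    """Pull the lever toward the track whose occupants are worth less losing."""
    if utilities["parent"] > expected_utility(scientists, utilities):
        return "save_parent"
    return "save_scientists"

print(choose(impartial))  # the impartial agent saves the five scientists
print(choose(biased))     # the kin-weighted agent saves the parent
```

The point of the sketch is not the arithmetic but the structure: the bias lives entirely in the utility table, not in the decision rule, which is exactly how a learned dataset bias shapes an otherwise "rational" model.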

The idea I want to push forward is that these biases are a fundamental part of our decision engine. Our experiences manifest themselves as biases. Each of us carries their own version of subjective reality, in which these biases have become heuristics for quick decision making. Just as our bodies build reflexes at the spinal cord that let us react immediately to external stimuli, these biases are the heuristics of a fast decision engine.

When we argue for removing these biases from intelligent systems, aren’t we pushing them away from human-likeness? Do we want them to have a knowledge base reflecting the real world, or an ideal world far from reality? Do we want them to make humane decisions, or idealistic utility-maximizing choices? I don’t think drawing curtains over a harsh reality is a wise choice, given that the final test of these agents’ performance will be in the real world. Perhaps instead we can attach a sense of morality to the agents, so that they are aware of the existing biases and can still judge the moral correctness of their decisions and opinions.
