How can AI develop Bias?

This post was originally published by Sriya Mikkilineni at Medium [AI]

Racial, gender, and other identity biases exist all around us: in the workplace, in political settings, and in everyday society. Unfortunately, artificial intelligence systems have also been found to perpetuate negative stereotypes and biases. But how could a computer replicate society’s biases? Aren’t computers just hunks of metal?

The short answer is that AI algorithms are often trained to replicate and utilize human behavior. For the purposes of this article, a simplified explanation of AI and data is as follows: algorithms use statistics to detect patterns in data sets and apply those patterns to make educated predictions.
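To make that idea concrete, here is a deliberately tiny sketch of "learning a pattern from data and predicting from it." The fruit-order data and the frequency-based prediction are my own invented illustration, not anything from a real AI system:

```python
from collections import Counter

# Toy "training data": a record of past fruit orders.
orders = ["apple", "apple", "banana", "apple", "cherry", "apple"]

# The "model" is just a frequency count of the patterns it has seen.
counts = Counter(orders)

# Its "prediction" is the most common pattern in the data.
prediction = counts.most_common(1)[0][0]
print(f"predicted next order: {prediction}")
```

Real models are vastly more sophisticated, but the core dynamic is the same: whatever pattern dominates the training data dominates the predictions — which is exactly why biased data matters.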

Looking at the issue more closely, there are two main ways the data an AI model receives can cause bias: the data is unrealistic, or it reflects existing prejudices.

Unrealistic Data Sets

Unrealistic data sets may be the result of bad sampling methods. Anyone who has taken a statistics course learns that there are both good and bad ways to create a sample; bad methods include voluntary response and convenience samples. These can create undercoverage, an underrepresentation of part of the population, in the data the AI is trained on. For example, a voice-recognition AI trained only on deeper recordings may struggle to recognize higher-pitched voices, even when the speech is clear. Such a data set fails to represent a large portion of the population (those with higher voices) and can produce bias that favors deeper-voiced groups. In a real-world setting, this can give women (or men with higher voices) a less favorable experience with voice-recognition AI than deeper-voiced users have.
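The undercoverage effect can be simulated in a few lines. Everything here is hypothetical — the pitch values, the convenience-sample cutoff, and the toy "recognizer" that only works near the pitches it was trained on are all stand-ins for illustration:

```python
import random

random.seed(0)

# Hypothetical population: half lower-pitched voices (~120 Hz),
# half higher-pitched voices (~210 Hz).
population = ([random.gauss(120, 15) for _ in range(5000)] +
              [random.gauss(210, 20) for _ in range(5000)])

# Convenience sample: recordings gathered only where pitch < 160 Hz
# (say, from one office), so higher voices are barely covered.
convenience_sample = [p for p in population if p < 160][:1000]
train_mean = sum(convenience_sample) / len(convenience_sample)

def recognizes(pitch, tolerance=60):
    # Toy recognizer: succeeds only near the training distribution.
    return abs(pitch - train_mean) <= tolerance

lower = [p for p in population if p < 160]
higher = [p for p in population if p >= 160]
low_acc = sum(recognizes(p) for p in lower) / len(lower)
high_acc = sum(recognizes(p) for p in higher) / len(higher)
print(f"accuracy on lower voices:  {low_acc:.0%}")
print(f"accuracy on higher voices: {high_acc:.0%}")
```

Because the training sample undercovers higher-pitched speakers, the recognizer's accuracy collapses for exactly that group — the same shape of failure the voice-recognition example describes.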

Data With Pre-Existing Biases

Even if a data set is properly collected to represent reality, bias may still be present because of pre-existing prejudices in society. For example, when Amazon developed an AI tool for recruiting based on historical data, it was found to discriminate against women. The hiring data it was trained on reflected past gender bias in hiring decisions, and the AI model picked up and learned that pattern. Amazon has since discontinued the tool, but the episode goes to show that AI bias has negatively impacted people in real-world settings.
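A small simulation shows how a model can inherit a bias that lives only in historical decisions, not in the candidates themselves. The groups, skill scores, and hiring thresholds below are invented for illustration and have nothing to do with Amazon's actual system:

```python
import random

random.seed(1)

# Hypothetical historical hiring records: skill is identically
# distributed in both groups, but past decisions penalized group B.
records = []
for _ in range(2000):
    group = random.choice(["A", "B"])
    skill = random.random()
    hired = skill > (0.5 if group == "A" else 0.7)  # biased past decisions
    records.append((group, skill, hired))

def hire_rate(group):
    # A naive "model": the hire rate per group, learned straight
    # from the historical labels.
    outcomes = [h for g, _s, h in records if g == group]
    return sum(outcomes) / len(outcomes)

print(f"learned hire rate, group A: {hire_rate('A'):.0%}")
print(f"learned hire rate, group B: {hire_rate('B'):.0%}")
```

Even though both groups are equally skilled by construction, the model reproduces the historical gap, because the gap is baked into the labels it learns from. This is the mechanism behind "the AI picked up and learned this pattern."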

These are just two ways that AI can develop and replicate human biases. Society has been built around such biases, but with each new day, more and more people are speaking out against discrimination. In the same way, it is our job to work toward reducing bias in AI. In the next part, I will address some ways to do that.

