Top five AI failures in 2020

This post was originally published by Sidra Ijaz at Medium [AI]

Photo by Possessed Photography on Unsplash

“We must reconcile ourselves to a season of failures and fragments” — Virginia Woolf

Failures are our best teachers. They guide us to make better decisions, make amends, start fresh, and let go of what is not meant for us. Every year we see examples of AI failures from the tech giants, and these failures help define the future of artificial intelligence: what works and what doesn't. In this article, we will explore the top five AI failures of 2020. COVID-19 reshaped the digital world this year, and our reliance on digital technology and artificial intelligence has increased dramatically because of the pandemic. Now, as the year comes to an end, let's look at a few of the AI mishaps it brought.

The infamous Twitter bias

One of the most famous (or infamous) AI failures this year is the bias in Twitter's image-cropping algorithm. The algorithm appeared to favor white faces when cropping image previews. Several tweets showing the same issue went viral: in an image containing both Black and white people, Twitter chose to show the white faces and crop out the others. Here is an example:


Zoom joined the AI bias race

This year has been big for Zoom; COVID-19 has transformed the service into a powerhouse. But Zoom also came under fire, along with Twitter, for algorithmic bias against Black faces. A Ph.D. student, Colin Madland, tweeted about a Black faculty member's trouble with Zoom's virtual background feature: Zoom removed the faculty member's head entirely whenever he used a virtual background. Here is the referenced tweet:


Faulty predictions from the IHME COVID-19 model

The coronavirus pandemic also gave rise to AI-based COVID-19 prediction models. One such model comes from the University of Washington's Institute for Health Metrics and Evaluation (IHME). It predicts the number of deaths, the hospital beds required, and when cases will rise in the US. The US government used this model in policymaking and pandemic response. Over time, however, the model was found to produce faulty predictions. According to this study, IHME's four-week state-level death forecasts for July averaged 191% error.
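To put a figure like "191% error" in context, here is a minimal sketch of one common way such a number is computed: a mean absolute percentage error averaged across state-level forecasts. The metric and all numbers below are illustrative assumptions, not the cited study's actual methodology or data.

```python
# Illustrative only: how an average percent error for four-week-ahead
# death forecasts might be computed. The values are made-up placeholders,
# not real IHME forecasts or reported figures.

def mean_absolute_percentage_error(forecasts, actuals):
    """Average of |forecast - actual| / actual across states, as a percentage."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals) if a > 0]
    return 100 * sum(errors) / len(errors)

# Hypothetical state-level forecasts vs. reported deaths for one four-week window
forecast_deaths = [120, 450, 80, 300]   # model's predictions per state (placeholder)
reported_deaths = [60, 150, 90, 100]    # observed outcomes per state (placeholder)

print(f"Average error: {mean_absolute_percentage_error(forecast_deaths, reported_deaths):.0f}%")
```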

Google's AI health tool failed in the real world

Google is a top investor in AI technology, which makes it a giant in the AI revolution. But the giant recently faced a setback when its AI health tool did not hold up in a real-world setting. Google developed a deep learning model that detects diabetic retinopathy from scanned images of the eye. In the lab and in controlled environments, the tool achieved high accuracy. When it was tested in clinics in Thailand, however, the results were different: accuracy dropped dramatically. According to Google, the main reasons for the failure were environmental factors, such as room lighting. This recalls the similar failure of IBM Watson in healthcare and raises questions about the application of AI in health.

iPhone facial recognition and COVID-19

The year 2020 was the year of the coronavirus, and the new normal demanded that we wear masks everywhere. Unfortunately, this was not taken into account in the design of the iPhone's facial recognition. It is impossible to unlock your iPhone with Face ID while wearing a mask, and the issue remains unresolved in the latest iPhone 12.

Photo by Engin akyurt on Unsplash

Like much of our lives in 2020, most of the top AI incidents revolved around COVID-19. AI demands special attention in testing and QA, with all the social and environmental factors of the real world kept in view, whether they are biases or pandemics.
