Why is AI so smart and yet so dumb?

This post was originally published by Richard Cornelius Suwandi at Towards Data Science

The reason behind Moravec’s Paradox.

Photo by Stephen Andrews on Unsplash

Have you ever wondered why AI finds it so easy to do things we find very hard, yet struggles with things we find very easy?

In the 1980s, an AI researcher named Hans Moravec wondered the exact same thing. As Moravec put it:

“It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

This curious observation later became known as Moravec’s Paradox. And even though it was formulated more than 30 years ago, it is still relevant today.

Photo by Amanda Jones on Unsplash

Don’t get me wrong, AI certainly has come a long way. We’ve seen AI beat the world champion at the board game Go, the quiz show Jeopardy!, poker, and the video game Dota 2. But on the flip side, AI still has a hard time understanding a joke or interpreting people’s emotions.

Therefore, the big question is: “Why does AI struggle with the simple?”


The reason behind Moravec’s Paradox.

Photo by Stephen Andrews on Unsplash

At the most basic level, the reason for Moravec’s Paradox is simple: we don’t know how to program general intelligence (yet). We’re already good at getting AI to do specific things, but most toddler-level skills require learning new things and transferring them to different contexts. Getting machines to do this is one of the goals of artificial general intelligence (AGI).

“Today’s AI is brilliant at very narrow competencies, whereas humans are good at pretty much everything.”— Dr. Sean Holden, Cambridge University

Moravec also pointed out that the simple reason behind this is evolution. Things that seem easy to us are the product of millions of years of evolution. In other words, they only seem easy to us because our species has spent millions of years refining them.

“The oldest human skills are largely unconscious, and they appear to us to be effortless. Consequently, it should not be a surprise that the skills which appear effortless turn out to be computation-heavy and difficult to reverse-engineer in a man-made AI system.”

Photo by Brett Jordan on Unsplash

Moreover, the only way we can teach such an AI is by giving it a set of instructions for a certain task. Since we consciously learned how to do math and win games, we know the exact steps needed to complete these tasks, and so we can teach them to an AI.

But how can you teach an AI to actually see, hear, or smell something? We don’t consciously know all the steps required to perform these tasks, so we can’t break them into the logical steps an AI needs. Hence, teaching these skills to an AI is incredibly difficult.
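The contrast above can be made concrete. Here is a minimal sketch (the tic-tac-toe example and function names are my own illustration, not from the original post): a rule-based task whose steps we can fully write down, next to a perceptual task for which no one can write down the steps.

```python
# A task we can express as explicit steps: checking who won tic-tac-toe.
def tic_tac_toe_winner(board):
    """Return 'X', 'O', or None for a 3x3 board given as three strings."""
    lines = list(board)                                        # rows
    lines += ["".join(row[i] for row in board) for i in range(3)]   # columns
    lines += ["".join(board[i][i] for i in range(3)),               # diagonal
              "".join(board[i][2 - i] for i in range(3))]           # anti-diagonal
    for line in lines:
        if line in ("XXX", "OOO"):
            return line[0]
    return None

print(tic_tac_toe_winner(["XOX", "OXO", "XOO"]))  # prints 'X' (anti-diagonal)

# A task we cannot express as explicit steps: perception.
def is_cat(image_pixels):
    # What explicit rule would go here? Nobody can write one down,
    # which is why perception resisted rule-based AI for decades.
    raise NotImplementedError
```

The first function is a complete, checkable list of instructions; the second is an empty shell, because we perform the task unconsciously and cannot articulate its steps.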


What Moravec’s Paradox actually taught us.

Photo by Liam Charmer on Unsplash

Moravec’s Paradox has surely proven one thing: the fact that we’ve developed AI that beats humans at Go or chess doesn’t mean AGI is just around the corner. But yes, we are one step closer.

It also shows why AI capable of adult-level reasoning is old hat, while AI with vision, hearing, and learning capabilities is new and exciting. Of course, things are shifting as AI begins to overcome Moravec’s Paradox.

More advanced AI is starting to mimic the abilities evolution gave us. For instance, we’ve seen advances in computer vision, such as object detection and facial recognition, which can be thought of as a computer’s equivalent of sight. And thanks to natural language processing (NLP), we now have personal assistants like Alexa that are capable of ‘hearing’ and understanding us. Likewise, AI is becoming capable of speech, as we’ve seen in these assistants and in developments like Google Duplex.


The impact of Moravec’s Paradox and the future of AI.

Photo by Andy Kelly on Unsplash

While the ultimate goal of achieving AGI remains elusive, Moravec’s Paradox has had a significant impact on our present world. Contrary to traditional assumptions, it suggests that the reasoning we consider high-level requires very little computational power, while the sensorimotor skills we consider low-level require enormous computational power. With this in mind, as computational power increases, machines could eventually match and exceed human capability.

Ultimately, AI has seen highs and lows. It is a field saturated with ethical questions and scientific challenges. And although research on multitasking machines and AI with transferable skills is heating up, the debate continues over whether true human-level machine intelligence is feasible (or desirable).

“No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.” — Marvin Minsky
