Who is responsible when Autonomous Systems fail?

Artificial intelligence is an emerging technology, but what happens when things go wrong? Where does the responsibility lie when autonomous systems fail?

Madeleine Clare Elish explores this very question.

As more autonomous and artificial intelligence (AI) systems operate in our world, the need to address issues of responsibility and accountability has become clear.


In 2018, a self-driving Uber car struck and killed a pedestrian. The company was cleared of criminal wrongdoing, but the human tasked with monitoring the system faced the prospect of manslaughter charges. As Madeleine Clare Elish writes, the outcome of this accident could be a harbinger of what lies ahead.
