Who is liable when Artificial Intelligence fails?

Fondia
July 23, 2019

Ethics and compliance

Take a look at Artificial Intelligence (AI) in the modern age and you might be surprised to learn that it is already all around us. Whether you are shopping for clothes online or browsing your Instagram feed, a frighteningly accurate suggestion or advertisement may show up on your screen.

AI is, however, not limited to online shopping and social media; it is also making its way into transportation, investment and health care. It is already available to those who wish to purchase a self-driving car, entrust a large sum of money to an algorithm to invest, or place their health and well-being in the ‘hands’ of something non-human. The fast-paced incorporation of various AI systems into our daily lives brings about several unforeseen liability issues, ranging from product liability to criminal liability.

Consider the following scenarios: an AI-operated car runs over a pedestrian, a forest harvester that is partly operated by a human cuts down protected trees, or a medical software program that incorporates AI makes an incorrect diagnosis and suggests the wrong treatment. We are left with the question: who is liable for the damage caused in each of these scenarios? The person using the car? The person who programmed and trained the medical software? The manufacturer or the operator of the forest harvester? The developer of the AI system? Or perhaps the AI system itself?

The first case over automated investment losses

The question of liability is a fascinating issue that can be answered with the classic phrase every lawyer learns in law school: ‘it depends’. Nevertheless, the current reality seems to be that the closest human tends to be blamed. For example, an article published by Bloomberg in May 2019 describes an ongoing lawsuit in which a Hong Kong businessman lost over $20 million after entrusting a large portion of his wealth to an automated investment platform. With no legal framework for suing the technology itself, the businessman decided to blame the nearest human: the man who sold him the automated platform.

Although this is the first case concerning automated investment losses to come to light, it is not the first to raise questions about liability for algorithms. In 2018, a self-driving car that Uber was testing in the United States collided with a pedestrian, resulting in the pedestrian’s death. The car had been in autonomous mode at the time of the collision, with a safety driver sitting in the driver’s seat. The case went to court, and a year later Uber was cleared of criminal liability. The safety driver, however, could still face a charge of vehicular manslaughter.

These examples suggest that humans may be used as a ‘liability sponge’, as researcher Madeleine Clare Elish puts it, absorbing legal responsibility in cases involving AI regardless of the extent of their involvement in the course of events. This seems to be the direction we are headed in when handling liability issues where AI is involved. The risk is that it may create uncertainty and hinder the development of AI-based products. Legislators and governmental institutions should therefore provide guidance on the matter to help innovators feel more secure when navigating the minefield-like world of liability.
