Attempts to map out AI at the European Union level and possible mitigation of liability

Fondia
Blogs July 24, 2019

Ethics and compliance

Artificial Intelligence (AI), as discussed in my previous blog post, is rapidly becoming part of everyone’s lives. As a result, the European Union (EU) is looking for the best way to deal with the increased automation of our lives.

On 8 April 2019, the High-Level Expert Group on AI (AI HLEG), set up by the European Commission, released a document called ‘Ethics Guidelines for Trustworthy AI’. In the document, the AI HLEG identifies three elements that should be scrutinized when assessing the trustworthiness of an AI system, and it should be noted that these elements must be present throughout the system’s entire life cycle. AI should be:

1. lawful, complying with all applicable laws and regulations;

2. ethical, ensuring adherence to ethical principles and values; and

3. robust, from both a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.

The first element is an obvious but essential one in any attempt to allocate liability to AI or to the actors involved. These guidelines are the first step in mapping out how AI should function in society and what expectations should be set for such technology, and they explicitly urge developers (producers) to apply the requirements throughout the production process. They also help the EU as a whole pursue one of its main goals: harmonization.

Can innovators 'reduce' their liability?

A key principle in determining and eventually allocating liability in cases involving AI is the principle of explicability, which calls for AI systems to be transparent towards the user. If an AI system fails, it must be possible to find out what caused the failure; when the system’s decision-making lacks transparency, users’ trust diminishes and identifying the liable party becomes challenging. The term ‘black box’, often associated with airplanes, applies to algorithms as well: the steps leading up to an error in the AI’s decision-making need to be traceable, whether the error is an inaccurate user recommendation in an online store or, in more serious circumstances, a fatal mistake by an autonomous vehicle. By following the new guidelines for ‘trustworthiness’, producers of AI systems could presumably mitigate their liability in a worst-case scenario, since they would be in compliance with the guidelines set by the European Commission.
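To make the traceability idea more concrete, the minimal sketch below shows what a decision audit trail might look like in practice. The function and field names (log_decision, model_version, explanation, and so on) are hypothetical and are not prescribed by the guidelines; the point is only to illustrate the kind of record that would let the steps leading up to a faulty decision be reconstructed afterwards.

```python
import json
import time
import uuid


def log_decision(decision_log, model_version, inputs, output, explanation):
    """Append one traceable record per automated decision.

    Hypothetical audit-trail helper: the guidelines do not prescribe any
    particular format. Each record captures what the system decided, on
    which data, with which model, and why, so an error can be traced later.
    """
    record = {
        "decision_id": str(uuid.uuid4()),   # unique reference for later review
        "timestamp": time.time(),           # when the decision was made
        "model_version": model_version,     # which model or configuration decided
        "inputs": inputs,                   # the data the decision was based on
        "output": output,                   # what the system decided
        "explanation": explanation,         # human-readable reason, if available
    }
    decision_log.append(json.dumps(record))
    return record


# Example: recording a product recommendation so it can be audited later.
audit_log = []
log_decision(
    audit_log,
    model_version="recommender-2019.07",
    inputs={"user_id": "u-123", "recently_viewed": ["running shoes"]},
    output={"recommended_item": "trail shoes"},
    explanation="similar customers who viewed running shoes bought trail shoes",
)
```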

A concrete example of a producer proactively trying to mitigate liability is Tesla’s Autopilot mode, in which the car operator is required to ‘apply force’ to the steering wheel every 15 to 20 seconds, depending on the speed of the car. In this way, Tesla has arguably reduced the potential for operator inattentiveness, making the argument that liability shifts from the human to the autonomous vehicle (and eventually to the manufacturer) less credible, at least for vehicles made by Tesla. Tesla is not the only manufacturer working on self-driving cars, which suggests this technology will soon be far more mainstream than it is today. Tesla has clearly considered future liability questions by designing a system that requires attention and action from the person sitting in the car, arguably making the operator liable if they rely solely on the Autopilot and expect no mistakes from the technology.
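As a rough illustration of this kind of attentiveness check, the toy sketch below models an operator-attention monitor that expects steering-wheel input within a fixed interval and escalates when it is missing. This is not Tesla’s actual implementation; the interval, thresholds, and escalation steps are assumptions made purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class AttentionMonitor:
    """Toy sketch of an operator-attention check.

    Assumed behaviour only: the operator must apply measurable force to the
    steering wheel within a given interval, otherwise the system warns and
    eventually disengages assistance, handing control back to the operator.
    """
    required_interval_s: float = 15.0   # assumed interval between required inputs
    last_input_time_s: float = 0.0

    def register_wheel_torque(self, now_s: float) -> None:
        # Operator applied force to the wheel: reset the countdown.
        self.last_input_time_s = now_s

    def check(self, now_s: float) -> str:
        elapsed = now_s - self.last_input_time_s
        if elapsed <= self.required_interval_s:
            return "ok"
        if elapsed <= self.required_interval_s * 2:
            return "warn_operator"        # visual/audible reminder to hold the wheel
        return "disengage_assistance"     # stop assisting and require manual control


# Example: no wheel input for 40 seconds with a 15-second requirement.
monitor = AttentionMonitor()
monitor.register_wheel_torque(now_s=0.0)
print(monitor.check(now_s=10.0))   # -> "ok"
print(monitor.check(now_s=25.0))   # -> "warn_operator"
print(monitor.check(now_s=40.0))   # -> "disengage_assistance"
```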