AI Liability Directive

AI technology is developing rapidly, and it is not easy for lawmakers to keep up. An increasingly important issue for end-users, companies, and the justice system alike has been liability for damage related to AI.

AI systems are already in widespread use and, unfortunately, there have already been cases where their use has caused damage to individuals. We have seen examples of wrong and biased decisions by authorities using AI systems when deciding on welfare and social security benefits, or by banks when granting loans, and the list goes on.

As the EU continues to regulate the digital market and economy, an AI Liability Directive has been proposed with the purpose of strengthening the legal protection of individuals suffering damage caused by AI systems.  

What?

The AI Liability Directive aims to establish rules on non-contractual civil liability for damage caused with the involvement of AI systems. Its purpose is to tackle the difficulties of presenting evidence in cases involving AI. The Directive therefore aims to ensure that justified claims are not hindered by these evidentiary difficulties and that individuals suffering damage caused by an AI system have the same chances of obtaining compensation as those suffering damage in any other way.

Why?

Under current national legislation, a person or company who has suffered damage needs to show that someone acted wrongfully or failed to act when they could have. Applied to AI, this can be incredibly difficult to prove if the AI system acts autonomously and without human insight into its workings. The inputs and “acts” of the AI may even be completely invisible, in what is called a “black box” AI system.

The AI Liability Directive therefore aims to provide rules better adapted to AI, ensuring that justified claims are not hindered by evidentiary difficulties.

How?

Two examples of what the Directive proposes are 1) giving courts the possibility to order AI providers to disclose evidence and 2) introducing a presumption of causality.

1) Possibility to order disclosure of evidence

The Directive proposes that a court should be able to order the provider of an AI system to disclose evidence about high-risk AI systems (as defined by the risk categories proposed in the AI Act) claimed to have caused damage. When doing so, the court must limit the order to what is necessary and proportionate, taking other interests into account, such as trade secrets and even national security information.

As for non-high-risk AI systems, the presumption of causality (described below) only applies when the court determines that it is excessively difficult for the claimant to prove the causal link.

2) Burden of proof and presumption of causality

To help ensure that injured individuals receive compensation when justified, the Directive proposes a presumption of causality between the fault in the AI system and the damage suffered. The fault consists of some form of non-compliance with Union or national law directly intended to protect against the damage that occurred, and the presumption is that this fault influenced the output of the AI system, or the AI system's failure to produce an output.

The claimant still needs to show that they have suffered damage caused by the output of the AI system. But the claimant does not have to show that the output causing the damage was due to a fault in the AI system, unless the claimant could prove this with reasonable effort (for example, by requesting the court to order the disclosure of evidence).

Put simply, 1) below is presumed if the claimant can show 2) below.

1) The fault in the AI system affected a certain output.

2) The output of the AI system caused damage to an individual.

The Directive is still only a proposal: it first needs to be finally approved and then, since it is an EU directive, implemented into national law. The AI Act has not yet been approved either but is expected to be adopted soon.

Unlike the AI Liability Directive, the AI Act (once in force) will be directly applicable throughout the EU, with no need for national implementation. It is expected to be finally approved by the end of 2023, which would mean it becomes applicable in 2025.

Fondia recommends that our customers implement and/or develop AI-driven tools consciously and take a holistic approach, to avoid unnecessary risks, or even sanctions or damages.

Fondia’s Data and AI team continues to monitor the development of this directive and other related regulations. Please feel free to contact us if you have any questions or need help.  
