
European Commission proposes AI liability rules: the AI Liability Directive

Roeland de Bruin, senior attorney

In a series of three blog posts, I share my first observations regarding the proposed EC “AI package”, a set of proposals that aims to create “trust” among citizens in AI technology developed and deployed in the European Union.

In the first blog post I sketched the background that motivated the Union legislator to propose the AI package. In this second post, I will analyse the proposed directive on the adaptation of extra-contractual liability rules to Artificial Intelligence (AILD). In the third and concluding post, the proposed new Product Liability Directive (PPLD) will be addressed.

Definitions

The AILD applies to “AI systems” as defined in the proposed AI Act (AIA). There, AI systems are defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (article 3(1) AIA). The “techniques” listed in Annex I include, for instance, machine learning approaches, logic- and knowledge-based approaches, and statistical approaches.

A special category is formed by “high-risk AI systems” (article 6 AIA): products or product components “required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II”, as well as the systems specifically referred to in Annex III. Together, these lists cover a broad spectrum of AI-based technologies. For the purposes of this post, I’ll primarily use autonomous vehicles as an illustration, because these will almost certainly be classified as “high-risk”.

The AILD furthermore refers to the AIA for the definition of a “user”, i.e. “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity” (article 3(4) AIA), and of a “provider”, i.e. the “natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge” (article 3(2) AIA).

Non-contractual fault liability

Although its purposes are clear (to create a trustworthy innovation ecosystem for AI in the EU), the text of the proposed AILD is not what you would call an easy read. It contains many complex formulations, and the integration of the definitions from the (proposed) AI Act makes it a puzzling exercise to understand the contents of the evidentiary rules to be harmonised.

What it does not do is harmonise the substantive liability rules, whereas the earlier EP Proposal did provide for a “default” risk-liability regime for AI. This leaves significant differences between the member states intact. When focusing on traffic liability, for instance, these differences are striking. In France, the risk-liability regime of the Loi Badinter, which imposes strict liability on drivers or keepers of motor vehicles when their vehicle is involved in an accident, can effectively be applied to AI-powered self-driving vehicles. In the Netherlands, strict liability is only provided for non-motorised victims of accidents in which a motor vehicle was involved. Motorised victims, however, have to establish liability on the basis of fault liability rules, which are inadequate for autonomous vehicles. Even after the introduction of the proposed AILD, the Dutch regime will likely remain inadequate.

Preservation and disclosure of evidence, article 3 AILD

Article 3 AILD imposes procedural obligations on the provider or its subordinate (in the sense of article 24 or 28 AIA), or on a user of a high-risk AI system. These actors are obliged to preserve and disclose evidence at their disposal in cases where high-risk AI systems have (or are suspected to have) caused damage (article 3(1) AILD). This entails that, for instance, an AV manufacturer or an AV rental company would have to store data processed in the AV’s systems and provide potential victims of a crash with access to them. A (potential) claimant first has to request such evidence from the respective provider or user. Should the request be refused, the (potential) claimant can ask a national court to order the proportionate (section 4) preservation (section 3) and/or disclosure (section 1) of such evidence. The (potential) claimant should underpin his request with facts and evidence “to support the plausibility of a claim for damages” (section 1, last part).

Should a defendant fail to comply with a court order regarding the preservation or disclosure of such evidence, a court must presume the “defendant’s non-compliance with a relevant duty of care” (section 5). This is a rebuttable presumption; what would constitute an effective rebuttal has, however, not been harmonised in the AILD.

As I indicated in my first post and in my dissertation, victims face an uphill battle to prove a norm violation and causality in AI-related liability claims, due to, inter alia, the complexity of advancing AI technology, the myriad of actors involved, and the massive amounts of data processed. I think that the proposed obligations for defendants, in combination with the enforcement mechanism through the proposed presumption of fault, are indeed likely to aid claimants who experience difficulties in securing the information necessary to underpin a liability claim.

What is missing, however, are references to and qualifications of the contents of the potential evidence for a claim. It is, for instance, likely that such evidence would contain information that directly or indirectly relates to a natural person, i.e. personal data in the sense of the General Data Protection Regulation (GDPR). Furthermore, certain “data processing issues” regarding fault-based liability claims remain unresolved, as may for instance follow from the Dutch traffic liability rules. When, for example, a motorised victim seeks redress from the user of an allegedly faulty piece of AV software, he must prove that the user acted negligently or contrary to the applicable traffic rules. In order to do so, it might be necessary to assess the behaviour of the persons in and around the respective vehicles, which constitutes the processing of personal data in terms of the GDPR.

I am not sure whether this general obligation to preserve and disclose such evidence is formulated with the necessary precision to form a “lawful basis” for the processing of personal data in the sense of article 6(1)(c) GDPR, or to lift the general prohibition on processing special category data in the sense of article 9(2)(g) or (f) GDPR. Interestingly, article 3, section 4 AILD explicitly refers to the EU Directive 2016/943 on Trade Secrets, but fails to refer to the GDPR. Given that the Union legislator explicitly mentions the GDPR in, inter alia, the AIA, which even creates a “new” exception allowing the processing of special category data (article 10(5) proposed AI Act), it is at best odd that such mechanisms are not included in the AILD.

Rebuttable presumption of a causal link in the case of fault, article 4 AILD

Article 4 AILD concerns the causal nexus between the output of an AI system, or its failure to produce an output, and the fault of a defendant (i.e. the provider or his subordinate, or the user of an AI system). This provision applies to situations in which it is clear that the (lack of an) output of an AI system caused certain damage (section 1, sub c), and in which it is clear (or a court presumes) that the user or provider of an AI system did not comply with a specific duty of care (sub a). The AIA, for example, creates specific compliance obligations for users and providers of AI systems, such as the requirement to implement appropriate cybersecurity measures. Should it then be established that these measures were not taken (sub a), that the system took wrong decisions which caused damage (sub c), and that, keeping with the example, the lack of cybersecurity led to a system hack which in turn gave rise to the damage-inflicting decisions (sub b), a causal link between the non-compliance and the system’s output may be presumed.

It must be noted that the provisions of this article thus do not concern the causality between the norm violation (or the AI system’s output) and the damage that occurred, which is a default requirement for establishing (and apportioning) liability. This provision rather serves to attribute the defendant’s fault to the malfunctioning AI system.

Even more specific rules apply when the AI system concerned is a high-risk AI system. The causal link may only be presumed in a limited number of cases, which are to be established by the claimant. These cases correspond with the obligations for providers (including subordinates and users) of high-risk AI systems to be included in the AIA. These include, for instance, the use of validated data sets to train their algorithms, requirements regarding the transparency of the algorithms, the need for “human oversight” over the AI system, and the need to implement appropriate cybersecurity and resilience mechanisms. Furthermore, providers need to take “corrective actions” immediately when AI systems do not comply with the rules.

Users of high-risk AI systems can face a rebuttable presumption of “causality” according to section 3, when a claimant proves that the user did not comply with his obligations to monitor the AI system (a), or exposed the AI system to input data that is not relevant in view of the system’s intended purpose (b). Article 4(4) excludes the applicability of this presumption when the defendant can demonstrate that sufficient evidence and expertise is reasonably accessible to the claimant. Article 4(5) limits the scope of the presumption for “regular” AI systems to those cases where “the national court considers it excessively difficult for the claimant to prove the causal link”. Presumptions of a causal link can be rebutted, as follows from section 7.

It must be noted again that the “other” causal relationship, i.e. between a norm violation and the damage that occurred, is explicitly not covered by the AILD (article 4, section 1 under c), which leaves a significant hurdle for victims intact. It can be imagined that there are several potential causes of certain damage, stemming from several (AI-related) sources. Take for example this scenario: several AVs with potential algorithmic errors were involved in the same accident, and/or the victim ignored a red traffic light. It remains necessary to determine the precise cause of the damage that occurred, and it is still up to the victim to prove which specific cause underlies the damage. Failing to do so could leave the victim without compensation.

Concluding observations

All in all, I think that the proposed AILD in its current form does alleviate the evidentiary problems for victims of AI-related damage to a certain extent, through its (rather complex) system of presumptions. However, some significant issues for consumers are left unregulated, including those regarding causality between norm violations and damage, and uncertainties regarding compliance with GDPR requirements. Furthermore, the proposed AILD does not resolve certain issues for consumers, i.e. victims of AI-related accidents, that result from differences between the national tort-liability regimes of the member states, even though the European legislator now has the chance to eliminate those differences.

This post was authored by Mr. dr. Roeland de Bruin. Roeland de Bruin is a practicing attorney at KienhuisHoving Advocaten, specializing in intellectual property, IT law and privacy, and an assistant professor at the Molengraaff Institute for Private Law. In 2022 he successfully defended his doctoral thesis “Regulating Innovation of Autonomous Vehicles: Improving Liability & Privacy in Europe”, supervised by prof. dr. Ivo Giesen, prof. dr. Madeleine de Cock Buning and prof. dr. Elbert de Jong.
