
European Commission proposes AI liability rules: Introduction

Roeland de Bruin, Senior lawyer

Exciting news from Brussels:

The European Commission has recently published a proposed set of rules governing liability for Artificial Intelligence (AI), two years after the European Parliament published its own proposal on AI liability. The package consists of two proposals: a revision of the Product Liability Directive (PPLD) and a directive adapting extra-contractual liability rules to AI (AILD). The package is also closely related to the proposed AI Act.

In a series of three blog posts, I share my first observations on the proposals, as these topics are important for AI innovation in the EU and, still, at the top of my mind. Last April I defended my PhD thesis on exactly these themes, titled “Regulating Innovation of Autonomous Vehicles – Improving Liability and Privacy in Europe”. One of my key findings was that the current EU liability and privacy frameworks do not optimally facilitate innovation and acceptance of AI. I proposed several changes to, inter alia, the product liability regime and the national regimes addressing traffic liability. In short, the position of victims, who need to establish fault or defectiveness of products, damage, and causality, must be improved significantly to ensure effective compensation of AI-related damage. At the same time, the likely damage-preventing effects of the proposed regulatory changes could contribute to consumer trust in AI. Trust is necessary for the uptake of AI technology, and thus crucial for successful innovation.

In this first post, I outline the background that motivated the Union legislator to propose the “AI package”, and present a high-level evaluation of the proposed texts. In the second post I will analyse the AILD, and in the concluding post the PPLD will be addressed. These discussions will mostly be in light of the necessary improvement of the regulatory environment for innovators and consumers of AI in Europe.

Background

The current liability rules, i.e. the harmonised product liability framework and the non-harmonised (often fault-based) rules governing compensation of AI-related damage, are often taken to form an obstacle rather than an incentive for innovation. This is for instance expressed in the AI liability package itself, and argued in academic literature, including my own PhD research. For instance, where fault and causality must be proven in order to establish liability under those rules, victims would need extensive knowledge of the underlying self-learning (often very complex and ever-changing) algorithms in order to assess whether or not these caused an AI application to inflict damage. This can be prohibitively difficult. Take for example future fully Autonomous Vehicles (AVs): these will have to be equipped with many sensors and communication devices to enable them to navigate on the road, determine driving behaviour and prevent collisions. That driving behaviour will have to be improved and adapted continuously, through the use of self-learning algorithms. Should an accident nonetheless occur, under current rules a victim would need to prove either a defect in a product or a fault in the driving behaviour, as well as the causal relationship between that defect or fault and the damage that occurred. As technology becomes increasingly complex, through the use of self-learning technology and the growing number of factors that could play a role in the origination of an accident, this will become a herculean task.

At best, proving norm violations and causality will become expensive and time-consuming, and might stand in the way of effective compensation of damage. In turn, this results in unjustified risks for victims, and could negatively impact consumers’ (including victims’) trust in AI, as I argue in my thesis. The uncertainty resulting from the application of fault-based liability rules is also seen to negatively impact the willingness of innovators to invest in AI.

Similar problems follow from the Product Liability Directive (PLD), although it is sometimes questioned whether software and algorithms qualify as ‘products’. Under the PLD, victims need, for instance, to prove the defectiveness of a product to establish liability. Furthermore, victims need to prove the causal relationship between a defect and damage. This cannot be achieved without thorough analysis of ever more complex AI products and the data processed with them. At the same time, there are ample opportunities for producers to escape liability, on the basis of for instance the “later existence defence” (when a defect originated after market introduction of the product) and the “development risks defence” (when a defect could not have been discovered by the producer at the time of market introduction). The consequence is that AI victims are not as well protected as originally foreseen in the PLD, because here, too, the risks are allocated to victims rather than being fairly distributed between them and the producers. This likely negatively impacts consumer trust in the preventive and reparative capacities of the PLD.

Against this background, the EC strives with the introduction of the AI liability package to achieve “an economic incentive to comply with safety rules and therefore contribute to preventing the occurrence of damage”, and to “promote trust in AI […] by ensuring that victims are effectively compensated if damage occurs […]”, by taking “a holistic approach in its AI policy” through the adaptation of the PLD and the adaptation of national (fault-based) liability rules (quotes taken from the Explanatory Memorandum of the proposed AI Liability Directive (AILD), p. 3).

Is the proposed AI package likely to improve trust?

I think that the proposed AILD in its current form does alleviate the evidentiary problems for victims of AI-related damage to a certain extent, through a (rather complex) system of presumptions, as I will elaborate in the second post. However, some significant issues for consumers are left unregulated, including, for instance, those regarding causality between norm violations and damage, and uncertainties regarding compliance with GDPR requirements. Furthermore, the proposed AILD does not resolve certain issues for potential victims of AI-related accidents that result from differences between the national tort liability regimes of the member states, even though the European regulator now has the chance to eliminate those. In France, for example, the risk-liability regime of the Loi Badinter, which regulates the strict liability of drivers or keepers of motor vehicles when their vehicle is involved in an accident, can effectively be applied to AI-powered self-driving vehicles. In the Netherlands, strict liability is only regulated for non-motorised victims of accidents in which a motor vehicle was involved. Motorised victims, however, have to establish liability on the basis of fault liability rules. This part of the system is in my opinion inadequate, as it will hardly be possible for motorised victims to claim compensation from the owner or “driver” of an AV under the current fault-based liability regime. The proposed AILD will not improve these inadequacies of the Dutch regime. Such problems would to a large extent have been resolved under the risk-liability regime that the European Parliament had proposed.

As regards the PPLD, I think that the position of (potential) victims is improved. This results from the fact that software is now explicitly brought within its scope, as well as from the extension to (in)tangible components and related services. Furthermore, the proposed obligations for manufacturers to keep products safe after they have been put into circulation contribute to the improvement, in combination with the procedural and evidentiary aids and the limitation of the later-existence defence. I think the PPLD reduces the (potential) victim’s risk of unjustified under-compensation, and that the preventive effects of the PPLD regime can contribute to safer AI products. In turn, this might contribute to consumer trust in AI-related technology. A point of improvement could be a limitation of the development risks defence to the extent that a producer is able to “fix” a problem discovered after the marketing of a product, even where he could not have done so at the time of putting the product into circulation. All in all, while the EU legislator could have been a bit more ambitious in the proposed AILD, the PPLD can be considered a major improvement for consumers.

This post was authored by Mr. dr. Roeland de Bruin. Roeland de Bruin is a practicing attorney at KienhuisHoving Advocaten, specializing in intellectual property, IT law and privacy, and an assistant professor at the Molengraaff Institute for Private Law. In 2022 he successfully defended his doctoral thesis “Regulating Innovation of Autonomous Vehicles: Improving Liability & Privacy in Europe”, supervised by prof. dr. Ivo Giesen, prof. dr. Madeleine de Cock Buning and prof. dr. Elbert de Jong.
