
Introduction to the AI Act

Roeland de Bruin, Senior lawyer

This document serves as a general introduction to the European AI Act, which aims to regulate the emerging and rapidly growing artificial intelligence industry. The Act entered into force on 1 August 2024. Parties have a transitional period to comply with the relevant obligations: the first requirements apply 6 months after entry into force, and the final obligations apply 36 months after entry into force.

Important dates when sections of the regulation are set to apply are as follows:

  • 2 February 2025: Regulations regarding prohibited AI systems.
  • 2 August 2025: Regulations regarding fines and General-purpose AI.
  • 2 August 2026: Regulations regarding high-risk systems based on intended use.
  • 2 August 2027: Regulations regarding high-risk safety components.

The regulation distinguishes between three risk categories. Firstly, certain applications of AI systems are completely prohibited. Secondly, there are applications of AI that are classified as high-risk; these are permitted, provided they meet strict requirements. Thirdly, there are AI applications deemed to carry limited risk, which are therefore subject to fewer requirements. In addition to these three categories for AI systems, the Act contains a separate framework for 'general-purpose AI models', which in turn distinguishes models with systemic risk from models without such risk.

Definition of an "AI System"

Before determining the risk category of a system, it is crucial to first verify whether it meets the definition of an AI system as outlined in the regulation. Given the continuous development of AI, a broad and functional definition has been adopted, in order to ensure that potential future changes in the technology are also covered by the regulation. The starting point for defining the concept of AI is that it concerns a software application. Next, the characteristics that distinguish AI from traditional software are considered.

The primary characteristic that distinguishes AI from traditional software is its ability to infer certain outputs, such as predictions, content, recommendations, or decisions, from the input it receives, with a certain degree of autonomy and without these outputs being predefined by a human. In doing so, AI transcends basic data processing by being capable of learning, reasoning, or modelling. An example is the ability of AI to create an original image based on a text instruction, without the instruction having a predetermined format. This demonstrates that an AI system interprets the text and can convert it into the desired result.

Another significant characteristic of AI is its self-learning capability, both before and after market deployment. This can be achieved through machine learning, whereby AI learns how to achieve specific objectives based on large amounts of data. Finally, it is important to note that AI can exist both as an independent product and as a component of another product.

Relevant parties

The following two parties are assigned the majority of the obligations arising from the regulation:

Provider
A provider is the natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Deployer
A deployer is the natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Authorized representatives

When a provider from outside the EU seeks to introduce a high-risk AI system on the EU market, it must first designate a party established within the EU as a point of contact for certain obligations. This party is referred to as the authorized representative.

Importers & distributors

Importers and distributors of AI systems are also subject to certain obligations. These obligations primarily concern transparency requirements, ensuring that even in a complex supply chain, it can be clearly determined whether the regulation's requirements have been met.

Shifting of provider obligations

Even when a party does not fall under the aforementioned definitions, it may still have obligations under the AI Act. A party shall be considered a provider, and shall therefore be subject to the provider's obligations under the regulation, if it performs any of the following actions:

  • Putting a name or trademark on a high-risk AI system already placed on the market or put into service;
  • Making a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system;
  • Modifying the intended purpose of an AI system which has not been classified as high-risk and has already been placed on the market or put into service, in such a way that the AI system concerned becomes a high-risk AI system.

Extraterritorial scope

The regulation is in certain cases also applicable to AI systems located outside the EU. This is the case when providers or deployers of AI systems have their place of establishment or are located in a third country, while the output produced by the AI system is used in the Union.

This provision is included in the regulation to prevent providers and deployers from circumventing the regulation by establishing themselves in a third country while receiving data from the EU as input, processing it, and then sending the output back to the EU.

Exceptions to applicability

The following AI systems fall outside the scope of the AI Act or are instead governed by sector-specific legislation:

  1. AI systems placed on the market, put into service or used exclusively for military, defense or national security purposes.
  2. AI systems developed and used exclusively for the purpose of scientific research and development.
  3. Certain high-risk AI systems for which it was deemed preferable to adjust sector-specific legislation in order to achieve the intended level of protection, namely:

      • Civil aviation
      • Two- or three-wheel vehicles and quadricycles
      • Agricultural and forestry vehicles
      • Marine equipment
      • Rail systems
      • Motor vehicles and their trailers

Prohibited practices

The following AI systems are banned as they intrinsically violate the fundamental rights of citizens:

  • Any AI system that deploys subliminal techniques with the objective of distorting the behavior of a person, without their awareness. For example, showing an image for less than 50 milliseconds, which could influence behavior but in most cases escapes conscious perception.
  • Any AI system that is designed to exploit vulnerabilities of a certain person or group of persons, for example due to their age or disability.
  • Any AI system that assigns a social score to a person based on their behavior.
  • Any AI system that uses profiling to predict the risk of a person committing a crime.
  • Any AI system that creates or expands facial recognition databases.
  • Any AI system that infers emotions of a person in the workplace or in educational institutions, except when used for medical or safety reasons.
  • Any AI system that categorizes persons based on their biometric data to deduce sensitive information, for example race or religious beliefs. Biometric data refers to all types of data that can be derived from an individual's physical characteristics, such as facial images or fingerprints, as well as gait and typing patterns.
  • Any AI system that uses real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement. For example, an AI system that tracks a suspect of a crime through the live feed of surveillance cameras.

High-risk systems

AI systems can be classified as high-risk in two ways. Firstly, the European Commission maintains a list (detailed in Annex III of the regulation) of intended uses of AI that are designated as high-risk. Examples include the use of AI in managing critical infrastructure or in the admission of students to educational institutions. The list is not fixed, as the Commission retains the authority to add or remove types of intended use.

If an AI system falls under an intended use referred to in the annex but does not pose a significant risk to the health, safety, or fundamental rights of natural persons - for example, because it does not materially influence the outcome of decision-making and merely serves as a minor addition to human activity - the AI system shall not be considered high-risk. The provider is responsible for substantiating this claim and presenting its assessment to the relevant authorities.

Secondly, an AI system is considered to be high-risk when both of the following conditions are fulfilled:

  • The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by EU legislation listed in Annex I of the regulation. This includes products such as toys, elevators, and medical devices (for a complete list, see the high-risk document from Kienhuis Legal).
  • According to the relevant EU legislation, the safety component of the product, or the AI system as the product itself, is required to undergo a third-party conformity assessment. This is generally required when the relevant market surveillance authority deems the product to pose a risk to the health or safety of individuals.

Furthermore, Annex I, which lists the various products, is divided into two sections. The second section lists products where AI systems are considered high-risk, but which are not covered by the AI Act; instead, they are regulated by sector-specific legislation, as previously mentioned under 'exceptions to applicability.'
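
To make the two classification routes more concrete, the simplified sketch below captures the decision logic described above in code. The field names and the reduction of each condition to a simple yes/no flag are illustrative assumptions only; the actual assessment under the regulation is considerably more nuanced.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_iii_intended_use: bool           # intended use listed in Annex III
    significant_risk: bool                 # materially influences outcomes (exception does not apply)
    annex_i_safety_component: bool         # (safety component of) a product under Annex I, Section A
    third_party_assessment_required: bool  # sectoral law requires third-party conformity assessment

def is_high_risk(system: AISystem) -> bool:
    """Simplified reflection of the two high-risk routes described above."""
    # Route 1: an Annex III intended use, unless the significant-risk exception applies
    if system.annex_iii_intended_use and system.significant_risk:
        return True
    # Route 2: a safety component of (or itself) an Annex I product that must
    # undergo a third-party conformity assessment
    if system.annex_i_safety_component and system.third_party_assessment_required:
        return True
    return False

# A student-admission system that materially influences admission decisions (Annex III use)
print(is_high_risk(AISystem(True, True, False, False)))   # True
# The same intended use, but the system merely supports a human decision-maker
print(is_high_risk(AISystem(True, False, False, False)))  # False
```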

Requirements for high-risk systems

When an AI system is classified as high-risk, the provider of the system must comply with a substantial number of requirements. This document provides a brief overview of these measures. For a more complete and detailed description, please refer to "High-Risk AI" by Kienhuis Legal.

Firstly, providers are required to ensure that their systems meet the standard AI requirements outlined in Chapter III, Section 2 of the regulation, the most important of which are:

  • Continuously evaluating the system to identify and minimize potential risks.
  • Using accurate and up-to-date datasets to train the AI to prevent bias.
  • Keeping detailed logs to trace the cause of any incidents.
  • Ensuring a level of transparency in the system. This includes maintaining technical documentation that describes the system's limitations and presenting the system's output in an understandable way.
  • Implementing appropriate security measures, such as protecting the system from hacking.

To demonstrate compliance with these requirements, providers must complete or commission a conformity assessment and register both the assessment and the AI system in the designated EU database. This creates an EU-wide overview of all the high-risk AI systems on the market. Even after the AI system is placed on the market, providers must monitor its performance to detect deviations and prevent serious incidents.

Limited risk

In addition to high-risk AI systems, there are also systems that carry specific risks but do not pose a significant enough threat to justify the strict regime applicable to high-risk systems. As a result, a separate regime has been established for these limited-risk systems, which primarily requires transparency towards the persons who interact with or are exposed to them.

For example, a provider of an AI system that interacts directly with persons must clearly inform them that they are interacting with an AI system and not with a human. Furthermore, transparency obligations are imposed on providers of text or media-generating systems, deployers of emotion recognition or biometric categorization systems, and deployers of systems that create 'deepfakes'.

General-purpose AI

In addition to 'normal' AI, there is also general-purpose AI (hereafter: GPAI). While "regular" AI systems are designed for a specific purpose, such as identifying individuals in photos or predicting the weather, the output of GPAI models can be highly diverse. For example, a single GPAI model can be used to translate text, create drawings, and write code. Furthermore, GPAI may be equipped with the capacity to learn from human input and can sometimes even develop new skills that its designers did not foresee.

At the core of every GPAI system is an AI model, the algorithmic foundation upon which the rest of the system is built. Because GPAI has such a wide range of applications, these models are also used as the basis for "regular" AI systems. As a result, the obligations related to GPAI, unlike those for regular AI systems, are directed at the GPAI models rather than the systems.

Under the regulation, GPAI is classified into two categories. Firstly, there is standard GPAI, which mainly has transparency obligations, such as providing a description of how the model was trained. Additionally, GPAI providers are required to keep information and documentation up to date, ensuring that those who wish to integrate the GPAI model into their own AI systems have sufficient knowledge about the model to meet the other requirements of the regulation. Secondly, the AI Regulation addresses General Purpose AI with systemic risk.

GPAI with systemic risk

Additional obligations apply when GPAI is classified as a model with systemic risk. A model is considered to have systemic risk if it possesses high impact capabilities. A GPAI model is automatically presumed to have high impact capabilities if the cumulative amount of computation used during its training is greater than 10^25 floating point operations.
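
As an illustration, training compute can be roughly estimated and compared against this threshold. The sketch below uses the common "6 × parameters × training tokens" engineering heuristic; that heuristic and the example figures are assumptions for illustration and are not prescribed by the regulation.

```python
THRESHOLD_FLOP = 1e25  # presumption threshold for high impact capabilities

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common 6 * N * D heuristic."""
    return 6 * parameters * training_tokens

# Hypothetical model: 100 billion parameters trained on 2 trillion tokens
flop = estimated_training_flop(100e9, 2e12)
print(f"Estimated training compute: {flop:.1e} FLOP")
print("Presumed to have high impact capabilities" if flop > THRESHOLD_FLOP
      else "Below the 10^25 FLOP presumption threshold")
```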

The European Commission can, on its own initiative, designate GPAI models as models with systemic risk if they meet the criteria for high impact capabilities. The Commission uses specific criteria to that end, such as the size of the datasets used or the number of end-users. Additionally, providers of such models must notify the Commission as soon as their model develops high impact capabilities. They may also request an exemption from the obligations for systemic-risk GPAI models, citing specific characteristics of their model that, despite its high impact capabilities, indicate it should not be classified as posing a systemic risk.

When a GPAI model falls into the systemic-risk category, it must comply with additional requirements. The model must undergo a thorough evaluation following standardized protocols to identify systemic risks and vulnerabilities. The provider must then create a plan detailing how these risks will be mitigated. Additionally, the provider must ensure an adequate level of cybersecurity. If a serious incident occurs, it must be reported immediately to the relevant authorities. By 2 May 2025, the EU is to have issued codes of practice to help GPAI model providers demonstrate compliance with these requirements.

Enforcement

The AI Act establishes a system to enforce compliance with its obligations. The relevant authorities are empowered to impose fines on parties that do not comply with the regulation. These fines are categorized into four levels. The highest level applies to violations of the prohibition on certain AI practices, with fines of up to € 35,000,000 or 7% of the company's global annual turnover, whichever is higher. A fine of up to € 15,000,000 or 3% of global annual turnover, whichever is higher, applies to parties that fail to meet their obligations concerning high-risk or limited-risk AI systems. The lowest fine, up to € 7,500,000 or 1% of global annual turnover, whichever is higher, is imposed for providing incorrect, incomplete, or misleading information to notified bodies or competent authorities. Additionally, there is a separate penalty for violations related to GPAI, with fines of up to € 15,000,000 or 3% of global annual turnover, whichever is higher.
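
A minimal sketch of how these ceilings work out in practice, assuming the general rule that the ceiling is the higher of the fixed amount and the percentage of worldwide annual turnover (the more lenient regime for small and medium-sized enterprises is not modelled here; the turnover figure is hypothetical):

```python
def fine_ceiling(fixed_amount_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed amount and the share of worldwide annual turnover."""
    return max(fixed_amount_eur, turnover_pct * worldwide_turnover_eur)

turnover = 2_000_000_000  # hypothetical company with EUR 2 billion worldwide annual turnover

print(f"Prohibited practices:   {fine_ceiling(35_000_000, 0.07, turnover):,.0f} EUR")  # 140,000,000
print(f"Other obligations:      {fine_ceiling(15_000_000, 0.03, turnover):,.0f} EUR")  #  60,000,000
print(f"Misleading information: {fine_ceiling(7_500_000, 0.01, turnover):,.0f} EUR")   #  20,000,000
```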

The amounts specified are maximum penalties and do not necessarily reflect the amount of the final fine. In determining the fine, authorities must take into account all relevant factors, including the nature, severity, and duration of the violation, as well as the financial situation of the offending party.

In addition to public enforcement, the EU aims to improve the ability of private parties to claim compensation from the provider or deployer for damage caused by an AI system. To achieve this, the Product Liability Directive will be updated to include AI systems within its scope. The concept of "defect" in the directive will also be broadened to encompass defects that occur after a product has been on the market, such as those caused by new functionalities in a self-learning AI. Furthermore, the proposed AI Liability Directive will introduce rebuttable presumptions. For instance, it will be presumed that a causal link exists between the fault of the provider or deployer and the AI system's output if they have failed to meet certain obligations for high-risk AI systems. For more information, please refer to the Kienhuis Legal blogs on these two directives.

Disclaimer

Please note that this is a general summary of the regulation, and some exceptions to the main rules have been left out. For more detailed information, please refer to the additional resources provided by Kienhuis Legal on the AI Act. For any questions, feel free to contact: Roeland de Bruin.
