Article

High-risk AI systems and requirements

Roeland de Bruin, Senior Lawyer

The majority of the provisions in the AI Regulation (the AI Act) are directed at providers and deployers of high-risk AI systems. It is therefore important to determine when an AI system qualifies as “high-risk” within the meaning of the Regulation. There are two ways in which an AI system can be classified as high-risk.

Firstly, an AI system may be designated as high-risk if it is listed as such by the European Commission (Commission). The AI Act lists nine areas in which the use of AI may be considered high-risk. The Commission may expand this list in the future if new types of AI use emerge that pose a risk comparable to those already listed. Conversely, a use of AI may be removed from the list if it no longer poses a threat to the health, safety, or fundamental rights of individuals.

Secondly, an AI system can be classified as high-risk when it is used as a safety component within a specific product or product group, or is itself a product, that is already subject to product safety requirements under existing EU legislation. These requirements exist because such products, even without AI, pose risks to the health and safety of users. When an AI system acts as a safety component of such a product, it must therefore also meet AI-specific safety standards.

This document will first discuss the systems classified as high-risk by the European Commission, along with the applicable exceptions. Next, the use of AI as a safety component within certain products will be examined.

1. High-risk due to intended use

The nine areas where the use of AI can pose a high risk to health, safety, and fundamental rights are listed in Annex III of the regulation. However, it is important to note that not all AI applications within these areas are automatically classified as high-risk. In some cases, AI use is considered "normal" and is therefore not subject to significant obligations under the regulation, while in other cases, the use of AI may be prohibited entirely. Therefore, where applicable, the prohibited types of use will also be listed for each area, followed by an explanation of high-risk uses, including some examples. For a more detailed overview of prohibited AI applications, please refer to the document 'Prohibited AI applications' by Kienhuis Legal. Finally, a general exemption will be discussed, which may mean that certain uses within these areas are not classified as high-risk.

The areas in which the application of AI can pose a high risk are:

1.1 Biometrics

Biometric data are all data that can be derived from an individual's physical or behavioral characteristics, such as facial images and fingerprints, but also gait and typing patterns.

1.1.1 Biometric identification

Biometric data can be used by AI to recognize individuals in photos or videos, a process known as biometric identification. When real-time biometric identification is used in public spaces, such as through live footage from a camera in a marketplace, with the aim of detecting or preventing criminal activity, the use of AI is in principle prohibited.

Most other forms of biometric identification are considered high-risk. However, an exception applies to AI systems that use biometric data to verify a person’s identity for the purpose of granting access to a device or location. Examples include FaceID to unlock a smartphone or an iris scan to open a door.

1.1.2 Biometric categorization

When biometric data are used to categorize individuals into different groups, this is referred to as biometric categorization. It is prohibited to do so with the intent to infer sensitive information, such as race or sexual orientation.

Other forms of biometric categorization are considered high-risk. However, if individuals are categorized on the basis of biometric data solely to support another service, the system is not classified as high-risk. For example, categorizing individuals by body size to select the correct clothing size in an online fitting room is not considered high-risk, provided the AI system can be used exclusively for that purpose.

1.1.3 Emotion recognition

An AI system using biometric data to recognize individuals' emotions is considered high-risk as well. The use of such a system in the workplace or educational settings is, in principle, prohibited. Note that the recognition of pain and fatigue does not fall under emotion recognition. For example, an AI system that detects driver fatigue is not classified as high-risk.

1.2 Critical infrastructure

AI systems intended to serve as safety components in critical infrastructure are classified as high-risk. This includes both critical digital infrastructure, such as DNS servers or data centers, and traditional infrastructure, such as roads, water, and gas pipelines. It must specifically concern the safety components of the infrastructure, not the infrastructure itself. Components solely focused on the cybersecurity of digital infrastructure also do not fall under this category. Examples of AI systems that do fall under this category include systems that measure water pressure in pipelines or systems that ensure fire safety in data centers.

1.3 Education

When AI systems are used in education and vocational training to determine the admission of students to certain educational institutions or levels within those institutions, to evaluate students' learning outcomes, or to monitor students' behavior during exams, they must be classified as high-risk.

1.4 Employment

AI systems used to evaluate job applications and candidates for open positions are considered high-risk. This also applies to systems used to identify potential candidates, for example via social media such as LinkedIn. Furthermore, AI systems that evaluate the performance of current employees for purposes such as promotion, termination, or for allocating tasks within a company must also be considered high-risk.

1.5 Essential public and private services

1.5.1 Public services

AI systems used to determine whether people are entitled to essential government benefits and services, or to establish the amount of these benefits, are classified as high-risk. Examples include welfare benefits or housing assistance. Another crucial public service is the management of emergency calls. Therefore, an AI system that prioritizes emergency calls is also considered high-risk.

1.5.2 Private services

Additionally, AI systems that calculate the premiums for important private services, such as health and life insurance, are also high-risk. AI systems that assess, for instance, a person's creditworthiness for the purpose of granting a loan or mortgage are similarly considered high-risk. However, AI systems that detect fraud in the offering of financial services, or that ensure the financial stability of credit institutions and insurance companies by calculating capital requirements, are exceptions and are excluded from the high-risk category.

1.6 Law enforcement

In the investigation and prosecution of criminal offenses, AI systems may be used to assess the risk of a person committing or re-committing a crime, or the risk of a person becoming a victim of a crime. Both of these applications are classified as high-risk. If the likelihood of a person committing a crime is assessed solely through profiling (the automated collection and processing of personal data to analyze and predict behavior), the use of AI systems is prohibited. However, when AI systems use profiling for the detection, investigation, or prosecution of criminal offenses, they are permitted, though they remain classified as high-risk in this context.

Additionally, AI systems that assess the reliability and quality of evidence in law enforcement, including lie detectors, are considered high-risk too.

1.7 Migration

AI used to process applications for asylum, visas, or residence permits must be classified as high-risk. This classification also applies if the AI system is used solely for supportive tasks related to these applications, such as assessing the reliability of evidence or evaluating security risks associated with migrants. Similarly, an AI system used to verify the identity of individuals applying for these permits is also considered high-risk, except when the system is used exclusively to verify the authenticity of presented travel documents.

1.8 Judiciary

When an AI system is used within a judicial body to assist in researching and interpreting facts or laws, it is classified as high-risk. However, if the system is not used for actual decision-making but only as a tool for administrative tasks, such as the anonymization of judicial decisions, it is not classified as high-risk. The same rules apply to alternative dispute resolution when the decision has legal implications for the parties involved.

1.9 Democratic processes

Any AI system used to influence the voting behavior of persons in elections or referenda must be classified as high-risk, unless its output does not directly influence voters. Examples of systems that fall under this exception include AI systems that optimize a political campaign from an administrative or logistical perspective.

1.10 Exceptions

If an AI system falls under one of the nine specified types of use, there may be circumstances that exclude it from the high-risk category. This is the case when the system does not pose a risk to the health, safety, or fundamental rights of individuals because its use does not materially influence the outcome of a decision-making process; it is up to the provider of the system to demonstrate this. The exception cannot be invoked, however, when the AI system involves profiling of individuals.

If the provider of an AI system believes the system qualifies for this exception, the provider must document that reasoning before placing the system on the market and must register the system in the designated EU database. The relevant market surveillance authority can still classify the AI system as high-risk if it determines that the system does not meet the criteria for the exception.

As the criterion of "not materially influencing the outcome of decision-making" remains ambiguous, the regulation provides guidance to help providers apply it. It outlines four scenarios in which AI is presumed not to materially influence decision-making. However, this list is not exhaustive, and providers may put forward additional arguments beyond these four.

The four scenarios in which AI is presumed not to materially influence decision-making are the following (an illustrative sketch of this assessment follows the list):

  • When the AI system is intended to perform a narrow procedural task, such as alphabetically organizing documents.
  • When the AI system is intended to improve the final result of human work in minor ways without affecting it substantively, such as an automated spell-check system.
  • When the AI system is intended to recognize decision-making patterns of individuals and identify and highlight potential deviations from those patterns without making improvements itself.
  • When the AI system is designed to perform a preparatory task, such as translating source documents, to allow further human processing.
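To make the assessment under this exception more tangible, the following is a minimal, purely illustrative sketch in Python of the decision logic described above. The data structure, field names, and helper function are assumptions introduced for this example and do not appear in the AI Act; whether the exception actually applies always requires a case-by-case legal assessment by the provider.

```python
from dataclasses import dataclass

# Purely illustrative sketch; all names below are assumptions made for this
# example and are not terms defined in the AI Act.

@dataclass
class AnnexIIIUseCase:
    performs_profiling: bool          # profiling of natural persons blocks the exception
    narrow_procedural_task: bool      # e.g. alphabetically organizing documents
    improves_prior_human_work: bool   # e.g. an automated spell-check
    flags_deviations_only: bool       # highlights deviations from decision patterns
    preparatory_task_only: bool       # e.g. translating source documents
    other_documented_reasons: bool    # the four scenarios are not exhaustive

def may_rely_on_exception(use_case: AnnexIIIUseCase) -> bool:
    """Rough decision logic for the 'no material influence' exception."""
    if use_case.performs_profiling:
        return False
    return any([
        use_case.narrow_procedural_task,
        use_case.improves_prior_human_work,
        use_case.flags_deviations_only,
        use_case.preparatory_task_only,
        use_case.other_documented_reasons,
    ])
```

Even where such an assessment points towards the exception, the provider must still document the reasoning and register the system in the EU database, as described above.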

2. High-risk safety components

In addition to the potential high risk an AI system can pose to individuals due to its intended use, an AI system may also be classified as a high-risk system if it performs an essential safety function within another product.

2.1 Main rule

As mentioned in the introduction, many high-risk AI systems serve as safety components within specific “dangerous” or otherwise impactful products or product groups that are regulated under Union product safety law. To determine which products are considered high-risk under the AI Act, one must refer to Annex I of the regulation. When an AI system functions as a safety component within a product covered by one of these listed directives or regulations, the first step toward classifying it as a high-risk AI system is taken.

The second step is to determine whether the safety component, under the relevant legislation, requires a third-party conformity assessment. If this is the case, and the safety component is an AI system, then that system is classified as high-risk. However, not all safety components require a conformity assessment. Whether such an assessment is needed depends on the specific product safety directives and regulations. For example, in the case of lifts, the relevant article of the directive provides:

“Where the market surveillance authorities of one Member State have sufficient reason to believe that a lift or a safety component for lifts covered by this Directive presents a risk to the health or safety of persons or, where appropriate, to the safety of property, they shall carry out an evaluation in relation to the lift or the safety component for lifts concerned covering all relevant requirements laid down in this Directive. The relevant economic operators shall cooperate as necessary with the market surveillance authorities for that purpose.”

In summary, two cumulative conditions must be met before an AI system functioning as a safety component can be classified as high-risk (a short schematic sketch follows the list):

  • The AI system must function as a safety component within a product that falls under one of the legislative acts listed in Annex I.
  • In accordance with the relevant legislation, the safety component must undergo a conformity assessment by a third party.
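As a purely illustrative aid, the two cumulative conditions can be expressed as the minimal sketch below; the class and attribute names are assumptions made for this example and are not terms used in the AI Act or in Annex I.

```python
from dataclasses import dataclass

@dataclass
class EmbeddedAISafetyComponent:
    covered_by_annex_i_legislation: bool   # condition 1: the product falls under legislation listed in Annex I
    third_party_assessment_required: bool  # condition 2: that legislation requires a third-party conformity assessment

def is_high_risk(component: EmbeddedAISafetyComponent) -> bool:
    """Both conditions are cumulative: each must be satisfied."""
    return (component.covered_by_annex_i_legislation
            and component.third_party_assessment_required)
```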

2.2 Integration into existing legislation

Products covered by the safety directives and regulations must already fulfill numerous obligations based on that legislation before they can be placed on the market. Since adding AI to a product can introduce additional obligations under the AI Act, the EU legislator has attempted to ease some of the ex-ante compliance burdens for providers. One way of doing this is by allowing certain procedures and documents required under the AI Act to be combined and integrated into the procedures already required under the sector-specific safety legislation.

For another category of products, the integration process goes even further. Although an AI system within these products is classified as high-risk under the AI Act, these products are not directly subject to the Act itself. Instead, all relevant obligations are incorporated directly into the applicable safety legislation to ensure full integration. While the obligations imposed on the AI system generally remain the same, they may be adjusted to meet the specific requirements of the relevant sector where necessary.

2.3 Regulated by AI Act

The following products fall under the scope of the AI Act when they contain an AI system as a safety component:

  • Machinery[1]
  • Toys[2]
  • Recreational craft and personal watercraft[3]
  • Lifts[4]
  • Equipment intended for use in potentially explosive atmospheres[5]
  • Radio equipment[6]
  • Pressure equipment[7]
  • Cableway installations[8]
  • Personal protective equipment[9]
  • Appliances burning gaseous fuels[10]
  • Medical devices[11]
  • In vitro diagnostic medical devices[12]

Each footnote cites the relevant EU legislation, along with the specific article that determines whether the product requires a third-party conformity assessment.


[1] Directive 2006/42/EC (art. 12).

[2] Directive 2009/48/EC (art. 42).

[3] Directive 2013/53/EU (art. 44).

[4] Directive 2014/33/EU (art. 38).

[5] Directive 2014/34/EU (art. 35).

[6] Directive 2014/53/EU (art. 40).

[7] Directive 2014/68/EU (art. 40).

[8] Regulation (EU) 2016/424 (art. 40).

[9] Regulation (EU) 2016/425 (art. 38).

[10] Regulation (EU) 2016/426 (art. 37).

[11] Regulation (EU) 2017/745 (art. 52).

[12] Regulation (EU) 2017/746 (art. 48).

2.4 Regulated by sector-specific legislation

The sector-specific directives and regulations concerning the following products and sectors will be amended to ensure that the products (including the embedded AI systems) they govern comply with the high-risk AI system requirements as outlined in the AI Act:

  • Civil aviation[13]
  • Unmanned aircraft[14]
  • Two- or three-wheel vehicles and quadricycles[15]
  • Agricultural and forestry vehicles[16]
  • Marine equipment[17]
  • Interoperability components of rail systems[18]
  • Motor vehicles and their trailers[19]
  • Components for the protection and safety of occupants of motor vehicles and their trailers[20]

Each footnote cites the relevant EU legislation.



[13] Regulation (EC) No 300/2008.

[14] Regulation (EU) 2018/1139.

[15] Regulation (EU) No 168/2013.

[16] Regulation (EU) No 167/2013.

[17] Directive 2014/90/EU.

[18] Directive (EU) 2016/797.

[19] Regulation (EU) 2018/858.

[20] Regulation (EU) 2019/2144.

3. Requirements for high-risk systems

When an AI system is classified as high-risk, the provider of the system must comply with a substantial number of requirements. The following is a general overview of these requirements:

3.1 Risk management system

Any AI system designated as high risk must have a risk management system in place. This structured system is designed to identify foreseeable risks to the health, safety, and rights of individuals. Measures must then be proposed to mitigate these risks to an acceptable level. When assessing potential risks, the expected knowledge and experience of a deployer should be taken into account. The risk management system must be run continuously during the use of the AI system but, most importantly, must be initiated before the system is placed on the market.

It is the responsibility of the provider to fulfill this obligation. However, if the deployer is a body governed by public law or a private entity providing public services (such as an educational or healthcare institution), it must also carry out a separate risk assessment concerning the potential consequences for the fundamental rights of the individuals involved.

3.2 Data management

When an AI system is developed based on training datasets, these datasets must be as error-free and complete as possible. Additionally, specific requirements apply to the content of these datasets. The provider must assess the suitability of the datasets and implement measures to prevent bias in the data. This is to ensure that AI systems operate properly and safely and do not become a source of discrimination. To prevent AI systems from possessing certain biases that could cause discrimination, providers are even allowed to process special categories of personal data, provided that appropriate safeguards are in place to protect the data.

3.3 Technical documentation and log files

Any high-risk AI system may only be placed on the market when its technical documentation has been drawn up. This technical documentation must be prepared in such a way that authorities can assess whether the requirements under the AI Act are met. Examples of what this documentation should include are: the manner in which the system was developed, a description of the datasets used, and the safety measures implemented. Additionally, the AI system must have the capability to automatically record information about certain events throughout its entire lifecycle; this is known as keeping log files.

Both the technical documentation and the log files must be retained by the providers for a certain period, depending on the situation.
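As an illustration of what automatic event recording might look like in practice, the sketch below writes structured, timestamped log entries. The event fields and the helper function shown are assumptions chosen for this example; the AI Act itself does not prescribe a specific format, only that relevant events be recorded automatically and remain traceable.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch of automatic event recording ("log files") over a
# system's lifecycle. Field names are assumptions made for this example.
logger = logging.getLogger("ai_system.events")
logging.basicConfig(level=logging.INFO)

def record_event(event_type: str, details: dict) -> None:
    """Write one structured, timestamped event to the system's log files."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "inference", "model_update", "error"
        "details": details,
    }
    logger.info(json.dumps(entry))

# Example usage: record an inference event with its input reference and outcome.
record_event("inference", {"input_id": "doc-123", "outcome": "approved"})
```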

3.4 Transparency and human oversight

Since the AI system produces output intended for human use, it is crucial that this output is easily interpretable by people. Consequently, the regulation mandates that the provider designs the system to be as user-friendly as possible and supplies clear and understandable instructions for deployers. Deployers should have a clear understanding of the system's relevant capabilities, and any significant errors made by the system should be evident as such to the deployer. The provider must take into account the expected level of expertise of the average deployer of the AI system.

3.5 Quality management system

Each provider of high-risk AI systems is required to implement a quality management system. This should not be confused with the risk management system. While risk management focuses on minimizing specific system-related risks, the quality management system aims to uphold certain standards throughout the organization. This system must detail how the provider plans to meet specific obligations, such as outlining planned testing procedures or establishing a framework for accountability among those responsible for developing the AI system.

The system must be documented systematically and in an orderly manner, but the format itself is not strictly prescribed. Additionally, the extent of the system should be proportionate to the size of the provider's organization.

3.6 Conformity & registration

Before a high-risk AI system is placed on the market, the provider must demonstrate that it meets all the requirements set out by the regulation. This is done by drawing up an EU declaration of conformity, which describes how compliance with the relevant provisions of the regulation has been achieved. The provider may draft this declaration themselves when the system has been designed in accordance with the harmonized standards provided by the European Commission. If this is not the case, the conformity assessment must be conducted by an independent notified body.

Finally, the provider of a high-risk AI system must register both itself and the system in the designated EU database. This obligation also applies to deployers who, in their role as a body governed by public law or a private entity providing public services, are required to carry out the additional fundamental rights risk assessment mentioned above.

3.7 Aftermarket obligations

Once the AI system is placed on the market, the provider still has ongoing responsibilities. They must be able to demonstrate compliance with the regulation if requested by the relevant authorities. Providers are also required to continuously monitor the AI system and track its performance throughout its entire lifecycle to ensure it remains compliant with all applicable requirements.

Additionally, the provider must report all serious incidents involving the AI system to the relevant market surveillance authorities. Serious incidents are defined as those that result in injury, infringements of fundamental rights, or property damage.

Disclaimer
This document is for informational purposes only and no rights can be derived from it. For further independent study, you may consult other informational materials on the AI Act provided by Kienhuis Legal. For specific questions, please feel free to contact: Roeland de Bruin.
