The rise of AI presents numerous opportunities to advance society. However, as with many emerging technologies, there are also significant downsides to the proliferation of AI. Malicious actors may misuse AI for wrongful purposes, transforming it into a powerful tool for manipulation, exploitation and social control.
To mitigate these concerns, the European Union has sought to regulate the AI sector by prohibiting certain applications of AI that could result in serious violations of fundamental rights of citizens. In particular, the rights to non-discrimination, privacy, and the rights of the child are seen by the EU as being potentially under threat.
To protect these rights, the EU has formulated eight prohibited applications of AI. This document outlines these prohibitions, along with any potential exceptions. Since not all prohibitions are absolute, it is crucial to determine whether a proposed use of AI falls within an exception category and is thus permitted under certain conditions.
These prohibitions apply not only to parties that place such AI systems on the market but also to those who deploy them. It is therefore essential to verify, before using an AI system, whether it falls within one of these prohibited categories, as the AI Act reserves its most severe penalties for violations of these prohibitions, with fines of up to € 35,000,000 or 7% of a company's total worldwide annual turnover, whichever is higher.
1. Manipulative techniques
It is under no circumstances permitted to use manipulative AI techniques in a manner that significantly distorts an individual's behavior by severely hindering their ability to make informed and free decisions. This is the case when individuals make decisions they would not have made without the influence of AI, resulting in behavior that, due to the AI's output, causes or is reasonably likely to cause harm to themselves or others.
Examples of such manipulative techniques include the use of specific audio and visual stimuli that are undetectable to humans yet still influence their behavior. For instance, an image shown for less than 50 milliseconds can influence behavior while, in most cases, escaping conscious perception. The prohibition also covers less covert methods, in which individuals are aware of the techniques being used but are nonetheless unable to make free choices.
Not every technique influencing a person’s behavior can be classified as manipulative under the AI regulation. It is specified that standard advertising practices are excluded from this category, although one might argue that these practices do, to some extent, also influence people's freedom of choice. Yet, for a technique to fall under this prohibition, the level of manipulation must be significantly higher. An example of this is the use of virtual reality in such a way that a person can no longer distinguish between reality and fiction.
2. Exploiting vulnerabilities
In addition to using general manipulative techniques, an AI system can also be designed to exploit specific individuals who are in a vulnerable position. Certain individuals may be more susceptible to exploitation due to factors such as their age (whether old or young), a disability, extreme poverty, or membership of a particular minority group. An AI system that exploits these characteristics to distort the behavior of these individuals in a way that causes or is likely to cause harm to themselves or others is prohibited.
However, this prohibition does not extend to potential medical applications of AI, which could benefit individuals with, for instance, mental disabilities. It is essential that AI use in such cases complies with applicable law and medical standards and that explicit consent is obtained from the patient or their legal representative.
3. Social scoring
An AI system that assigns individuals a score based on their behavior over a certain period and attaches consequences to it conflicts with the EU principle of non-discrimination. Such systems may inherently contain a bias against certain groups of people, leading to unfair and harmful treatment of individuals or communities. For this reason, AI-driven social credit systems, like those currently implemented in certain parts of China, are prohibited. This ban applies regardless of whether the system is used by a government or a private entity.
However, this does not mean that AI cannot be used to evaluate individuals based on their behavior and attach consequences accordingly. For example, using AI to assess eligibility for social benefits is generally allowed, even though it may have significant implications for the individuals involved. The prohibition specifically targets the use of a general score that negatively affects individuals' activities in areas beyond the original purpose for which the data was collected.
4. Use of profiling within law enforcement
When an AI system is used to estimate the likelihood that an individual will commit a criminal offense, it may conflict with the principle of presumption of innocence. Consequently, it is prohibited to deploy AI systems that, based solely on personal characteristics (such as place of birth, number of children, or type of vehicle) and profiling (the automated collection and processing of data about a person to infer and predict their behavior), analyze the probability of that person committing a criminal offense.
However, the use of AI is permitted when assessing individuals within the context of a criminal investigation if there is already a reasonable suspicion of involvement in a criminal offense, based on a human assessment of objective and verifiable facts.
5. Facial recognition databases
The large-scale and untargeted scraping of facial images from the internet and surveillance camera footage constitutes a violation of the privacy rights of individuals involved. When this is done with the intent to create or expand a facial image database, the use of AI is prohibited.
6. Emotion recognition
Considering that individuals display emotions in diverse ways, there is a potential for an AI system that recognizes emotions to have discriminatory effects when its outputs are linked to consequences. Therefore, the use of AI emotion recognition systems in the workplace or educational settings is prohibited, unless the system is employed exclusively for medical or safety purposes, such as by a company physician. Recognizing pain and fatigue does not fall under the category of emotion recognition. Therefore, an AI system that detects when truck drivers are becoming fatigued is not prohibited.
7. Deducing sensitive data
AI systems that classify individuals into groups based on personal biometric data (such as height, eye color, or gait) with the aim of inferring or deducing special categories of personal data concerning those individuals are prohibited. Special categories of personal data include: political beliefs, trade union membership, religious or philosophical beliefs, race, sex life, and sexual orientation.
An exception to this prohibition applies when such a system is used in law enforcement for the purpose of filtering or labeling legally obtained personal biometric data.
8. Real-time identification
Real-time remote biometric identification systems use live camera footage from public spaces to recognize individuals based on their biometric data. With adequate camera coverage, such a system could track a person's location throughout a city. The use of these systems for law enforcement purposes, such as tracking a suspect of a crime, is generally prohibited unless specific conditions are met.
The system may only be deployed for the following purposes:
- The targeted search for victims of abduction, human trafficking, sexual exploitation, or other missing persons.
- The prevention of an imminent threat to a person's life or physical safety.
- The prevention of a terrorist attack.
- The localization of an individual suspected of a serious criminal offense, such as terrorism, human trafficking, sexual exploitation of children and child pornography, drug trafficking, arms trafficking, murder, grievous bodily injury, trafficking of human organs, trafficking in nuclear materials, or kidnapping.
In any of these cases, the use of a real-time identification system by law enforcement is permitted only if prior judicial authorization has been obtained, except in urgent situations where judicial review must be conducted afterwards.
Disclaimer
This document is for informational purposes only and no rights can be derived from it. For further independent study, you may consult other informational materials on the AI Act provided by Kienhuis Legal. For specific questions, please feel free to contact: Roeland de Bruin.