CLM Bitai & Partners | View firm profile
- Introduction
Most people have likely encountered some form of artificial intelligence system, as the use of such systems has become increasingly widespread around the world. In addition to individual use, many companies are implementing, or planning to introduce, such systems for various purposes. The rapid development of the technology and the growing significance of AI systems underline the need for regulation ensuring their ethical, safe and transparent operation.
Regulation (EU) 2024/1689 on artificial intelligence (hereinafter: AI Regulation) was adopted by the European Parliament on June 13, 2024, with the aim of establishing a unified legal framework for AI systems within the European Union.
The AI Regulation applies to any person or organization that develops, installs, places on the market, applies or uses an AI system in the European Union. It also covers persons who are established or resident outside the territory of the European Union but whose AI system produces output that is used within the European Union. In this respect, anyone carrying out activities related to an AI system must act in accordance with the rules of the AI Regulation.
The provisions of the AI Regulation enter into application in several stages, so the persons and organizations concerned will need to take several steps to reach full compliance. As a general rule, the requirements of the AI Regulation in their entirety will apply from August 2, 2026.
However, certain provisions and requirements of the AI Regulation will apply earlier or later than August 2, 2026:
- the provisions on prohibited AI systems will apply from February 2, 2025;
- the provisions on general-purpose AI models, together with other provisions related to the implementation of the AI Regulation, such as sanctions and the establishment of EU-level AI bodies and organizations, will apply from August 2, 2025;
- the conditions for the classification of certain AI systems as high-risk will apply from August 2, 2027.
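Purely as an illustration, the staged application dates above can be kept in a simple lookup to check which obligations already apply on a given date. The data structure and helper function below are hypothetical aids, not part of the Regulation:

```python
from datetime import date

# Staged application dates of the AI Regulation, as summarized above
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "prohibited AI practices",
    date(2025, 8, 2): "general-purpose AI models, sanctions, EU governance bodies",
    date(2026, 8, 2): "the AI Regulation in its entirety (general rule)",
    date(2027, 8, 2): "classification conditions for certain high-risk AI systems",
}

def applicable_milestones(on: date) -> list[str]:
    """Return the provisions that already apply on a given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= on]

# Example: by September 1, 2025, the first two stages already apply
print(applicable_milestones(date(2025, 9, 1)))
```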
In addition to complying with the AI Regulation, the persons and organizations concerned must also comply with several other legal provisions and consider other aspects when developing, installing, placing on the market, applying and using AI systems. These include, for example, intellectual property law, labor law, personal rights, personal data protection law, the prohibition of unfair business-to-consumer commercial practices, the prohibition of misleading business partners, child protection law and even competition law. In the field of competition law, the question of who owns data, and who can collect, organize, process and use it efficiently and systematically, has gained importance in recent times, and AI technologies play a prominent and growing role in this. The possession and use of data can change market positions, roles, market structures and the intensity of competition; it can create or strengthen dominant market positions, affect acquisitions, pricing practices, market behavior and strategies, and facilitate unlawful collusion.
- Prohibited AI Practices
As mentioned above, the closest compliance deadline is February 2, 2025, when the AI Regulation's rules on prohibited AI practices enter into application. To this end, all persons concerned will need to review their systems using AI technology to determine whether they engage in any of the practices prohibited by the AI Regulation and, if so, to cease operating those systems by February 1, 2025.
The Regulation lists the following AI practices as prohibited ones:
- Systems using manipulative or deceptive techniques;
- Systems exploiting vulnerability;
- Systems used for social scoring;
- Systems designed to assess the likelihood of criminal offenses;
- The creation of facial recognition databases;
- Use of emotion recognition systems in workplaces or education institutions;
- Systems used for biometric categorization of individuals;
- Real-time, remote biometric identification systems used in public places for law enforcement purposes.
- Systems using manipulative or deceptive techniques
On the one hand, the prohibition covers systems that use subliminal elements (sound, images, video, e.g. flashing windows or subtitles) that people are not able to perceive consciously, but which significantly influence or impair the individual's ability to make informed decisions without the person concerned being aware of it.
In addition to systems that influence individuals subconsciously, systems using manipulative or deceptive techniques that distort individuals' behavior, or deceive them into unwanted conduct they are unable to control or resist, are also prohibited.
A further condition is that the practice must cause, or be reasonably likely to cause, significant harm to the person concerned or to a specified group of persons.
Virtual reality can help to weaken informed decision-making by allowing greater control over what stimuli are presented to individuals. Examples of such techniques include pop-ups or distracting messages on online platforms that draw attention to offers available for a limited period of time, urging and pressuring the customer to make a purchase decision as soon as possible or else miss out on favorable conditions.
The Regulation underlines that the prohibition of manipulative or deceptive techniques should not affect legitimate practices in the treatment of medical conditions which are in accordance with the law and the applicable health standards.
- Systems exploiting vulnerability
This category includes techniques that take advantage of the age, disability, social or financial situation of a person or group of persons to distort their behavior by exploiting their vulnerability. The provision is intended to protect vulnerable groups from discrimination and manipulative techniques.
Such a system may violate the prohibition if, for example, a company uses AI to evaluate performance, recruit employees or sort CVs, but the AI system does not base its decisions on objective criteria; instead, it relies on the above-mentioned characteristics or vulnerabilities of the individual and thus discriminates against, for example, the elderly, women or people living in poverty.
- Systems used for social scoring
In this case, we are talking about systems that assess or classify individuals over time based on data associated with their behavior in different circumstances, or with their known or inferred personal characteristics or personality traits, and that use these assessments in a context beyond the one in which the data were collected. In other words, inferences are drawn from individuals' behavior or personality that are unrelated to the trait or behavior being classified: for example, conclusions are drawn from religious group affiliation or political views about credit ratings or employment. Such treatment may be adverse or unfavorable and may violate the right to dignity and the prohibition of discrimination.
- Systems designed to assess the likelihood of criminal offenses
The AI Regulation prohibits the creation and use of AI systems that determine or assess the likelihood of an individual committing a crime in general, or certain criminal offenses, based on the profiling of that individual or on the assessment of their characteristics and personality traits (e.g. nationality, vehicle type, level of debt).
The prohibition does not apply to AI systems that are not based on an assessment of the profile, personality traits or characteristics of individuals, but rather on other aspects of risk assessment, such as the assessment of the risk of financial fraud by businesses based on suspicious transactions.
- The creation of facial recognition databases
This prohibition specifically targets facial recognition databases created or expanded through untargeted "web scraping" of facial images from the internet or from CCTV footage. "Web scraping" usually refers to the use of automated software that collects, extracts and copies information from websites without a specific purpose.
The AI Regulation does not prohibit developing facial recognition databases in general but restricts their creation through methods and for certain purposes mentioned above.
- Use of emotion recognition systems in workplaces or educational institutions
The purpose of such a system is to infer or identify the emotions, moods or intentions of an individual based on biometric data such as facial expressions, reactions, gestures, movements or even voice. The AI Regulation does not impose a general prohibition on such systems, but specifically prohibits their use in workplaces or educational institutions. The ban does not cover systems that detect physical condition, such as fatigue, which may be used, for example, for pilots to avoid accidents.
Biometric data are any personal data relating to the physical, physiological or behavioral characteristics of a natural person, obtained by means of specific technical procedures, which allow or confirm the unique identification of that person, such as facial images or fingerprints.
- Systems used for biometric categorization of individuals
This point in the article listing prohibited AI practices prohibits the development of systems that categorize individuals based on their biometric data in order to draw conclusions about an individual’s race, political opinions, religious or philosophical beliefs, sexual life or orientation.
This prohibition does not extend to the lawful filtering or categorization, according to biometric data, of biometric data sets obtained in accordance with EU or national law, such as sorting images by hair or eye color, which may be used, for example, in law enforcement.
- Real-time, remote biometric identification systems used in public places for law enforcement purposes
These systems identify individuals remotely, without their active participation, by comparing their biometric data with database records. "Real-time" means that the identification occurs simultaneously or with minimal delay. This clause prohibits real-time remote biometric identification systems when they are used for law enforcement purposes in publicly accessible places, as such use can produce discriminatory effects and distorted results.
The AI Regulation sets out a number of narrow exceptions under which the use of real-time remote biometric identification systems is allowed, such as the targeted search for victims of certain crimes or for missing persons, the prevention of a specific and imminent threat to life or of a terrorist attack, and the localization of suspects of serious criminal offenses.
In summary, it is worthwhile to evaluate each point in detail and to examine the specificities and technical characteristics of a system, the purpose for which it was created and its impact, in order to ensure that no system using prohibited AI practices is developed, placed on the market or used.
To ensure that a company complies with the AI Regulation, it is recommended to involve legal and information security experts from the outset: either when an existing AI system is taken into use or, if an AI system is developed specifically to meet the needs of a particular company, at the very beginning of the development process. Ensuring compliance from the initial phase saves time, energy and money.
Failure to comply with the provisions on prohibited AI practices can result in severe sanctions: a fine of up to €35,000,000 (roughly HUF 14 billion) or up to 7% of the organization's total worldwide annual revenue for the previous financial year, whichever is higher.
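The fine mechanics for prohibited practices can be sketched in a few lines. The function name and the example turnover figures below are hypothetical; the "whichever is higher" rule reflects the sanction cap described above:

```python
def max_fine_prohibited_practices(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for breaching the prohibited-practice rules:
    EUR 35 million or 7% of total worldwide annual revenue, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For a company with EUR 1 billion turnover, the 7% branch dominates
print(max_fine_prohibited_practices(1_000_000_000))  # 70000000.0
# For a smaller company, the fixed EUR 35 million cap is the maximum
print(max_fine_prohibited_practices(100_000_000))    # 35000000
```

Note that this is only the statutory ceiling; the fine actually imposed in a given case is set by the competent authority within that limit.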
- An outlook on the next steps beyond prohibited AI practices to comply with the AI Regulation
Even those who, after reviewing their AI systems, conclude that their systems do not engage in prohibited AI practices should not sit back. If, for example, they use emotion recognition systems that fall outside the prohibition, those systems are classified as high-risk AI systems under the AI Regulation and must therefore be operated in accordance with the requirements applicable to AI systems classified as such.
Entities that fail to comply with other obligations under the AI Regulation, beyond the provisions on prohibited AI practices, may be fined up to €15,000,000 (roughly HUF 6.2 billion) or up to 3% of their total worldwide annual revenue for the previous financial year, whichever is higher.
As mentioned above, following the entry into application of the provisions on prohibited AI practices, the next compliance deadline under the AI Regulation is August 2, 2025, by which time organizations will need to be ready to comply with the requirements for general-purpose AI models and systems.
A general-purpose AI system is a system that can be used for a wide range of tasks, serving multiple purposes. Importantly, the classification of an AI system and the obligations attached to it may change as the system develops, for example if additional AI models are integrated into it. Accordingly, a general-purpose AI system is not only one developed specifically for such a general purpose, but also an existing AI system into which a general-purpose AI model has been integrated, making it suitable for different purposes. In such a case, the system should be considered a general-purpose AI system, and the obligations applicable to general-purpose AI systems apply.
In the case of general-purpose AI systems, the determination of the generality of the system is not the only classification task, as it is also necessary to assess whether the general-purpose AI system poses a systemic risk. The AI Regulation establishes a set of criteria according to which the relevant party must assess whether the system poses a systemic risk.
Systemic risk means that the system in question has high-impact capabilities, i.e. it matches or exceeds the capabilities of the most advanced general-purpose AI models, or that it has a significant impact on the EU market through its effects on, for example, public security or fundamental rights. If an AI system is found to present a systemic risk, the person concerned is obliged to notify the European Commission.
Developers and marketers of general-purpose AI models will have to complete a number of tasks to comply with the AI Regulation: in particular, to prepare and keep up to date technical documentation for the system, to make the capabilities and limitations of the AI system well understood and transparent, and to implement a policy ensuring compliance with copyright and related rights.
In addition to complying with the AI Regulation and fulfilling the obligations set out therein, the use, operation, development and marketing of AI systems may also affect a number of other areas of law, as described above. Therefore, when introducing AI systems, and depending on their purpose and use, the organizations concerned may need to review their data protection and employment practices and documentation; an AI system may also raise issues of intellectual property, consumer protection, competition law or other areas of law. Furthermore, beyond the documentation obligations required by the AI Regulation, it is worthwhile to regulate the use of AI systems and applications within the organization and to create policies and internal documents for this purpose.
The expert team of CLM Bitai & Partners has all the necessary knowledge about the latest regulations. If you have any questions, do not hesitate to contact us!
Sources:
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
- Autoriteit Persoonsgegevens (NL) – Department for the Coordination of Algorithmic Oversight (DCA), Manipulative, deceptive and exploitative AI systems Prohibitions in EU Regulation 2024/1689 (AI Act), September 2024
- CMS AI Academy Lecture – Focus on intellectual property rights, HR, labour and competition law
- Kertész István: Kötelező jogi és etikai megfelelőség az AI alkalmazásában – Hogyan kerülheti el a büntetéseket? (Mandatory legal and ethical compliance in the application of AI – How can you avoid penalties?)
- Huixin Zhong, Eamonn O'Neill, Janina A. Hoffmann: Regulating AI – Applying Insights from Behavioural Economics and Psychology to the Application of Article 5 of the EU AI Act, Centre for Doctoral Training in Accountable, Responsible and Transparent AI, University of Bath, United Kingdom
- Matija Franklin, Philip Moreira Tomei, Rebecca Gorman: Strengthening the EU AI Act – Defining Key Terms on AI Manipulation, 2023