News and developments
The latest version of the Artificial Intelligence Act proposed by the European Parliament redefines AI systems and the way they are classified
The Parliament’s position represents a significant departure from the versions formulated to date, both the European Commission’s version[2] presented in April 2021 and the Council’s version[3] adopted in December 2022. The most significant changes, primarily relating to definitions and the way AI systems are classified, are described below. These changes will have a direct impact on the obligations of providers, users (operators), importers, and distributors of AI systems.
An AI system redefined
In its position, the Parliament adopted an approach to defining an AI system that differs entirely from the earlier proposals. The current position is that an AI system is a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. This is a much broader definition than that adopted to date, and closely resembles the definition adopted by the OECD in 2019[4] and used to some extent in US law[5].
The Council’s definition of a general purpose AI system has also been revised. The EP has now defined this as an AI system that can be used in or adapted to a wide range of applications for which it was not intentionally and specifically designed.
Developments regarding prohibited AI systems
In the proposal, the EP has significantly expanded the list of AI systems that pose an unacceptable risk and which it would be prohibited to place on the market, put into service, or use.
The list of prohibited AI systems now also includes the following:
- biometric categorization systems that categorize natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics;
- systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- systems that infer the emotions of natural persons in the areas of law enforcement and border management, and in workplace and education institutions;
- systems that analyze recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems, unless they are subject to a pre-judicial authorization and strictly necessary with respect to a specific serious criminal offense.
Changes to the list of high-risk AI systems
The EP has also expanded the list of high-risk AI systems, which now additionally covers AI systems intended:
- to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems;
- for use as safety components in the management and operation of the supply of water, gas, heating, electricity, and critical digital infrastructure;
- to influence the outcome of an election or referendum or the voting behavior of natural persons when exercising their vote;
- to be used by social media platforms designated as very large online platforms under the DSA in their recommender systems to recommend to recipients of the service user-generated content available on the platform.
New obligations for providers of foundation models
The EP has also introduced the concept of a foundation model and imposed a number of obligations on providers of such models, including:
- identifying, reducing, and mitigating reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law that may be caused by the model;
- processing and incorporating into those models only datasets that are subject to appropriate data governance measures;
- designing and developing the foundation model in order to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity;
- designing and developing the foundation model, and making use of applicable standards to reduce energy and resource use and to increase energy efficiency and the overall efficiency of the system;
- drawing up extensive technical documentation and intelligible instructions for use in order to enable downstream providers to comply with all of their obligations under the AI Act.
Providers of foundation models used in generative AI systems would additionally be required:
- to comply with transparency obligations;
- to train, and where applicable, design and develop, their models in such a way as to ensure adequate safeguards against the generation of content in breach of Union law;
- to document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
Other changes
Other significant changes proposed by the EP include:
- increasing the penalties for breach of obligations under the AI Act;
- shortening the time limit for application of the AI Act from three years to two years from the effective date;
- a requirement that all AI systems and foundation models be created and used in accordance with the principles of trustworthy AI;
- a guaranteed right to request a clear and meaningful explanation of the role of an AI system in the decision-making procedure.
Authors: Agnieszka Wachowska, Piotr Konieczny

Footnotes
[1] https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf
[2] https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF
[3] https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf
[4] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText
[5] https://www.congress.gov/bill/116th-congress/house-bill/6216/text#toc-H41B3DA72782B491EA6B81C74BB00E5C0