News and developments
The first-ever legal framework on AI: EU Commission proposes a new Regulation
A new AI Regulation is intended to lay the foundation for achieving these goals. A few days ago, a provisional draft of the planned AI Regulation was leaked, already providing some interesting insights. On 21 April 2021, the European Commission's proposal was officially published. It deviates in some areas from the previously leaked version:
The key points of the proposal:
Aim and scope
The Regulation harmonizes the rules for AI systems, prohibits certain practices, sets out specific requirements for high-risk AI systems and obligations for their operators, and introduces new transparency rules. The term "AI system" is defined broadly and in a future-proof manner. It covers all AI-based technologies that use one of the techniques listed in Annex I – whether stand-alone or incorporated into hardware or software:
- Machine learning;
- Logic- and knowledge-based approaches;
- Statistical approaches, Bayesian estimations, search and optimization methods.
Banned AI applications
The proposal lists a number of prohibited applications. These include, for example, social scoring and applications that manipulate human behavior to circumvent users' free will (eg toys using voice assistance to encourage minors to engage in dangerous behavior).
Strict regime for high-risk AI applications
High-risk AI systems include, for example, automatic facial recognition in public spaces, credit scoring systems, robot-assisted surgery, biometric identification, AI-enabled transportation and CV-sorting software for recruitment purposes. The provision of high-risk AI systems is subject to the following obligations:
- Implementation of a documented quality management system, including written policies, procedures and instructions, in order to ensure an accurate analysis;
- Technical documentation of the high-risk AI system;
- Logging of the activities generated by the AI system in order to ensure the traceability of results;
- Conformity assessment and labelling with a CE marking prior to placing the system on the market;
- Clear and adequate information to users;
- Human oversight measures in order to minimize risks;
- High level of security and quality of datasets feeding the system in order to minimize discriminatory outcomes.