News and developments

Work on the AI Act. The financial sector's perspective on artificial intelligence regulations

At the end of 2023, European Union bodies concluded trilogue negotiations on the draft Artificial Intelligence Act (AI Act)[1]. Representatives of the European Commission, the European Parliament and the Council reached an agreement on the AI Act, which aims to ensure the protection of fundamental rights, democracy and the rule of law in relation to the use of AI solutions within the European Union, while boosting innovation and making Europe a leader in the field of AI.

Status of work on the AI Act

The news of the concluded trilogues was well received. Interest in the AI Act proved to be so high that two unofficial consolidated versions of the regulation were ‘leaked’ onto the internet on Monday, 22 January. One of the people who decided to share the consolidated text of the AI Act was Luca Bertuzzi, technology editor of Euractiv, who reported that “given the enormous public attention on the AI Act I took the rather unprecedented decision to publish the final version of the text”[2]. Moments later, a consolidated version of the regulation was published by Laura Caroli, Senior Policy Advisor at the European Parliament[3]. The official version of the AI Act was subsequently adopted at a meeting of ambassadors within the Committee of Permanent Representatives of the Member States (COREPER) on 2 February. The vote on the proposed text of the AI Act in the committees of the European Parliament (IMCO, LIBE) was held on 13 February, and a month later (at the plenary session on 13 March) the regulation was put to a vote in the European Parliament. It is therefore worth pointing out the latest arrangements regarding the content of the AI Act from the financial sector’s perspective.

Banned AI systems

Recognising the potential threats to citizens’ rights posed by certain applications of artificial intelligence, EU lawmakers decided to include in the catalogue of banned AI systems:

  • biometric categorisation systems that use sensitive characteristics;
  • systems that manipulate human behaviour and exploit human vulnerabilities (due to age, disability or socio-economic situation);
  • so-called ‘social scoring’ systems, based on social behaviour or personal characteristics;
  • emotion recognition systems in the workplace and educational institutions;
  • systems for making risk assessments of natural persons in order to assess or predict the risk of committing a criminal offence (based solely on the profiling of a natural person or on assessing their personality traits and characteristics);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

An exception has been provided for the use of (generally prohibited) remote biometric identification systems in public spaces in strictly defined cases related to law enforcement. The use of such systems will be limited to the purposes of searching for victims of certain crimes, preventing terrorist threats, and locating and identifying suspects of the crimes identified in the regulation.

When it comes to financial institutions, the ban on the use of the above systems will be most noticeable in two areas. The first is onboarding and contact with customers, where AI systems could ease the burden on staff and eliminate the risk of human error. However, financial entities will not be allowed to use AI systems that perform biometric categorisation based on sensitive customer characteristics, such as race or gender, as part of the customer verification process. The use of systems that manipulate human behaviour and exploit human vulnerabilities, such as age or disability, will also be considered unacceptable when dealing with customers. In light of the above, financial institutions will be obliged to carefully select the AI systems they use in their dealings with customers so as not to risk violating customers’ fundamental rights or engaging in the aforementioned prohibited practices.

The second area worth mentioning is the monitoring of employee behaviour. In Polish domestic law, this issue is regulated, for example, in Article 22² of the Labour Code, which specifies the situations in which employee monitoring may be permissible. AI-based emotion recognition is a particularly intrusive form of such monitoring: its use could lead to discrimination against employees who, for various reasons, show negative emotions at work, and could consequently affect their promotion prospects or the results of their periodic evaluation. The ban proposed by the EU legislator, prohibiting in principle the use of AI systems that recognise human emotions in the workplace, therefore seems justified. It is worth mentioning that the use of emotion recognition systems in the workplace is to be allowed by way of exception and only to ensure the safety and health protection of staff.

High-risk AI systems

Regarding high-risk AI systems (i.e. systems that may cause significant harm to health, safety, fundamental rights, the environment, democracy or the rule of law), it should be noted that entities from the banking and insurance sectors will be among those obliged to carry out a mandatory assessment of the impact of high-risk AI systems on the fundamental rights of EU citizens. The catalogue of high-risk AI systems that can be used by financial market players includes systems used to assess customers’ creditworthiness. As indicated in recital 37 of the preamble to the AI Act: “In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services”. This means that before using AI systems to assess creditworthiness, banks will have to take into account the impact of such systems on citizens’ fundamental rights.

It should also be borne in mind that EU citizens will be guaranteed the right to lodge complaints about the operation of AI systems with the competent supervisory authorities and to receive explanations concerning decisions based on high-risk AI systems. Accordingly, banks and insurance companies using such systems will have to take into account the risk of potential complaints and fulfil their obligation to explain decisions taken with regard to their customers on the basis of high-risk AI systems.

It is also worth mentioning that the AI Act imposes a number of obligations on users (deployers) of high-risk AI systems. Financial institutions that choose to use such systems will be obliged to:

  • use high-risk AI systems in accordance with the accompanying operating instructions;
  • monitor the performance of high-risk AI systems and inform their provider or distributor of all identified risks and incidents;
  • store incident logs generated automatically by the AI system, if such logs are under their control;
  • ensure that the natural persons assigned by the financial institution to oversee the high-risk AI systems have the necessary competence, training and authority, as well as the necessary support;
  • maintain the logs automatically generated by the high-risk AI system as part of the documentation kept pursuant to the relevant financial services legislation.

Support for SMEs and the development of FinTech

The AI Act emphasises the need to provide conditions for the development of AI systems without undue pressure from industry giants. An important instrument in this respect will be the use of so-called regulatory sandboxes (such as the one provided by the UKNF) and real-world testing. This solution will be beneficial both for local (European and domestic) technology providers, including those providing services to the FinTech sector, and for the supervised entities themselves. On the one hand, AI system providers will be able to test their solutions in a secure environment, and successful results of tests conducted in a regulatory sandbox will help increase the marketability of their services. On the other hand, entities in the financial sector will be able to make use of services offered by proven AI system providers, which can contribute to improving the security of the technology solutions they use.

Proposed sanctions

It should be borne in mind that non-compliance with the provisions of the AI Act may result in severe fines of up to €35 million or 7% of total annual worldwide turnover, whichever is higher (in the case of infringements related to prohibited AI systems). Fines may be imposed not only on providers of AI systems but also on users who deploy them in a non-compliant manner. For this reason, financial institutions should be careful when choosing AI systems and should use them in a manner that complies with the regulation.

Entry into force of the AI Act

There are a number of regulatory challenges to be met by financial market players who intend to use AI solutions. As a rule, the AI Act will apply 24 months after its entry into force, with the provisions on banned AI systems taking effect 6 months, and the rules on general-purpose AI models and penalties 12 months, after the regulation comes into force. Financial institutions should therefore not delay work on ensuring compliance with the new legal requirements.

Author: Hubert Łączkowski

Footnotes

[1] Draft Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

[2] https://www.linkedin.com/feed/update/urn:li:activity:7155091883872964608/.

[3] https://www.linkedin.com/posts/dr-laura-caroli-0a96a8a_ai-act-consolidated-version-activity-7155181240751374336-B3Ym/.