Navigating the legal landscape of AI: challenges and opportunities in Poland

  1. INTRODUCTION

Artificial intelligence (AI) represents one of the key areas of technological development worldwide. The European Union is playing a significant role in this process, striving to become a global leader in the field of AI. Poland, as an EU member, actively participates in these efforts by investing in research and development and supporting innovations centred around AI. However, along with the dynamic development of AI, numerous regulatory challenges have arisen at both the EU and national levels.

The purpose of this article is to discuss the current state of the AI business environment in Poland, to identify key legal challenges and to propose a systematic approach to AI management in organisations. We will analyse the most important legal areas affecting the offering and use of AI solutions, such as the AI Act, the GDPR, intellectual property rights, consumer rights and cybersecurity regulations. Additionally, we will present practical guidelines for building effective AI management systems tailored to the specific needs of organisations. We will also outline the principles of enforcing AI-related legal requirements by various supervisory authorities.

  2. DYNAMIC DEVELOPMENT OF AI IN POLAND

In recent years, Poland has become one of the key players on the European technological scene, especially in the field of artificial intelligence. The dynamic development of this field is driven by the growing interest of companies and institutions that recognise the potential of AI to improve various aspects of business operations.

Enhancing Work Efficiency

More and more Polish enterprises are implementing AI-based tools to increase efficiency and competitiveness. According to the Microsoft Work Trend Index 2024 report, in Poland, as many as 61% of employees use generative artificial intelligence daily.

Employees in various sectors use advanced analytical systems, process automation and chatbots to support customer service. In the financial sector, for example, AI is used for credit risk analysis, fraud detection and investment portfolio optimisation. In the HR sector, AI tools help with recruitment, employee sentiment analysis and personalised training. AI also aids the medical sector in disease diagnosis, treatment personalisation and the automated analysis of medical data, leading to faster and more precise patient care.

Creative departments, including marketing and sales teams, can use advanced marketing tools to increase the effectiveness of their activities, improve customer communication and achieve better business results. These tools support various aspects of marketing and sales work: they personalise customer communication; collect and analyse data from channels such as social media, websites and email campaigns; track campaign effectiveness in real time; identify the best communication channels; and optimise campaigns. They also enable the testing of different campaign versions to choose the most effective solutions, and support content creation (new campaigns, blog content, social media posts and promotional materials).

One of the latest and most promising trends in Poland is the use of artificial intelligence by programmers to write code. Tools like GitHub Copilot, based on advanced language models, have become invaluable aids in programmers’ daily work. Thanks to AI, programmers can generate code snippets from short function descriptions, automatically complete code, and even identify and fix errors. The integration of AI into the programming process significantly increases productivity, reduces the time needed to write and debug code, and allows developers to focus on more creative and complex tasks. Programmers also use AI-based code quality analysis tools to assess performance, compliance with standards and potential security threats. As a result, AI not only supports the software development process, but also improves the quality and reliability of the final products, which is crucial in Poland’s dynamically developing IT sector.
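By way of illustration, the snippet below is a minimal sketch of the kind of code such an assistant might generate from a one-line description. The task (IBAN checksum validation), the function name and the sample input are hypothetical examples, not output from any specific tool:

```python
# Hypothetical example: code an AI assistant might generate from the
# one-line description in the docstring (illustrative only).

def iban_checksum_valid(iban: str) -> bool:
    """Validate an IBAN's mod-97 checksum (ISO 13616)."""
    s = iban.replace(" ", "").upper()
    if len(s) < 5 or not s.isalnum():
        return False
    # Move the country code and check digits to the end, map letters to
    # numbers (A=10 ... Z=35), then apply the mod-97 test.
    rearranged = s[4:] + s[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1

print(iban_checksum_valid("GB82 WEST 1234 5698 7654 32"))  # True
```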

Delivering AI-Based Products

Offering AI-based products is also becoming increasingly popular in Poland. The EY study “How Polish Companies Are Implementing AI” reveals that 21% of medium and large manufacturing firms are already using AI in their operations. Additionally, 41% of these companies are currently in the process of integrating AI solutions. Only 9% of those surveyed do not have any plans to adopt AI within the next two years.

Technology companies, both startups and established enterprises, invest in developing intelligent solutions that can revolutionise the market. Examples include advanced image recognition systems used in security, medicine and retail, as well as natural language processing (NLP) systems used for customer service automation and social media opinion analysis. The Polish technology market is also witnessing a growing number of innovations in the Internet of Things (IoT) sector, where AI plays a key role in creating intelligent devices and management systems. For instance, smart sensors and energy management systems help optimise resource usage, which is crucial for both the environment and the economy.

The prospects for AI development in Poland are promising. The increasing number of government initiatives supporting innovation, the development of technological infrastructure, and collaboration between the public and private sectors create favourable conditions for the further growth of this technology. Poland has the potential to become one of the leaders in the field of AI in Europe, although this will require overcoming certain challenges. A key challenge to the development of artificial intelligence in Poland lies in the legal requirements, at both the EU and national levels, which span various areas of law. These regulations primarily concern data protection, responsibility for decisions made by AI systems, and the ethics and transparency of algorithms. Additionally, new EU initiatives, such as the Artificial Intelligence Act, introduce additional requirements for risk assessment, certification and oversight of AI systems. At the national level, labour law, civil law and sector-specific regulations, such as those for the financial and medical sectors, also affect the implementation and use of AI solutions. Compliance with these diverse legal requirements is essential for building trust in and social acceptance of AI technology, but it also poses a significant challenge for companies striving to innovate.

  3. LEGAL REGULATIONS

AI Act Regulations

In May 2024, the Council of the European Union approved the AI Act, which had been previously adopted by the European Parliament. An EU regulation is a legal act that applies directly in all member states of the European Union, meaning it does not require implementation into national legal systems, and the resulting obligations directly apply to the entities to which it is addressed. The aim of the AI Act is to protect fundamental rights, democracy, the rule of law and the environment from high-risk artificial intelligence, while simultaneously stimulating innovation and establishing Europe as a leader in the field of artificial intelligence.

The biggest challenge for Polish businesses and the public sector with the entry into force of the AI Act is understanding whether and to what extent the regulation applies to them. In simplified terms, the regulation states that its provisions apply to AI system providers and users, potentially meaning everyone. However, whether a given entity will be subject to obligations imposed by the AI Act, and if so, which ones, will depend on the specifics of the systems they use and, above all, the purposes for which they are used. Importantly, the regulation introduces a very broad and imprecise definition of an artificial intelligence system. Consequently, providers and users may struggle to assess whether the systems they create or use are indeed AI systems subject to the AI Act. Until more specific guidelines from the European Commission or the Polish regulator are issued, such assessments may pose difficulties and lack a uniform approach.

Increasingly, European regulations rely on a risk-based analysis mechanism, and the AI Act is no exception. On the one hand, this approach is highly desirable, as it allows the scope of obligations to be tailored to specific circumstances and avoids burdening entities with uniform obligations that may prove unnecessary in certain cases. On the other hand, the absence of detailed prescriptive requirements shifts the responsibility for determining how to comply with the law onto businesses and public entities. The AI Act classifies AI systems into four categories: unacceptable-risk, high-risk, limited-risk and minimal-risk systems.

The use of unacceptable risk systems is prohibited. This applies to all AI systems that are contrary to the values of the European Union, such as those that violate fundamental rights. For example, the AI Act prohibits AI systems that use subliminal or manipulative techniques aimed at distorting a person’s behaviour or those used to infer emotions in workplaces and educational institutions.

Certain AI systems may be classified as high-risk. These are subject to the most extensive requirements and obligations, and include biometric identification systems, systems used in critical infrastructure and systems intended for recruitment purposes, including those used to filter job applications or evaluate candidates (the systems listed in Annex III of the AI Act).

A system classified as high-risk will need to meet many additional requirements and obligations, in particular the obligation to have a risk management system in place. The data used to train high-risk AI systems will also need to meet certain quality criteria, and its management will need to adhere to the criteria specified in the AI Act. The regulation further sets out specific requirements for event logging, the transparency of high-risk AI systems and human oversight of their operation.

It is also worth noting that introducing a high-risk AI system will require a fundamental rights impact assessment (FRIA), which is similar to the data protection impact assessment (DPIA) required under the GDPR (Article 35 GDPR). The provisions of the AI Act explicitly refer to DPIAs and, in certain cases, allow an FRIA to be treated as a supplement to a DPIA.

In the case of limited-risk systems, the AI Act imposes only transparency obligations, while minimal-risk systems are not subject to any specific requirements. This means that AI systems that do not fall into the higher risk categories face no specific obligations under the AI Act.
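As a rough illustration of this four-tier structure, the sketch below models the categories and some commonly cited example systems. The classifications shown are simplified assumptions for illustration only; in practice, assigning a system to a tier requires case-by-case legal analysis against the AI Act (e.g. Annex III):

```python
# Simplified sketch of the AI Act's four-tier risk model discussed above.
# The example classifications are illustrative assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "extensive obligations (risk management, data quality, logging, human oversight)"
    LIMITED = "transparency obligations only"
    MINIMAL = "no specific obligations"

EXAMPLE_SYSTEMS = {
    "emotion inference in the workplace": RiskTier.UNACCEPTABLE,
    "CV-filtering recruitment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```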

Private and public entities must be very aware of the obligations resulting from the AI Act and be capable of identifying AI systems within their organisation, evaluating them, and implementing appropriate documentation and compliance management processes (more on this in the AI Governance section below).

This implies a need to develop internal competencies in this area or to engage specialised external providers. Issues related to artificial intelligence are highly technical, requiring interdisciplinary competencies and collaboration with experts from various fields. Adapting to the new regulations may impose additional burdens on entities subject to the AI Act, including the need to hire appropriate personnel, prepare documentation, and ensure compliance and responsiveness to change. Organisations should anticipate this before the AI Act fully comes into force. Preparing for compliance is a process that, depending on the complexity of the organisation and the number and diversity of its systems, can be time-consuming.

The entry into force of the AI Act also presents a challenge for lawyers. The regulation is not a typical legal act, so providing legal advice on artificial intelligence will require an understanding of the underlying technology and keeping abreast of the legal changes associated with the regulations.

Soft law, such as guidelines from European and national authorities, as well as case law, both of which are currently lacking, will be of great significance for understanding and correctly applying the AI Act.

Hence, it is crucial to seek the assistance of AI experts. It is important to remember that creating or using AI systems without adequate preparation, i.e. without ensuring compliance with the regulations, will expose entities to fines of up to EUR 35 million or 7% of total annual worldwide turnover in the preceding financial year, whichever is higher.
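The "whichever is higher" rule can be illustrated with a short worked example (the turnover figures below are purely hypothetical):

```python
# Worked example of the maximum-fine rule quoted above: up to EUR 35 million
# or 7% of total annual worldwide turnover, whichever is higher.
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

# Hypothetical turnovers:
print(max_fine_eur(200_000_000))    # 35000000: the EUR 35 million floor applies
print(max_fine_eur(1_000_000_000))  # 70000000.0: 7% of turnover is higher
```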

Although the AI Act is often presented as a comprehensive AI law, ensuring compliance with only this regulation will not guarantee full legal compliance in connection with creating or using artificial intelligence. Introducing and using AI systems can impact data protection, copyrights and consumer rights, and may require the implementation of cybersecurity requirements. When implementing, developing, or using AI systems, all potentially applicable regulations, including industry-specific regulations, must be considered. Although there are currently no national laws on AI use in the healthcare or financial sectors, restrictions may arise from existing regulations or guidelines from chambers or authorities (e.g. the Code of Medical Ethics or guidelines from the Polish Financial Supervision Authority).

GDPR Regulation

Artificial intelligence relies on the use of vast amounts of data, including personal data. Personal data can be processed during the model-building, training and testing phases, as well as during the actual use of AI. The AI Act does not provide for exemptions to the application of the GDPR, or vice versa – both regulations apply concurrently. Therefore, both providers of and entities using AI systems must comply with the GDPR if the use of AI involves processing personal data.

This primarily means ensuring compliance with GDPR principles such as lawfulness, transparency, data protection by design and by default, and data minimisation. First and foremost, a provider or user of an AI system must determine its role in data processing and assess whether it is a data controller or a data processor. A data controller must have a legal basis for processing data and fulfil numerous obligations under the GDPR. Before entrusting data to a data processor, the controller must conclude a data processing agreement and first ensure that the processor guarantees appropriate measures for protecting the data entrusted to it. Fulfilling information obligations can be challenging in processes that draw on a very broad range of information on the basis of which an AI system makes decisions and generates output. In the case of black-box systems, the data controller may be unable to explain how the system reached an automated decision. It is then necessary to explain clearly and understandably what sort of data the controller uses in the process, how the data may influence the decision, what criteria were applied and what potential effect the processing might have on the data subject.

Another issue is the quality and accuracy of data. Tools based on algorithms can only be used if they have been tested and the quality of the assessments, analyses or predictions obtained is sufficiently high. There are no specific guidelines or indicators in this regard, but the expected quality should depend on the purpose – relatively low prediction accuracy may be acceptable in online advertising, whereas medical diagnostic algorithms should meet more stringent quality criteria.

Ensuring compliance with the principle of data minimisation in AI systems can be difficult. This principle dictates that only the specific data necessary to achieve the purpose of processing should be used, whereas AI systems, especially for learning or training purposes, use very large amounts of data. One solution is to use anonymised data, which does not constitute personal data, or pseudonymised data, which reduces the risk, although pseudonymised data remains personal data under the GDPR.
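As a simple illustration of this approach, the sketch below pseudonymises direct identifiers with a keyed hash before records are used for training. The record fields and the secret key are hypothetical examples; as noted above, pseudonymised data must still be protected as personal data:

```python
# Minimal pseudonymisation sketch: replace direct identifiers with a keyed
# hash before records are used for training. Fields and key are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"store-separately-and-rotate"  # hypothetical key, kept apart from the data

def pseudonymise(record: dict, direct_identifiers: set) -> dict:
    out = {}
    for field, value in record.items():
        if field in direct_identifiers:
            # A keyed hash lets records be linked consistently without
            # revealing the underlying identity in the training set.
            out[field] = hmac.new(SECRET_KEY, str(value).encode(),
                                  hashlib.sha256).hexdigest()[:16]
        else:
            out[field] = value
    return out

print(pseudonymise({"email": "jan.kowalski@example.pl", "basket_value": 120.0},
                   direct_identifiers={"email"}))
```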

Deleting personal data from AI systems can also be a challenge. On the one hand, it may not be technically possible in some cases, and on the other hand, deleting data may reduce the knowledge about the individual that the algorithm can analyse, potentially negatively impacting the quality of the analysis performed by the algorithm.

When performing a risk analysis as required by the GDPR, additional risks specific to AI systems must be considered, particularly those related to discrimination, potential AI hallucinations or new types of attacks, such as the deliberate and massive feeding of erroneous data into an AI system to alter its functioning. In principle, a Data Protection Impact Assessment (DPIA) should be conducted for AI systems.

It is also important to remember that while analyses conducted under the GDPR and the AI Act both take a risk-based approach, the subject of the analyses differs. Under the GDPR, the risk of violating the rights and freedoms of individuals is assessed in terms of the loss of data confidentiality, availability or integrity. Under the AI Act, by contrast, a range of risks is assessed, one of which may be the risk of processing personal data contrary to the GDPR (e.g. due to AI hallucinations). An assessment conducted from the perspective of the AI Act also takes into account the impact on other fundamental rights, and even on the environment or other significant areas. The GDPR analysis thus focuses on data protection and forms one element of the broader AI risk assessment.

Therefore, the results of a risk analysis based on the AI Act may lead to identifying different threats, the possible effects of such threats and, consequently, different safeguards than those identified in a GDPR risk analysis. Both processes (i.e. analyses conducted under the GDPR and the AI Act) should, however, be consistent – the risk analysis methodology under the GDPR can be useful in connection with conducting an AI risk analysis.

Intellectual Property Law

A number of challenges related to artificial intelligence also arise under Polish copyright law. Primarily, Polish copyright law assumes that a work is a manifestation of human activity. Consequently, copyright protection cannot be granted to works created by AI (e.g. images, music, texts). On the other hand, a work is considered original if the author was able to express their creative abilities in its creation by making free and creative choices (CJEU judgment of 4.10.2011 in joined cases C-403/08 and C-429/08 Football Association Premier League and others). Thus, if the user’s prompt is very complex, and the human decides whether the AI should generate further variants of the work and directs the algorithm’s output through subsequent prompts and corrections, the question arises whether copyright should nevertheless be granted to the user on account of such advanced and creative instructions given to the AI tool. This assessment is complicated by the fact that AI generates output based on existing works, while copyright law protects works characterised by uniqueness and originality. These issues remain unresolved, as Polish law does not provide clear regulations in this area.

Legal uncertainties also arise in the area of using materials to build and train AI models. Training AI involves the use of vast amounts of data, much of which is protected by copyright. Utilising works in the machine learning or model training process requires a legal basis, usually in the form of consent from the creator or the copyright holder, or reliance on an exception to this rule. Given the large volume of materials used, obtaining consent or entering into licensing agreements is challenging, and the Polish statutory exceptions permitting the use of works without the rights holder’s consent are drafted in a manner that leaves the permitted scope of use unclear. Similar issues arise in other countries, often leading to copyright infringements and legal disputes. The problem is thus not only national but global, and should be addressed through regulations concerning the licensing of data used to train AI models so as to avoid copyright infringements.

Intellectual property rights can also be violated in the course of using AI systems, for instance when third-party works are used in the system without the user holding the relevant rights. An organisation using AI systems should therefore establish guidelines for their permissible use by employees. It is particularly important to remember that, in the case of widely available AI systems, one user’s input may be used to create output for another user, increasing the risk of infringing the intellectual property rights of others or even compromising the confidentiality of information. The best solution is to use an AI system dedicated to the organisation, as this makes it easier to verify and ensure that only content for which the organisation has acquired rights or licences is used in the system.

Consumer Law

The widespread use of AI-based solutions, coupled with a lack of transparency in services directed at consumers, can lead to the infringement of their interests. The failure to transparently inform consumers about how AI systems evaluate and affect their situation is a significant issue. As a consequence, consumers do not make informed decisions, particularly purchasing decisions. This is especially evident in AI-based personalisation. Advanced analytical tools can assess individual consumer interests and detect potential weaknesses, such as poor emotional or physiological states. Based on this knowledge, it is possible to display ads that influence consumers and to adjust prices. Individuals in difficult personal or professional situations may be particularly susceptible to such practices. This issue was recognised by the Office of Competition and Consumer Protection in Poland, which published a scientific paper on AI-based personalisation and its impact on consumer rights (A. Nowak, Personalizacja oparta na sztucznej inteligencji a ochrona praw konsumentów. Zagadnienia wybrane, UOKiK, Warszawa 2022). The authority’s approach in this regard is crucial, since there are no clear regulations governing these matters and the risk of violating collective consumer interests or engaging in unfair market practices is high.

AI-based personalisation often relies on collecting large amounts of personal data, including demographic data, shopping preferences, browsing history and other online behaviours. This can lead to violations of consumer privacy, as data is often collected without consumers’ knowledge or consent. Additionally, consumers may be influenced to make purchases they would not normally make or to use products and services that are not suitable for them. Moreover, AI algorithms may exhibit biases derived from training data containing prejudices, which can lead to discrimination against various consumer groups. As a result, certain social groups may be treated unfairly, for instance through higher prices, worse offers or inappropriate recommendations, in violation of the principle of the equal treatment of consumers.

Consumer protection therefore requires appropriate regulations, transparency in business activities, and educating consumers about how their data is used and about their rights.

Cybersecurity

As with any new technology, the development of artificial intelligence brings new threats, including those related to cybersecurity. AI systems are vulnerable to specific cyber threats: attacks on them can lead to the manipulation of input data and results, which can have serious consequences for individuals, in extreme cases even affecting their health or lives.

To ensure the security of AI systems, it is necessary to establish appropriate standards and certification procedures. Entities that create and use AI systems should be prepared for potential threats and able to counteract them. Security measures should be tailored to the threats and to the attractiveness of the resources used in the system. Regular monitoring and audits, along with the updating of security procedures in response to new threats, should form part of the security regime for AI systems.

When considering the security of AI systems, several regulations should be taken into account. If personal data is processed within a system, this processing must be covered by a DPIA. AI systems forming part of critical infrastructure covered by the NIS2 directive will need to meet the directive’s general cybersecurity requirements. The AI Act also imposes security obligations on AI systems depending on the level of risk they may pose.

Artificial intelligence is currently so widely used and developing at such a pace that it could soon become present in every area of social and professional life. Therefore, it is crucial for organisations to appropriately prepare to ensure compliance with legal requirements in this area if they already use or plan to use AI solutions in the future.

Action in this area is not made any easier by the fact that the NIS2 directive has not yet been implemented into Polish law. The provisions of the previous National Cybersecurity System Act remain in force, and the final shape of the new regulation is unknown. This situation makes it difficult for the entities that will be subject to the new requirements to prepare adequately, and time is running out (the deadline for implementing the directive is 17 October 2024).

  4. RESPONSE TO LEGAL CHALLENGES – AI GOVERNANCE

Recent data from the Work Trend Index shows that 79% of AI users in Poland are adopting artificial intelligence independently, which means they are missing out on the benefits of strategically leveraging AI on a larger scale. More importantly, this trend exposes entire organisations to the risk of disclosing private company data and lowers cybersecurity standards. While 69% of Polish business leaders believe their organisation must implement AI to remain competitive, 49% fear that their leadership lacks a clear plan and vision for its implementation.

Given the numerous legal challenges posed by the development of artificial intelligence (AI) technology, organisations must implement systemic AI management solutions (AI Governance). AI Governance is a set of principles, procedures and governance structures aimed at ensuring the responsible and legally compliant use of AI within an organisation. The main objectives of AI Governance are to ensure compliance with current legal regulations, minimise risks associated with AI usage, protect privacy and personal data, and promote the ethical and transparent use of AI technology. Effective AI management allows organisations to seamlessly integrate AI-based solutions into their operations while ensuring compliance with regulations and standards. AI Governance aids in risk identification and management, and in creating an environment where technological innovations are used safely and efficiently. In practice, this means organisations can not only meet regulatory requirements, but also build trust among customers and business partners, improve operational efficiency, and make more informed and responsible decisions in relation to the implementation and use of AI.

There is no one-size-fits-all approach; each solution must be tailored to the specific needs and context of an organisation. Below is a description of the key elements of building an AI management system using specific examples and indicating best practices.

Identifying Organisational Needs

The first step in building an AI Governance system in an organisation involves defining the key goals and activities aimed at ensuring compliance with legal and regulatory requirements and effectively implementing AI solutions. This process begins with identifying the organisation’s needs and expectations, which allows for a precise understanding of the specific requirements and challenges it faces. Next, an assessment is conducted to determine whether, and what, further legal and regulatory actions related to AI Governance should be taken to ensure compliance with current regulations.

A crucial element of this stage is engaging key personnel from various departments to discuss planned and existing AI solutions. These collaborative sessions facilitate the exchange of knowledge and experiences while also helping to identify areas requiring special attention from the legal and regulatory perspectives. Based on the information gathered, recommendations for further actions and target solutions are developed, forming the foundation for the ongoing development and implementation of the AI Governance strategy within the organisation.

Documentation – Formal Compliance

The second step in building an AI Governance system in an organisation focuses on the development and implementation of documentation that precisely addresses all of the issues identified in the previous step, ensuring formal compliance with the legal regulations, and evaluating the completeness and adequacy of AI solutions.

A key element of this stage is defining the scope of internal regulations and developing a governance and oversight model that includes management structures, roles and responsibilities, information flow, escalation paths, and monitoring and control mechanisms. As part of this process, the organisation creates a comprehensive set of AI Governance documentation and tools, including, among others:

  • an AI policy;
  • an "AI by design" / "AI sandbox" procedure;
  • AI usage guidelines for employees;
  • a procedure for selecting AI solution providers;
  • a data handling procedure;
  • an AI transparency policy;
  • standard clauses on AI use for contracts with suppliers and business partners; and
  • a list of permitted AI tools.

This documentation ensures that all identified issues are addressed and that AI solutions are implemented effectively and in compliance with regulatory standards.
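For instance, a list of permitted AI tools can be kept in machine-readable form so that other procedures (procurement checks, IT controls) can query it. The schema, tool names and approved uses below are hypothetical illustrations, not recommendations:

```python
# Illustrative sketch of a machine-readable register of permitted AI tools.
# Tool names, approved uses and the schema are hypothetical examples.
PERMITTED_AI_TOOLS = [
    {"name": "internal-code-assistant", "approved_uses": {"code completion"},
     "may_receive_personal_data": False, "owner": "IT Security"},
    {"name": "marketing-copy-generator", "approved_uses": {"draft campaign copy"},
     "may_receive_personal_data": False, "owner": "Marketing"},
]

def is_permitted(tool_name: str, use: str) -> bool:
    """Check whether a given use of a given tool is on the permitted list."""
    return any(t["name"] == tool_name and use in t["approved_uses"]
               for t in PERMITTED_AI_TOOLS)

print(is_permitted("internal-code-assistant", "code completion"))  # True
print(is_permitted("internal-code-assistant", "processing CVs"))   # False
```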

Additionally, it is necessary to supplement and amend existing documentation in other areas to ensure a comprehensive and legally compliant framework for AI implementation and management. This includes, but is not limited to:

  • Risk Analyses: A thorough review and update of risk analyses to account for potential AI-specific risks should be conducted to ensure that all scenarios are adequately covered and mitigated. IT and cybersecurity procedures must also be revised to incorporate AI technologies, address new vulnerabilities and ensure that robust protection measures are in place.
  • HR Documents: HR documents should be updated to reflect the roles and responsibilities related to AI governance, including training requirements and ethical guidelines for employees working with AI systems.
  • Procurement: Existing procedures should be adjusted to include criteria for selecting AI solutions and vendors, and to ensure that all purchases align with the organisation’s AI policy and regulatory standards.
  • GDPR Compliance: GDPR documents should be thoroughly reviewed and updated to ensure compliance with data protection regulations, particularly in the context of the data processing capabilities of AI. This involves ensuring that all data handling practices meet the stringent requirements of the GDPR and implementing appropriate data anonymisation techniques.
  • Contracts with Contractors and Employees: Contracts with contractors and employees should be amended to include specific clauses on the use of AI, detailing the obligations and responsibilities of all parties in relation to AI governance. This may entail clauses on data privacy, intellectual property rights and the ethical use of AI technologies.

The aim of these activities is to create a coherent and legally compliant environment for implementing and managing AI solutions. By ensuring that all relevant documents and procedures are aligned with applicable standards and regulations, the organisation can effectively mitigate risks, protect sensitive data, and foster a culture of ethical and responsible AI use. This comprehensive approach not only enhances compliance but also builds trust among interested parties, including customers, employees and regulatory bodies, thereby supporting the sustainable growth and innovation of AI within the organisation.

Operational Implementation

The third step in building an AI Governance system in an organisation involves operational implementation aimed at ensuring the full compliance and effective functioning of AI solutions in the organisation. Operational implementation aims to ensure the principles set forth in the adopted documentation are effectively incorporated into all of the relevant procedures. This includes, among others:

  • Assigning Appropriate Roles and Responsibilities: determining who in the organisation is responsible for various aspects of AI Governance, including data management, algorithm oversight, and compliance with legal regulations.
  • Developing and Implementing Processes and Tools: creating detailed operational procedures and tools to support AI management, such as performance monitoring systems, problem escalation mechanisms, and platforms for analysis and reporting.
  • Employee Training: conducting training programmes to provide employees with the necessary knowledge and skills to effectively use AI solutions in line with adopted standards and regulations.

During this stage, it is crucial to ensure that all aspects of AI Governance are effectively integrated into the organisation’s daily operations. This requires not only creating appropriate governance structures, but also implementing monitoring and control mechanisms that enable ongoing tracking and evaluation of AI system performance.

Operational implementation is essential to ensure that AI Governance principles are not just theoretical but are actively applied in the organisation’s day-to-day activities. Through effective operational implementation, an organisation can not only meet regulatory requirements but also maximise the benefits of using artificial intelligence.

Risks of Not Implementing AI Governance

A lack of AI Governance carries a range of risks that can have serious consequences for an organisation and its stakeholders. First and foremost, organisations may encounter issues related to non-compliance with legal regulations, which can result in financial, reputational and legal consequences. Improper data management, a lack of transparency in algorithm operations and failures to adhere to privacy protection principles are just some of the aspects that can lead to significant legal violations.

Another critical risk is the possibility of errors and irregularities in AI system operations, which can lead to incorrect business decisions. Without proper oversight and control, algorithms may operate unpredictably, potentially causing financial losses, reputational damage, and even threats to data and system security.

A lack of AI Governance can also result in ethical and trust issues. Organisations that do not adhere to ethical standards in AI usage may face criticism from customers, business partners and the public. Unmonitored AI use can lead to discrimination, injustices and other negative social impacts.

In summary, a lack of AI Governance carries with it significant legal, operational, ethical and reputational risks that can severely impact an organisation’s functioning and growth. Therefore, appropriate AI management is essential to ensure its safe and effective use.

  5. REGULATOR

The AI Act provides for the establishment of the European Artificial Intelligence Board at the European level and the creation of national bodies in each member state to ensure the effective enforcement of the new regulations at the national level. In Poland, the Ministry of Digital Affairs is leading the work on implementing the AI Act into Polish law. In April 2024, it announced pre-consultations aimed at AI users to gather opinions on whether the relevant functions should be performed by existing public administration bodies or whether entirely new institutions should be created.

From the beginning of the discussions in Poland, various bodies have been vying to be entrusted with competencies in the area of artificial intelligence. Among the bodies considered were the Office for Personal Data Protection, the Office of Competition and Consumer Protection and the Polish Financial Supervision Authority.

Experience with these existing bodies has shown that the choice of the authority responsible for AI in Poland will be very significant. Existing bodies have varied greatly in their approaches to communicating with entrepreneurs and citizens, in the level of detail of their published guidelines and in the educational activities they undertake. Considering that both the AI Act and other national regulations (briefly discussed in this publication) require clarification and interpretation, it is necessary to designate a body that will take a very active approach. The regulator’s practice in a given area plays a crucial role in the application and effective enforcement of regulations and in building awareness among the entities they target.

The consultations conducted by the Ministry of Digital Affairs indicated that most respondents supported the creation of a new body to supervise AI matters and act as the notifying authority. Advocates of establishing a new entity believe that the existing administrative bodies in Poland lack the appropriate competencies and resources to fulfil such a role. Furthermore, designating a new institution is expected to help avoid potential competency disputes and conflicts over task priorities, as existing bodies might tend to prioritise their current responsibilities.

As a result, the Ministry of Digital Affairs has decided to pursue this approach. Therefore, the AI Supervisory Commission will be established in Poland, which is set to begin functioning at the beginning of 2026. The Ministry of Digital Affairs agreed with the feedback it received during the pre-consultation phase, emphasising that, currently, no office in Poland has been delegated tasks and statutory competencies related to artificial intelligence. According to the Ministry of Digital Affairs, creating a new body to oversee AI matters will be cheaper and more effective than assigning new tasks to offices that deal with entirely different areas. The Polish approach is not isolated – Italy, as the first EU member state to do so, has already proposed a law establishing a new AI authority. The Ministry of Digital Affairs plans to prepare a Polish AI law in 2025.

The implementation of the AI Act into Polish law and the launch of the new AI supervisory authority in Poland will require significant effort, particularly when it comes to selecting appropriate staff with an understanding of the relevant technology, laws, and social and business issues. The administration responsible for implementing and overseeing AI regulations should prioritise informational and educational activities directed at employers, employees, technology companies, the scientific community and the general public. These groups are responsible for the safe and ethical development, implementation and use of artificial intelligence. They should therefore be aware of the risks associated with its creation and use. Knowledge of legal regulations and threats can be crucial to avoiding mistakes related to the irresponsible use of AI technology.

At the national level, it will also be necessary to introduce regulations on so-called regulatory sandboxes for AI systems, to facilitate the development and testing of innovations before they are brought to market or put into use.