-
What are your country's legal definitions of “artificial intelligence”?
There is no legal definition of artificial intelligence in national legislation or international conventions, so there is no legal definition of artificial intelligence in Poland either. Attempts are being made to take a descriptive approach: in the Policy for the development of artificial intelligence in Poland from 2020 (AI Policy), the authors refer to the following definition:
Artificial intelligence (AI) is tentatively defined as a field of knowledge encompassing, among others, neural networks, robotics and the creation of models of intelligent behaviours, as well as computer programs that simulate these behaviours, including machine learning, deep learning and reinforcement learning.
The main definition that may be relied upon stems from the AI Act: “a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic – and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”. The OECD, in turn, defines AI as “an automated system that, for a given set of objectives defined by humans, is capable of making predictions, formulating recommendations, or making decisions that impact real or virtual environments. AI systems are designed to operate at various levels of autonomy.”
-
Has your country developed a national strategy for artificial intelligence?
By resolution of 28 December 2020, the Council of Ministers established the “Policy for the development of artificial intelligence in Poland from 2020”, also known as the “AI Policy”. The executor of the AI Policy is the minister responsible for computerization. The policy describes the actions that Poland should implement and the goals that it should achieve in the short term (until 2023), medium term (until 2027) and long term (after 2027), aimed at the development of Polish society, the Polish economy and Polish science in the area of artificial intelligence.
The “Policy for the development of artificial intelligence in Poland” is a document that supports and complements others, including the Responsible Development Strategy, the European Commission's Coordinated Plan for the development of artificial intelligence in the European Union, as well as the work of international organizations, including the OECD.
The document takes into account not only the international, legal or technical and organizational dimension of the use of artificial intelligence, but also the ethical dimension. The “Policy for the Development of Artificial Intelligence in Poland” implements two national and five international strategic documents and is a Polish response to EU programming documents.
In addition, the Ministry of Digital Affairs undertakes many initiatives in the field of developing the principles of using artificial intelligence, including focusing on individual areas such as education.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
On 28 December 2020, the “Policy for the development of artificial intelligence in Poland from 2020”, also known as the “AI Policy”, was established. It is a strategic document that specifies the actions that Poland should implement and the goals that it should achieve in the short term (by 2023), medium term (by 2027) and long term (after 2027), aimed at the development of Polish society, the Polish economy and Polish science in the field of artificial intelligence.
Terms of use for artificial intelligence systems may also apply; these define the rules for using a given system and the responsibility for generated content. However, they do not strictly regulate the legal aspects.
In Poland, claims related to the violation of rights by artificial intelligence may be based on different national laws, depending on the specific case and type of violation, in particular:
- Act on the Protection of Personal Data: In the event of a breach of privacy or personal data protection by systems based on artificial intelligence, reference may be made to the provisions of this Act, which regulates the collection, processing and protection of personal data.
- Civil Code: The provisions of the Civil Code regarding civil liability may be applied in the event of damage caused by artificial intelligence. It is possible to rely on the provisions on tort or contractual liability, in particular in the event of a violation of personal rights or liability for a dangerous product.
- Copyright and Related Rights Act: If artificial intelligence infringes copyright by generating copyrighted content, reference may be made to the provisions of that Act to protect the author's works.
- Act on Combating Unfair Competition: these provisions are important in protecting trade secrets.
- Consumer protection regulations: In the event of a violation of consumer rights, the provisions governing relationships between consumers and service providers, in particular information obligations, may be relied upon.
- Act on Competition and Consumer Protection: If artificial intelligence acts in a way that violates competition law or the collective interests of consumers, reference may be made to the provisions of this Act.
- Database Protection Act: these provisions may also become increasingly important over time.
Currently, however, Poland is focusing on a legislative draft regarding the implementation of the Artificial Intelligence Act and the selection of national authorities to effectively enforce the new regulations at the national level.
How the regulations will work in practice will be crucial. Both the European Commission and the Member States have an important role to play here.
The implementation of the Artificial Intelligence Act requires establishing new institutions, or selecting from among existing ones, to supervise and regulate the AI market. The Ministry of Digital Affairs points out that at the national level it will also be necessary to introduce regulations regarding so-called regulatory sandboxes for artificial intelligence systems, so as to facilitate the development and testing of innovations before they are placed on the market or put into use.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
A defective artificial intelligence system contained in a product, causing a product defect, may be subject to the provisions on the liability of entrepreneurs for so-called “dangerous products”, introduced into the Civil Code by the Act on the protection of certain consumer rights and on liability for damage caused by a dangerous product. These provisions implement into the Polish legal order Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, which should ensure the same level of protection across the EU.
Previously, liability for a defective product was based on the provisions on contractual and tort liability. In addition, the consumer could assert rights on the basis of the provisions on warranty and guarantee. The amendments were not intended to exclude these liability regimes, but only to introduce an additional liability regime based on the principle of risk.
In accordance with the provisions of the Civil Code, liability for damage caused by a dangerous product may be borne by the manufacturer, the quasi-manufacturer, the importer and the seller. This liability is joint and several, which means that the injured party may assert their rights against all of the above-mentioned entities or against only one of them. The choice in this respect always belongs to the injured party.
National standards relating to cybersecurity, including the GDPR and the provisions on the collective interests of consumers, are also important. The AI Act regulation aims to regulate the applications of artificial intelligence in order to ensure security and protection of the rights of entities. The AI Act defines, among others, the requirements for liability for damage caused by artificial intelligence systems.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Currently, in the absence of an alternative, the legal basis for claiming compensation for damage caused by the operation of systems based on artificial intelligence are the general provisions of individual Member States regarding liability for damage.
The Polish Civil Code contains a rich catalogue of liability-triggering events other than direct human actions. As an example, we can point to Article 435 of the Civil Code, concerning the liability of the operator of an enterprise set in motion by the forces of nature. These provisions could accordingly constitute a basis for tort liability for events caused by artificial intelligence, which may lead to a violation of the goods and rights of other entities and, in turn, to damage suffered by those entities.
Due to the various activities of artificial intelligence, the damage may be material or non-material, and it may be caused by a human using artificial intelligence or without human participation. The biggest problem arises when autonomous actions of artificial intelligence have caused the damage. Regardless, the law provides certain foundations that can be used in the context of AI – for example, the institutions of liability for a vehicle and for an enterprise set in motion by the forces of nature. Liability for animals is another interesting concept that may be applied by analogy (cf. Articles 431 and 435 of the Civil Code).
In addition, in the light of intellectual property law, depending on the type of infringement of another's rights, other provisions may also apply, for example the criminal liability provisions of the Act on Copyright and Related Rights, the Industrial Property Law and the Criminal Code.
As far as EU legislation is concerned, the current regulations do not address liability for damage caused by artificial intelligence systems. This situation may be significantly affected by the European Commission's proposal of 28 September 2022 for a new EU directive on liability for artificial intelligence – the Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AILD).
It is also worth noting that Member States may adopt or maintain national provisions that are more favourable to claimants seeking non-contractual civil compensation for damage caused by an artificial intelligence system, provided that such provisions are compatible with EU law.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Liability for damage caused by AI rests with a natural or legal person – above all the operator and the manufacturer, because to a certain extent they are able to control the operation of artificial intelligence and the risks associated with it.
In Poland, liability for damage caused by an artificial intelligence system can be divided between the various parties involved in its creation, implementation and use, with the position of the victim of the damage also taken into account. Here are the general rules for the division of responsibilities:
Developer: The AI system developer may be liable for damage resulting from faulty design, programming errors or negligence in the system development process. The Developer is obliged to ensure that the system is safe and in compliance with applicable regulations.
Implementer: The entity implementing the AI system, i.e. the company or organization using the system, may be liable for damage if it has incorrectly implemented the system or failed to ensure proper supervision over its operation.
User: The user of the AI system may be liable for damage if he/she improperly uses the system or does not follow the guidelines for its use.
Victim: The person affected by the AI system may be entitled to compensation for the damage suffered. The victim may pursue compensation claims against the developer, implementer, user or others responsible for the damage.
The division of liability for damage caused by the AI system in Poland may be determined on the basis of applicable civil law, consumer law and other regulations regarding liability for damage.
The AI Act regulation aims to regulate the applications of artificial intelligence in order to ensure security and protection of the rights of entities. The AI Act defines, among others, the requirements for liability for damage caused by artificial intelligence systems. According to the AI Act, operators of artificial intelligence systems may be liable for damage caused by their systems, including material, non-material damage and death. Operators may be required to provide liability insurance or other forms of financial security in the event of damage.
In the context of liability for damage caused by AI systems, the AI Act imposes obligations on operators to ensure that their systems are safe, compliant with the law and do not cause damage to users or other entities. Operators must also ensure transparency and adequate documentation of processes related to artificial intelligence systems.
The division of responsibilities between developers, implementers, users and victims of damage, in accordance with the provisions of the AI Act, may be dependent on the specific circumstances and applications of artificial intelligence systems.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Currently, in the absence of an alternative, the legal basis for claiming compensation for damage caused by the operation of systems based on artificial intelligence is the general national provisions on liability for damage. These provisions make obtaining compensation conditional on the claimant proving a number of premises, which usually include: a culpable act or omission of the person who caused the damage, the occurrence of damage in a certain amount, and a causal link between the perpetrator's conduct and the damage suffered.
The problem arises when artificial intelligence activity, characterized by autonomy and complexity, appears within the cause-and-effect chain between human activity and the resulting damage. The complexity of decision-making by AI-based systems means that it may be a challenge for the person seeking compensation to prove that the specific construction and operation of the artificial intelligence, or specific input data entered into the system by a potential perpetrator, actually contributed to the system's output that resulted in the damage. In many cases this can be a real obstacle to the injured party obtaining compensation.
Nevertheless, Poland has provisions allowing a party to apply to a civil court for the preservation of evidence, or for the disclosure of necessary information and evidence held by the entities responsible for building or implementing an AI system, or by its operators. This may be of material importance for assessing the breach and its scope, and for taking evidence.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
There are specialized insurance products for the technology industry. Such policies may include protection against risks related to programming errors, data breaches or other technical problems related to AI. However, there is currently probably no offering on the market that directly addresses the risks associated with designing, implementing or using AI.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
The inventor indicated in the patent application filed with the Patent Office of the Republic of Poland may only be a natural person (human).
Polish Industrial Property Law does not state this requirement expressly. However, its provisions should be read in the context of other applicable regulations, including the implementing regulations on patent applications and the Civil Code, as well as the position of legal doctrine. As a result, as in most legal systems in the world, it is assumed that artificial intelligence, being devoid of legal personality, cannot be considered the inventor for the purposes of a patent application.
To the best of our knowledge, both the Patent Office of the Republic of Poland and national courts have not so far considered cases concerning applications for inventions in which AI has been indicated as the inventor. At the moment, therefore, we do not have national decisions in this regard.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Pursuant to the Polish Copyright and Related Rights Act in force, the subject of copyright protection is a work that constitutes any manifestation of creative activity of an individual nature, established in any form, regardless of its value, purpose and manner of expression. The protection resulting from the Act is granted to the creator. It is presumed that the author is a person whose name has been shown in this capacity on copies of the work or whose authorship has been made public in any other way in connection with the dissemination of the work.
In one of its judgments, the Court of Justice of the European Union pointed out that “in order for an object to be considered original, it is necessary and at the same time sufficient that it reflects the author’s personality, manifested in his free and creative choices (…). On the other hand, where the performance of an object is conditioned by technical considerations, rules or constraints that leave no room for creative freedom, that object cannot be seen as displaying the originality necessary to be considered a work” (judgment of 12 September 2019 in case C-683/17).
It should therefore be assumed that artificial intelligence, which lacks a personality of its own capable of making free and creative choices, cannot create works within the meaning of copyright law, and the images it generates do not benefit from protection under these regulations. A separate topic is the use of a generated image as a basis for further human processing: it can be assumed that with an appropriate level of creative modification, the person making it becomes the author of a new work.
One view is in favour of treating AI as a tool in the hands of the creator. With this assumption, the introduction of specific, precise commands defining parameters (“prompts”) is enough to recognize a person using AI as the creator.
To the best of our knowledge, Polish courts have not yet ruled on this issue, so these considerations have not been unequivocally confirmed in case law.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
When treating artificial intelligence as a tool for building a company’s competitive advantage, it is worth taking time over its implementation to minimize threats in both the IT and legal areas. Systems using artificial intelligence mechanisms support a wide spectrum of organizations’ activities, such as marketing, HR, R&D, product creation and implementation, and customer service. The use of AI in such sensitive areas as strategic business decisions, including financial ones, is also growing rapidly.
The implementation of AI tools should be based on considering numerous aspects, which include, among others, goals, applications, compliance with applicable regulations, cybersecurity, human resources or financial resources. It is therefore important that the implementation in the organization is preceded by the process of collecting information on the requirements of potential users, as well as the introduction of internal regulations and guidelines. In addition, the systems used should be monitored for bias, data security should be taken care of and transparent practices should be created, not forgetting to integrate internal processes and guidelines, e.g. with innovation regulations or work regulations.
An example of a problem to be aware of is the “black box” phenomenon, i.e. a system whose operation is incomprehensible and cannot be clearly explained by a human being.
It should also be remembered that systems are only as good as the data on which they are based. If the data are incorrect or affected by the aforementioned bias, decisions made on their basis may be wrong. Consequently, the decision to automate some decision-making processes should provide for human supervision and control. Even a well-prepared query (“prompt”) to artificial intelligence will not replace years of business experience, detailed knowledge of the company or insight into customers’ needs.
The enormous effectiveness of AI must not overshadow the considerable risk of lack of control over content and the so-called “hallucinations” of artificial intelligence. The latter are referred to when an algorithm, not knowing or being sure of the answer to a question, generates false information. Enterprises should take care of the implementation of quality control mechanisms and supervision over the content created by the algorithm.
In order to remain credible and maintain customer trust, companies should ensure respect for privacy by keeping personalization within an ethical framework. Here too, it is crucial to use collected customer data in a manner consistent with applicable regulations. In the context of ethics, situations in which artificial intelligence is used in contacts with customers or employees should not be forgotten: it is an absolute necessity to properly inform them that they are interacting with a chatbot and not a human.
-
What privacy issues arise from the use of artificial intelligence?
The use of artificial intelligence is associated with many threats and problems in the field of privacy protection, both at the stage of training and actual use.
The AI Act, now on the final stretch before its entry into force, as well as previously adopted regulations such as the Digital Services Act and the Digital Markets Act, refer to the existing provisions of the General Data Protection Regulation in order to address privacy issues even more comprehensively.
AI collects, stores and processes huge amounts of data, which in itself creates a risk of privacy violations. It may be doubtful whether the data are collected and used with the knowledge and consent of the persons concerned, or on another legally justified basis. Huge computing power can also lead to situations where data from various sources are combined, resulting in re-identification.
Serious privacy violations can be caused by the use of AI for continuous monitoring in a specific scope, e.g. using biometric data or facial recognition. Enforcement of legal standards will be important in this respect.
AI can perpetuate and replicate existing biases, especially when the data used to train it is biased. This raises the risk of discrimination, which, when combined with the problem of lack of transparency and explainability, means that the decisions taken not only duplicate prejudices, but are also often difficult or even impossible to justify. It is crucial to minimise risk by limiting automated decision-making in certain areas and incorporating human oversight where we are legally and ethically obliged to do so.
It is common to use AI to profile users and create recommendation systems. This method of use raises wide legal and ethical doubts, and in this context it is particularly important to protect particularly vulnerable groups, e.g. minors.
Finally, the challenge is uncontrolled data disclosure and cybersecurity issues.
It is therefore important to ensure that AI is developed and implemented in accordance with applicable regulations and ethical standards, in a transparent manner and in accordance with the principle of “privacy-by-design”.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
Web scraping (obtaining and extracting information from websites and storing it in an orderly form) as such has not been separately regulated by the Polish or EU legislators. As a rule, it is not illegal in Poland, provided that it does not violate generally applicable regulations such as those on the protection of personal data or copyright. However, the principles and guidelines that have been developed must be followed to minimise the legal and ethical risks in this regard.
In the context of privacy, it is important to determine whether the data obtained from websites constitute personal data, and in particular sensitive data. If so, the obligations arising from data protection law (including the GDPR) must be met, including the information obligation and the exercise of data subjects’ rights. Among the issues under consideration is the legal basis on which such data may be processed.
The use of data obtained through web scraping that constitute a work within the meaning of copyright law, without the consent of the authorized entity (e.g. a licence), may give rise to an allegation of infringement – unless one of the permitted-use exceptions, such as personal use or the right of quotation, applies.
Also significant for this issue will be the Polish implementation of the Directive on Copyright and Related Rights in the Digital Single Market, especially as regards text and data mining. The final wording of the implementing act is still being developed; in its current draft, it introduces a new form of permitted use.
The regulations on the protection of databases, regulated by a separate legal act in Poland, should also be taken into account.
From the perspective of the provisions on unfair competition, it has not so far been confirmed that web scraping constitutes a prohibited act, i.e. an act contrary to the law or good practice that infringes the interests of other entities. This does not rule out that, for example, scraping data from websites that explicitly prohibit it in their terms, in a way that hinders traffic on the website (burdening servers), or in a manner leading to the acquisition of trade secrets may be considered to violate the interests of their owners.
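One technical convention commonly cited among the principles developed for responsible scraping is honouring a site's robots.txt directives. The sketch below is purely illustrative (it is not legal advice, and complying with robots.txt is not by itself a legal safe harbour under Polish or EU law); it shows how a crawler can check those directives using only Python's standard library. The example rules and the `MyScraper` agent name are hypothetical.

```python
# Illustrative sketch: check a site's robots.txt before scraping a path.
# Respecting robots.txt helps demonstrate good faith, but does not replace
# compliance with data protection, copyright or unfair competition law.
from urllib.robotparser import RobotFileParser

def is_scraping_allowed(robots_txt: str, user_agent: str, url_path: str) -> bool:
    """Parse robots.txt content and check whether the path may be fetched."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url_path)

# Hypothetical robots.txt that disallows a /private/ section for all agents.
example_rules = """
User-agent: *
Disallow: /private/
"""

allowed = is_scraping_allowed(example_rules, "MyScraper", "/public/page.html")    # True
blocked = is_scraping_allowed(example_rules, "MyScraper", "/private/data.html")   # False
```

In practice a crawler would fetch the live robots.txt from the target domain (e.g. via `RobotFileParser.set_url` and `read`); parsing a string is used here so the example is self-contained.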
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
If the website's terms explicitly prohibit web scraping, third parties are obliged to comply with them. In Poland, a prohibition of data scraping may be enforced under the applicable provisions on the protection of personal data and on counteracting unfair practices. In particular, in accordance with the Polish Act on the Protection of Personal Data and the provisions of the Civil Code, collecting personal data without the consent or knowledge of the website's users may be considered a violation of the law. The terms and conditions of use of a website, including a prohibition of data scraping, may constitute an important part of the agreement between the user and the website owner, which may enable enforcement of such provisions. Violation of the website's terms exposes the scraping entity to financial claims, including damages.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
At the moment, the privacy authorities have not issued official, legally binding guidelines on artificial intelligence in Poland. However, there are various strategic documents and recommendations that set out recommended rules of conduct in this area.
The Personal Data Protection Office (UODO) takes initiatives in the field of data protection in the context of artificial intelligence. The Department of New Technologies of UODO was established, whose tasks include, among others, preparing opinions on data processing in IT systems and technological solutions. The Office supports legislative bodies in regulatory activities, including those aimed at adapting Polish law to EU regulations such as the AI Act. A proposal for UODO guidelines in this regard is to be created.
Webinars and publications are organized, including the official UODO Bulletin, addressing issues related to the use of data in AI systems.
The policy for the development of artificial intelligence in Poland adopted by the Council of Ministers in 2020 contains recommendations on ethics, security and responsible use of AI. In turn, a number of activities and principles developed by the Ministry of Digital Affairs should be indicated, in particular regarding: the data-based economy, financing and development, education, as well as law and ethics.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The Personal Data Protection Office (UODO) is dealing with a complaint regarding ChatGPT. OpenAI is accused, among others, of processing data in an unlawful, unreliable and non-transparent manner. We are therefore talking about a potential violation of many provisions on the protection of personal data, including, among others, failure to comply with the information obligation.
As part of the complaint, UODO will examine not only the issue of the proper exercise of the Complainant’s rights, but will also take steps to clarify doubts as to the compliance of OpenAI’s personal data processing model with applicable regulations on the protection of personal data.
-
Have your national courts already managed cases involving artificial intelligence?
To the best of our knowledge, the national courts in Poland have not issued any significant rulings on AI so far.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Currently, the executor of the AI Policy is the minister competent for computerization, who submits to the Council of Ministers, by 1 September of a given year, information on the implementation of activities under the AI Policy for the previous year.
Currently, however, pre-consultations are ongoing in Poland on the proposed implementation of the Artificial Intelligence Act, from which it follows that the AI market supervisor should be a completely new body.
According to the announcement of the deputy head of the Ministry of Digital Affairs, the draft law introducing an artificial intelligence supervisory authority will probably appear at the beginning of 2025.
The AI Act will require such bodies to become operational by 2026.
In the public sector, informational activities are also carried out to study various AI-related processes and to raise awareness and knowledge of artificial intelligence, including at the NASK Center for Artificial Intelligence Applications and Data Analysis and at OPI PIB – the Information Processing Center, National Research Institute.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
According to a study conducted by KPMG in 2023, half of the respondents who have already implemented artificial intelligence use AI to support marketing activities. 46% of enterprises declare that they use artificial intelligence in production, and 42% use it to manage the supply chain, a process that has been highly problematic in recent years. Only 23% of companies use AI to manage pricing and promotions, which is puzzling given that the return on investment in AI is largest and fastest in this area. AI-based tools are most often used in the information technology, media and communication sectors (25%) and in life sciences (21%). The vast majority of organizations that have implemented AI-based tools (62% of indications) do not monitor the effectiveness of the implementation, but all of the organizations that do have such indicators declare that their activities are effective.
In 2024, the use of artificial intelligence in business grew through ChatGPT and other publicly available systems that help optimize the work and operations of enterprises.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
The legal sector has found in artificial intelligence an ally for repetitive and schematic tasks. AI-based systems primarily make it possible to quickly locate the necessary documents, as well as relevant passages together with a basic legal analysis, avoiding the additional work of manual data verification.
In addition, AI can speed up data analysis, facilitate the drafting and review of contracts and letters, and to some extent support decision-making. These are promising prospects that lawyers can use to increase the speed and quality of the services they provide. These tools are designed to process large amounts of data and draw conclusions from them.
Artificial intelligence can also more easily capture recurring patterns, e.g. in documents related to a given case. This improves the efficiency and effectiveness of the lawyer's work, allowing the lawyer to focus on the strategy and merits of the case rather than on repetitive activities or information gathering.
However, all work products generated by artificial intelligence must be verified, given where responsibility lies. The lawyer is responsible for the content submitted, and it often happens that artificial intelligence cites decisions, judgments or legal acts that never existed. By signing a letter that contains, for example, erroneous case law, the lawyer certifies false information, for which he or she is responsible.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Challenges:
- Correctness and completeness of generated information – artificial intelligence technologies rely on the information provided to them, so there is no certainty as to the correctness and reliability of the generated output (AI is burdened with, among other things, bias and “hallucinations”), and it is recommended to verify the results both formally and substantively. Similarly, AI generates the content it deems appropriate for a given problem, so a further challenge is determining whether its results are comprehensive and complete
- Data protection and professional secrecy – lawyers are bound by secrecy as members of a profession of public trust, so the challenge is to formulate questions for, or provide content to, AI tools without disclosing sensitive data that should not be disseminated
- Loss of jobs or the need to requalify – artificial intelligence can replace many human skills needed to perform a given task. In the long run, this may prove problematic, as demand falls for employees who are replaced by the technology or who must acquire new skills and change their working habits
- Responsibility for generated content – the attorney is responsible to the client, court or authority, so a challenge is responsibility for the generated data and the legal steps taken on its basis, including the division of legal responsibility between human operators, AI programmers and AI systems
- Meeting legal standards and guidelines – creating AI tools, implementing and using them, and taking responsibility for the results involves meeting a number of national and EU legal requirements and applying various legal acts. Meeting these requirements may impose significant legal advisory costs or lead to the conscious acceptance of legal risk arising from non-compliance with guidelines or legal standards
Opportunities:
- Increased productivity – artificial intelligence handles repetitive activities, allowing the lawyer to focus on strategy and action
- Additional research – artificial intelligence helps lawyers search for information, judgments and deadlines faster, increasing the amount of data obtained and processed in a shorter time
- Data analysis – artificial intelligence analyzes data faster, creating summaries and generating the content needed to develop a strategy; this is especially useful in intellectual property law, which covers many areas and registered rights
- Automation – artificial intelligence helps lawyers create repetitive legal documents, e.g. contracts, faster, saving time and eliminating possible errors
- Faster communication with the client – chatbots make it possible to respond more quickly to the client's needs at first contact and to reduce the wait for the first response from a lawyer
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Poland closely follows European legislative changes concerning artificial intelligence and actively participates in the creation of harmonised European legislation.
The AI Act will most likely come into force in mid-June. Accordingly, arrangements regarding the implementation of the regulation and the creation of a new office dealing with artificial intelligence are under way in Poland.
As the ministry points out, this body is expected to, among other things:
- be a single point of contact for the general public and other partners at Member State and EU level,
- consider applications for authorization to place high-risk AI systems on the market or put them into service, as well as complaints from persons subject to AI systems,
- receive reports of serious incidents,
- cooperate with the Commission, the Advisory Forum and the AI Board.
At present, it is primarily important to base legal standards and implementing acts on the AI Act, DORA and the AILD, including the appointment of the relevant national authorities.
According to the “Policy for the development of artificial intelligence in Poland from 2020”, the objectives to be achieved by 2027 include such activities as:
1. Analysis and elimination of legislative barriers and administrative burdens for new enterprises dealing with artificial intelligence by:
- creating conditions to increase the flexibility of the labour market through appropriate changes in legislation and consultations with employers and trade unions in this respect;
- preparing new types of licenses for algorithms and ICT solutions (produced with public funds) enabling the open use of AI technologies by the public sector;
- updating the law in terms of ensuring access to data, including sensitive data (e.g. medical data) and the conditions for the functioning of trusted spaces for sharing this data, taking into account the protection of privacy and personal data;
- preparing and updating the legal system with a view to practical implementations of artificial intelligence, which concern not only algorithms but also data processing in the cloud using edge computers, the use of Internet of Things (IoT) solutions in the context of industry, public data collection, as well as the security of citizens’ data and the sharing of “industrial” data;
- preparing and updating the law with a view to practical implementations of autonomous drones using artificial intelligence, which will be used in agriculture to inspect crops and the infrastructure of protected facilities;
- consulting with the academic, social and business community in order to develop and regularly update the directions of promotion, changes in law and activities aimed at eliminating legislative barriers and administrative burdens in a dynamically changing environment.
2. Taking actions in specific areas related to the development of artificial intelligence, in particular for efficient and quick access to data and their use by all participants in economic life, regardless of the size of the institution by:
- promoting solutions related to data openness, including through the development of the Digital Administration Sandbox, the Open Data portal, digital repositories created in the cultural sector and commercial and academic solutions based on open data, as well as pilot sector trusted data spaces;
- enabling access to high-speed infrastructure solutions – among others, computing centres (including GPUs) and broadband connections (including those based on 5G or newer networks) on which calculations can be performed.
3. Support for programs preparing society for changes caused by the development of algorithmic economy in Poland by:
- creating new knowledge bases and educational materials, and aggregating existing ones, within a single contact point for people retraining in modern technologies;
- continuing the development of the offer of courses, fields of study and interdisciplinary scientific and research programs (including online and hybrid ones, i.e. those that combine full-time education with online education) organized in cooperation with representatives of the business community and combined with elements of career counseling and building a network of industry contacts.
4. Preventing unemployment and flexibly creating new jobs in the labour market for disadvantaged groups by:
- information, educational and retraining programmes aimed at counteracting the formation of unemployment;
- training and retraining courses for representatives of the most endangered professions, including encouraging the acquisition of qualifications and development of skills in the field of modern technologies, among others by developing market qualifications included in the Integrated Qualifications System.
5. Defining permanent programs to support artistic and creative activities in the field of AI by:
- co-organizing ongoing exhibitions of works created and co-created with the help of AI;
- regulating the issue of intellectual property of works created using AI;
- organizing international competitions for works created with the help of AI and supporting Polish artists who win competitions organized abroad.
Poland: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Poland.