
Legal Guide to AI by Lesniewski Borkiewicz Kostka & Partners.

Mateusz and Grzegorz are both attorneys and managing partners at LBKP law firm (lbplegal.com).

They help clients navigate the complex and dynamic legal landscape of new technologies. With over 12 years of experience in this field, they have made a name for LBKP as a "special task force" in IT law.

Please note that Section 1 of this guide is for IT-law geeks, Section 2 for semi-geeks, and Sections 3-5 are for those looking for practical Q&A.

Table of contents

  • AI – what is it (legally speaking) and how the world is handling it (also legally speaking)?
  • Is use of AI legal? Key practical problems
  • Who is or will be responsible for AI?
  • How to develop AI tools?
  • How to use AI tools?

AI – what is it (legally speaking) and how the world is handling it (also legally speaking)?

    “In the rapidly evolving realm of technology, Artificial Intelligence (AI) stands as a groundbreaking force that has not only transformed industries but has also raised a multitude of legal questions and challenges.”

    ChatGPT, asked to introduce itself in the context of legal challenges

    Legally speaking, for a long time there was no legal (statutory) definition of AI in the European Union or in any other major jurisdiction; the EU AI Act (discussed below) is changing that. Before it, probably the closest thing to a definition of AI was the GDPR's treatment of automated decision-making, which can cover some AI systems. Article 22(1) of the GDPR addresses "a decision based solely on automated processing, including profiling, which produces legal effects concerning [the data subject] or similarly significantly affects him or her." However, this provision does not cover most AI systems as we currently conceive them.

    Thus, legally speaking, AI is for the most part "just" a technology or a set of technologies and is regulated in the same way as any other technology – through multiple pieces of legislation applicable to particular contexts or uses. It can be used for good or to cause harm, and its use can be legal or illegal – it all depends on the situation.

    Different parts or components of this technology can be owned by different persons or companies (for example, copyrights to certain program code, or ownership of databases), but the idea of AI itself is public. And since more and more tools and knowledge regarding AI are available to all, theoretically anyone can use existing AI tools or create new ones.

    Yet, as AI is "a groundbreaking force", different countries struggle to describe this technology for legislative purposes. In the past, lawmakers would rarely consider creating new legislation at the international level for a single technology. However, the past few years have shown that more and more technological breakthroughs require a quick legal response – one need not look far: think of cloud computing, blockchain, and now AI.

    Why is this important? Everyone agrees that AI has the potential to change the economic and social landscape all over the world. Obviously, it is happening already, and the process speeds up every single day – which is as exciting as it is scary. The tempo of new developments makes it difficult to predict outcomes. Therefore, having some legal rules in place to ensure that AI is used in a way that benefits everyone is of crucial importance. And since it is "a global phenomenon", it would be ideal to agree universally on at least what AI legally is. Most likely, that is not going to happen on a global scale. Some countries try to define AI by reference to its purpose or functionalities, others by the technologies used, and some mix different approaches. But many key jurisdictions are trying to align on the definition of AI and find common principles. This is important to avoid practical problems, especially for providers of global AI solutions, as they will soon face numerous compliance issues. Only at least basic interoperability across jurisdictions will allow AI to reach its full potential.

     EU approach to AI

    Let us take a closer look at the definition introduced by the EU in the AI Act [1]. It defines an "artificial intelligence system" (AI system) as:

     

    "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

     

    If a system does not meet these criteria, it is not an AI system under the AI Act.

    The definition of an artificial intelligence system was changed repeatedly over the course of the legislative work on the AI Act.

    The final definition is aligned with the one adopted by the OECD in November 2023 (details below).

    In November 2023, the OECD updated the AI definition included in the OECD AI Principles [2], the first intergovernmental standard on AI (adopted in 2019). The document is significant especially because of the number of governments worldwide that have committed to it.

    Now, without any further ado:

     “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

    Current OECD AI system model[3]

    Beyond the definition, the OECD Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI. They include:

  • inclusive growth, sustainable development and well-being;
  • human-centred values and fairness;
  • transparency and explainability;
  • robustness, security and safety;
  • accountability.

    Consequently, legislation in the countries that have committed to the OECD AI Principles should focus on and reflect the above – at least in theory.

     Other countries’ approach to AI

    Naturally, one of the most active jurisdictions when it comes to AI is the USA. According to the National Conference of State Legislatures [4], in the 2023 legislative session at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills, and 15 states and Puerto Rico adopted resolutions or enacted legislation. More than 120 initiatives were taken by individual states – and this number relates only to general AI issues (legislation related solely to specific AI technologies, such as facial recognition or autonomous cars, is tracked separately).

    The approach across the U.S. varies. As a curiosity, in May 2023 California Assembly Member Essayli introduced a bill urging the U.S. government to impose an immediate moratorium on the training of AI systems more powerful than GPT-4 for at least six months, to allow time to develop a much-needed AI governance framework – its status is currently "pending", but it does not seem likely to be adopted.

    There is no uniform legal definition of AI in the U.S. either. However, one of the key pieces of AI-related legislation – the National AI Initiative Act of 2020 – established the National Artificial Intelligence Initiative Office and defined artificial intelligence as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." It further explains that "artificial intelligence systems use machine and human-based inputs to – (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action." However, this Act focuses mostly on the organization of the AI Initiative Office to support the growth of this technology across the U.S. rather than on regulating AI itself.

    The U.S. did commit to the OECD AI Principles. Nevertheless, there are other hints as to what to expect next from federal legislation on AI. "The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People" [5] is a good place to start. It was published by the White House Office of Science and Technology Policy in October 2022 and lists five principles to "help provide guidance whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs". The principles include:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback
    The definition of AI systems covered by the Blueprint includes two elements: (i) the system has the potential to meaningfully impact individuals' or communities' rights, opportunities, or access; and (ii) it is an "automated system". An automated system is further defined as "any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure." Just to clarify, "passive computing infrastructure is any intermediary technology that does not influence or determine the outcome of decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity."

    As for some other key jurisdictions [6], none of those listed below has a universally recognised legal definition, but:

  • China has recommended national standards and local regulations that base some definitions on AI functionalities;
  • Hong Kong, in the Guidance on the Ethical Development and Use of Artificial Intelligence issued by the Office of the Privacy Commissioner for Personal Data refers to artificial intelligence as “a family of technologies that involve the use of computer programmes and machines to mimic the problem-solving and decision-making capabilities of human beings”;
  • Japan has the "AI Strategy 2022", issued by the Cabinet Office's Integrated Innovation Strategy Promotion Council, which suggests that "AI" refers to a system capable of performing functions that are deemed intelligent;
  • Singapore has defined “artificial intelligence” within the Model Artificial Intelligence Governance Framework issued by the Infocomm Media Development Authority and the Personal Data Protection Commission as a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).

Is use of AI legal? Key practical problems

    Like any tool, artificial intelligence can either assist certain processes in a legally permissible way or lead to violations. The legality of AI use depends on the specific purpose, industry, and potential risks linked to the technology. Its use is generally allowed, as is the case with most technologies, but it is essential to follow the various laws and rules that differ depending on location and jurisdiction. Since the technology is now mainstream and the regulatory landscape continues to evolve, this can be a true challenge.

    Given the wide range of regulations potentially affecting the application of AI, governments and different global organizations underline the importance of creating guidelines and standards to manage the ethical, social, and legal aspects of AI across different jurisdictions, industries and potential uses.

    Most experts agree that, within the current legal framework, the following areas should be considered when discussing AI:

  • privacy laws: AI technology often deals with personal data, and there are privacy laws like the General Data Protection Regulation (GDPR) in the European Union or the California Consumer Privacy Act (CCPA) in the United States that may apply to safeguard individuals' privacy;
  • intellectual property: AI technology can raise issues related to patents, copyrights, or trade secrets. Companies must ensure that their use of AI doesn't infringe on the intellectual property rights of others;
  • anti-discrimination laws: entities shall ensure that AI systems don't result in unfair or discriminatory outcomes, for example in areas like employment or financial services;
  • consumer protection: companies implementing AI in consumer-oriented applications need to follow consumer protection laws. This involves providing clear information about how products or services meet quality and safety standards in particular.
    Importantly, the above areas concern not only customers who use AI solutions but also (or above all) the companies that create and operate them. Looking at court cases currently pending, we find, for example:

  • lawsuit against GitHub, Microsoft, OpenAI - allegation of unlawful use of code created by other programmers to build and train the Copilot service, which uses AI to write software;
  • lawsuit against Stability AI - allegation of unlawful use of images collected in the Getty Images database;
  • lawsuit against Stability AI, Midjourney, DeviantArt - allegation of infringement of the intellectual property rights of creators - graphic artists.
    Another significant action against AI creators was the interim emergency decision issued on 30 March 2023 by the Italian Data Protection Authority (Garante), ordering OpenAI LLC to immediately stop using ChatGPT to process the personal data of data subjects located in Italy, pending further investigation. Although in late April 2023 the Garante declared that OpenAI had implemented measures allowing ChatGPT to operate in Italy once again (for now), data protection authorities across the globe are investigating its operations.

    Therefore, it is crucial for organizations to stay up to date on the legal and ethical aspects of AI, especially since the trend is to regulate the use of AI specifically, and to do so in as general a way as possible. Considering that new technologies and potential uses of AI emerge every day, lawmakers rely on many general clauses or even ethical standards to create rules that can stay on top of whatever comes next. However, the more general the rules are, the more difficult it is to comply with them. Just imagine being legally required simply to "be a good person", whereas life is not simply black and white.

    At the EU level, lawmakers have finished work on the AI Act and are working intensively on the AI Liability Directive (AILD). These two key pieces of legislation will create an important regulatory framework for AI in the EU.

    The main objectives of the AI Act include ensuring the safety and fundamental rights of individuals in the development and deployment of AI systems. Key features of the regulation include:

  • risk-based approach: the Act categorizes AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. The unacceptable-risk category covers prohibited AI practices, for example those that manipulate human behaviour in a way that could cause harm or that exploit vulnerable groups. For the remaining categories, the higher the risk an AI system poses, the stricter the regulatory requirements become. High-risk systems are subject to obligations such as conformity assessment, documentation, transparency and information obligations, establishment of a risk management system, obligations of importers and distributors, automatic recording of events (logging), and notifications to the competent authorities (an illustrative sketch of such event recording appears below);
  • transparency and accountability: the Act emphasizes transparency and accountability in the development and deployment of AI systems. It includes provisions for providing users with information about the AI system's capabilities and limitations.
  • data and training data quality: the Act addresses the importance of data quality, ensuring that the data used to train AI systems is unbiased and representative. It aims to prevent discriminatory outcomes resulting from biased training data.
  • enforcement and penalties: the Act provides for significant fines for non-compliance, of up to EUR 35,000,000 or 7% of global annual turnover, whichever is higher (well, the 4% or EUR 20,000,000 in penalties introduced by the GDPR apparently turned out not to be entirely convincing). National supervisory authorities will be responsible for enforcement.
    Meanwhile, the AILD is intended to partially regulate the general rules of civil liability for the "actions" of artificial intelligence – or rather of its creators and operators (so-called non-contractual liability).
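    Returning to the high-risk obligations listed above, the duty to automatically record events (logging) is one of the more technical ones. Below is a minimal, purely illustrative sketch of what such record-keeping could look like in practice – the field names, file format and logged details are our own assumptions, not requirements taken from the AI Act:

    import json
    import logging
    from datetime import datetime, timezone

    # Illustrative only: one possible way to automatically record the events of an
    # AI system (input, output, model version, timestamp) in an append-only log file.
    logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO, format="%(message)s")

    def record_event(model_version: str, prompt: str, output: str) -> None:
        # Append a single timestamped event record (assumed fields, for illustration).
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "prompt": prompt,
            "output": output,
        }
        logging.info(json.dumps(event))

    record_event("assistant-1.0", "example prompt", "example output")

    In practice, a provider would need to align the scope and retention of such logs with the AI Act's detailed requirements and its own risk analysis.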

    And what does the world outside the EU look like?

    In October 2023, the President of the United States signed the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which is intended to set new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, protect consumers and workers, promote innovation and competition, and strengthen American leadership in the world. It is primarily a political declaration, but it certainly shows the direction of further actions.

    Also worth mentioning is a bill called the AI Disclosure Act of 2023, introduced in Congress in the summer of 2023 to address issues of AI transparency and accountability. The bill is still pending. If enacted, it would require any content created by generative AI to carry a disclaimer stating: "Disclaimer: this output has been generated by artificial intelligence", in order to inform users that what they are engaging with is a product of AI rather than human creation.
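    For illustration only, a trivial sketch of how a provider of a generative tool could attach the disclaimer quoted in the bill to its output (the function name is ours; the bill itself does not prescribe any technical method):

    def label_generated_content(output_text: str) -> str:
        # Prepend the disclaimer wording quoted in the AI Disclosure Act of 2023.
        return ("Disclaimer: this output has been generated by artificial intelligence.\n\n"
                + output_text)

    print(label_generated_content("Example text produced by a generative model."))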

    As you can see, the regulatory landscape changes very fast in the context of AI. Ensuring compliance of AI systems themselves, and then compliant use of those systems, is and will remain challenging. However, since most regulations stem from similar principles, it is possible to create and implement AI systems in a conscious way that mitigates future risks and setbacks.

Who is or will be responsible for AI?

    Before answering the question "who is responsible for what", it is important to understand that responsibility for AI is a very complex topic – many different persons and entities participate in the value chain. At the same time, it must be clearly said that AI itself does not have separate legal personality and is not likely to be granted one in the foreseeable future.

    When we think of responsibility for AI, there are the following key areas:

    a) responsibility for breaches at the stage of development of AI – good examples are using copyrighted works as a training database against licensing terms, or processing massive amounts of personal data without a proper legal basis or without fulfilling other GDPR obligations;

    b) responsibility for breaches at the stage of operation of AI (direct acts and omissions of the AI provider/developer/operator) – for example, disregarding data subjects' requests received by the operator of the AI system;

    c) responsibility for the results of the AI's work – for example, a system defect causing the generation of erroneous content, automated decisions leading to discrimination, or the generation and illegal use of deepfakes or of scripts/scenarios used to commit fraud or scams.
    As pinning responsibility in the scenarios listed under letters a) and b) above seems intuitive (it rests with the AI system developer – or "provider", as the AI Act calls it), let us focus on the very bottom of the AI chain (letter c).

    Whoever uses AI functionalities or the results of AI work may be liable for the legality of the process or of its results.

    Firstly, the use of the results of AI work is usually subject to the terms of the licence agreements for these tools (at the very least, the so-called catalogue of permitted uses included in the terms of service or an annex thereto). If those terms allow commercial use, a user can, for example, sell the generated graphics or text.

    However, it must be remembered that AI may generate content that violates the rights of another person – it may draw too much from some sources, or draw from them in the wrong way. After all, it operates on a specific database, engine and statistics. No one can guarantee that AI's answers will be unique, true and complete.

    To give an example, if you create a prompt for generative AI and as a result receive text, graphics or code that happens to infringe a third party's copyrights, it is your responsibility. AI itself does not hold copyrights, as only the works of a human can be copyrighted. If we tried hard to find "copyrights" in AI's output, the creators would be either the creators of the AI system and/or the authors of the prompts. However, most experts agree that the results of AI work are not copyrighted. They can be the subject of general proprietary rights, though, and most T&Cs state that such rights belong either to the "creative commons" of the users of the AI tool or to the person who issued the prompt, often depending on the type of licence you use when accessing the AI tool (free vs paid).

    At the same time, in most jurisdictions, a person whose copyrights have been infringed can take legal action directly against the person who infringed them and claim damages. It does not matter whether the infringement was intentional or not. You officially become responsible for AI.

    Similarly, if, for example, you apply AI in your internal hiring process and, due to some fault in the system, you start applying discriminatory criteria or making automated decisions that harm individuals (employees or job candidates), you are likely to be liable towards them and towards public authorities (you are probably also facing a GDPR incident).

    As of now, if you find yourself in one of the above-mentioned situations, it can be very difficult to pursue claims against the AI operators whose services you used. To claim damages (or recourse) from them, you would need to prove what happened, then prove the cause-and-effect relationship (not as easy as it sounds), and then start legal proceedings in a far-away jurisdiction. In the meantime, you will probably find out that you accepted T&Cs for the AI system that exclude or significantly limit the liability of the service provider.

    Future, future...

    In the case of AI in general, European lawmakers emphasise that liability should centre mainly on the "operator", i.e. the entity or person who, in simple terms, is responsible for the design process, the input of data into the system, developing the AI system, placing it on the market or simply using it in professional activity. However, this does not fully exclude the liability of end-users of AI works – you will still need to use them with diligence.

    In addition to potential claims, potential penalties are meant to be a mobilising factor for AI providers to comply with the law – the AI Act introduces the possibility of penalties of up to EUR 35,000,000 or 7% of global annual turnover, whichever is higher.

    In the context of personal data, in the event of a breach involving the processing of personal data using AI, the personal data controller faces liability under the GDPR, including penalties of up to EUR 20 million or 4% of global annual turnover, whichever is higher (and, of course, potential lawsuits from individuals).
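    To see how these "whichever is higher" ceilings scale with the size of the company, here is a simple illustrative calculation (the turnover figure is hypothetical, and the numbers are only the statutory maximums – actual fines are set by the authorities case by case):

    def max_fine_eur(global_annual_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
        # Upper limit of a fine where the higher of a fixed amount and a share of
        # global annual turnover applies (simplified illustration).
        return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

    turnover = 2_000_000_000  # hypothetical company with EUR 2 billion global annual turnover
    print(max_fine_eur(turnover, 35_000_000, 0.07))  # AI Act ceiling: EUR 140 million
    print(max_fine_eur(turnover, 20_000_000, 0.04))  # GDPR ceiling: EUR 80 million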

    The AILD, on the other hand, is meant to regulate non-contractual civil liability.

    This applies to situations in which damage caused by AI systems arises from events or incidents between entities that are not bound by a contract. For example, a debt-collection AI system misidentifies a debtor and blocks their access to services, or an AI system that is supposed to dispense individually-targeted medication administers a drug to a person who is not the patient, resulting in damage to health.

    To put it simply, the AILD introduces a presumption of a causal link between the resulting damage and the action or inaction of the AI. Accordingly, to rebut the presumption, the entity responsible for the AI's action must prove that it should not be held liable.

    What is more, the AILD empowers courts to order companies to disclose relevant evidence regarding specific high-risk artificial intelligence systems suspected of causing harm. Non-disclosure can be treated as confirmation of the plaintiff's allegations.

    Of course, it still does not regulate every possible situation that can occur in practice, but it does at least set some rules for dealing with violations committed by artificial intelligence and the resulting damages. Importantly, criminal liability for the use of AI works is not addressed at the EU level. Thus, issues like the defamatory or fraudulent use of deepfakes will still be regulated locally.

How to develop AI tools?

    As mentioned, there is still a lot of "work in progress" when it comes to law-making in the context of AI systems. However, based on the general principles declared in multiple jurisdictions, draft laws and current EU legislation that can be applied to AI systems, a simplified legal guide to developing AI tools is as follows:

    Stage 1: define the functionalities and purposes of the AI system – not only the intended ones but also other possible ones. In this context:

  • make sure that the functionalities and purposes are legal, ethical and do not create opportunities for abuse. Consider also how the results of the AI system's work may be used by third parties;
  • identify any compliance requirements (these may vary based on, in particular, the location of your company and the intended target markets) and confirm that you are able to meet them;
  • create a list of all potential risks and mitigation measures in relation to the functionalities and compliance requirements;
  • consider carrying out different risk analyses in the context of the compliance requirements you are subject to. For example, if the system will use personal data as part of the training database, carry out a Data Protection Impact Assessment. This will help you understand the scope of the project and the challenges ahead.
    Stage 2: define the input/training database and make sure you are legally allowed to use it. There are many discussions on the permitted use of publicly available data, but this must always be considered case by case. Key risks include:

  • infringement of IP rights (copyrights, trademarks, rights to databases);
  • unlawful processing of personal data;
  • inability to give effect to the rights of data subjects (the GDPR grants multiple rights to persons whose data is being processed, and many AI systems are built in a way that makes those rights difficult to exercise);
  • infringement of trade secrets or confidential information;
  • make sure that the database is correct, has good quality data and will not create potential for biases or statistical errors.
    Stage 3: Cybersecurity – implement solutions that are appropriate for your AI system and the identified risks. This is not only about protecting the results of your work and your investment, but also a compliance matter. In some cases specific requirements may apply, especially if you process personal data (GDPR) or if the system is designed to be used by the financial sector (DORA).

    Stage 4:  Development tools – make sure that if you use any third parties’ technologies or applications to develop your AI system, you have all necessary licenses and are not breaching them. It is also important to obtain and analyse all relevant documentation of such external tools to understand how they may change your approach.

    Stage 5: Development process – remember the following:

  • the GDPR's privacy-by-design and privacy-by-default rules;
  • if your system will be offered to consumers, you need to follow consumer protection laws;
  • test everything multiple times;
  • implement a process to record and report incidents (capture, analyse, lessons learned) – see the illustrative sketch below.
    Make sure that all stages meet the requirements of transparency and sustainability. It will be crucial to be able to prove compliance in the future. As many legal rules favour a risk-based approach, you will need substantial evidence that all risks have been considered, and addressed, at all stages of the AI system's life cycle.
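    As a purely illustrative example of the incident process mentioned above (capture, analyse, lessons learned), here is a minimal sketch of an incident record – the fields and workflow are our own assumptions, not regulatory requirements:

    from dataclasses import dataclass, field
    from datetime import date

    # Illustrative only: a minimal incident record supporting the
    # capture -> analyse -> lessons-learned cycle; field names are assumptions.
    @dataclass
    class Incident:
        reported_on: date
        description: str
        root_cause: str = ""                 # filled in during analysis
        lessons_learned: str = ""            # filled in after analysis
        corrective_actions: list = field(default_factory=list)
        closed: bool = False

    register = []

    incident = Incident(date(2024, 1, 15), "Model returned personal data in response to a test prompt")
    register.append(incident)

    # Analysis and follow-up ("lessons learned")
    incident.root_cause = "Training data not sufficiently anonymised"
    incident.lessons_learned = "Add an anonymisation check before each training run"
    incident.corrective_actions.append("Re-train the model on the cleaned dataset")
    incident.closed = True

    Keeping such records also produces the kind of evidence of risk management that the transparency requirements mentioned above call for.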

    If you are looking for more policies and guidance relating to AI, especially to identify possible risks, "OECD Framework for the Classification of AI systems"[7] is also a valuable source to check. It introduces a framework to assess AI systems’ impact on public policy in areas covered by the OECD AI Principles. It classifies AI systems and applications along the following dimensions: People & Planet, Economic Context, Data & Input, AI Model and Task & Output.

    Do remember that if at any stage it appears that you are working on a potentially high-risk AI system, you will also have to implement quality and risk management systems to ensure compliance with the AI Act and other requirements and to minimise risks for users and affected persons. This will remain relevant after your product is placed on the market.

How to use AI tools in the organization?

    No jokes – for many industries, using AI is not a matter of choice but of survival. Despite numerous challenges, it can be done.

    Before you create any internal rules and policies, take time to understand how AI may benefit your organization and what the potential risks of implementing it are.

    Preparing and maintaining an up-to-date operational risk analysis for your company is valuable not only in the context of AI, but also in other areas such as cybersecurity or general business continuity. It may help you identify easy (or not easy but necessary) fixes that can prevent serious damage to your income, operational resilience, and reputation.

    So, what steps should be taken when using AI tools in a company, especially if they become regularly involved with key processes?

    Step 1: Identify all potential use cases of AI and specific AI tools your organization is going to use.

    Step 2: Define purposes of using tools and check if they are in line with relevant T&C.

    Step 3: Double-check what happens with the prompts and the results of AI work that you get – are they available only to you? Are they stored within the EEA? Are they used for AI training? This may determine whether use of the tool is compliant, or how to make it compliant, for example by limiting the scope of information that your employees can use as prompts (Step 4 below).

    Step 4: Define what can be used as a prompt/input – keep in mind the limitations that may apply to you due to copyrights, personal data protection, NDAs, MSAs and other contracts or regulations that your company is subject to.

    Step 5: Prepare at least a basic risk analysis and implement it in practice (a minimal sketch of such a risk register follows the list of examples below). Try to:

    a) name all the potential risks that arise from the use of the AI tool, for example:

  • unintentional /unsupervised use of confidential information or personal data as a prompt;
  • security breach at the level of AI tool (leak of user’s data);
  • security breach at the level of your account (password cracking or disclosure due to a scam);
  • sudden malfunction or unavailability of the AI tool;
  • results of AI work happen to infringe copyrights of third parties;
  • results of AI work are incorrect, misleading or harmful;

    b) assess how probable the risks are, taking into account your current setup and the security measures in place, and how severe they would be – how "expensive" in terms of money, reputation or other types of liability it will be if the risks actually materialise;

    c) implement mitigation mechanisms until all risks are at an acceptable level – i.e. they are very unlikely to happen and/or the cost of overcoming them is acceptable. For example (with reference to the relevant risks named under letter a) above):

  • training of persons using AI and having formal procedures that they are obliged to follow;
  • making sure that not too much information is stored within the tool (retention policy) and the company has a local copy of important data;
  • training of persons, strong passwords, using 2FA to log-in;
  • ensuring that you have another way to execute crucial functions within the organisation;
  • documentation of the creation process, prompts, results; additional verification of the results against public sources to check for potential infringements;
  • testing procedures, possibility of human intervention to verify and change decisions.
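    A minimal, purely illustrative sketch of the risk register described in steps a)-c) above – the 1-5 scoring scales, the acceptance threshold and the example risks are assumptions made for the example, not legal requirements:

    from dataclasses import dataclass

    # Illustrative only: a very simple risk register implementing steps a)-c) above.
    @dataclass
    class Risk:
        name: str
        probability: int       # 1 (very unlikely) .. 5 (very likely)
        severity: int          # 1 (negligible) .. 5 (critical)
        mitigations: list      # measures in place or planned

        def score(self) -> int:
            return self.probability * self.severity

    ACCEPTABLE = 6  # assumed threshold; anything above needs further mitigation

    risks = [
        Risk("Confidential data used as a prompt", probability=3, severity=4,
             mitigations=["staff training", "prompt policy"]),
        Risk("AI tool suddenly unavailable", probability=2, severity=3,
             mitigations=["manual fallback process"]),
    ]

    for risk in risks:
        status = "acceptable" if risk.score() <= ACCEPTABLE else "needs further mitigation"
        print(f"{risk.name}: score {risk.score()} -> {status}")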
    Step 6: Identify problems and incidents, keep track of them, always apply the "lessons learned" principle and check whether the conclusions have been implemented.

    Other than the above:

  • try to use mostly tools that are not considered to be in clear breach of the GDPR or copyright; otherwise, even if you use them diligently, you may find yourself breaching legal requirements;
  • pay attention to whether the AI application exposes the direct user (your employee) to data processing that is inconsistent with the GDPR or violates their rights. You are the data controller of your employees' personal data – if you expose them to risks, you may be liable;
  • regularly check and verify the results generated by AI systems, as errors may occur that need identification and correction;
  • provide training to employees and collaborators on AI technology, promoting conscious and responsible use of these tools; employees play a crucial role in the safe use of AI. Continuous training programs and awareness initiatives ensure that employees are well-informed about the ethical considerations, data protection protocols, and best practices associated with AI. This empowers them to contribute to the responsible use of AI tools within the organization;
  • define and promote ethical issues related to AI application, setting boundaries for acceptable and unacceptable use of the technology;
  • check your contracts with your clients and contractors to understand if they allow use of AI and if yes, if it is safe for you;
  • stay informed and active - industry standards for AI usage may evolve as the technology matures and becomes more widespread; organizations should actively participate in industry discussions, collaborate with peers, and adhere to established standards. This ensures that AI practices within the organization are consistent with the broader industry landscape which is a strong risk mitigation;
  • be careful and diligent before you use AI for key processes, before you use it to create or translate highly confidential data, and before you rely on AI too much.
    In summary, the safe use of AI is an ongoing process that requires organizations to stay vigilant, adapt to changes in regulations and technology, and foster a culture of continuous improvement. By staying proactive and responsive to the evolving landscape, organizations can mitigate risks and leverage the benefits of AI responsibly.

    Authors: Grzegorz Leśniewski, Mateusz Borkiewicz

    Footnotes

    [1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance

    [2] Recommendation of the Council on Artificial Intelligence. Adopted: 22/05/2019, amended:  08/11/2023;

    [3] https://oecd.ai/en/wonk/ai-system-definition-update

    [4] https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation

    [5] https://www.whitehouse.gov/ostp/ai-bill-of-rights/

    [6] https://www.legal500.com/guides/guide/artificial-intelligence/

    [7]  https://doi.org/10.1787/cb6d9eca-en