-
What are your country’s legal definitions of “artificial intelligence”?
Danish law does not provide a general legal definition of “artificial intelligence” (“AI”).
However, a definition is contained in the European Union’s (“EU”) Artificial Intelligence Act (“AI Act”): “‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 1 2
Although this definition may de facto be used generally in practice, the legal definition of AI may still differ depending on the context; strictly speaking, the AI Act definition applies only within the scope of the AI Act and is not a generally applicable legal definition.
The timeline for the AI Act’s entry into force and phased application is set out in section 3.
Footnote(s):
1 https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf
2 Article 3(1) of the EU AI Act
-
Has your country developed a national strategy for artificial intelligence?
Yes, the Danish Government has developed a national strategy for AI, published in 2019. 3
The strategy sets out the roadmap for how Denmark can be a front-runner in responsible development and use of AI. 4
The strategy outlines four objectives for the Danish development and use of AI:
- Denmark should have a common ethical and human-centred basis for AI.
- Danish researchers should research and develop AI.
- Danish businesses should achieve growth through developing and using AI.
- The public sector should use AI to offer world-class public services.
The strategy contains 24 initiatives covering the public and private sectors and allocates DKK 60 million for new initiatives. A number of AI projects are carried out in cooperation with municipalities, regions, and private companies.
Key initiatives include:
- Principles for the responsible development and use of AI
- Common Danish language resource
- More open public-sector data for AI
- Signature projects in the public sector
- Stronger investment in Danish businesses
Footnote(s):
3 https://en.digst.dk/media/19337/305755_gb_version_final-a.pdf
4 https://en.digst.dk/strategy/the-danish-national-strategy-for-artificial-intelligence/
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Currently, only voluntary guidelines have been published; there are no rules, laws or legally binding guidelines specifically regulating AI in Denmark.
Voluntary guidelines
In 2024, the Danish Agency for Digital Government published three voluntary guides on the responsible use of AI: guides for companies and authorities, and a guide for citizens. 5 6 7
In March 2024, the Danish Agency for Digital Government, in cooperation with the Danish Data Protection Agency, also launched a regulatory sandbox for AI, giving private and government organisations access to relevant expertise and guidance when developing and using AI solutions. Initially, the sandbox focuses on data protection (including the GDPR); at a later stage, it will also include guidance on the AI Act. 8
The guidelines emphasise that it is management’s responsibility to ensure that generative AI is used responsibly. They also highlight the risks associated with using generative AI tools, including the risk of bias, data leakage and data breaches.
In addition, the Danish Data Protection Agency issued guidelines in October 2023 on the use of artificial intelligence by public authorities. The guide examines the fundamental considerations authorities must address before developing AI solutions, including the legal basis for processing, the duty of disclosure, and impact assessments. Though directed at the public sector, the principles in these guidelines are also partially relevant to the private sector. 9
In May 2024, the Danish Financial Supervisory Authority published a set of recommendations on good practice for the use of AI in the financial sector. The recommendations form part of the authority’s 2025 strategy, which aims to support the prudent use of technology and new business models, and are intended to make financial institutions aware of areas where the use of AI may create an increased need for risk-mitigating measures.
Furthermore, KL (Local Government Denmark) has published a guide for municipalities. The guide aims to help municipalities use AI responsibly and wisely. 10
Existing laws
The existing laws that could be applied to AI are primarily the Danish Copyright Act, the Danish Data Protection Act, and the Danish Marketing Practices Act.
AI analyses and reproduces patterns based on data, and it has been debated whether this use of data involves copyright infringement, as training data often includes copyrighted material. The Danish Parliament has addressed part of these input issues by adopting an amendment to the Danish Copyright Act on June 1, 2023, which – based on the Directive on Copyright in the Digital Single Market – implemented new provisions on so-called text and data mining in sections 11 b and 11 c of the Danish Copyright Act. As a result, authors cannot object to their works being exploited for text and data mining unless the rights holders have specifically opted out of commercial text and data mining. Violations of the Danish Copyright Act can lead to fines.
Another existing regulation that could be applied to AI is the GDPR, supplemented by the Danish Data Protection Act. AI systems often process significant amounts of personal data, necessitating compliance with the GDPR’s principles of lawfulness, fairness, transparency, data minimisation, and purpose limitation. Individuals’ rights (e.g. access, rectification and erasure) must also be respected. Non-compliance with the GDPR can lead to significant fines.
The Danish Marketing Practices Act is another key regulation that could be applied to AI. The act governs marketing practices, requiring fair and non-deceptive commercial conduct. AI-driven advertising and consumer data processing must comply with the act’s principles to protect consumers. Violations can result in fines and enforcement actions by the Danish Consumer Ombudsman.
In addition, companies must account for their data ethics policy in their annual reports, cf. section 99 d of the Danish Financial Statements Act.
The EU AI Act
Regulation of AI is primarily taking place at European level to ensure a unified approach and harmonised legislation across the EU. The AI Act establishes rules and standards for AI systems and addresses concerns related to fundamental rights and the protection of individuals from discriminatory or harmful AI practices. 11
The AI Act entered into force across all 27 EU Member States on 1 August 2024, and its provisions will apply in stages from early 2025 to mid-2027. The requirements concerning prohibited practices will apply from February 2025, the requirements for general-purpose AI models from August 2025, and the requirements for high-risk systems from August 2026 and August 2027 respectively.
Furthermore, an EU directive proposal (The AI Liability Directive) aims to adapt liability regimes to AI systems and clarify the burden of proof in cases involving AI. 12
Footnote(s):
7 https://digst.dk/digital-transformation/guide-til-brug-af-kunstig-intelligens/
10 https://videncenter.kl.dk/media/h4anfadj/guide-om-generativ-ai-til-kommunerne.pdf
11 https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
No specific rules apply directly to defective AI systems. However, a defective product containing an AI system can be subject to the Danish Product Liability Act, as well as the separate set of product liability rules developed through Danish case law. In most cases, the two sets of rules overlap. The Danish Product Liability Act implements the European Product Liability Directive 85/374/EEC.
The Danish Product Liability Act applies only to products, and AI systems (software) are not expressly covered by the act. AI systems will, however, likely be covered indirectly when incorporated into a product: if the product is defective, it is covered regardless of whether the defect was caused by the AI system. The product liability rules as they stand today are therefore not well suited to software (or AI), as they are designed for physical products. To address this, the European Commission proposed a revision of Directive 85/374/EEC in 2022 to include AI systems as “products” within the directive’s scope. The proposal takes digital products into account to a much greater extent; the revised directive will directly cover AI, and damage to data will also be addressed in the new rules. However, the Danish implementation law has not yet been passed.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Civil liability
The general rules and principles of Danish law of torts apply; no specific rules apply directly to AI.
A natural or legal person may thus incur liability if they are responsible for having caused damage to another natural or legal person, including where AI was involved. The principles are the same as for damage caused without the use of AI.
Specific laws provide specific liability rules in addition to the general rules and principles. In practice, the following are of relevance when using AI:
In cases involving privacy issues, such as a personal data breach, the General Data Protection Regulation (GDPR) and the Danish Data Protection Act can be invoked, giving data subjects a right to seek compensation for damage.
If damage is caused to a natural person, the Danish Product Liability Act and the Danish Liability in Damages Act may be applicable if the AI system is part of a product. Refer to section 4 for more on product liability.
For intellectual property infringements, the Danish Copyright Act, the Danish Marketing Practices Act, the Danish Trademark Act, and the Danish Patents Act, among others, may be relevant.
As stated, an EU directive proposal (The AI Liability Directive) aims to modernize the liability framework for AI systems. 13
Criminal liability
The general rules and principles of Danish criminal law apply; no specific rules apply directly to AI. Under several of the above laws, there is a legal basis for imposing criminal sanctions in the form of fines.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
There are no special rules on liability that apply specifically to AI. Refer to section 5 for the various relevant liability regimes.
Liability is determined under the general rules and principles of Danish tort law; the person liable for the damage is the one responsible for the fault that caused the harm.
If the rules on product liability apply, the manufacturer must compensate for damage caused by a defect in a product produced or supplied by it, cf. section 6(1) of the Danish Product Liability Act. An intermediary must compensate for product damage unless the intermediary can prove that the damage is not due to their fault or negligence, cf. section 10(1) of the Danish Product Liability Act. If two or more persons are liable under this Act for the same damage, they are jointly and severally liable, cf. section 11(1) of the Danish Product Liability Act.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
The burden of proof generally rests on the injured party, regardless of whether AI is involved.
In terms of product liability, the injured party must prove the damage, the defect and the causal link between the defect and the damage, cf. section 6(2) of the Danish Product Liability Act.
A directive proposal (The AI Liability Directive) aims to adapt liability regimes to AI systems and clarify the burden of proof in cases involving damages caused by AI. 14
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Currently, there is no generally available insurance coverage designed specifically for the use of AI in Denmark. However, AI may in some cases be covered under, for instance, a product liability insurance, to the extent the conditions for product liability are satisfied. It is expected that such insurance products will emerge in the future.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
AI likely cannot be named as an inventor in a patent application, although the Danish courts have not yet ruled on the issue and no Danish case law exists.
However, it is generally recognised that at least one natural person must always be named as the inventor. This is consistent with European practice: in December 2021, the Legal Board of Appeal of the European Patent Office ruled in the combined cases J 8/20 and J 9/20 that AI cannot be named as the inventor on patent applications. 15
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
It is unlikely that images generated by AI can benefit from copyright protection. Generally, only original creations by natural persons can be copyright-protected. However, the Danish courts have not yet ruled hereon.
The originality requirement means that the work must be attributable to one or more persons who have made creative choices. According to the CJEU (C-683/17, Cofemel), a copyright-protected work requires an “intellectual creation reflecting the freedom of choice and personality of its author”.
Works created using computer technology can be protected if the author has made creative choices and the technology has only been used as an auxiliary tool. However, it is being debated how much of the creative process can be left to computers, including in relation to works generated using AI. It is also being debated whether any such copyright belongs to the AI developer or the user of the program.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
When using AI systems in the workplace, there are several legal issues to consider. Key considerations include confidentiality, data protection, copyright, and compliance with evolving regulation:
- Proper use: Employees must comply with all applicable laws, guidelines, and the AI system’s terms of use. Employees should also critically assess information produced by AI; because AI output is generated from human-created data, it may be biased or false.
- Confidential information: Employees must be careful not to enter confidential information, including trade secrets, into AI tools. Public AI tools should generally be regarded as non-confidential systems, and sharing confidential information with them therefore carries potentially significant consequences and risks.
- Data protection: It is important to implement strict data security protocols and comply with applicable data protection legislation, such as GDPR. Particular attention should be paid to legal bases for processing and the requirement for data protection impact assessments (DPIAs).
- Copyright: Furthermore, employees must respect copyright and other intellectual property rights when using AI. Employees must not use any copyrighted material in an AI tool without the permission of the copyright holder. Also, employees must be aware that AI may be built on copyrighted works without permission from the rights holders.
- Preparation for the AI Act: The AI Act will apply to any company developing, deploying, or using AI – meaning it does not apply uniquely to technology companies. Companies should prepare for the upcoming AI Act by reviewing their practices to align with its requirements.
Currently, there are no specific rules in Denmark regarding AI in the workplace, but several voluntary guidelines have been issued by the Danish authorities; refer to section 3 for more about these guidelines. The voluntary guidelines recommend that Danish companies decide which AI tools may be used in the workplace and prepare internal guidelines for employees’ use of AI.
-
What privacy issues arise from the use of artificial intelligence?
The use of AI systems raises several privacy issues that stem from how data is collected, stored, processed, and shared.
- Data breaches and leakages: The primary privacy issues surrounding AI involve the potential for data breaches, data leakages and unauthorized access to personal data, given that AI systems require vast amounts of data to train and operate effectively.
- Data principles: The GDPR sets clear requirements for the processing of personal data to ensure privacy and data protection. The principles of data quality and data minimisation mean that only necessary and correct information may be used, while AI needs data of sufficient quantity and quality to produce accurate results; this tension poses a challenge and raises privacy issues. At the same time, insufficient or irrelevant data can lead to erroneous results that may violate citizens’ rights and privacy.
- The use of data in public AI: Entering confidential data into public AI tools entails a risk of leakage and breach of NDAs or confidentiality undertakings. The use of copyrighted data in AI may involve copying to third parties and a breach of licences or exclusive rights, and the use of personal data may constitute a transfer to a new data controller and result in processing without a legal basis. It is therefore recommended to use only publicly available data in public AI tools.
- Cloud security: Storing AI-related data in the cloud raises concerns about data protection, access control, and compliance with GDPR.
- Cross-border data transfers: Many AI systems rely on cloud-based infrastructure, which may involve cross-border transfers of data out of the EU. This raises concerns about data sovereignty and compliance with data protection regulations, particularly under strict data protection regimes such as the EU’s.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
IP
Data scraping can be legal in Denmark, but the legality depends on how the scraping is performed. Scraping or publishing copyrighted material, including news articles, is not allowed if the material can only be accessed through a subscription or otherwise depends on a customer relationship.
The Copyright Directive (Directive (EU) 2019/790) stipulates, among other things, that press publishers should receive a share of the revenue generated by so-called news aggregators, i.e. services that use web scraping. It also states that use without an economic purpose is legal, as is linking to news articles and extracting single words or short excerpts.
In general, the copying of training data, when it consists of works, is covered by the exclusive right to make copies, cf. section 2(1) of the Danish Copyright Act. Therefore, it requires permission from the rights holders to the extent that it involves copying the works in their original form and in their entirety.
The Danish Parliament has addressed this by adopting an amendment to the Danish Copyright Act on June 1, 2023, which – based on the Directive on Copyright in the Digital Single Market – implemented new provisions on so-called text and data mining in sections 11 b and 11 c of the Danish Copyright Act. As a result, it has become possible in Europe to harvest data from websites etc. for text and data mining purposes where the author or website owner has not actively opposed this (opted out), for example in metadata or in the website’s terms and conditions.
Similarly, Recital 105 of the EU AI Act states that the text and data mining rules of the DSM Directive cover data harvesting for the development of AI solutions.
Privacy
Under the GDPR, web scraping involving personal data may be prohibited. The presence of personal data within scraped information necessitates compliance with the GDPR: data scraping entities must justify their processing on grounds such as consent or legitimate interest, be transparent with individuals about the use of their data, adhere to the principle of data minimisation, and respect individuals’ rights over their data. According to recently released guidelines from the Dutch Data Protection Authority, private individuals and private parties are, in most cases, not authorised to engage in scraping. 16
Competition Law
Data scraping might intersect with competition law, particularly when it seeks to undercut competitors unfairly or harvests proprietary information.
Footnote(s):
16 Dutch Data Protection Authority (AP): Guidelines – scraping by private organizations and individuals.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
Currently, it is unclear how the new provisions in the Danish Copyright Act (sections 11 b and 11 c on text and data mining) are enforced and applied in practice, not least because it is difficult to prove that data scraping has taken place.
In the case of unauthorised data scraping, the affected party can obtain a (preliminary) injunction against the scraping of information originating from its database. Furthermore, the evidence protection rules in Chapter 57a of the Danish Administration of Justice Act allow rights holders to obtain the bailiff’s assistance in securing evidence of potential infringements. In addition, compensation can be claimed, cf. section 83 of the Danish Copyright Act.
In addition, there are examples of cases where a party has successfully invoked the database rules in section 71 of the Danish Copyright Act in support of preventing another company from “harvesting” data from the plaintiff’s website. 17
If the nature and extent of the copied data exploits the investment and market position that a certain company has achieved, this may be in violation of section 3 of the Danish Marketing Practices Act. 18
Footnote(s):
17 U.2003.1063.SH
18 U.2021.3408.SH
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes, the Danish Data Protection Agency and the Danish Agency for Digital Government have issued voluntary guidelines on AI. Refer to section 3 for the content of these guidelines.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The Danish privacy authorities have discussed several cases involving AI.
In one case, the Danish Data Protection Agency ruled that Danish authorities have a legal basis in the Danish Public Records Act to use AI solutions to respond to requests for access to documents. The agency also found that the processing of personal data with the AI solution did not have direct consequences for citizens, as the case concerned the development of a solution. 19
In another case, the Danish Data Protection Agency concluded that the development, operation and re-training of an AI solution in which personal data is processed to predict citizens’ need for rehabilitation, in order to avoid functional impairment, can generally be based on Article 6(1)(e) and Article 9(2)(g) of the GDPR. 20
However, the Danish Data Protection Agency has also found that an AI system does not in itself constitute personal data but is only the result of the processing of personal data. Furthermore, it is the Danish Data Protection Agency’s assessment that a data protection impact assessment (DPIA) should be prepared when developing and using AI solutions. 21
The effect of the Danish Data Protection Agency’s current practice is that the use of AI with personal data is somewhat complicated: it may essentially be possible to train a solution on personal data but not to use it afterwards for the purpose for which it was trained. The practice is expected to develop further.
Footnote(s):
19 Case 2023-212-0021 (Danish Data Protection Agency)
20 Case 2023-212-0015 (Ibid.)
-
Have your national courts already managed cases involving artificial intelligence?
No, the Danish courts have not yet ruled on cases involving AI.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Yes, the Danish Agency for Digital Government has been appointed as the national competent authority for the EU AI Act and thereby for supervising the use and development of AI.
As the national competent authority, the Danish Agency for Digital Government will, among other things, coordinate with the competent supervisory authorities in Denmark and cooperate with other Member States and the European Commission, including the newly established European AI Office.
In addition, the Danish Data Protection Agency will maintain its focus on AI and oversee the use of AI by public and private organisations, given the significant GDPR implications. The agency will also oversee certain high-risk AI systems.
The European AI Office, established within the European Commission in February 2024, oversees the enforcement and implementation of the AI Act together with the Member States.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Danish companies generally see AI as a crucial parameter for maintaining competitiveness, and AI is widely adopted across various industries in Denmark.
Half of the larger Danish companies with over 100 employees already use AI today, whereas only 20% of companies with fewer than 100 employees do so. In addition, almost 30% of both large and small companies expect to use AI in the coming years. 22
However, usage clearly differs between industries: AI is used by 40% of companies in the information and communication industry but only 3% of companies in the construction industry. 23
Footnote(s):
23 https://www.dst.dk/da/Statistik/nyheder-analyser-publ/nyt/NytHtml?cid=51378
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
AI is becoming increasingly prevalent in the legal sector, and several law firms use AI across a range of functions. 24
- AI expedites and streamlines legal work by effectively analysing extensive data, reducing time and costs associated with legal cases.
- AI also automates routine tasks, such as form completion, translation, summaries, and data extraction, freeing up time for more complex matters and minimizing the risk of human error.
- AI also helps with more creative tasks such as brainstorming new ideas and rephrasing sentences.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Key challenges 25:
- GDPR: The GDPR’s clear guidelines on data processing, emphasizing data quality and minimization, challenge AI development by restricting the quantity and quality of data needed for accurate results. Also, the primary privacy issues surrounding AI involve the potential for data breaches, data leakages and unauthorized access to personal data, given that AI systems require vast amounts of data to train and operate effectively.
- Copyright: A key IP challenge is whether training AI systems requires permission from rights holders, since AI systems are typically trained on extensive datasets that often include copyrighted material used without prior permission. The focus is on whether the text and data mining rules apply and whether their conditions are satisfied. A related challenge is determining if and when the use of AI systems requires permission, whether AI-generated outputs can be protected by copyright, and what the implications are if they cannot be.
- Security and data integrity: AI enhances efficiency but risks disclosing trade secrets or personal data, requiring careful management and balancing. AI’s reliance on personal data necessitates strict security measures and compliance with regulations, and its effectiveness depends on high-quality, unbiased data.
- Legal judgement: While AI can analyse large amounts of data and identify patterns, it can be difficult to train systems to handle situations that require judgement and assessment based on legal knowledge and principles.
- Opt-out on EU justice and home affairs: Depending on the approach taken by the Danish Parliament, the courts may be subject to a looser regime for AI, whereas Danish arbitration – which as a starting point does not fall under the opt-out on EU justice and home affairs – must comply with the requirements of the EU AI Act. It also remains uncertain how the government and the Danish Parliament will choose to address the EU AI Act in light of the opt-out.
Key opportunities:
- Efficiency: AI can help improve the efficiency and quality of legal services and provide better outcomes for clients while reducing the time involved in legal processes.
- Enhanced research: AI systems can perform comprehensive legal research and predict case outcomes based on past data, which helps address the ever-increasing complexity of the law.
- Routine tasks: AI can automate routine tasks freeing up time for more complex matters and minimising the risk of human error.
- Reduced human errors: Automating data analysis and routine tasks reduces the risk of human error, increasing the accuracy and reliability of legal work.
- Cost reduction: AI reduces operational costs for law firms by streamlining processes and automating repetitive tasks, which cuts down on the time and resources needed to complete them manually.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
The most significant legal developments in AI in Denmark over the next 12 months will most likely result from the EU AI Act.
The AI Act entered into force across all 27 EU Member States on 1 August 2024 and will apply in stages, starting with the rules on prohibited AI practices taking effect in February 2025, followed by obligations concerning general-purpose AI models from August 2025, and the majority of the remaining provisions, including those on high-risk AI systems, from August 2026.