Singapore: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Singapore.
-
What are your country’s legal definitions of “artificial intelligence”?
Singapore has defined “artificial intelligence” within the Model Artificial Intelligence Governance Framework (“Model Framework”), issued by the Infocomm Media Development Authority (“IMDA”) and the Personal Data Protection Commission (“PDPC”):
Artificial intelligence (or “AI”) refers to a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).
This Model Framework is a voluntary document, setting out ethical and governance principles for the use of AI and translating them into practical recommendations for organisations to adopt. It applies across all sectors.
-
Has your country developed a national strategy for artificial intelligence?
In December 2023, Singapore issued its second National Artificial Intelligence Strategy (“NAIS 2.0”); the first national AI strategy was announced in 2019.
NAIS 2.0 makes 3 shifts from the first strategy. The first is that AI should be seen as a “necessity” and not only an “opportunity” – people “must know” AI and not just see it as “good to have”. The second is that Singapore must move from a local approach to a global approach: Singapore should be well connected to global innovation networks, contribute to meaningful AI breakthroughs, and develop AI products that the world values. The third is that Singapore will move beyond flagship national AI projects in key areas such as healthcare, education and border security, and deploy AI-enabled solutions at scale.
The NAIS 2.0 also emphasises that Singapore will “retain agility” in its regulatory approach – the Government will take a “pragmatic approach”, supporting innovation while still ensuring that AI is developed and used responsibly. The Government will also take “differentiated approaches to managing risks to and from AI, ranging from regulatory moves to voluntary guidelines”, and will “do so thoughtfully and in concert with others, accounting for the global nature of AI”.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Singapore does not have legislation that specifically addresses the use of artificial intelligence across a variety of sectors (cf. the EU AI Act). The government is presently not looking to enact AI-specific regulation, but is focusing its efforts on promoting the responsible use of AI through instruments such as the Model Artificial Intelligence Governance Framework and its AI governance testing framework and software toolkit, “AI Verify”. The government will continue to monitor the state of the technology and how it is being used before deciding on a regulatory approach.
However, Singapore has enacted laws in relation to specific applications of AI technology. Our Road Traffic Act 1961 was amended in 2017 to provide for the trial and use of autonomous motor vehicles (“AVs”), as our road traffic laws were previously premised on there being a human driver (previously, AVs could be used on the roads only by way of exemptions from the Act). Medical devices that incorporate AI technology (“AI-MDs”) must be registered under the Health Products Act 2007, as all medical devices must be, regardless of whether they incorporate AI technology. However, the Health Sciences Authority’s Regulatory Guidelines for Software Medical Devices specify the additional information that must be submitted when registering an AI-MD, for example, information about the data sets used for training and validation, a description of the AI model, reports to substantiate its performance claims, and the level of human intervention in the system.
Nevertheless, in all instances where AI technology is applied, existing laws can still apply. For example, tort law and contract law can apply where the AI system does not perform as expected, and the Personal Data Protection Act 2012 applies where the AI system is used to process personal data. Companies that develop or utilise AI systems must also comply with existing corporate laws, intellectual property laws, employment laws and competition laws, to name a few.
Our regulators have also issued a series of guidelines to assist the industry with utilising this new technology, such as:
- IMDA/PDPC issued (in January 2020) the 2nd Edition of the “Model Artificial Intelligence Governance Framework”, setting out key principles that organisations should take into account when developing and deploying traditional AI systems.
The Model Framework is based on 2 high-level guiding principles to promote trust in the use of AI: organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair, and AI solutions should be human-centric, with human well-being and safety at the forefront. It is complemented by the Implementation and Self-Assessment Guide for Organisations, which sets out a series of questions for organisations to self-assess how their practices align with the Model Framework.
- IMDA and the AI Verify Foundation issued (in June 2024) a “Model AI Governance Framework for Generative AI”, which sets out 9 dimensions in which policymakers, industry, the research community and the broader public must take action to build trustworthy generative AI systems.
- The Monetary Authority of Singapore (MAS) released (in November 2018) the “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector”, and leads the Veritas consortium within the financial industry to promote the responsible use of AI;
- The Intellectual Property Office of Singapore (IPOS) issued the “IP and Artificial Intelligence Information Note” to provide an overview of how different types of IP rights can be used to protect AI inventions.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Singapore has not enacted legislation that specifically deals with defective artificial intelligence systems. Such systems would thus be governed by the existing regime for the particular product. For example, in the case of AI-MDs regulated under the Health Products Act 2007, the Authority may suspend or cancel the registration of an AI-MD if it has reasonable grounds to believe that the safety of the AI-MD has changed so as to render it unsuitable to continue to be registered, or if it is in the public interest to do so (see section 37 of the Health Products Act).
Ordinary principles of tort and contract will also apply. Please see S/N 5 below for further details.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Civil liability
Where damage is caused by the AI system, the affected person may seek a remedy in tort or contract (if there is a contract between the parties). However, AI technology has some unique features that may affect how conventional tort and contract principles of liability are applied:
- AI is a “black box” – it is not always possible to explain how or why the AI system reached a particular outcome, even if the factors the system is programmed to take into account are known. The type of model chosen also affects how easily the system can be explained, as some models are more complex than others. This increases the difficulty of proving that the damage was the result of a defect in the programming of the AI system, as opposed to some other cause;
- AI is self-learning – it learns from the data it is exposed to during training and improves the results it generates without being explicitly programmed, meaning that the output of the system will not always be foreseeable;
- AI development involves multiple people, from procuring and selecting the datasets, to training the algorithm, to monitoring its performance – so it will be a complex fact-finding exercise to determine who is liable when damage is caused. AI is also heavily reliant on the data it is trained on, as it makes predictions based on that data; if the data is flawed, the accuracy of the output is affected, and errors may be compounded by other errors (e.g. in addition to flawed training datasets, the algorithm chosen may not have been a suitable one).
Criminal liability
Our criminal laws presently do not attribute liability to AI systems directly; criminal liability attaches only to natural or legal persons, and an AI system is neither. Where an AI system causes damage or its use contravenes a criminal law, an inquiry into how this arose would be warranted, and whether the programmer of the system, its owner, the person who operated it, or any other person is criminally liable would turn on the facts. The mental state of the human operating or overseeing the system is a key determining factor: was the consequence something they intended or knew about?
For example, if a person uses an AI system to deliberately commit crimes (contrary to what the AI system was designed for), such as hacking, that person could potentially be found guilty of an offence under the Computer Misuse Act 1993.
The Singapore Academy of Law’s Law Reform Committee issued a “Report on Criminal Liability, Robotics and AI Systems” in February 2021, exploring these issues in depth and cautioning that there is no “one size fits all” approach to the application of criminal liability across all uses of AI.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Who is responsible for harm caused by an AI system would turn on the facts of the case and, if there is a contract between the parties, on what is set out in the contract.
For example, if the user did not use the AI system for its intended purpose, but for a different purpose despite clear warnings from the developer about the limitations of the AI system, then the developer may not be held responsible for any harm caused. Similarly, if a victim dashed across the road in front of an autonomous vehicle without checking for traffic, he or she may be found contributorily negligent.
Singapore’s Model AI Governance Framework for Generative AI recommends that responsibility be allocated based on the level of control that each stakeholder has in the generative AI development chain, drawing on how the cloud industry has built and codified comprehensive shared responsibility models. It also considers “safety nets” where unanticipated harm occurs, such as the offering of indemnities, the amending of legal frameworks to make it simpler for end-users to prove damage caused by AI-enabled products and services, and the applicability of no-fault insurance.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In civil cases, the burden of proof is on the balance of probabilities. Where the victim alleges negligence on the part of the defendant AI developer/operator, the victim must establish that the defendant owed it a duty of care, that the duty was breached (the defendant fell below the standard of care), and that the breach caused a loss that is not too remote.
In criminal cases, the case must be proven by the prosecution beyond reasonable doubt. What must be proven depends on the actus reus and mens rea elements of the offence set out in legislation.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
For the use of autonomous vehicles, it is a requirement that the person authorised to undertake the trial or use of the vehicle must have in place liability insurance indemnifying the owner and any authorised driver or operator of the vehicle in relation to death, bodily injury or damage to property caused by, or arising out of, the use of the vehicle on a road. In lieu of such liability insurance, the person must deposit with the authority security of not less than SGD 1.5 million, so that the victim will always have a remedy. For more details, please see section 6C of the Road Traffic Act 1961, and regulations 14 and 15 of the Road Traffic (Autonomous Motor Vehicles) Rules 2017.
For the deployment of AI technology in other products or services, whether insurance is required is determined by the existing statutory regime for that product or service, and not whether AI is being used.
Nonetheless, developers and users of AI systems are free to consult insurance providers and obtain their own coverage.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Under Singapore law, the inventor must be the “actual deviser” of the invention (see section 2(1) of the Patents Act 1994), and this must be a natural person (see the cases of National University Hospital (Singapore) Pte Ltd v Cicada Cube Pte Ltd [2017] SGHC 53 at para 51, and Energenics Pte Ltd v Musse Singapore Pte Ltd and anor [2013] SGHCR 21 at para 24).
This is also the position set out (at pages 18–19) in the joint report by IPOS and the Singapore Management University titled “When Code Creates: A Landscape Report on Issues at the Intersection of Artificial Intelligence and Intellectual Property Law” (“the IPOS-SMU Landscape Report”).
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
The IPOS-SMU Landscape Report discusses 3 issues relating to AI and copyright – (1) Can AI be named as an author; (2) Is there copyright protection for works generated by AI; and (3) Who is the owner of an AI-generated work.
On the first issue, the answer is “no”. Singapore’s Copyright Act 2021 requires the author to be a natural person, and for a work to be protected by copyright, it must be original (see Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd [2011] SGCA 37).
On the second issue, for copyright to subsist, “there must be an authorial creation that is causally connected with the engagement of the human intellect” (i.e. “the application of intellectual effort, creativity, or the exercise of mental labour, skill or judgment”) – see Global Yellow Pages Limited v Promedia Directories Pte Ltd [2017] SGCA 28. The answer would thus depend on the nature and extent of the prompts entered by the human, as well as how the AI image generator operates to create images (as that affects how much control the human has over the AI-generated output).
Singapore’s Copyright Act 2021 has no equivalent to the provision in the UK’s Copyright, Designs and Patents Act 1988 for the protection of computer-generated works.
On the third issue, assuming that AI-generated works can be protected by copyright, the owner of the copyright can be a non-natural person with legal personality (e.g. a company) that has been assigned ownership. Sections 133 and 134 of the Copyright Act 2021 address copyright ownership: the maker of the work is the default first owner, subject to rules for works created in the course of employment.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
The Model AI Framework sets out 4 key areas where organisations should adopt measures to promote the responsible use of AI:
- Adapt existing, or set up, internal governance structures and measures to have appropriate oversight over how AI technologies are used in the business, to minimise risks and allocate responsibilities relating to algorithmic decision-making;
- Determine the appropriate level of human involvement in AI-augmented decision-making based on the organisation’s risk appetite for the use of AI and the nature of the decision to be made;
- Put in place operations management measures so that the organisation addresses potential issues when developing, selecting and maintaining AI models, including the management of data (e.g. ensuring it is drawn from representative sources);
- Develop strategies for interacting and communicating with stakeholders (e.g. to inform them that AI is being used and how it affects them).
Separately, if an organisation is using generative AI to enhance productivity (e.g. employees use a ChatGPT-like AI system to generate marketing materials, summarise documents, etc.), the organisation should have in place guidelines for employees on the use of such tools, and ensure that employees are aware of the limitations of such technology. For example, the organisation should require employees to check the output of the AI system for accuracy, and warn them not to input sensitive data into the system unless the necessary security measures are in place.
-
What privacy issues arise from the use of artificial intelligence?
Artificial intelligence trains and operates on a vast amount of data, which is likely to include personal data. Personal data could be obtained from a broad range of sources (e.g. CCTV cameras, GPS location data, computing devices) and may be obtained from the data subject or another individual (e.g. where applications like ChatGPT are used, the user could input another individual’s personal data in the prompt). Data from multiple sources can also be combined to generate insights about a particular individual (e.g. their preferences, buying patterns, emotional state, health status, likelihood of repaying a loan on time). This gives rise to data privacy issues such as whether the data subjects are adequately informed of what personal data will be collected and how the AI system may use and disclose their personal data, as well as whether individuals can prevent such use and disclosure or ensure that the data and inferences are accurate.
The Personal Data Protection Act 2012 (“PDPA”) must be complied with where personal data is processed, whether for the development or in the deployment of the AI system.
Organisations can consider the feasibility of using anonymised data, which will not be subject to the PDPA. However, even if a data set is initially anonymised, organisations should be mindful that the risk of re-identification could increase over time (e.g. as the AI system aggregates more data to derive correlations).
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
From an IP perspective, in relation to using copyrighted materials to train AI systems, Singapore’s Copyright Act 2021 sets out various permitted uses of copyright works, the 2 most notable being “fair use” (section 190) and the “computational data analysis” exception (section 244). There have not been any reported judicial decisions yet in relation to the applicability of these exceptions to training AI systems.
However, the computational data analysis exception applies only in limited circumstances, and thus may not cover all instances of data scraping. One key condition is that the person who makes a copy of any copyrighted material must have lawful access to the material from which the copy is made (called the “first copy”). In other words, the person must not have accessed the first copy by circumventing paywalls, or by breaching the terms of use of a database (unless that term is void under section 187 of the Copyright Act 2021, which does not allow contracts to override statutory exceptions). The copy must also be made for the purpose of using a computer program to identify, extract and analyse information or data from the work, or of using the work as an example of a type of information or data to improve the functioning of a computer program in relation to that type of data – it may not be used for any other purpose.
From a privacy perspective, if personal data about an individual is publicly available, the organisation need not obtain consent for the collection, use or disclosure of such personal data. However, it still has to comply with all other obligations in the PDPA, and use the data only for purposes that a reasonable person would consider appropriate in the circumstances (e.g. it must not use the data for illegal purposes or where it would be harmful to the individual concerned). The organisation may also be bound by the terms and conditions imposed on the use of the data by the source from which it obtained the data.
From a competition perspective, there is no specific guidance from our competition regulator on the issue of data scraping at present.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
If a website’s terms of use prohibit data scraping but a person scrapes data anyway, this could constitute a breach of agreement. However, there is no settled position in Singapore that the terms of use of a website are always enforceable against the user. This would depend on whether the terms of use were sufficiently brought to the attention of the user, and whether the user actually accepted them. Many sites do not require the user to expressly agree (e.g. by clicking a button) before they can access the site, so as not to negatively affect the user experience.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The PDPC has issued the following guidance to assist the industry with navigating AI use:
- 5 June 2018: PDPC published a Discussion Paper on AI and Personal Data – Fostering Responsible Development and Adoption of AI, setting out its preliminary analysis of issues pertinent to the commercial development and adoption of AI solutions;
- January 2019: IMDA/PDPC published the 1st edition of the Model AI Framework;
- January 2020: IMDA/PDPC published the 2nd edition of the Model AI Framework;
- April 2024: PDPC published the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems, to give organisations certainty on when they can use personal data to develop and deploy AI systems, and guidance on how to be transparent with consumers about how their AI systems use personal data to make recommendations, predictions or decisions;
- July 2024: PDPC published a Proposed Guide on Synthetic Data to assist organisations in understanding synthetic data generation techniques and potential use cases for AI.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The PDPC has not yet published any enforcement decision or released any statement on specific cases involving the processing of personal data with artificial intelligence.
-
Have your national courts already managed cases involving artificial intelligence?
Singapore’s courts have not yet issued decisions on cases involving artificial intelligence.
However, our Court of Appeal has issued a landmark decision on the use of deterministic algorithms to conclude contracts in Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02. A deterministic algorithm is one that “will always produce precisely the same output given the same input […] and does not have the capacity to develop its own responses to varying conditions” (see [15]). Therefore, where an attempt is made to void contracts concluded by such algorithms for unilateral mistake, in order to determine knowledge of that mistake, the court will refer to the state of mind of the algorithm’s programmers from the time of programming up to the point when the relevant contract is formed (see [97] to [99]).
It will be interesting to see whether the same principles apply where the algorithm is non-deterministic (i.e. for the same input, it may produce different outputs), or if there are multiple programmers, as the software used by B2C2 was devised almost exclusively by one of its founders.
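To make the distinction concrete, below is a minimal, hypothetical sketch in Python. It is not drawn from the judgment or from B2C2’s actual trading software; the function names and figures are invented purely for illustration. The deterministic function always returns precisely the same output for the same input, so its behaviour is fixed entirely by its programming, whereas the non-deterministic function may return different outputs for identical inputs.

```python
import random

def deterministic_quote(best_bid: float, best_ask: float) -> float:
    # Deterministic (in the Quoine v B2C2 sense): the same inputs always
    # yield precisely the same output, so everything the algorithm will
    # ever do is fixed at the time of programming.
    return (best_bid + best_ask) / 2

def non_deterministic_quote(best_bid: float, best_ask: float) -> float:
    # Non-deterministic: identical inputs may yield different outputs,
    # here because of a random perturbation (in modern systems, the
    # variability may instead come from ongoing learning).
    midpoint = (best_bid + best_ask) / 2
    return midpoint * random.uniform(0.99, 1.01)

# The deterministic quote never varies for the same inputs...
assert deterministic_quote(100.0, 102.0) == deterministic_quote(100.0, 102.0)

# ...whereas two calls to the non-deterministic quote will almost
# certainly differ, even though the inputs are identical.
print(non_deterministic_quote(100.0, 102.0))
print(non_deterministic_quote(100.0, 102.0))
```

On the court’s reasoning, the knowledge relevant to unilateral mistake for the first kind of algorithm can sensibly be located in its programmers; for the second, the output is not fully determined by the programming, which is why the open question noted above matters.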
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
At present, Singapore does not have a dedicated AI regulator. The IMDA plays a key role in promoting the responsible adoption of AI across the public and private sectors. It has issued Model Frameworks for traditional and generative AI that apply across all sectors, as well as developed AI Verify (an AI governance testing framework and a software toolkit) in consultation with the industry, amongst other initiatives. Regulators in other sectors also issue guidelines on the use of AI for their sectors (e.g. health, finance, etc.).
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
The use of AI by businesses in Singapore is gaining momentum. In IBM’s Global AI Adoption Index 2023, which surveyed 148 IT professionals at enterprises (organisations with more than 1,000 employees) in Singapore, 53% indicated that their organisations have deployed AI as part of their business operations, and 41% indicated that their organisations are exploring AI but have not yet deployed it.
To aid businesses in deploying AI solutions in the workplace, our regulators have also issued “A Guide to Job Redesign in the Age of AI”, which recommends that “jobs” be broken down into “tasks”, as AI changes how tasks are performed. The guide also sets out considerations for deciding whether a task should be automated.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
The legal sector is already using AI in a variety of ways, such as for discovery in litigation, and for due diligence processes in M&A transactions. With the accessibility of generative AI tools, the legal sector is also starting to explore how to integrate such tools into their workflows such as for research or document generation.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Of the 5 challenges set out below, 3 relate to the developing landscape of how this new technology should be regulated, as there is no one-size-fits-all solution; the other 2 relate to changes in the nature of legal practice. We have chosen to address the challenges and opportunities together, since the challenges are also opportunities to clarify the law and to ensure the legal profession keeps pace with technological developments.
- Because this is a developing field both in Singapore and overseas, it is important for lawyers to keep abreast of overseas developments, as technology can be exported across international borders. The pace at which legislation and guidelines are issued across the world has increased exponentially in 2023 and 2024, and lawyers must remain up to date on these latest developments.
- AI can be deployed in so many ways that there is no one-size-fits-all solution (or regulation), and lawyers must be keenly aware of this. The rules surrounding AI used in a music recommendation system will differ from those governing a system used by a bank to determine whether a person should be granted a loan, because of the differing gravity of their impact on a person. The challenge will be in calibrating the level of governance measures/precautions to be taken in each scenario, without exposing the organisation to unnecessary (legal and other) risk.
- Determining liability where the AI system causes damage or does not perform as expected. Lawyers must be aware of the features of AI that make it different from other technologies, and assess whether there may be limitations in applying existing legal principles and how to overcome them. AI systems learn from the data they are trained on and can improve with experience without being explicitly programmed. Aside from this ‘autonomous’ quality (their outcomes may not always be foreseeable), the quality of the data used to train the system matters, as does how different the real-world data input into the system is from the training data, as that also affects the AI system’s performance.
- The nature of the work performed by lawyers will change as Large Language Models are increasingly incorporated into legal practice, in tandem with other AI tools. Lawyers must understand the technology so that they can decide how to harness it in their work (including taking precautions for client confidentiality and checking the content generated by generative AI tools), and explain its use to their clients.
- There is increasing demand from the public for legal AI tools that laypersons can use to access the law on their own. Lawyers will have to address issues such as where to draw the line between generative AI giving legal information and giving legal advice, and who is to assume liability if the advice or information rendered is incorrect.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Singapore takes a practical, balanced approach towards the regulation of artificial intelligence, with the aim of promoting the safe and responsible development and use of artificial intelligence. While Singapore is not currently looking to enact any general AI legislation, we will update our regulatory frameworks where necessary, and do so in concert with other jurisdictions, to account for the global nature of AI.
Therefore, over the next 12 months, we will likely see our regulators issuing more guidelines to the industry in their specific sectors, together with more public consultations. Relevant use cases of AI will also be analysed so that any basis for new laws and regulations is grounded in evidence. Our regulators have indicated that no person has all the answers when it comes to regulating this space, hence they will be working closely with the industry to understand the benefits and challenges of AI across a spectrum of use cases before deciding on the regulatory approach.
Testing and evaluation frameworks (e.g. AI Verify, Project Moonshot) will continue to be developed in partnership with the industry. Presently, our testing frameworks are not pass-fail, but going through a series of questions and/or technical tests requires organisations to consider their AI systems and internal governance measures thoroughly, and to work on the areas flagged for improvement.
In addition, Singapore has highlighted the importance of having evaluations for AI models tailored to local conditions – currently, the framing of toxicity, bias and demographic considerations in Large Language Model (LLM) evaluations tends to be Western-centric, and existing benchmark datasets and tools are primarily developed in English. A guide to multilingual and multicultural red teaming for Large Language Models (developed by IMDA and frontier companies such as Anthropic) will be released later in 2024.
Finally, Singapore and Rwanda are collaborating on the development of an AI Governance Playbook for small states (announced at the Digital Forum of Small States Ministerial Dialogue on 30 May 2024), which is targeted to launch in the latter half of 2024. The Forum of Small States is a grouping of 108 UN Member States. The Playbook will address common challenges associated with the secure design, development, evaluation and implementation of AI systems.