-
What are your country's legal definitions of “artificial intelligence”?
While there is no current legal definition, the UAE National Program for Artificial Intelligence Guide (the “Guide”) notes that Merriam-Webster defines artificial intelligence (“AI”) as a “branch of computer science dealing with the simulation of intelligent behavior in computers.” The Guide goes on to say that “AI defines a collection of technologies enabling a machine or system to comprehend, learn, act, and sense like a human.”
-
Has your country developed a national strategy for artificial intelligence?
Yes. In 2017, the UAE appointed a Minister of State for Artificial Intelligence, H.E. Omar Bin Sultan Al Olama. In April of 2019, the UAE Cabinet approved the National Strategy for Artificial Intelligence 2031 (the “National Strategy”), with the goal of making the UAE one of the world’s leaders in AI. To implement the National Strategy, the UAE established the Office of Artificial Intelligence.
The National Strategy has eight main objectives:
- Build a reputation as an AI destination;
- Increase the UAE's competitive assets in priority sectors through the deployment of AI;
- Develop a fertile ecosystem for AI;
- Adopt AI across government services to improve lives;
- Attract and train talent for future jobs enabled by AI;
- Bring world-leading research capability to work with target industries;
- Provide the data and supporting infrastructure essential to become a test bed for AI; and
- Ensure strong governance and effective regulation.
The National Strategy prioritizes current key industries such as logistics, energy, and tourism, as well as increasingly important sectors such as healthcare and cybersecurity.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
The UAE Cabinet appointed the UAE Artificial Intelligence and Blockchain Council (the “Council”) to oversee the implementation of AI technology in society and across the government sector. The Council is to propose policies to create an ecosystem that will be AI-friendly while maintaining privacy and ethics in society.
The UAE Regulations Lab (“RegLab”) is designed to proactively anticipate and develop future legislation governing the use and application of emerging technologies in the UAE. The RegLab was launched in January 2019, and the UAE Cabinet uses it to grant temporary licenses for the testing and vetting of innovations that use technologies such as AI. The Sandbox is a similar concept to the RegLab, allowing companies in the insurance sector and insurtech startups to apply to test their products in a controlled environment.
In the free zones, the Abu Dhabi Global Market (“ADGM”) RegLab and the Dubai International Financial Centre (“DIFC”) Innovation Testing License (ITL) Programme enable temporary license holders to test and develop innovative fintech products, services, and business models in a controlled environment without being subject to the full suite of regulatory requirements. This gives participants in these programs the opportunity to explore and develop solutions in a risk-appropriate setting.
Additionally, the Central Bank of the UAE, together with the Securities and Commodities Authority, the Dubai Financial Services Authority of the DIFC, and the Financial Services Regulatory Authority of the ADGM, has issued the “Guidelines for Financial Institutions adopting Enabling Technologies”. This is currently only a draft paper for discussion and, even once implemented, the guidelines are not to be considered regulations or standards issued by these authorities. The paper emphasizes that financial institutions should have adequate governance, accountability, and consumer protection frameworks related to the potential use of AI.
AI, and its various uses, is still in its infancy. The need to implement regulations is high, but the inclination to watch how AI unfolds and disrupts various industry sectors through its application appears to be even stronger. While there are no current regulations or guidelines governing AI, existing regulations, such as those relating to consumer protection and civil liability, could apply to AI and its use. Although these may prove difficult to apply as blanket AI regulations, they could set the foundation for regulators around the globe that are adopting a “wait and see” approach to AI.
The remaining Q&As of this Article will detail how current UAE regulations could be applicable to AI, and the challenges that could be faced in doing so.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
As the UAE (excluding the financial free zones) is a civil law jurisdiction, UAE laws and regulations are codified. The relevant principles that may be applicable to AI can be found in the Civil Transactions Law No. 5 of 1985, as amended by Federal Law No. 30 of 2020 (“Civil Code”); Federal Law No. 15 of 2020 on Consumer Protection (“Consumer Protection Law”); and Federal Law No. 10 of 2018 on Product Safety (“Product Safety Law”). The questions below address each area in more detail.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Article 316 of the Civil Code states that “any person who has things under his control which require special care in order to prevent their causing damage, or mechanical equipment, shall be liable for any harm done by such things or equipment, save to the extent that damage could not have been averted.” This could be applicable to machines powered by AI, such as Carrefour’s Tally Inventory Robot. What would then happen if Tally malfunctions and knocks over products, or shelves of products, that injure human customers? Who will be held responsible? In cases concerning product defect issues, strict liability is imposed.
The difficulty with AI is that it is hard to isolate a sole responsible person. The “person” responsible could be the engineer who programmed the algorithm, the company responsible for its funding, or the company employing Tally in its stores. In theory, this issue could be mitigated by Article 291 of the Civil Code, which addresses joint and several liability, stating that “when several persons are responsible for a prejudicial act, each one of them is responsible for his share in it…” In practice, however, this could prove challenging. How is each “share” of damages caused by AI apportioned? One could argue the engineer developing the algorithm has the greatest responsibility to ensure the end product does not contain any defect. One could also argue the quality control department of the entity distributing the end product is even more liable, as it should have foreseen potential harmful issues.
In relation to criminal liability, Chapter 2 of Federal Decree Law No. 31 of 2021 on the Issuance of the Crimes and Penalties Law (the “Penal Code”) outlines the two elements of a crime, both well known in every jurisdiction: actus reus and mens rea. Article 32 of the Penal Code states that the “actus reus of a crime consists of a criminal activity either by act or omission when such act or omission is criminalized by law.” Article 39 states that the “mens rea of a crime consists of the intent or the fault.” The use of AI in itself is not a criminal activity. This leaves lawmakers with the duty to categorize the different uses of AI and attribute them to potential criminal acts.
For example, configuring an algorithm to program an automated car is legal. However, configuring the algorithm of said automated car to drive on the wrong side of the road is illegal. In the latter scenario, both the act and the intent are present, and proving criminal liability is simple. There are, nevertheless, other possibilities that can arise between the configuration of the algorithm and an accident involving a defective automated car. AI has advanced to the point where it is not only powered by humans feeding it information; it now teaches itself to a certain extent. What happens when an automated car, perfectly configured to self-drive in a safe manner, teaches itself to drive outside the parameters of the original algorithms and ends up on the wrong side of the road?
Similar to Tally’s malfunction in the aisles of Carrefour, it would be difficult to identify the exact person or entity responsible for the damages caused in this car accident. One would need to identify not only the criminal act, but also the intent to commit, or cause, this crime. AI itself cannot be held liable; it is a composition of data science, mathematically coded into existence. Yet it has evolved to the point where it can self-learn in much the same way a human can. The concept of joint and several liability would also need to evolve to encompass these scenarios.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
This question is addressed in the answer to question 5 above.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
This would again be too difficult to define at this stage. For product liability cases, individuals may look to the Consumer Protection and Product Safety Laws. Article 4 of the Product Safety Law outlines the requirements for products to be “deemed safe”. Products should conform to approved UAE laws and standards; where this is not applicable to a certain product (as may be the case for most AI-powered products), a risk assessment report for the product must be submitted to the relevant government authority for review and approval prior to release. In these situations, a victim would need to prove the absence of the above requirements for a given AI product in order to claim compensation.
In cases of criminal liability, the victim would need to prove not only the act, but also the intent to act. As mentioned in the automated car example, in situations where AI is being used to perform illegal acts, the burden of proof for the illegal act itself would apply. However, in situations where the AI malfunctioned and damages were caused by that malfunction, it would be difficult to trace back through the AI’s algorithms and self-learned code to find the exact error that led to the malfunction.
The levels of burden of proof would undoubtedly require an adjustment when lawmakers are ready to implement regulations.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The issues and challenges presented by AI do not fall neatly within current insurance frameworks. However, a few types of insurance policies could be applicable to AI if drafted in detail with adequate protections. Commercial general liability and product liability policies can cover claims of property damage and personal injury, protecting businesses against claims arising from AI-enabled products or services that cause such harm. Intellectual property insurance policies cover the costs associated with defending or enforcing an IP right, helping businesses protect those rights against infringement claims. As AI becomes more prevalent in corporate decision-making, D&O insurance can help protect directors’ and officers’ personal assets and shield them from AI-related liability claims.
Until robust insurance policies are introduced to cover the various potential AI-related claims, companies offering AI-based products and services need to be aware of the legal uncertainty and its possible evolution, and be ready to integrate emerging regulations while also considering these liability risks.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Federal Law No. 11/2021 on the Regulation and Protection of Industrial Property Rights states in Article 8 that the “name of the inventor shall be indicated on the application for a patent…” While the law does not explicitly say whether an inventor can be AI, it makes various references to a person. As such, it is not clear whether AI can be named an inventor in a patent application.
However, in 2019 the World Intellectual Property Organization (“WIPO”) announced that the European Patent Office and the UK Intellectual Property Office had considered applications in which DABUS (an AI-based creativity machine) was named as the inventor, and the owner of the AI was named as the patent applicant and prospective owner of any issued patents. Both offices, according to WIPO, found that these applications meet the requirements of patentability. In the United States, however, when Dr. Stephen Thaler filed the same applications naming DABUS as the inventor, the U.S. Patent and Trademark Office informed him that the applications were incomplete as they did not list a human inventor. On appeal, the Federal Circuit concluded that in passing the Patent Act, Congress determined that only a natural person can be an inventor, so AI cannot be.
It will be interesting to see how the UAE’s patent law unfolds, especially considering that the majority of its current laws and regulations are heavily influenced by English law.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Federal Decree Law No. 38 of 2021 on Copyrights and Neighboring Rights (“Copyright Law”) does not specifically lay out provisions for work generated by or with AI, but it was recently updated (the 2021 law replaced the old law introduced in 2002) to include “smart applications, computer programmes and applications, databases, and similar…” in its list of “Work” protected by the Copyright Law.
“Work” is defined as “any innovative production in the fields of literature, arts, or science, of whatever type, manner of expression, significance, or purpose.”
An “Author” is defined as “the person who creates the Work, or the person whose name is mentioned thereon or if, upon Publication, the Work is attributed to him as being the author thereof unless otherwise proven. Shall also be considered author, whoever publishes anonymous or pseudonymous work, or in any other manner, provided that there is no doubt as to the true identity of the Author; otherwise, the publisher or producer of the work, whether a physical or juristic person, shall be deemed as representing the Author in the exercise of his rights, until the true identity of the Author is recognized.”
Using the above definitions, one can argue that images generated by AI benefit from copyright protection. However, it will be difficult to identify the author. While one person may have input the information to produce the generated image, the AI’s algorithm was created by a different individual, or team of individuals. Additionally, the AI likely advanced by learning from the vast influx of data that was input to “teach” it over time.
In a recent case in the US concerning images generated by an AI platform called Midjourney and arranged in a specific order to create a book, the U.S. Copyright Office initially granted the author copyright in the book and the images, but later amended the registration to exclude the images as they were not generated by a human. In a recent update on the issue, the U.S. Copyright Office stated that “a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that the resulting work as a whole constitutes an original work of authorship.”
We anticipate a further update to the Copyright Law as people begin applying for copyrights on their work in the UAE.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
Focusing on the legal industry, accuracy is a primary concern. Lawyers must be sure to double-check any work produced by AI-powered tools. There is a concern that applications such as ChatGPT create a false presumption of accuracy. Lawyers are encouraged to verify the output of an AI program to avoid scenarios such as that of the New York lawyer whose court submissions were riddled with case law that did not exist after he used ChatGPT for his legal research.
Bias is another concern, as AI technology relies on algorithms to analyze vast amounts of data. If the AI is trained with biased information, the outcome will be biased, leading to unjust legal representation.
Automated decision-making processes and people analytics, especially by employers looking to hire new talent, give rise to a wide array of biases that must be carefully monitored to avoid potential workplace discrimination.
-
What privacy issues arise from the use of artificial intelligence?
Focusing on the legal industry, it is crucial to be wary of using personal data to train AI machines without redacting or anonymizing the information. Privacy and personal data have been at the forefront of legislation for the last few years, and they continue to be a primary focus as AI develops in today’s society.
AI systems rely on vast amounts of data, including highly sensitive and confidential information, and may store such data. Lawyers must ensure that AI systems adhere to the relevant data privacy regulations.
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
While there are no defined rules, ADGM’s Guidance on the Data Protection Regulations 2021 lays out a few guidelines for how a company can use personal data to train its AI systems:
- Ensure that the relevant company inputs accurate data to achieve precise results;
- Anonymize the personal data before inputting it into the system;
- Regularly review the AI system to mitigate bias;
- Have internal procedures in place to review the analysis performed by the AI and adjust the algorithm as needed; and
- Ensure the Data Protection Regulations are enshrined in the company’s internal processes so that data subjects’ rights are protected and upheld.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
On mainland UAE, the UAE Artificial Intelligence and Blockchain Council has established the Data Committee. Its main tasks include (i) identifying the main challenges of collecting data for artificial intelligence and blockchain, (ii) making recommendations for the standardization, collection and inventory of data, and (iii) designing specialized courses, policies, systems and standards for all data.
The DIFC recently announced that it will propose changes to the current Data Protection Law No. 5 of 2020. It is consulting on amendments to the current law, the majority of which focus on controls and guidelines on processing of personal data using artificial intelligence and automated systems.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
Outside of the discussions to revamp current data protection regimes, there are currently no cases involving AI.
-
Have your national courts already managed cases involving artificial intelligence?
There are currently no published cases involving artificial intelligence.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Yes, the UAE Cabinet appointed the UAE Artificial Intelligence and Blockchain Council (the “Council”) to oversee the implementation of AI technology in society and across the government sector.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
The use of AI in the UAE is becoming increasingly widespread, and major companies have adopted it. The national oil company ADNOC uses AI algorithms across all sectors of oil production and global distribution to streamline efficiency across its network and create new performance benchmarks that will increase revenue. Dubai’s Roads and Transportation Authority (RTA) has partnered with Tesla to test and deploy autonomous vehicles in the city and currently operates a self-driving metro that services approximately 8.8% of all individual trips in Dubai. Emirates Airlines is already using AI to elevate its operations, informing a myriad of flight logistics decisions ranging from the amount of fuel needed to the most efficient flight routes.
In line with the National Strategy, the use of AI will become widespread amongst many industries to help the UAE reach its goal of becoming a focal point of AI on the world map.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
Absolutely. Contracts can be reviewed and analyzed in bulk using technology-assisted review, which uses machine-learning algorithms to support litigation attorneys conducting eDiscovery by identifying whether a document is relevant to a case. Timesheet programs log hours spent by lawyers on work done for each client and generate an automatic invoice at the end of every month. Most recently, Allen & Overy announced that it will integrate Harvey, an AI platform built on OpenAI’s latest model, into its global practice to conduct tasks such as contract analysis, regulatory compliance, and due diligence, aiming to cut down time spent by lawyers and drastically reduce costs for clients.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
AI-powered applications such as ChatGPT may influence average individuals into thinking they can generate complete legal contracts, which in reality may be peppered with flaws and loopholes that do not necessarily protect their best interests. Issues related to the copyright of AI-generated work will give rise to difficulty in identifying the owner of the product. The potential for bias is increased, as AI systems are only as good as the data they are fed; lawyers who depend on this data must ensure that it is accurate and up to date before relying on it in their work. Data privacy when training the machines poses a significant risk, and stringent care is needed when redacting or sanitizing information for training purposes. The cost of implementing AI in legal departments or firms, whether by hiring individuals with AI experience or implementing AI-powered features or technology, could reach hundreds of thousands.
On the other hand, AI enables efficient research and drafting, cutting down time spent and ultimately reducing costs for the client. AI-powered tools for document review and legal precedent research speed up case preparation tremendously, leaving room for lawyers to focus on the complex drafting of court cases. AI technology in email services creates automatic reminders based on an individual’s inbox and calendar; Gmail, for example, has a feature that reminds users to follow up on emails where no response was received. AI algorithms that can predict legal outcomes based on patterns in precedent cases could give lawyers an advantage in assessing a case’s chances of success.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
While concrete laws and regulations may not yet be implemented, we anticipate, especially with the release of applications such as ChatGPT, that guidelines and principles will be issued across various sectors such as legal, cybersecurity, and privacy. There is currently an Ethical AI Toolkit that defines principles for AI systems: they should be safe, fair, transparent, accountable, and understandable. We think regulators will use these principles as the foundation for future industry-specific guidelines.
United Arab Emirates: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in United Arab Emirates.