-
What are your country's legal definitions of "artificial intelligence"?
To date, Mexico does not have a legal definition of Artificial Intelligence. However, on May 15, 2024, Senator Alejandra Lagunes presented before the Senate the "National Agenda for Artificial Intelligence for Mexico 2024-2030", which includes a diagnosis of this technology and a series of recommendations for our country. The Agenda adopts the following definition of Artificial Intelligence: "For the purposes of this document, we refer to the OECD Recommendation of the Council on Artificial Intelligence, which conceptualizes AI-based technologies as a machine-based system that, with explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. (OECD, 2023)." The Agenda is currently regarded as the general framework and roadmap for eventual AI regulation.
Senator Ricardo Monreal filed a Bill of Law on April 2, 2024 (which was not discussed before Congress due to the presidential election process) that defined Artificial Intelligence Systems in a manner similar to the European regulation, as follows: those [systems] that involve the use and exploitation of information technologies to create computer programs capable of performing calculations, operations, research, or reasoning comparable to those performed by the human mind.
Before that, on January 29, 2018, the document "Estrategia Digital Nacional" (National Digital Strategy) defined Mexican Government actions to ensure access to and use of information and communication technologies, to maximize their economic, social, and political impact for the benefit of Mexicans' quality of life, and provided cooperation mechanisms with industry, civil society, and academia for their successful adoption. It is noteworthy that Mexico was one of the first 10 countries to issue a document of this kind, and the first in Latin America.
In this document, the Artificial Intelligence Strategy section occupied a prominent position. Although no formal definition was provided, AI was described as "the group of digital technologies that allow machines to perform complex tasks that normally require human intelligence; and can generally be divided into two main fields", dividing AI into:
- General Artificial Intelligence, in which machines can fully replicate human intellectual capabilities, and even exhibit traits of consciousness.
- Specific Artificial Intelligence, where technologies and techniques such as data mining, deep learning, machine learning, or artificial neural networks are used to perform specific actions and are already available for use in early stages.
However, since a new Mexican Congress is expected to be seated on September 1, we expect this topic to be among the priorities for regulation, as President-elect Sheinbaum has indicated that science and technology will be a top priority in her agenda.
-
Has your country developed a national strategy for artificial intelligence?
Mexico has recently presented a detailed "National Artificial Intelligence Agenda for the 2024-2030 period", spearheaded by the National Alliance for Artificial Intelligence (ANIA), a group of cross-functional and multidisciplinary experts on the topic. It is a multi-pronged approach that goes beyond merely using AI, focusing on responsible development and on maximizing its benefits for our country.
Here are some of the key areas the Agenda tackles:
- Building a strong foundation: This involves creating clear rules and regulations for AI use, ensuring data privacy, and addressing potential biases in AI algorithms. It also emphasizes education and workforce development to prepare everyone for the AI-powered future.
- Fostering innovation and growth: The Agenda supports research in core AI technologies and their applications in various sectors like healthcare, agriculture, and manufacturing. It also aims to create an environment that encourages AI startups and attracts investments in this field.
- Addressing challenges and risks: The plan acknowledges potential risks like cybersecurity threats and job displacement due to automation. It proposes strategies to mitigate these risks, including developing robust cybersecurity protocols and retraining programs for impacted workers.
- Collaboration and Governance: The Agenda emphasizes a collaborative approach, bringing together government, academia, industry, and civil society to implement the plan effectively. It also suggests establishing democratic governance mechanisms for responsible AI development.
- Mexico’s Global Position: The Agenda aims to position Mexico as a leader in the global AI conversation. This could involve collaborating with international organizations and fostering knowledge exchange with other countries.
The Agenda intends to advance its program along three main avenues:
- Public Policy recommendations, which focus on incorporating public policy actions within larger national and international plans, such as the 2030 Sustainable Development Goals; creating an educational framework that prepares youth for future jobs with AI; promoting professional education, retraining, and upskilling in line with employment requirements while guaranteeing labor rights; increasing investment in emerging-technology R&D; promoting and developing cybersecurity risk management systems; promoting human rights and the protection of minorities in AI development and use; and investing in digital public infrastructure;
- Regulatory recommendations focus on issuing regulations or amendments on privacy and personal data, copyright and intellectual property, neuro rights, software and hardware certification, cybersecurity, data centers, and cloud computing. All of the above regulations are to be implemented in a progressive, flexible, and adaptive manner, even contemplating self-regulation and co-regulation as the right approach, in line with international best standards for technological neutrality, competition, risk management, privacy, safety, ethical use, and transparency. Regulatory sandboxes are also strongly recommended, alongside regulation based on AI risk level. The document also recommends strengthening several of the relevant regulators, such as the data protection, consumer protection, telecommunications, competition, copyright, and electoral regulators, among others, as well as the judiciary.
- Governance recommendations focus on the creation of a National Digital Agency and an office of AI within the agency, which would coordinate all government agencies’ actions and ethical approaches to AI, along with other intergovernmental actions.
Overall, the National Artificial Intelligence Agenda for Mexico 2024-2030 is a comprehensive strategy with the potential to make AI a powerful tool for inclusive growth and social progress in our country. It’s a forward-thinking plan that focuses on responsible development and collaboration to ensure AI benefits all Mexicans.
However, since a new Mexican Congress is expected to be seated on September 1, we expect this topic to be among the priorities for regulation, as President-elect Sheinbaum has indicated that science and technology will be a top priority in her agenda.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Current regulation of AI in Mexico is not centralized, although this may soon change due to potential regulation following the above-mentioned AI Agenda. At the Federal level, the 2018 National Digital Strategy and its AI Strategy document provided the framework for regulation. The UNESCO AI Readiness Assessment Methodology has also been adopted by the government as part of this framework. Mexico participated in the 2023 Declaration of Santiago for the regulation of ethical AI, the New Delhi Global Partnership on Artificial Intelligence Summit 2023, and the ITU AI Summit, among others. Notably, Mexico has historically been a leader in international AI regulation discussions, such as UN General Assembly Resolution 72/242 regarding the "Impact of rapid technological change on the achievement of the Sustainable Development Goals", and the related document 73/17 before the Third Forum on Science, Technology, and Innovation, which dedicated important sections to the review and regulation of AI.
Several regulatory bodies, such as the data protection and transparency regulator (INAI), have issued documents with non-binding recommendations for the ethical and lawful use of AI in a manner that respects privacy. The consumer protection regulator (PROFECO) recently published, together with an academic organization, "Ethical Guidelines for AI Systems". The competition regulator has issued some reports, participated in discussions, and been active in digital market investigations. The trademark regulator (IMPI) has issued only some academic papers, while making extensive use of AI in its own services. We have not identified other authorities, such as the courts, the electoral regulator, and others, as having issued relevant regulations or recommendations, although most of them already use some sort of AI in their operations. Notably, most Mexican regulators have actively participated in the public discussions derived from the National AI Agenda.
Given the absence of specific AI regulation, the most likely applicable laws are, among others:
- Copyright Law – However, this law is not entirely ready for an AI scenario, where the author or title holder might not always be clear, in addition to a lack of clarity in many scenarios such as derivative works, and the complexity of enforcement against infringing AI models or users. It would also apply to trade secrets used or generated by AI, although its original wording was not drafted with AI in mind.
- Privacy Law – The law is principles-based, which facilitates its application to novel areas such as AI. However, AI poses scenarios of higher complexity.
- Consumer Protection Law – since the use of AI systems and services might constitute a consumer relationship. However, the current wording may be insufficient for some specific uses and details.
- Criminal Code – several crimes may apply to AI-related scenarios. However, criminal law is of strict application, and thus, even if the statutory wording comes close to a specific case, the conduct may not fall within the defined criminal offense.
- Commercial Code – which regulates B2B relations, including the sending of commercial messages; thus, it would cover B2B uses of AI, such as its use for marketing, advertising, and other purposes.
- Federal and local Civil Codes– as they provide general rules on contracts, torts, civil liability, damages, negligence, and other scenarios that could stem from AI use. However, their applicability is to be discussed case by case.
- Telco Law – this law mainly addresses technical elements of telecommunications and broadcasting infrastructure. However, its regulator is seen as the natural AI regulator due to its technical knowledge.
- Labor Law – indirectly, as AI could be seen as a work tool for employees, in addition to the risks of job displacement.
- Health, banking, and other professional services regulations may apply, as such activities are heavily regulated, including who may provide the services and how, and the use of AI in different scenarios may infringe such laws. Naturally, their current wording does not provide for AI-related cases.
- Minors' protection rights would be applicable to the use of AI by minors, as well as to their protection, covering not only privacy and speech but also their developmental and childhood rights.
There are more than 35 bills of law touching on AI, but most of them attempt to regulate incidental matters, such as crimes, rather than the core elements of the technology. The only comprehensive AI bill, presented by Senator Monreal, was a shorter version of the European AI Regulation, focusing on regulating AI according to risk levels.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In Mexico, although there are no specific regulations exclusively addressing defective artificial intelligence (AI) systems, existing legal frameworks can be applied to ensure the safety and reliability of AI technologies. These include:
Federal Consumer Protection Law (Ley Federal de Protección al Consumidor)
This law provides a broad framework for protecting consumers from defective products and services. While AI systems are not explicitly addressed, consumers can utilize the protections and remedies provided by this law. This includes seeking compensation for damages caused by defective AI systems that fail to meet safety expectations.
Federal Civil Code (Código Civil Federal)
The Federal Civil Code establishes general principles for liability in cases of damage caused by defective products. This encompasses damages caused by negligence or willful misconduct, which can apply to developers or manufacturers of AI systems that fail to ensure the correct use and safety of their services.
Federal Law on the Protection of Personal Data Held by Private Parties (Ley Federal de Protección de Datos Personales en Posesión de los Particulares)
This law is relevant for managing the level of security that AI systems must maintain in protecting user data. It imposes obligations on developers and operators to ensure data security and privacy. Non-compliance with these requirements can be considered a defect in the AI system, potentially leading to liability for any harm caused.
Federal Law on Copyright (Ley Federal del Derecho de Autor)
This law is relevant for copyright protection. If the AI becomes defective or produces incorrect content, including hallucinations, derived from the improper use of copyrighted content, such as use without authorization or license, this law may apply.
Eventually, a cybersecurity law will be extremely relevant for defective AI systems, but such has not yet been enacted.
While these existing legal frameworks provide a basis for addressing defective AI systems, several challenges remain:
- Traditional notions of product defects may not fully capture the complexities of AI systems, particularly in terms of software bugs, algorithmic errors, and unintended behaviors.
- Determining the responsible party (developer, deployer, or user) for defects in AI systems can be complex, given the collaborative nature of AI development and deployment. It is not always clear who the liable party is.
- Monitoring and enforcing compliance with safety standards for AI systems requires specialized knowledge and resources, which may be lacking in current regulatory bodies.
- International enforcement, as most AI developers are expected to be located offshore. Even where a regulatory or judicial case requires their participation for collaboration purposes, notification and service will require international cooperation. Enforcement against users would face equally severe international cooperation challenges.
Recognizing these challenges, Mexican authorities are exploring new regulatory frameworks specifically tailored to AI technologies. Legislative proposals are being discussed to create specific regulations for AI systems, focusing on safety, transparency, and accountability.
Mexico is engaging with international bodies to align its AI regulations with global best practices, ensuring comprehensive safety standards for AI systems.
While existing laws in Mexico provide a foundational framework for addressing defective AI systems, there is a clear need for more specific regulations to effectively manage the unique challenges posed by AI technologies. Ongoing legislative efforts aim to fill these gaps, ensuring that AI systems meet the safety expectations of the public.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Mexico does not have a detailed regulation related to the use of AI, nor the liabilities inherent to its application. In this regard, in a case involving such systems, the general principles of civil, criminal, and commercial liability would apply. Such rules are outlined in Mexico’s Federal Civil Code, Commerce Code, and the applicable Criminal Codes.
As a general principle, under such codes, a party that has been damaged has the right to be compensated. Furthermore, under the Mexican doctrine of liabilities, a person is liable when the affectation or damage is directly linked to a conduct or omission that the law characterizes as illegal (or the lack of fulfillment of an obligation). We consider that such a principle would apply to the cases where a person has suffered damage related to the use of AI.
Significantly, USMCA Article 19.17 provides a safe harbor (exclusion of liability) for intermediaries with respect to user-generated content. While this provision was not drafted with AI in mind, it could arguably extend to AI service providers. The main difficulty would come from its requirements: the safe harbor applies insofar as the provider removes any unlawful content, something AI developers might not be capable of complying with. Moreover, even if applicable, this section of the Treaty has not yet been implemented in local law (except for copyright), despite the Treaty requiring implementation within three years of its ratification, which could support an argument that it is directly applicable.
There is a relevant case pending resolution before the Supreme Court, Ulrich Richter vs. Google, where Richter has prevailed in claiming liability and damages against Google for "enabling" users to infringe his fundamental rights through a blog created by a third-party user. The case will likely shape how liability is determined for technology providers.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Mexico’s case law has not dealt with the allocation of liability related to the use of AI.
Depending on the use of AI and the relevant facts of the case, liability could be allocated to the developer, the deployer, the user, or the victim. Factors such as the characterization of the technology, the way it is operated and the position of the person in charge of such operation, the existence of a violation of law (or negligence) in the use of such tools, and the harm suffered by the victim will be relevant to determining the allocation of liability associated with AI.
Please also refer to the USMCA and the Ulrich Richter vs. Google case comments in response to Question 6 above.
Consequently, we will have to wait for Mexican precedents to learn how the courts will interpret liabilities derived from the use of AI.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
As a general principle, in Mexico, a person is liable when he causes an affectation or damage to another person and such damage is directly linked to a conduct or omission that the law characterizes as illegal (or the lack of fulfillment of an obligation). Furthermore, Mexican law recognizes that affectations may result from the use of tools and technology.
Mexico’s damages law distinguishes between objective damages (whether contractual or extracontractual) and moral damages. We foresee that the use of AI could trigger potential claims rooted in both types of damages claims.
A victim must initially prove: (i) the infringement of a legal provision or negligence in the use of AI; (ii) the existence of direct, real, and actual damage to the victim; and (iii) that such damage was directly caused by a person through the use of AI tools.
It must be noted that Mexican courts have discretionary powers to resolve disputes related to AI liabilities. The outcome of a possible case would depend on the relevant courts' interpretation of the facts and evidence presented before them. As of today, there is no certainty or predictability as to the outcome of cases involving the use of AI in Mexico.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
In Mexico, insurance products related to artificial intelligence are emerging as part of known insurance products, in particular liability insurance and cybersecurity insurance.
Artificial intelligence has the potential to cause significant damage through its systems or defective algorithms. A mistake by an artificial intelligence system may cause financial losses and damages to third parties, which could be addressed by a well-structured liability policy covering legal costs and indemnification.
Also, artificial intelligence may be vulnerable to cyberattacks; in this scenario, cybersecurity insurance would be paramount to protect such systems, covering costs related to security incidents, as well as data recovery, breach notifications, and forensic investigations.
Even though there are no specialized insurance products for artificial intelligence yet, insurers are making strides to address the main risks concerning artificial intelligence through known insurance products, taking the Mexican insurance industry one step closer to the future.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Under the Federal Law on the Protection of Industrial Property (FLPIP), AI cannot be named an inventor, as it expressly establishes that inventions are “human creations”, and the inventor is the individual or individuals who are indicated as such in the patent application.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Images generated by and/or with artificial intelligence are not excluded from copyright protection. However, because AI cannot be considered an author under applicable Mexican law, the work product resulting from the use of AI cannot be attributed to the AI directly as author or owner. A notable aspect under consideration is whether the creators/owners of AI systems should be acknowledged as authors of the works produced by those systems, or whether the user of the AI can be regarded as an independent author. There is room to argue that the user may at least claim authorship over the prompt, although it is unclear who would be the author of the output. The absence of AI regulation in Mexico poses certain difficulties for determining rights and obligations concerning works generated by AI, which will likely result in the enactment of specific regulations or legislation.
In 2023, the Supreme Court ruled that only humans can be considered authors, as "creativity is related to the capacity of a human person to create something". However, the Court recently admitted for review a case in which a lawyer sought to register the authorship of a work under "ChatGPT"; the registration was denied and the denial was challenged in court. The plaintiff argues that the Court should recognize the "legal personality of synthetic persons".
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
The use of AI in the workplace is neither regulated nor standardized. There are ethical questions as to whether its use should be prohibited or allowed and, if allowed, what the ethical rules for its use should be. There are also concerns regarding the information fed into AI systems, due to privacy and copyright considerations, as well as confidentiality and safety issues. Most companies disallow the use of sensitive or confidential data with AI, due to the lack of clarity on how that information would be used, including for training the AI model. Also, depending on the line of business, the kinds of AI systems and uses will vary, as some uses might be considered risky, improper, or unethical (such as bias, the use of sensitive data, or emotional manipulation). Thus, there is a growing trend for companies to draft AI Ethical Codes, which describe what can be done, what should not be done, and what requires authorization from newly created AI Ethics Committees or Chief AI Officers.
-
What privacy issues arise from the use of artificial intelligence?
Currently, the main concerns relate to model training, identity theft, and data subject identification/consent to the processing of personal data. AI systems typically require large quantities of data for effective training, which may encompass sensitive personal information such as biometric data, health records, financial details, and behavioral patterns. Concerns emerge when the processing of this data is not adequately informed or consented to, potentially resulting in breaches and misuse. Moreover, individuals in a given data set may be re-identified from anonymized data. AI also has the capability to infer sensitive information about individuals based on patterns in behavior or interactions, predicting health conditions, political beliefs, or sexual orientation. Finally, surveillance technologies, including facial recognition and biometric scanning, can infringe local privacy rights if such AI-powered technologies do not operate with the express and written consent of the data subject or when the processing of such data exceeds the limits of any consented purposes.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
Data scraping is currently not regulated in Mexico, which has allowed scraping companies to collect and process data largely without restriction, effectively in violation of applicable privacy and data protection laws. However, the increased use of data scraping and data mining technologies may result in the enactment of new regulations.
From a competition point of view, data scraping could fit into two currently regulated categories related to the risks this practice poses to competition.
First, using data acquired through data scraping could mislead consumers, which may constitute "unfair competition" as regulated in the IP Law, in line with Article 10bis of the Paris Convention and WTO rules, in this case by creating confusion with a competitor or misleading the public as to the nature of the goods or services offered.
Second, the owner of the information might engage in relative monopolistic practices by preventing others from scraping public data from its digital content. This interpretation suggests that a dominant player restricting access to data might pose a threat to competition. This was the argument made in the US case hiQ Labs, Inc v LinkedIn Corp, where a startup sued LinkedIn for preventing it from scraping data that the plaintiff considered an essential input for its business, and for allowing data owners to selectively restrict data access to competitors while granting access to third parties for commercial use. While we have not seen similar cases in Mexico, similar "refusal to deal" arguments amounting to relative monopolistic practices could be made, as this interpretation has apparently gained traction given the growing role of data as an essential competitive input.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
We have seen very few relevant cases, so it is difficult to identify case law or a trend. Strictly speaking, a website's terms of use are binding between the parties. Under civil law, consent can be express (such as clicking a button) or tacit (simply by using the website), and thus a contractual relationship would exist. Breach of that private contract would therefore be a valid argument before the civil courts. Additionally, there are prohibitions in the Copyright Law, the Privacy Law, and the Criminal Code that could prevent data scraping. Therefore, from a normative perspective, a claim against data scraping should be viable.
However, the main challenge comes from proving both (i) that a contractual relationship exists and (ii) that data scraping effectively took place. Both require expert forensic opinions, which can be complex and difficult to obtain. This is even harder where there are international or offshore elements. Mexican courts are not particularly experienced in this kind of case. Due to the above, most corporations that are victims of data scraping rely heavily on extrajudicial measures, such as "cease and desist" letters and similar strategies, before initiating litigation.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes. The data privacy regulator, the National Institute of Transparency, Access to Information and Personal Data Protection (INAI), has published general Recommendations for the Processing of Personal Data derived from the Use of AI, emphasizing data controllers' obligations to: (i) provide data subjects with a privacy notice before processing any personal data; (ii) request specific consent tied to the specific purposes informed through the privacy notice; (iii) avoid processing personal data for additional purposes, except when compatible or analogous with those for which the data was originally collected; (iv) establish and document procedures for the conservation, blocking, and deletion of personal data, including conservation periods; and (v) implement procedures to evaluate risks derived from new products, services, technologies, and business models that involve the processing of data.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
Not in an official capacity or through publicly available information.
-
Have your national courts already managed cases involving artificial intelligence?
We are not aware of any public cases associated with the use of AI and the liabilities inherent to its use.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
No, there is not yet any such regulator. Different bills of law consider different options, such as the telecommunications regulator or a new specialized AI regulator. Meanwhile, the regulation applicable to AI is dispersed, and so is its oversight.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Mexico has usually been an enthusiastic adopter of technology. Regarding AI, some corporations are quickly transitioning to using AI in their day-to-day operations, but the vast majority remain in a "wait and see" position. Since this is a novel and not fully understood technology, with a fast-evolving ecosystem of players, and since most experts, media, and regulators are not fully aware of its potential, limitations, implications, and risks, widespread use in daily life and work will likely take some time and further clarity, despite generalized enthusiasm and interest in learning and understanding what it is and how it works.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
In our experience, most law firms are also in a "wait and see" mode, as they want to obtain all the elements necessary to assess the risks around the use of AI, including risks to their clients' information. Other concerns relate to what can and cannot be used, and how to train younger generations in their use of AI systems. However, most firms are quietly exploring different AI options, mainly in contract drafting, locating and retrieving information from large document volumes, and simple chatbots. Use for regulatory or judicial predictions is significantly limited. In all of these areas, use is usually limited and in testing mode.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Generative AI systems based on large language models (LLMs) offer the following key opportunities:
- They can significantly reduce the amount of time dedicated to drafting standard and unchallenging work, freeing up time for challenging and interesting work that adds value for clients. The system can be used as a starting point, so legal documents do not need to be drafted from scratch and can instead build on prior similar work.
- AI systems are very useful for reviewing and summarizing large volumes of information and documents, helping identify relevant sections and providing metrics and trends.
- AI systems can help create smooth, standardized, and centralized legal processes and systems, increasing the standardization of quality and efficiency throughout the entire organization.
- AI can increase the quality of work, acting as an efficiently trained paralegal tool, providing improvements to prior arguments and wording, and highlighting strategic weak points in drafting and negotiation.
- Further in the future, AI systems could provide insights on trends and relevant data, both on the performance and skills of law firm members and on regulatory, judicial, and even legislative actions, with very detailed information, paving the way for predictive AI models.
At the same time, they pose the following challenges:
- Proving they are safe to use under legal, confidentiality, privacy, and cybersecurity standards, as feeding clients' information into them is not currently trusted.
- Proving that the AI system was not trained with unlawful or infringing data, either from a copyright or a privacy perspective.
- Proving that bias and hallucinations have been identified in recurrent audits and have been mitigated.
- Clarifying who would hold authorship and ownership of the system's output and how it can be used, including its reuse by the AI system.
- Enabling ethical and legal use, providing sufficient alerts and signals (such as indelible watermarks) to show when work was done with the aid of AI, and inhibiting, preventing, and controlling risky, dangerous, unethical, or unlawful uses.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Although many of the most impressive AI developments are in the image, voice, and video sectors, law firms are mostly interested in:
- Drafting, editing, proofreading, and review options for contracts, lawsuits, memorandums, and legal opinions, using prior similar documents in the firm's historical database as a basis.
- AI systems used to identify, locate and retrieve specific words, clauses or language, mainly when related to large volumes of documents.
- AI systems used to summarize large or complex documents, including contracts, statutes and regulations, case law, and others.
- AI systems used to manage legal processes, documents and information, such as litigation management, contract lifecycle management, and others.
- AI systems used for clerical and administrative work, such as billing systems, email summaries, email drafting, and the creation of charts, tables, presentations, and organization charts.
- AI systems in the form of chatbots to respond to simple or standard questions.