Navigating Legal Risks in Expanding AI Applications: Challenges and Mitigation Strategies

Over the past few years, the field of artificial intelligence has developed rapidly in China, as it has in the rest of the world. Across industry, sectors of all kinds are actively embracing AI to upgrade their operations. At the national level, the Chinese government is committed to becoming a global leader in AI while ensuring safety and ethical standards and promoting fair competition. To that end, it has introduced a series of policies to promote the development of AI, together with a “package” of laws and regulations to safeguard that development.

Meanwhile, as the range of AI applications expands, regulators, policymakers, AI service providers, and users still face challenges from several perspectives.

In this section, we discuss four areas of AI application, the legal and practical risks associated with each, and how those risks can be mitigated.

Medical industry

In recent years, the application of AI in China’s medical industry has been expanding, including but not limited to medical imaging, clinically assisted diagnostic decision-making, precision diagnosis, health management, medical informatization, drug research and development, and medical robots, all of which have lifted the quality and efficiency of the healthcare industry to a new level. However, the application of AI in healthcare has also triggered extensive concern and discussion in the legal community about how AI can properly be used in accordance with the law.

In view of this, China’s legislature and competent authorities have in recent years moved continuously to regulate the application of AI in the medical industry, successively promulgating a series of laws, regulations, and codes of conduct, including but not limited to the “Basic Medical and Health Promotion Law of the People’s Republic of China (2019)”, the “Management Specifications for Artificial Intelligence-Assisted Diagnosis Technology (2017 Edition)”, the “National Hospital Information Construction Standards and Specifications (2018)”, the “Internet Hospital Management Measures (Trial) (2018)”, the “Key Points for Review of Deep Learning-Assisted Decision-Making Medical Device Software (2019)”, the “Regulations on the Supervision and Administration of Medical Devices (2021)”, the “Guiding Principles for Classification Definition of Artificial Intelligence Medical Software Products (2021)”, and the “Guiding Principles for Registration Review of Artificial Intelligence Medical Devices (2022)”.

Risks

Patients’ civil rights to health and to basic medical services: China’s Civil Code and Basic Medical and Health Care and Health Promotion Law comprehensively protect citizens’ rights to health, to their bodies, and to access to basic medical and healthcare services, and elevate people’s health to a national strategic priority. The use of large AI models to analyse drugs and diagnose patients at the stages of drug research and development, disease prevention and diagnosis, and treatment and rehabilitation has become a technology on which drug developers and clinicians increasingly rely. However, current AI is not yet capable of mastering complex medical knowledge and experience or of making personalized, precise judgments and recommendations, which may compromise the accuracy and professionalism of drug development and treatment and impair patients’ autonomy over their own health and bodies.

Insufficient Algorithm Transparency and Risk of Algorithmic Discrimination: Article 1219 of the Civil Code stipulates that medical personnel must explain patients’ conditions and proposed medical measures to them during diagnosis and treatment. During the development of medical AI, however, the actual workings of the algorithms and their decision-making processes are often difficult for outsiders to understand and monitor, jeopardizing patients’ right to informed consent. In addition, developers may unintentionally introduce bias when training algorithms, affecting the safety and accuracy of algorithmic applications and leading generative AI to make discriminatory decisions when confronted with certain patient groups.

Tort Liability Allocation and Proof Issues: When an AI malfunctions or errs in medical activities and harms a patient’s personal rights and interests, the usual result is medical malpractice and a dispute over damages. According to Article 1218 of the Civil Code, if a patient suffers damage in the course of medical treatment and the medical institution or its staff is at fault, the medical institution bears liability for compensation. This raises two questions: first, whether a medical AI can itself be a subject of liability; and second, how to determine whether the medical AI is at fault, since the “black box” nature of AI makes investigating fault very difficult. Similarly, if the medical AI is treated as a defective product, both the patient and the medical staff directly using the AI device may be able to assert product liability claims against its manufacturer.

Mitigate the risks

Standardize the use of medical artificial intelligence: In drug development and clinical diagnosis, avoid over-reliance on large AI models. In particular, Article 1008 of the Civil Code should serve as a guide: when developing new drugs or medical devices or establishing new prevention and treatment methods, any clinical trial that uses AI technology must be approved by the relevant competent authorities, disclosed to the trial subjects, and carried out only with their written consent.

Strengthen the supervision of medical artificial intelligence algorithms: As scholars have suggested, it is highly advisable to establish a filing and review system for medical AI algorithms, improve the relevant regulatory rules, and set up a specialized agency to supervise the entire process of researching, developing, and using medical AI. In addition, algorithm audits should be strengthened through multiple channels to better protect groups vulnerable to data misuse.

Clarify the rules for allocating liability: Under the “Management Specification for Artificial Intelligence-Assisted Diagnosis Technology”, AI may be used only as an aid to and reference for diagnosis, not directly as the final clinical diagnosis, which must be determined by a qualified clinician. Medical AI therefore cannot itself be a party that bears liability for damages in litigation. In the event of a legal dispute, the medical institution or personnel using the AI should be held liable for fault in accordance with the Civil Code’s provisions on medical damage; as for product liability, the producer of the medical AI should bear no-fault liability under the relevant provisions of the Civil Code.

Manufacturing industry

The application scenarios of AI in China’s manufacturing industry mainly include product development and production, enterprise operation management, and supply chain management. For example, the Tesla Gigafactory in Shanghai uses AI in its manufacturing process. In the future, AI may also be used in asset and equipment management and customer demand analysis. At the same time, these applications carry real legal risks.

Risks

Product Liability: The algorithms on which AI technology relies must be compatible with the manufacturing process; algorithms developed without engineering experience may introduce manufacturing or product defects that cause malfunctions or safety incidents. In addition, products with AI capabilities may cause personal injury or property damage that leads to product liability lawsuits.

Data security: AI products may collect users’ personal data, such as facial information. If users are not informed and have not given prior authorization, this may violate citizens’ rights and interests under the “Criminal Law”, the “Civil Code”, the “Personal Information Protection Law”, and the guiding cases of China’s Supreme People’s Court, and product manufacturers may be held civilly or even criminally liable as a result.

Labor Law: The application of artificial intelligence and robotics may replace some human jobs. Furthermore, due to the limited number of engineers who have mastered AI technology, there may be issues of unfair employment in violation of the Labor Law and the Employment Promotion Law.

Antitrust Law: The widespread application of AI involves, among other things, the accumulation of data and control of market share. Large companies may entrench their market position and engage in unfair competition through data and technology monopolies, which may trigger anti-monopoly investigations, administrative penalties, and other interventions by the authorities that enforce the Anti-Unfair Competition Law and the Anti-Monopoly Law.

Mitigate the risks

On product liability risks: Strengthen pre-market testing and certification of products to enhance the reliability and safety of AI technology, and strengthen training and technical support for AI product users to reduce accidents caused by a lack of understanding of AI product functions.

On data security: Inform users in advance that AI products will collect their personal data, and provide the relevant services only after obtaining their consent. Ensure that the AI product complies with relevant Chinese laws and regulations when collecting, storing, processing, and transmitting such personal data, as in the sketch below.
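As an illustration only (all names below are hypothetical, not drawn from any statute or standard), consent-gated collection can be sketched like this: personal data is gathered only for purposes the user has explicitly agreed to, and the consent record is retained for audit.

```python
# All names here are hypothetical illustrations, not from any statute or standard.
consent_log: dict[str, set[str]] = {}          # user_id -> purposes consented to
collected: list[tuple[str, str, dict]] = []    # audit trail of what was gathered

def record_consent(user_id: str, purposes: set[str]) -> None:
    """Record the purposes the user has explicitly agreed to."""
    consent_log.setdefault(user_id, set()).update(purposes)

def collect(user_id: str, purpose: str, data: dict) -> bool:
    """Collect personal data only if the user consented to this specific purpose."""
    if purpose not in consent_log.get(user_id, set()):
        return False                            # refuse; re-prompt for consent instead
    collected.append((user_id, purpose, data))
    return True

record_consent("u1", {"face_unlock"})
print(collect("u1", "face_unlock", {"face_template": "..."}))  # True
print(collect("u1", "marketing", {"email": "..."}))            # False: no consent given
```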

On the issue of fair employment: When enterprises introduce AI into their production and operations, they should safeguard employees’ rights and interests under labour law and take reasonable human resource measures, such as reassigning employees and providing training and upskilling. Before adopting AI to replace existing positions, enterprises should also actively communicate and consult with labour unions and listen to employees’ opinions and suggestions, so as to maintain harmonious and stable labour relations.

Regarding monopoly and unfair competition issues: When expanding the application of AI technology, compliance reviews should be conducted to ensure that corporate behavior complies with the provisions of the “Anti-Monopoly Law” and the “Anti-Unfair Competition Law”. Companies should also actively participate in industry self-regulatory organizations, and jointly maintain a market environment of fair competition.

Financial industry

The application of AI in the financial field mainly includes data analysis, imitation of human decision-making, intelligent customer service, automated trading, and the identification of abnormal transactions. Financial regulators also apply AI to improve their supervision and early-warning capabilities, especially in identifying and preventing financial fraud. However, the development of financial AI has not been smooth sailing: legal risks concerning information security, personal privacy, and data rights urgently need to be resolved.

Risks

Risks arising from the interconnectedness of China’s financial system: As in other countries, China’s banks, bank-like financial institutions, and financial technology companies form a huge, highly interconnected financial system and network, and they are seeking to empower, upgrade, and transform that system with emerging technologies such as AI in order to prevent and resolve financial risks. Emerging technology platforms such as Alipay, which is closely linked to many other financial institutions and products, are likewise transforming and upgrading themselves through AI.

Because modern financial services such as mobile payments and consumer finance command a huge market and are closely interconnected with traditional financial institutions such as banks, securities firms, and insurers, as well as with related financial infrastructure, the spillover and transmission of financial risk are accelerating. It is therefore necessary to guard against the new financial risks that technological empowerment can introduce.

Data privacy risks: In China, the cornerstone of the financial system is the credit system. However, owing to a lack of interconnection and big-data analysis, China’s traditional credit system and regulatory mechanisms cannot assess the credit status of individuals and enterprises comprehensively and accurately. Big data and AI can solve these problems to a certain extent through data association and information mining, but such linking and deep mining of personal and corporate information may infringe privacy, trade secrets, and other related rights and interests.

Risks of discrimination by model builders: The application of AI to financial algorithms carries two risks. On the one hand, the design and operation of algorithms and models cannot escape the subjective intentions of their designers. For example, when AI algorithms are used in credit scoring, the factors and parameters the algorithm uses to define good credit are susceptible to the designer’s inherent biases, producing objectively unfair credit scores. On the other hand, because such models are cloaked in AI, it is harder for those subject to them to challenge the results of their credit evaluations.

Legal risks related to AI-driven smart contracts: Smart contracts are applications that execute code: when participants meet predetermined conditions, a specified action is triggered. This automatic execution mechanism uses code as its underlying logic to ensure that all participants honor their conditional commitments. However, the way smart contracts execute instructions in computer language does not always map one-to-one onto legal rules expressed in natural language. For example, when buying and selling stocks, investors may set buy and sell cut-offs based on factors such as region, time, and market value, thereby generating a series of transactions. Such behaviour is easy to implement through code instructions, but it cannot be mapped one-to-one onto the relevant legal rules: terms such as “reasonable” and “best efforts” cannot be effectively encoded, which may create high legal risk in contract terms.
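To make the code/law mismatch concrete, here is a minimal sketch, in ordinary Python rather than on any real smart-contract platform, of the conditional-execution logic described above; all names and values are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SellRule:
    symbol: str
    min_price: float      # crisp condition: maps one-to-one into code
    deadline: datetime    # crisp condition: maps one-to-one into code
    # A legal term like "sell at a reasonable time using best efforts" has no
    # machine-checkable equivalent -- the mismatch discussed in the text above.

def should_execute(rule: SellRule, symbol: str, price: float, now: datetime) -> bool:
    """Trigger the sale only when every predetermined condition is satisfied."""
    return symbol == rule.symbol and price >= rule.min_price and now <= rule.deadline

rule = SellRule("600519.SS", min_price=1700.0, deadline=datetime(2025, 12, 31))
print(should_execute(rule, "600519.SS", 1725.5, datetime(2025, 6, 1)))  # True
```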

Mitigate the risks

Strengthening dynamic supervision of artificial intelligence financial services: To prevent and control the risks of AI technology, one of the main ways to address fintech risk is collaborative supervision across multiple bodies and multiple links, involving market entities, industry associations, and regulatory agencies.

In 2020, the People’s Bank of China, together with the relevant departments, formulated and issued the “Work Plan for the Coordination and Supervision of Financial Infrastructure” (hereinafter the “Plan”), which clarified the need to unify regulatory standards. On this basis, risk indicators associated with AI technology should further be treated as an important factor in evaluating systemic risk, so that associated risks can be monitored dynamically, intervened in and controlled in a timely manner, and prevented from spilling over.

As for applying AI to China’s financial credit system, multi-department coordinated supervision needs to be strengthened, covering credit information and data collection, credit rating classification, the design of credit assessment models, the deployment of credit application scenarios, and other aspects.

Hierarchical management of data: In applying AI to data, participants should refer to the “Data Security Law of the People’s Republic of China”, the “Personal Information Protection Law of the People’s Republic of China”, and the regulatory requirements for protecting consumer rights and interests in the financial industry, together with the financial industry standard “Information Disclosure Guidelines for Financial Applications of Artificial Intelligence Algorithms” released by the People’s Bank of China on November 8, 2023. On that basis, they should apply hierarchical protection and anonymization to sensitive data and control data security across the entire life cycle, from collection and processing through storage to destruction, to prevent data leakage or unauthorized use.

At the same time, the relevant departments should determine classification standards and risk levels for digital financial algorithms based on their degree of intelligence, application scenarios, and application stages, and determine the applicable regulatory measures and intensity of supervision in accordance with the law. A sketch of such tiered handling follows.
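As a minimal sketch of the hierarchical data management described above (the tiers, field names, and rules are illustrative assumptions, not drawn from the cited standard), each field is classified and then handled according to its tier before entering an AI pipeline:

```python
import hashlib
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # e.g. published product rates
    INTERNAL = 2    # e.g. aggregated statistics
    SENSITIVE = 3   # e.g. ID numbers, phone numbers: must be anonymized

# Hypothetical field-to-tier mapping; a real one follows the institution's data map.
FIELD_TIERS = {
    "product_rate": Sensitivity.PUBLIC,
    "monthly_volume": Sensitivity.INTERNAL,
    "id_number": Sensitivity.SENSITIVE,
    "phone": Sensitivity.SENSITIVE,
}

def protect(field: str, value: str) -> str:
    """Apply tier-appropriate protection; sensitive fields are irreversibly hashed."""
    tier = FIELD_TIERS.get(field, Sensitivity.SENSITIVE)  # unknown fields: strictest tier
    if tier is Sensitivity.SENSITIVE:
        return hashlib.sha256(value.encode()).hexdigest()[:16]
    return value

record = {"product_rate": "3.1%", "id_number": "110101199001011234"}
print({k: protect(k, v) for k, v in record.items()})
```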

Clarify the responsibilities of the parties involved in AI credit supervision: In March 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the “Opinions on Promoting High-quality Development of the Social Credit System and Promoting the Formation of a New Development Pattern” (hereinafter the “Opinions”). Focusing on strengthening market credit constraints, the “Opinions” state that credit should play a full role in identifying, monitoring, managing, and disposing of financial risks, and that a risk prevention and resolution mechanism of “early detection, early warning, and early disposal” should be established and improved. Under these provisions, the credit-information reporting entities involved in financial AI, such as credit service agencies and intelligent service platforms, are responsible for the authenticity and accuracy of the credit information they provide and must update it in a timely manner.

Public credit-information sharing platforms and third-party credit-information collection agencies should handle objections raised by credit subjects in a timely manner, explain the AI technology used for credit profiling or risk prediction, and ensure the objectivity and fairness of AI algorithms.

Establish an accountability mechanism for digital financial algorithms: In August 2023, the “Interim Measures for the Management of Generative Artificial Intelligence Services” came into force. The Measures place equal emphasis on promoting AI innovation and safeguarding national security, stipulate the responsibilities and obligations of AI service providers, and propose a two-tier regulatory structure that weighs design accountability and application supervision equally, bringing algorithm design within the scope of supervision. Accordingly, there is an urgent need to clarify the rules on the burden of proof for losses caused by AI and to consider the possibility of related public-interest litigation.

Large language models

In the field of AI applications in China, large language models are widely used and can perform a variety of tasks, such as language understanding and generation. Chinese companies are using large language models to transform and enhance a wide range of applications, from chatbots and virtual assistants to content generation, research assistance, and language translation. At the same time, related legal risks arise.

Risks

Intellectual property infringement: The intelligent foundation of a large language model is a corpus constructed by extensively collecting and integrating Internet data through technical means such as crawlers. This data is likely to include a large amount of copyright-protected content, such as books, academic papers, and social media posts, whose ownership is complex and varied and subject to the differing legal regimes and protection systems of different countries and regions. When using large language models, therefore, care should be taken to avoid infringing the legitimate rights and interests of copyright holders around the world.

Information security and privacy protection: The information collected by a large language model may include sensitive personal information such as users’ names, addresses, phone numbers, email addresses, browsing history, shopping preferences, and health status. If this information is leaked, it may lead to identity theft, financial fraud, harassing phone calls, and similar problems that affect users’ quality of life and sense of security. For enterprises, large language models may absorb trade-secret data from product development, market strategy, customer service, and other functions; if these trade secrets are leaked, unfair competition may result.

Risks of false and misleading information: Large language models may “make up” information to fill knowledge gaps, thereby misleading users. For example, they may fabricate judicial cases that do not exist, misleading courts and the public and affecting the judicial process. Large language models and their supporting automated tools may also be misused to generate false statements and spread them quickly, infringing on the reputation of individuals or damaging the goodwill of competitors.

Technology abuse and supervision: The generative capabilities of large language models may be exploited by hackers to write highly disguised malware or phishing emails that induce users to click and download. Such attack scripts are difficult to detect and can infect devices or steal data, leaving systems penetrated and damaged.

Mitigate the risks

Comply with relevant laws and regulations, such as the Copyright Law of the People’s Republic of China and the Personal Information Protection Law of the People’s Republic of China; cross-border activities may also require compliance with overseas regulations such as the EU General Data Protection Regulation (GDPR). This ensures copyright protection, personal information protection, and the legality and compliance of the processing of sensitive data.

Legal authorization: Before collecting and using data, actively obtain authorization from copyright holders by cooperating with copyright agencies, purchasing licenses, and so on, or directly use open-source data sets.

Before training a large language model, the data should be cleaned, desensitized, and anonymized to remove or replace content that may raise copyright, information-security, or privacy issues, as in the sketch below.
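A minimal sketch of such pre-training desensitization, using regex-based redaction of a few common PII patterns; real pipelines combine much broader rules with NER models, and the patterns below are illustrative assumptions, not a complete inventory:

```python
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CN_MOBILE": re.compile(r"\b1[3-9]\d{9}\b"),   # mainland China mobile numbers
    "CN_ID": re.compile(r"\b\d{17}[\dXx]\b"),      # 18-digit resident ID numbers
}

def desensitize(text: str) -> str:
    """Replace each detected PII span with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "联系张先生: 13812345678, zhang@example.com"
print(desensitize(sample))  # 联系张先生: [CN_MOBILE], [EMAIL]
```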

Principle of fair use: Only when the user actively states a specific fair-use scenario should the large language model activate its search function for copyright-protected works; it should require the user to clearly credit the author when using the retrieved content, and the use must not damage the legitimate rights and interests of the copyright owner.

Large language model developers, and listed companies or large enterprises that need to use large language model corpora, may consider establishing a copyright review mechanism to ensure the legal compliance of the training corpus: for example, setting up an intellectual property review team, using automated tools for copyright detection (sketched below), and establishing a rapid-response mechanism for copyright disputes.
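A minimal sketch of automated copyright screening for a training corpus (the threshold and n-gram length are illustrative assumptions): documents sharing long word-level overlaps with a registry of known protected works are flagged for human review. Production systems typically use fingerprinting or MinHash rather than exact n-gram sets:

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """All word-level n-grams of a text; empty set for texts shorter than n words."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, protected: str, n: int = 8) -> float:
    """Fraction of the candidate's n-grams that also appear in the protected work."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(protected, n)) / len(cand)

def needs_review(candidate: str, registry: list[str], threshold: float = 0.2) -> bool:
    """Flag the candidate document for human IP review if overlap is too high."""
    return any(overlap_ratio(candidate, work) > threshold for work in registry)

registry = ["the quick brown fox jumps over the lazy dog " * 3]
print(needs_review("the quick brown fox jumps over the lazy dog again and again", registry))  # True
```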

Entities that store personal or corporate information may consider using encryption to protect data in storage and transmission, as in the sketch below.
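A minimal sketch of encrypting data at rest with symmetric encryption, using the widely used Python cryptography package (Fernet, which wraps AES); key management, such as rotation and access control, is the hard part in practice and is out of scope here:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: load from a secure key store
cipher = Fernet(key)

plaintext = "customer_id=42;trade_secret=formula-X".encode("utf-8")
token = cipher.encrypt(plaintext)  # safe to write to disk or send over the wire
assert cipher.decrypt(token) == plaintext
```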

Large language model developers can consider continuously optimizing their models and verifying the accuracy and authenticity of generated content through a fact-checking mechanism, such as the one sketched below.
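A minimal sketch of a post-generation fact-checking gate (the case numbers, pattern, and database here are hypothetical stand-ins): citations in model output are checked against a trusted reference index, and the answer is withheld or flagged if any cannot be verified, which addresses the fabricated-case risk noted earlier:

```python
import re

# Stand-in for a lookup against a real case-law database.
VERIFIED_CASES = {"(2021)京73民终1234号"}

# Matches Chinese court case numbers of the form (YYYY)<court code><docket>号.
CASE_PATTERN = re.compile(r"\(\d{4}\)[\u4e00-\u9fff0-9]+号")

def unverified_citations(generated: str) -> list[str]:
    """Return the case citations in the output that are not in the trusted index."""
    return [c for c in CASE_PATTERN.findall(generated) if c not in VERIFIED_CASES]

answer = "参见(2021)京73民终1234号与(2020)沪01民初9999号。"
problems = unverified_citations(answer)
if problems:
    print("Withhold or flag the answer; unverified citations:", problems)
```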

Strengthen technical supervision and law enforcement.

Given the cross-border nature of large language model technology, countries can promote international cooperation and exchange on such technologies, coordinate law enforcement, and jointly respond to and combat cross-border cybercrime.