What are your country’s legal definitions of “artificial intelligence”?
There is no uniform legal definition of “artificial intelligence” (“AI”) in the United States; however, the National Artificial Intelligence Initiative Act of 2020 [1] (discussed below) defines artificial intelligence as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” It further states that “[a]rtificial intelligence systems use machine and human-based inputs to – (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.”
Footnotes:
- https://www.ai.gov/wp-content/uploads/2023/04/National-Artificial-Intelligence-Initiative-Act-of-2020.pdf
Has your country developed a national strategy for artificial intelligence?
National AI Strategic Plans
The United States began focusing on AI strategy in 2016, during the Obama administration. Building on White House reports from 2014, 2015, and 2016 that focused on big data and algorithmic systems, the National Science and Technology Council released a white paper in October 2016 titled Preparing for the Future of Artificial Intelligence [1]. It made nonbinding recommendations addressing concerns and issues of fairness, safety, governance, and global security. Simultaneously, the Council released the National Artificial Intelligence Research and Development Strategic Plan (“National AI R&D Strategic Plan”) [2], which outlined seven R&D strategies [3]:
- Strategy 1: Make Long-Term Investments in AI Research
- Strategy 2: Develop Effective Methods for Human-AI Collaboration
- Strategy 3: Understand and Address the Ethical, Legal, and Societal Implications of AI
- Strategy 4: Ensure the Safety and Security of AI Systems
- Strategy 5: Develop Shared Public Datasets and Environments for AI Training and Testing
- Strategy 6: Measure and Evaluate AI Technologies through Standards and Benchmarks
- Strategy 7: Better Understand the National AI R&D Workforce Needs
The National AI R&D Strategic Plan was updated in 2019 [4] and 2023 [5] to add two new core strategies [6]:
- Strategy 8: Expand Public-Private Partnerships to Accelerate Advances in AI
- Strategy 9: Establish a Principled and Coordinated Approach to International Collaboration in AI Research
AI Initiative Office
In 2021, the National AI Initiative Act of 2020 established the National Artificial Intelligence Initiative Office (“AI Initiative Office”) to coordinate the acceleration of AI research and development across the entire federal government [7]. The AI Initiative Office is tasked with overseeing and implementing the U.S. national AI strategy, with a mission to “ensure continued US leadership throughout the world in the development and use of trustworthy AI in the public and private sectors, and to prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society.” [8]
Blueprint for AI Bill of Rights
In October 2022, the Biden White House Office of Science and Technology Policy (“OSTP”) issued the “Blueprint for an AI Bill of Rights” (“Blueprint”) [9] which laid out five principles to guide the design, use, and deployment of AI systems, in the context of “five core protections to which everyone in America should be entitled.” [10] Simultaneously announced were a number of actions across the federal government that advance the Blueprint’s goals, including the protection and support of “workers and employers, educators and students, patients and health care providers, veterans, renters and home owners, technologists, families, and communities.” [11]
Most Recent Strategy Efforts
In late May 2023, the Biden administration announced new efforts to “advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals’ rights and safety and delivers results for the American people.” [12] The White House OSTP issued the revised National AI R&D Strategic Plan, referenced above, to “coordinate and focus federal R&D investments” in AI. [13] The OSTP also issued a Request for Information seeking input on “mitigating AI risks, protecting individuals’ rights and safety, and harnessing AI to improve lives,” with comments due by July 7, 2023. [14]
Congressional Hearings
In addition, numerous Congressional Hearings have been conducted to guide strategy and regulation efforts, including:
- Artificial Intelligence and Intellectual Property
- (Senate) Part I: Patents, Innovation, and Competition: Discussion on how to ensure the US patent system does not stifle AI development or competition. [15]
- (Senate) Part II: Copyright: Discussion regarding intersection of AI innovation with IP law (e.g., inventorship and conception, prior art and data sets) and policy (e.g., regulation and U.S. relationship with China). [16]
- (House) Part I: Interoperability of AI and Copyright Law: Examines the intersection of generative AI technology and copyright law, including the use of copyright-protected works in the training of generative AI models, copyright protection of works produced with the assistance of generative AI, and the economic impact of generative AI on the creative industry. [17]
- Artificial Intelligence: Advancing Innovation towards the National Interest: Discussion on how to support innovative development in AI so that it advances the national interest (e.g., developing commercial and military applications while ensuring they are safe, secure, and transparent). [18]
- Artificial Intelligence and Human Rights: Discussion on how AI can infringe on human rights (e.g., the Chinese Communist Party’s vast AI-powered surveillance system). [19]
- Oversight of A.I., Rules for Artificial Intelligence: Discussion regarding privacy and accountability regulation in connection with emerging AI technologies. [20]
- Artificial Intelligence in Government: Discussion of the government’s use of AI to provide more efficient services, assess potential security threats, and automate routine tasks to enhance the federal workforce. [21]
- Testimony on the State of Artificial Intelligence and Machine Learning Applications to Improve Department of Defense Operations: Discussion of how the US can keep pace with the future of war, which will entail AI-powered software on the battlefield. [22]
- Defense in a Digital Era: Artificial Intelligence, Information Technology, and Securing the Department of Defense: Testimony on priorities of the Department of Defense for digital modernization and transformation, artificial intelligence, information technology, and cybersecurity. Discussion of different technologies and software that can be used to protect sensitive US governmental and military information. [23]
- Artificial Intelligence: Risks and Opportunities: Discussion on the potential risks and opportunities associated with AI, examining how AI affects the US’s competitiveness on a global stage, and discussion of ways to ensure that AI is used both safely and responsibly. [24]
- Trustworthy AI: Managing the Risks of Artificial Intelligence: Discussion on how to ensure AI use is safe and reliable in all industries, and how to create standards and evaluation methods to ensure AI’s trustworthiness. [25]
Footnotes:
- https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
- https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/national_ai_rd_strategic_plan.pdf
- It also identified priority areas for federally funded AI research, specifically areas where government investment was most critical, such as “AI for public health, urban systems and smart communities, social welfare, criminal justice, environmental sustainability, and national security.”
- https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf
- https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf
- https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf
- https://www.ai.gov
- https://www.ai.gov/
- https://www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-actions-to-advance-tech-accountability-and-protect-the-rights-of-the-american-public/. The Blueprint also identifies steps governments, companies and communities can take to embed protections into laws, policy, and technological design. White House Domestic Policy Advisor Susan Rice announced at the time of the release: “…these actions will help tackle algorithmic discrimination and address the harms of automated systems on underserved communities.”
- (1) Safe and Effective Systems: You should be protected from unsafe or ineffective systems; (2) Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way; (3) Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used; (4) Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you; (5) Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
- https://www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-actions-to-advance-tech-accountability-and-protect-the-rights-of-the-american-public/
- https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/23/fact-sheet-biden-harris-administration-takes-new-steps-to-advance-responsible-artificial-intelligence-research-development-and-deployment/
- https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf
- https://www.whitehouse.gov/wp-content/uploads/2023/05/OSTP-Request-for-Information-National-Priorities-for-Artificial-Intelligence.pdf
- https://www.judiciary.senate.gov/artificial-intelligence-and-intellectual-property_part-i-patents-innovation-and-competition (Senate Judiciary Subcommittee on Intellectual Property) (June 7, 2023) Witnesses: John Villasenor (Professor of Electrical Engineering and Law, UCLA), Ryan Abbott (Professor of Law and Health Sciences, University of Surrey School of Law), Rama Elluru (Senior Director for Society and IP, Special Competitive Studies Project), Laura Sheridan (Head of Patent Policy, Google), Corey Salsberg (VP and Global Head of IP Affairs, Novartis)
- https://www.judiciary.senate.gov/artificial-intelligence-and-intellectual-property_part-ii-copyright (Senate Committee on the Judiciary: Subcommittee on Intellectual Property) (July 12, 2023) Witnesses: Jeffrey Harleston (GC and EVP of Business and Legal Affairs, Universal Music Group), Karla Ortiz (Artist), Matthew Sag (Professor of Law, Artificial Intelligence, Machine Learning, and Data Science, Emory University School of Law), Dana Rao (GC, EVP and Trust Officer, Adobe), Ben Brooks (Head of Public Policy, Stability AI)
- https://judiciary.house.gov/committee-activity/hearings/artificial-intelligence-and-intellectual-property-part-i (House Committee on the Judiciary: Subcommittee on Intellectual Property) (May 17, 2023) Witnesses: Sy Damle (Partner, Latham & Watkins LLP; former General Counsel of the U.S. Copyright Office), Chris Callison-Burch (Associate Professor of Computer and Information Science, University of Pennsylvania; Visiting Research Scientist, Allen Institute for Artificial Intelligence), Ashley Irwin (President, Society of Composers and Lyricists), Dan Navarro (Grammy-nominated recording artist), Jeffrey Sedlik (President and CEO, PLUS Coalition; Member, Joint Committee on Ethics in AI)
- https://science.house.gov/2023/6/artificial-intelligence-advancing-innovation-towards-the-national-interest (House Science, Space, and Technology Committee) (June 22, 2023) Witnesses: Jason Matheny (President and CEO, RAND Corporation), Shahin Farschi (General Partner, Lux Capital), Clement Delangue (Co-Founder and CEO, Hugging Face), Rumman Chowdhury (Responsible AI Fellow, Harvard University), Dewey Murdick (Executive Director, Center for Security and Emerging Technology)
- https://www.judiciary.senate.gov/committee-activity/hearings/artificial-intelligence-and-human-rights (Senate Judiciary Subcommittee on Human Rights and the Law) (June 13, 2023) Witnesses: Geoffrey Cain (Senior Fellow, Foundation for American Innovation), Alexandra Reeve Givens (CEO, Center for Democracy & Technology), Aleksander Madry (Cadence Design Systems Professor of Computing, MIT), Jennifer DeStefano (Victim of AI Deepfake Kidnapping/Extortion Scam)
- https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence (Senate Judiciary Subcommittee on Privacy, Technology, and the Law) (May 16, 2023) Witnesses: Samuel Altman (CEO, OpenAI), Christina Montgomery (Chief Privacy & Trust Officer, IBM), Gary Marcus (Professor Emeritus, New York University)
- https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government (Senate Homeland Security and Governmental Affairs Committee) (May 16, 2023) Witnesses: Richard Eppink (Of Counsel, ACLU of Idaho), Taka Ariga (Chief Data Scientist, U.S. GAO), Lynne Parker (Associate Vice Chancellor and Director, AI Tennessee Initiative), Daniel Ho (Professor, Stanford Law School), Jacob Siegel (Writer)
- https://www.armed-services.senate.gov/hearings/to-receive-testimony-on-the-state-of-artificial-intelligence-and-machine-learning-applications-to-improve-department-of-defense-operations (Senate Armed Services Subcommittee on Cybersecurity) (April 19, 2023) Witnesses: Jason Matheny (President and CEO, RAND Corporation), Shyam Sankar (CTO and EVP, Palantir), Josh Lospinoso (Co-Founder and CEO, Shift5)
- https://armedservices.house.gov/hearings/cyber-information-technologies-and-innovation-subcommittee-hearing-defense-digital-era (House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation) (March 9, 2023) Witnesses: John Sherman (CIO, Department of Defense), Craig Martell (Chief Digital and Artificial Intelligence Officer, Department of Defense)
- https://www.hsgac.senate.gov/hearings/artificial-intelligence-risks-and-opportunities (Senate Homeland Security and Governmental Affairs) (March 8, 2023) Witnesses: Alexandra Reeve Givens (President and CEO, Center for Democracy and Technology), Suresh Venkatasubramanian (Professor of Computer Science and Data Science, Brown University), Jason Matheny (President and CEO, RAND Corporation)
- https://science.house.gov/2022/9/research-technology-subcommittee-hearing-trustworthy-ai-managing-the-risks-of-artificial-intelligence (House Science, Space, and Technology Subcommittee on Research and Technology) (September 29, 2022) Witnesses: Elham Tabassi (Chief of Staff, Information Technology Laboratory, National Institute of Standards and Technology), Charles Isbell (Dean and John P. Imlay, Jr. Chair, College of Computing, Georgia Institute of Technology), Jordan Crenshaw (VP, Chamber Technology Engagement Center, U.S. Chamber of Commerce), Navrina Singh (Founder and CEO, Credo AI)
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Federal Laws and Governance Policy
While the United States does not, as of July 2023, have a federal equivalent to the European Union AI Act, there are sector- and state-specific laws, as well as a number of pending federal and state bills.
Enacted Laws
- Artificial Intelligence Training for the Acquisition Workforce Act (“AI Training Act”) (2022) [1]
- Requires the Office of Management and Budget (“OMB”) to develop an AI training program to support the informed acquisition of AI by federal executive agencies, and to ensure that agencies and individuals responsible for procuring AI within a covered workforce are aware of both the capabilities and risks associated with AI and similar technologies.
- National Defense Authorization Act 2023 (“NDAA”) (2022) [2]
- The NDAA directs defense and intelligence agencies to integrate AI systems and capabilities into intelligence collection and analysis, data management, cybersecurity, and other DOD operations, and to develop recommendations and policies for federal AI use and assess its risks and impacts.
- Consumer Protection Laws
- Additionally, federal consumer protection laws, such as Section 5 of the FTC Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act apply to certain applications of AI, as discussed in more detail below.
Federal Agency Guidance and Rulemaking
The National Institute of Standards and Technology (“NIST”) [3], a non-regulatory federal agency [4] within the Commerce Department, released the Artificial Intelligence Risk Management Framework 1.0 (2023) (“RMF”). [5] The RMF defines trustworthiness as having seven characteristics [5] and suggests specific implementation actions to manage risk. While voluntary, the RMF may eventually be adopted as an industry standard.
Several federal agencies, including the Consumer Financial Protection Bureau (“CFPB”), the Department of Justice’s Civil Rights Division (“Division”), the Equal Employment Opportunity Commission (“EEOC”), and the Federal Trade Commission (“FTC”), enforce laws governing civil rights, non-discrimination, fair competition, and consumer protection, including harms arising from automated systems, as detailed below:
- Consumer Financial Protection Bureau (“CFPB”) [6]
- On May 26, 2022, the CFPB published guidance addressing the applicability of the Equal Credit Opportunity Act (“ECOA”) to algorithmic credit decisions. It clarified that creditors’ reporting obligations under the ECOA extend to adverse decisions made using “complex algorithms.” It also made clear that the fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.
- Department of Justice’s Civil Rights Division [7]
- On January 9, 2023, the Division filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services. [8]
- Equal Employment Opportunity Commission (“EEOC”) [9]
- On October 28, 2021, the EEOC launched the Initiative on Artificial Intelligence and Algorithmic Fairness [10] to ensure that the use of software, AI, machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces. [11]
- On May 12, 2022, the EEOC issued non-binding technical guidance for employers for the use of AI technologies in their hiring and workforce management systems. [12]
- Federal Trade Commission [13]
- (FTC Report) Combatting Online Harms through Innovation (June 16, 2022) [14]
- The FTC issued a report evaluating the use and impact of AI in combatting online harms. The report outlined concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. [15]
- (FTC Policy Statement) Policy Statement of the Federal Trade Commission on Biometric Information and Section 5 of the Federal Trade Commission Act (May 18, 2023) [16]
- Warned that use of biometric information and related technologies, including machine learning, raises material consumer privacy and data security concerns and the potential for bias and discrimination. [17]
- (FTC Notice and Rulemaking) Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (August 22, 2022)
- Invited public comment on “whether it should implement new trade regulation rules” governing AI-powered technologies that make decisions impacting consumers. [18]
State Laws
Enacted State Laws
Many states have enacted laws applicable to AI, including:
- Colorado
- Protecting Consumers from Unfair Discrimination in Insurance Practices (2023) [19]
- Applies to use of algorithms and predictive models in insurance practices.
- Colorado Privacy Act (2023) [20]
- Provides a right to opt-out of processing of personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects.”
- California
- Bolstering Online Transparency Act (2018) [21]
- Requires disclosure of a bot’s artificial identity when it is used to communicate or interact with individuals in California online.
- California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”) [22]
- Governs profiling and automated decision-making and gives consumers opt-out rights with respect to businesses’ use of “automated decision-making technology.”
- Connecticut
- Connecticut Privacy Act (2023) [23]
- Provides a right to opt-out of profiling in furtherance of automated decision-making that produces legal or other similarly significant effects.
- Connecticut Public Act No. 23-16, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy (2023) [24]
- Establishes a task force to study artificial intelligence and develop an artificial intelligence bill of rights.
- Illinois
- AI Video Interview Act (2022) [25]
- Requires employers to abide by certain parameters when using AI during the hiring process.
- Indiana
- SB5 (2023) [26]
- Omnibus privacy regulation that sets out rules for profiling and automated decision-making.
- Maryland
- HB 1202 (2020) [27]
- Prohibits certain uses of AI for facial recognition in pre-employment context.
- Montana
- Consumer Data Privacy Act (2023) [28]
- Creates certain transparency requirements around profiling and enables individuals to opt-out of “profiling in furtherance of automated decisions that produce legal or similarly significant effects” concerning the consumer.
- New York
- Local Law 144 (2023) [29]
- Requires employers to conduct bias audits of AI-enabled tools used for employment decisions and imposes notice and reporting obligations.
- Tennessee
- Tennessee Information Protection Act (2023) [30]
- Requires data protection assessments in connection with certain kinds of profiling.
- Texas
- Texas Data Privacy and Security Act (2023) [31]
- Omnibus privacy law that creates opt out rights for profiling.
- Virginia
- Virginia Consumer Data Protection Act (“VCDPA”) (2023) [32]
- Omnibus privacy law that creates opt out rights for profiling.
State Privacy Guidelines
On the state level, state attorneys general and specialty agencies such as the California Privacy Protection Agency and Connecticut’s Office of Artificial Intelligence have signaled a focus on the use of automated data processing; however, they have not released formalized or binding guidelines. These specialized state agencies continue to seek comments on such topics to further their development of applicable regulations.
Proposed Legislation Efforts
There are also numerous legislative proposals at both the state and federal levels:
Proposed Federal Laws and Frameworks
- SAFE Innovation Framework (introduced June 21, 2023) [33]
- Proposed by U.S. Senator Chuck Schumer as a framework for developing comprehensive AI legislation, outlining ways to “protect, expand, and harness AI’s potential,” with the aim of preventing catastrophic damage while ensuring that the U.S. remains a leader in AI technology. [34]
- National AI Commission Act (introduced June 17, 2023) [35]
- Proposes a federal commission to study AI and potentially establish regulations as U.S. lawmakers ramp up their efforts to govern rapidly advancing AI. The commission would evaluate current AI oversight, propose new structures, and establish a risk-based regulatory framework.
- Guiding Principles for the Future of American Artificial Intelligence (AI) Legislation (introduced June 7, 2023) [36]
- Proposes corporate guardrails, including: (1) Creating a legal avenue for individuals to sue companies over harm caused by AI models; (2) Imposing stiff fines for AI models collecting sensitive personal data without consent; (3) Restricting companies from making AI technology available to children or advertising it to them; (4) Banning the import of AI-related technology from China and prohibiting American companies from assisting China’s development of AI; and (5) Requiring licenses to create generative AI models.
- No Section 230 Immunity for AI Act (proposed June 2023) [37]
- Section 230 shields internet companies from liability for content posted on their platforms. This bill would create an AI carve-out so that claims arising from generative AI technologies would not be protected.
- HR 3044 (proposed May 2, 2023) [38]
- Would amend the Federal Election Campaign Act of 1971 to provide transparency and accountability around the use of generative AI in political advertisements.
- Stop Spying Bosses Act (introduced February 2, 2023) [39]
- Would require disclosures regarding, and impose prohibitions on, data collected from employees.
- American Data Privacy and Protection Act (“ADPPA”) (introduced June 21, 2022) [40]
- The ADPPA aims to create a national framework that would preempt many, but not all, state privacy laws. While ADPPA shares similarities with current state privacy laws, several proposed requirements are particularly relevant to AI technologies, including risk assessment obligations. [41]
- Health Equity and Accountability Act of 2022 (introduced April 26, 2022) [42]
- Aims to address algorithmic bias in the context of healthcare. [43]
- Algorithmic Accountability Act of 2022 (introduced February 3, 2022) [44]
- Would require large technology companies to perform a bias impact assessment of any automated decision-making system that makes critical decisions in a variety of sectors, including employment, financial services, healthcare, housing, and legal services. Documentation from impact assessments would have to be submitted to the FTC.
- Filter Bubble Transparency Act (introduced June 10, 2021) [45]
- Would apply to platforms that use “algorithmic ranking systems,” which includes computational processes “derived from” AI.
Proposed State Laws
There are numerous other proposed regulations in state legislatures, including, for example:
- District of Columbia
- Stop Discrimination by Algorithms Act of 2023 (“SDAA”) (introduced February 2, 2023) [46]
- Massachusetts
- SB31 (introduced on February 16, 2023) [47]
- Pennsylvania
- HB49 (proposed March 7, 2023) [48]
- Rhode Island
- H6286 (proposed April 19, 2023) [49]
Footnotes:
- https://www.congress.gov/bill/117th-congress/senate-bill/2551
- https://www.congress.gov/bill/117th-congress/house-bill/7776/text
- https://www.nist.gov/. Its mission is to promote measurement science, standards, and technology to enhance productivity, facilitate trade, and improve the quality of life.
- NIST focuses on cultivating trust in the design, development, use, and governance of artificial intelligence (AI) technologies and systems by: (1) conducting research to advance trustworthy AI technologies (https://www.nist.gov/trustworthy-and-responsible-ai), (2) establishing benchmarks and developing data and metrics to evaluate AI technologies, (3) developing voluntary technical AI standards for companies that are developing, deploying, or using AI systems to help manage risks and promote responsible development and adoption of AI, including the AI Risk Management Framework (the “RMF”) detailed below, (4) contributing to discussions and development of AI policies, including supporting the National AI Advisory Committee (https://www.nist.gov/artificial-intelligence/national-artificial-intelligence-advisory-committee-naiac), and (5) hosting the NIST Trustworthy & Responsible AI Resource Center (https://airc.nist.gov/home).
- https://www.nist.gov/artificial-intelligence/technical-ai-standards
- (1) Safe: providing real-time monitoring, backstops, or other intervention of the AI system to prevent physical or psychological harm, or endangerment of human life, health, or property; (2) Secure and resilient: employing protocols to avoid, protect against, or respond to attacks against the AI system, and withstanding adverse events; (3) Explainable and interpretable: understanding and properly contextualizing the mechanisms of an AI system as well as its output; (4) Privacy-enhanced: safeguarding human autonomy by protecting anonymity, confidentiality, and control; (5) Fair, with harmful bias managed: promoting equity and equality and managing systemic, computational and statistical, and human-cognitive biases; (6) Accountable and transparent: making information available about the AI system to individuals interacting with it at various stages of the AI life cycle and maintaining organizational practices and governance to reduce potential harms; (7) Valid and reliable: demonstrating through ongoing testing or monitoring to confirm the AI system performs as intended.
- (1) Map: collect sufficient knowledge about an AI system to inform organizational decisions to design, develop, or deploy it; (2) Measure: implement testing, evaluations, verifications, and validation processes to inform management decisions; (3) Govern: develop an organizational culture that incorporates AI risk management in its policies and operations, effectively implements them, and encourages accountability and diversity, equity, and inclusion; and (4) Manage: monitor and prioritize AI system risks and respond to and recover from risk incidents.
- https://www.consumerfinance.gov/complaint/. The CFPB sets rules and enforces numerous federal consumer financial laws and protects consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination.
- https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/; https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/
- https://civilrights.justice.gov/. Enforces constitutional provisions and federal statutes prohibiting discrimination across many facets of life, including in education, the criminal justice system, employment, housing, lending, and voting.
- https://www.justice.gov/opa/pr/justice-department-files-statement-interest-fair-housing-act-case-alleging-unlawful-algorithm
- https://www.eeoc.gov/youth/how-file-complaint. Enforces federal laws that make it illegal for an employer, union, or employment agency to discriminate against an applicant or employee due to a person’s race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability, or genetic information (including family medical history).
- https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness
- https://www.eeoc.gov/ai
- https://www.eeoc.gov/newsroom/us-eeoc-and-us-department-justice-warn-against-disability-discrimination; https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence. The EEOC’s guidance outlines best practices to help ensure that employment tools do not disadvantage applicants or employees with disabilities in violation of the Americans with Disabilities Act (“ADA”). The guidance identifies three ways in which an employer’s tools could be found to violate the ADA: (1) by relying on the tool, the employer fails to provide a reasonable accommodation; (2) the tool screens out an individual with a disability who is able to perform the essential functions of the job with or without an accommodation; and (3) the tool makes a disability-related inquiry or otherwise constitutes a medical examination.
- https://www.ftc.gov/about-ftc/mission. The FTC protects consumers from deceptive or unfair business practices and unfair methods of competition across most sectors of the U.S. economy by enforcing the FTC Act and numerous other laws and regulations.
- https://www.ftc.gov/system/files/ftc_gov/pdf/Combatting%20Online%20Harms%20Through%20Innovation%3B%20federal%20Trade%20Commission%20Report%20to%20Congress.pdf
- https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems
- https://www.ftc.gov/system/files/ftc_gov/pdf/p225402biometricpolicystatement.pdf
- https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-warns-about-misuses-biometric-information-harm-consumers
- https://www.federalregister.gov/documents/2022/08/22/2022-17752/trade-regulation-rule-on-commercial-surveillance-and-data-security. Questions included: (1) whether rules should require companies to take “specific steps to prevent algorithmic errors,” and what kind of error rates are generally prevalent in AI; (2) whether companies should have to certify that AI they use meets accuracy, validity, and reliability standards, and if so, who should set the standards (the FTC, industry, or companies’ own published policies); (3) whether rulemaking should prohibit or limit companies from developing or using AI whose outcomes are “unfair or deceptive” under Section 5 of the FTC Act, and if so, whether the prohibition should be economy-wide or apply only in certain sectors; and (4) what kind of transparency companies should provide to consumers about the AI they use.
- https://leg.colorado.gov/bills/sb21-169. Enacted to protect consumers from unfair discrimination in insurance rate-setting mechanisms and applies to insurers’ use of external consumer data and information sources (ECDIS), as well as algorithms and predictive models that use ECDIS in “insurance practices,” that “unfairly discriminate” based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.
- https://leg.colorado.gov/sites/default/files/documents/2021A/bills/2021a_190_rer.pdf. The law defines those decisions as “a decision that results in the provision or denial of financial and lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health care services, or access to essential goods or services.”
- https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001.
- https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5. Includes “profiling” consumers based on their “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.”
- https://www.cga.ct.gov/2022/ACT/PA/PDF/2022PA-00015-R00SB-00006-PA.PDF. Controllers must also perform data risk assessments prior to processing consumer data when such processing presents a “heightened risk of harm.” These situations include certain profiling activities that present a reasonably foreseeable risk of unfair or deceptive treatment of or unlawful disparate impact on consumers, financial, physical or reputational injury to consumers, physical or other intrusion into the solitude, seclusion or private affairs or concerns of consumers that would be offensive to a reasonable person, or other substantial injury to consumers.
- https://www.cga.ct.gov/asp/cgabillstatus/cgabillstatus.asp?selBillType=Bill&bill_num=SB01103&which_year=2023. Serves as an amendment to the Connecticut Privacy Act. Establishes an Office of Artificial Intelligence and imposes restrictions on controllers from processing personal data of individuals between the ages of 13-16 for purposes of targeted advertising, or selling without the consumer’s consent.
- https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68. Obligations include, for example, disclosing the use of AI and the kinds of characteristics it will use to evaluate candidates, as well as seeking consent for the use of AI to evaluate interviews.
- https://iga.in.gov/legislative/2023/bills/senate/5/details. Creates an omnibus consumer privacy law along the lines of the Virginia Consumer Data Privacy Act and the Colorado Privacy Act, to regulate, among other data uses, the collection and processing of personal information.
- https://mgaleg.maryland.gov/2023RS/bills/hb/hb1202F.pdf. Prohibits an employer from using a facial recognition service for the purpose of creating a facial template during an applicant’s pre-employment interview, unless the applicant consents by signing a specified waiver.
- https://laws.leg.mt.gov/legprd/LAW0210W$BSIV.ActionQuery?P_BILL_NO1=384&P_BLTP_BILL_TYP_CD=SB&Z_ACTION=Find&P_SESS=20231.
- https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9.
- https://www.capitol.tn.gov/Bills/113/Bill/HB1181.pdf. Requires data protection assessments in connection with “profiling” where the profiling presents a reasonably foreseeable risk of: (A) unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) financial, physical, or reputational injury to consumers; (C) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) other substantial injury to consumers.
- https://capitol.texas.gov/BillLookup/History.aspx?LegSess=88R&Bill=HB4. Based on the Virginia Consumer Data Protection Act and creates similar requirements enabling individuals to opt-out of “profiling” that produces a legal or similarly significant effect concerning the individual. Controllers must also perform a data protection assessment for high-risk profiling activities.
- https://lis.virginia.gov/cgi-bin/legp604.exe?211+sum+SB1392. Sets out rules for profiling and automated decision-making and enables individuals to opt-out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer, which is generally defined as “the denial and/or provision of financial and lending services, housing, insurance, education enrollment or opportunities, criminal justice, employment opportunities, healthcare services, or access to basic necessities.”
- https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf
- “SAFE” stands for “security, accountability, foundations, and explainability” and calls on lawmakers to address areas like AI job loss, national security, responsible AI systems, alignment with democratic values, transparency, and support for U.S.-led innovation.
- https://lieu.house.gov/sites/evo-subsites/lieu.house.gov/files/evo-media-document/national-ai-commission-bill.pdf
- https://www.hawley.senate.gov/hawley-announces-guiding-principles-future-ai-legislation
- https://www.hawley.senate.gov/sites/default/files/2023-06/Hawley-No-Section-230-Immunity-for-AI-Act.pdf
- https://www.congress.gov/bill/118th-congress/house-bill/3044
- https://www.congress.gov/bill/118th-congress/senate-bill/262/text
- https://www.congress.gov/bill/117th-congress/house-bill/8152/text
- The bill stalled during the past Congressional session, and it remains to be seen whether its framework will advance in the new Congress.
- https://www.congress.gov/bill/117th-congress/house-bill/7585
- The Bill would require the Secretary of Health and Human Services to establish a “Task Force on Preventing AI and Algorithmic Bias in Healthcare” to develop guidance “on how to ensure that the development and [use] of artificial intelligence and algorithmic technologies” in delivering care “does not exacerbate health disparities” and helps ensure broader access to care.
- https://www.congress.gov/bill/117th-congress/house-bill/6580/text
- https://www.congress.gov/bill/117th-congress/senate-bill/2024/text
- https://lims.dccouncil.gov/Legislation/B25-0114. Would prohibit organizations from using algorithms that make decisions based on protected personal characteristics, including actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income or disability in a manner that makes “important life opportunities” unavailable to that individual or class of individuals.
- https://malegislature.gov/Bills/193/SD1827. Requires any company operating a large-scale generative artificial intelligence model to (1) register with the Attorney General and provide certain enumerated information regarding the model and (2) adhere to certain operating standards, such as reasonable security measures to protect the data of individuals used to train the model, informed consent from individuals before collecting, using, or disclosing their data, and performance of regular risk assessments.
- https://www.legis.state.pa.us/cfdocs/billInfo/billInfo.cfm?sYear=2023&sInd=0&body=S&type=R&bn=49. Directs the Department of State to establish a registry of businesses operating artificial intelligence systems which includes: (1) the name of the business operating artificial intelligence systems; (2) the IP address of the business; (3) the type of code the business is utilizing for artificial intelligence; (4) the intent of the software being utilized; (5) the personal information and first and last name of a contact person at the business; (6) the address, electronic email address and ten-digit telephone number of the contact person; and (7) a signed statement indicating that the business operating an artificial intelligence system has agreed for the Department of State to store the business’s information on the registry.
- Regulates companies’ uses of generative artificial intelligence models, including: (1) any company using large-scale generative AI may not use AI for discriminatory practices; (2) the AI model must be programmed to generate text with a distinctive watermark to prevent plagiarism; (3) the company must implement reasonable security measures to protect the data of individuals used to train the model and must obtain informed consent from these individuals before using their data; (4) the company must conduct regular risk assessments of potential risks and harms related to its services; and (5) within 90 days of the effective date of the act, any company using large-scale generative AI must register the name of the company, a description of the AI model, and information on the company’s data gathering practices with the attorney general. If passed, the act will take effect on September 1, 2023.
- Artificial Intelligence Training for the Acquisition Workforce Act (“AI Training Act”) (2022) [1]
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Consumer protection and privacy laws impose obligations with respect to applications of AI that carry a higher risk of legal impact or discrimination on individuals in the United States. These obligations commonly apply where AI is used to deny employment or promotions, housing, credit, insurance, or other benefits. There are also proposed rules regarding liability for defective AI systems, as detailed above.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
There are a variety of U.S. liability theories that could apply to the use of AI systems, including:
- Invasion of privacy and security and breach of related regulations (injunctive relief, compensatory damages, punitive damages)
- Breach of contract (terms of use, open source attribution requirements) (compensatory damages, specific performance or rescission)
- Defamation (compensatory for economic and non-economic losses, punitive for malicious/reckless)
- Products liability (actual harm from defects) could apply to AI used in healthcare/medical device settings
- Claims of IP infringement could result in injunctive relief, compensatory damages/actual damages/lost profits, statutory damages for copyright violations, and/or punitive damages
- DMCA § 1202 violation, which forbids the removal or alteration of copyright management information (injunctive relief, actual damages and defendant’s profits, statutory damages between $2,500-$25,000 per violation)
- Unfair competition (injunctive relief, compensatory damages) and unjust enrichment (restitution; may be preempted by state and federal copyright law)
- Negligence (compensatory damages; punitive damages for grossly negligent or reckless conduct)
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
As of July 2023, it is unclear how liability will be allocated, though numerous class actions have been filed against the companies building large language models and AI-related products, as further outlined below.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
The common standard of proof in civil cases is a preponderance of the evidence. This standard requires the plaintiff to prove that it is more likely than not that the defendant caused the injury. In other words, there must be a greater than 50 percent chance that the defendant is at fault.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Many insurers have not yet specifically named AI as either covered or excluded, so it is unclear to what extent liability stemming from AI systems will be insurable. Insurers will likely introduce new exclusions given the potential increased risk stemming from AI systems. While insurers historically have been able to underwrite human error, carriers have yet to develop the tools to underwrite the potential risks and ramifications of using AI.
Existing policies that may cover AI-related losses include:
- Cyber Liability Coverage: These policies cover risks ranging from first-party digital asset loss to third-party liability for data breaches, and may be implicated if a generative AI-powered system is hacked or compromised. Cyber liability insurance can help cover costs associated with data breaches, extortion or data terrorism threats, legal expenses, notification requirements, restoration and remediation requirements, and potential liability claims resulting from AI-related cyber incidents. Although the market continues to drive additional capacity, with more and more carriers offering $10M, as opposed to previous maximum limits of $3M or $5M, it remains to be seen whether insurers will extend cyber liability coverage to use of infringing training data or other unauthorized use of protected IP.
- Property Coverage: These policies primarily cover physical assets and risks of property damage, but could play a role in insuring AI systems. Businesses using AI may have hardware, servers, or other specialized equipment dedicated to AI infrastructure. Property insurance can provide coverage for physical loss or damage of AI-related assets due to perils like fire, theft, or natural disasters. Property policies may also offer coverage for business interruption losses, such as AI system downtime, failure, or similar disruption of business operations.
- Technology Errors and Omissions Coverage: Technology E&O insurance may protect businesses against claims of negligence, errors, or omissions related to their technology products or services. These policies often cover claims of infringing conduct, such as “media wrongful incidents” that include IP infringement or plagiarism. Coverage may extend to legal costs, settlements, or judgments caused by AI system failures, algorithmic errors or erroneous results, or inadequate performance.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
An inventor named in a patent must be a natural person. [1]
In February 2023, the USPTO issued a Request for Comments Regarding Artificial Intelligence and Inventorship [2] which acknowledged the “growing consensus that AI is playing a greater role in the innovation process.” It referenced a congressional request by two U.S. senators for the USPTO and the U.S. Copyright Office to jointly create a national commission on AI to consider changes to existing law to incentivize future AI-related innovations and creations, and noted “…there remains uncertainty around AI inventorship [which] is becoming more immediate as AI, particularly machine learning, systems make greater contributions to innovation…. If these technologies are in fact capable of significantly contributing to the creation of an invention, the question arises whether the current state of the law provides patent protection for these inventions. Accordingly, in order to foster and promote AI-enabled innovation, the USPTO requests further stakeholder feedback on the current state of AI technology in the invention creation process and on how to address inventions created with significant AI contributions.” The comment period ended May 15, 2023. [3]
Footnotes:
- On April 22, 2020, the United States Patent and Trademark Office (“USPTO”) issued decisions denying petitions to name Device for Autonomous Bootstrapping of Unified Sentience (DABUS), an AI system, as the inventor, explaining that under current U.S. patent laws, inventorship is limited to natural persons. The decision was subsequently upheld by a district court (Thaler v. Hirshfeld, 558 F.Supp.3d 238 (E.D. Va. 2021)) and the Court of Appeals for the Federal Circuit (Thaler v. Vidal, 43 F.4th 1207, 1210 (Fed. Cir. 2022)). The Federal Circuit held that the law defines an inventor as “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention” (35 U.S.C. 100(f)) and found that, based on Supreme Court precedent, an “individual” ordinarily means a human being unless Congress provided some indication that a different meaning was intended. However, the court did not address whether inventions made by human beings with the assistance of AI are eligible for patent protection. See also USPTO, AI and Emerging Technology Partnership (available at https://www.uspto.gov/initiatives/artificial-intelligence/ai-and-emerging-technology-partnership-engagement-and-events?MURL=AiPartnership); USPTO, Public Views on Artificial Intelligence and Intellectual Property Policy (https://www.uspto.gov/sites/default/files/documents/USPTO_AI-Report_2020-10-07.pdf); and Gunderson Dettmer Webinar (https://www.gunder.com/news/webinar-patenting-ai-what-does-it-mean-should-we-do-it-and-what-does-success-look-like).
- https://www.govinfo.gov/content/pkg/FR-2023-02-14/pdf/2023-03066.pdf (Docket No. PTO–P–2022–004)
- Updates to the Requests for Comment will be published on: https://www.uspto.gov/initiatives/artificial-intelligence/artificial-intelligence-reports
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
It has generally been well-established that copyright only protects the output of human creativity [1]. Federal appellate courts have similarly interpreted the Copyright Act, which provides copyright protection only for ‘‘works of authorship.’’ [2]
In March 2023, the United States Copyright Office (“Copyright Office”) launched a new Artificial Intelligence Initiative “to examine the copyright law and policy issues raised by artificial intelligence (AI), including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training” [3] and issued Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (“Copyright AI Guidance”). [4] The Copyright AI Guidance summarized the history of the human authorship standard, noting that when the Copyright Office evaluates works submitted for registration that contain human authorship combined with uncopyrightable material, it asks “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” [5]
However, it also stated that in some cases a work containing AI-generated material may contain sufficient human authorship to support a copyright claim, e.g. depending on human selection or arrangement of AI-generated material.
The Copyright Office further stated that this “does not mean that technological tools cannot be part of the creative process”:
Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work’s expression and “actually formed” the traditional elements of authorship. [6]
The Copyright AI Guidance followed a decision finding that a graphic novel combining human-authored text with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright. [7]
The Copyright AI Guidance also made clear that applicants have a duty to disclose the inclusion of AI-generated content in works submitted for registration and outlined the appropriate procedures for making such disclosures [8]. It also announced plans to publish a notice of inquiry soliciting public comments on a range of copyright issues arising from the use of AI.
Footnotes:
- In Burrow-Giles Lithographic Co. v. Sarony, the Supreme Court used language excluding non-humans, defining an “author” as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature” and referring to such “authors” as human, a class of “persons,” and a copyright as “the exclusive right of a man to the production of his own genius or intellect.” 111 U.S. 53, 56 (1884).
- 17 U.S.C. 102(a). Urantia Found. v. Kristen Maaherra, 114 F.3d 955, 957–59 (9th Cir. 1997) (holding that a book containing words “authored by non-human spiritual beings” can only gain copyright protection if there is “human selection and arrangement of the revelations,” stating “some element of human creativity must have occurred in order for the Book to be copyrightable” because “it is not creations of divine beings that the copyright laws were intended to protect”); Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018) (holding that a monkey cannot register a copyright in photos it captures with a camera because the Copyright Act refers to an author’s “children,” “widow,” “grandchildren,” and “widower,” terms that “all imply humanity and necessarily exclude animals”) (decided on other grounds).
- https://www.copyright.gov/newsnet/2023/1004.html
- https://www.copyright.gov/ai/ai_policy_guidance.pdf
- U.S. Copyright Office, Compendium of U.S. Copyright Office Practices sec. 313.2 (3d ed. 2021) (“Compendium (Third)”). It further stated:
When an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application. (citations omitted)
- https://www.copyright.gov/ai/ai_policy_guidance.pdf
- U.S. Copyright Office, Cancellation Decision re: Zarya of the Dawn (VAu001480196) at 2 (Feb. 21, 2023), https://www.copyright.gov/docs/zarya-of-the-dawn.pdf
- https://www.govinfo.gov/content/pkg/FR-2023-03-16/pdf/2023-05321.pdf
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
There have been numerous regulations enacted regarding the use of AI for employment-related decisions (i.e., hiring, promotion, and firing) to protect against bias and discrimination. Certain states, including New York, Illinois, and Maryland, have proposed or passed regulations surrounding the use of automated employment decision tools (“AEDTs”) in employment-related decisions, which are summarized above in Question 3. These laws highlight the sensitivity and potential risk of using AEDTs in the workplace and contain a variety of obligations surrounding disclosures, consents, and assessments, such as annual “bias audits,” which may need to be made publicly available.
There have also been discrimination related lawsuits filed against companies in the employment context, as discussed below in Question 16.
-
What privacy issues arise from the use of artificial intelligence?
- Authority to Use Personal Data for AI Model Building
- Processing of personal data to train or improve AI models across data sources is a controller (or a “business” under the CCPA) activity. Consequently, parties conducting such training or improvement must meet all of the obligations of a controller, including those relating to notice and disclosure, which can be difficult for companies to satisfy.
- Under certain consumer privacy laws, such as the CCPA, “automated decision making” activities require additional compliance obligations, such as risk assessments and explicit consent from individuals.
- Application of Data Subject Rights
- Individual data subject rights, such as the right to access, correct, and delete personal data, can be difficult to adhere to with respect to personal data input into AI systems, especially given the impact that personal data can have on the training and modeling of systems.
- Adherence to some of these rights, deletion in particular, could directly impact the models that such data helped train.
- Validity of De-identification and Anonymization Processes Applied to Personal Data
- One way that AI-driven companies are hoping to avoid certain privacy concerns is to de-identify or anonymize personal data before such data is used to train and improve models. The standards under which data can be deemed truly “de-identified” or “anonymized,” however, remain unclear.
- Security of AI Systems
- As the quantity and sensitivity of personal data applied to AI grows, so do the risks of security incidents. Companies developing and/or using AI will need to remain particularly vigilant to security risks.
- Data Transfers
- Many privacy laws around the world include restrictions on data transfers and it may be difficult for companies to comply with such restrictions with respect to AI systems.
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
Generally, the same rules regarding authority and rights over personal data apply to artificial intelligence, subject to industry- or use-specific applications of AI (e.g., employment or lending). Consequently, companies utilizing AI must adhere to all other applicable obligations, such as providing adequate notice and disclosure, complying with consumer rights over personal data (e.g., access, deletion, correction), and applying adequate security measures to AI systems processing personal data. Given the nature of AI systems and processing, these privacy-related obligations pose inherent compliance difficulties. Some of these difficulties can be seen in the recent enforcement actions and lawsuits discussed in Questions 14 and 15 below.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
U.S. privacy authorities are split between state and federal regulatory bodies. Guidelines issued are discussed above in Question 3.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
FTC Actions
Parallel to its rulemaking, the FTC uses its authority under various consumer protection laws, including the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the Children’s Online Privacy Protection Act, to extend its enforcement reach to AI technologies. The FTC has published a series of blog posts describing its concerns in the following areas:
- Marketing and Claim Substantiation: The FTC reminds marketers that AI-based products and services must work as advertised, and claims must be substantiated with adequate proof. This implies that false advertising claims regarding AI-related products and services will be scrutinized heavily by the FTC. [1]
- Fraud and Consumer Harm: The FTC cautions businesses and consumers about the use of generative AI to create and distribute fake or fraudulent content at scale. It urges businesses to consider the potential for consumer harm at the design stage, deploy risk mitigation measures, steer away from over-reliance on post-release detection tools, and warns marketers to avoid using generated content in a misleading manner. [2]
- Dark Patterns: The FTC also raises awareness that AI can be used to exploit and manipulate consumers by engaging in “dark patterns,” which the FTC defines as “design tricks and psychological tactics” and “manipulative design practices.” [3] [4]
The FTC has also recently brought a number of AI-related enforcement actions, including:
- Civil Investigative Demand (Against OpenAI Inc.) (Opened July 13, 2023): [5] The FTC opened an investigation into OpenAI on July 13, 2023 with a demand letter for records on how OpenAI addresses risks related to its AI models. The FTC requested that OpenAI provide detailed descriptions of all complaints it has received of its products making “false, misleading, disparaging or harmful” statements about people, and will conduct an expansive investigation into whether OpenAI has engaged in unfair or deceptive practices resulting in “reputational harm” to consumers.
- Stipulated Order for Injunction and Monetary Judgment (Against Ring LLC) (Filed May 31, 2023) (D.D.C.): [6] On May 31, 2023, the home security camera company Ring settled with the FTC for $5.8 million. The FTC alleged that Ring violated Section 5 of the FTC Act by deceiving its customers with respect to its privacy and security practices, in part by failing to impose security restrictions on the use of certain data for the purpose of training algorithms. The settlement also required Ring to delete all pre-March 2018 recordings, along with any models or algorithms developed using those recordings, and to adopt a privacy and data security program.
- Complaint for Permanent Injunction, Civil Penalties, and Other Relief (Against Amazon Inc. and Amazon.com Services LLC) (Filed May 31, 2023) (W.D. Washington): [7] Alleges violations of COPPA and Section 5 of the FTC Act with respect to the collection and use of children’s data, in part to train the algorithms behind Amazon’s Alexa product. Amazon agreed to pay a $25 million civil penalty to the FTC to settle the claims regarding the use of children’s data.
- Complaint for Permanent Injunction, Civil Penalties, and Other Equitable Relief (Against Weight Watchers International) (Filed February 16, 2022) (N.D. California): [8] The FTC contended that Weight Watchers marketed its weight-loss app to children under 13 without parental consent and collected children’s data in violation of the Children’s Online Privacy Protection Act. The FTC entered a settlement that required Weight Watchers to delete the entire AI algorithm it developed for the app.
- Decision and Order (Against Everalbum, Inc. and Paravision Corp.) (Filed May 6, 2021): [9] The FTC alleged that Everalbum continued to use users’ photos to train facial-recognition AI even after users turned off facial-recognition features. The FTC contended this was deceptive conduct that violated the FTC Act and required Everalbum to delete its facial-recognition training data and the AI algorithms it had developed.
Congressional Investigations
There have also been congressional investigations into companies’ practices regarding the use of data for AI training.
- Congressional Report: Attacks on Tax Privacy (July 2023) [10]
- The report alleges that three major tax prep companies illegally shared taxpayers’ sensitive financial data with tech giants Google and Meta for years, and that such data was used for targeted advertising and to train Meta’s AI algorithms.
- It calls for potential criminal charges against the companies, citing violations of taxpayer privacy laws.
Footnotes:
- https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check (FTC Business Blog Post, “Keep your AI claims in check”)
- https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale (FTC Business Blog Post, “Chatbots, deepfakes, and voice clones: AI deception for sale”)
- https://www.ftc.gov/business-guidance/blog/2023/05/luring-test-ai-engineering-consumer-trust (FTC Business Blog Post, “The Luring Test: AI and the engineering of consumer trust”)
- https://www.ftc.gov/system/files/ftc_gov/pdf/P214800%20Dark%20Patterns%20Report%209.14.2022%20-%20FINAL.pdf
- https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4
- https://www.ftc.gov/system/files/ftc_gov/pdf/proposed_stipulated_order_ring.pdf
- https://www.ftc.gov/system/files/ftc_gov/pdf/Amazon-Complaint-%28Dkt.1%29.pdf
- https://www.ftc.gov/system/files/ftc_gov/pdf/filed_complaint.pdf
- https://www.ftc.gov/system/files/documents/cases/1923172_-_everalbum_decision_final.pdf
- https://www.warren.senate.gov/imo/media/doc/Attacks%20on%20Tax%20Privacy_Final.pdf
-
Have your national courts already managed cases involving artificial intelligence?
Below is a summary of actions filed as of the date of this article:
Copyright
- J.L. et al v. Alphabet Inc., Google Deepmind and Google LLC (filed July 11, 2023) (N.D. California): Alleges Google secretly stole content created by “hundreds of millions of Americans” to develop its AI chatbot Bard and other AI systems, giving Google an unfair advantage over competitors that obtain data legally for AI training. The complaint claims Google could owe at least $5 billion. [1]
- Silverman et al. v. OpenAI, Inc. et al. (filed July 7, 2023) (N.D. California): Alleges OpenAI used copyrighted books as training material for the large language models that power ChatGPT. [2]
- Kadrey et al. v. Meta Platforms, Inc. (filed July 7, 2023) (N.D. California): Accuses Meta of exploiting copyrighted books as training material for its LLaMA program, which was released in February 2023. [3]
- Tremblay et al. v. OpenAI LP et al. (filed June 28, 2023) (N.D. California): Alleges that copyrighted material from the Plaintiffs’ published books was improperly ingested and used to train ChatGPT. [4]
- Getty Images (US), Inc. v. Stability AI, Inc. (filed February 3, 2023) (D. Delaware): Alleges that the AI image generator infringed Getty’s copyrighted photographs, removed or altered copyright management information (CMI), provided false copyright management information, and infringed its trademarks. Getty Images claims that Stability AI copied photographs from its website and used over 12 million images and associated metadata to train Stable Diffusion, despite terms of use on Getty’s website expressly prohibiting such uses. [5]
- Andersen et al. v. Stability AI Ltd. et al. (filed January 13, 2023) (N.D. California): Alleges that the Plaintiffs’ works were used without permission as input materials to train and develop various AI image generators and that, as a result, the AI’s output generated reconstructed copies of the Plaintiffs’ works, which they argue qualify as unauthorized derivative works. The Plaintiffs also argue that the Defendants are liable for vicarious copyright infringement and violated the Digital Millennium Copyright Act (DMCA) by altering or removing copyright management information from the images owned by the Plaintiffs and by programming the AI to omit any CMI from its output. Defendants include Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. [6]
- Doe 1 et al. v. GitHub, Inc. et al. (filed November 3, 2022) (N.D. California): Alleges a violation of Section 1202 of the DMCA for unauthorized and unlicensed use of the programmers’ software code to develop the Defendants’ AI machines, Codex and Copilot. Defendants include GitHub, Microsoft, and OpenAI. [7]
Privacy
- J.L. et al v. Alphabet Inc., Google Deepmind and Google LLC (filed July 11, 2023) (N.D. California): Referenced above, this complaint also brings privacy related claims. [8]
- P.M. et al. v. OpenAI LP et al. (filed June 28, 2023) (N.D. California): Claims that the improper collection, storage, tracking, and sharing of individuals’ private information through web scraping without consent misappropriates personal data on an “unprecedented scale.” [9]
Tort
- Walters v. OpenAI (filed June 5, 2023) (Super. Ct. Gwinnett County GA): Alleges OpenAI defamed the Plaintiff. When Fred Riehl, a third-party journalist, used ChatGPT to assist in researching an ongoing lawsuit, Riehl received false information that included fabricated allegations against the Plaintiff. ChatGPT described the open case as one accusing the Plaintiff of fraud and embezzlement, but the Plaintiff was not a party to the case. [10]
Discrimination
- Mobley v. Workday (filed February 21, 2023) (N.D. California): Claims that the AI systems used by Workday, which rely on algorithms and inputs created by humans, disproportionately impact and disqualify Black, disabled, and older job applicants. [11]
- Huskey v. State Farm Fire & Casualty Company (filed December 14, 2022) (N.D. Illinois): Claims State Farm’s algorithms and tools display levels of bias in the way they analyze data. [12]
Footnotes:
- Complaint, J.L. v. Alphabet Inc., No. 3:23-cv-03440 (N.D. Cal. July 11, 2023).
- Complaint, Silverman et al. v. OpenAI Inc. et al., No. 3:23-cv-03416 (N.D. Cal. July 7, 2023).
- Complaint, Kadrey et al. v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal. July 7, 2023).
- Complaint, Tremblay et al. v. OpenAI, Inc. et al., No. 3:23-cv-03223 (N.D. Cal. June 28, 2023).
- https://copyrightlately.com/pdfviewer/getty-images-v-stability-ai-complaint/?auto_viewer=true#page=&zoom=auto&pagemode=none
- https://stablediffusionlitigation.com/pdf/00201/1-1-stable-diffusion-complaint.pdf
- https://githubcopilotlitigation.com/pdf/06823/1-0-github_complaint.pdf
- Complaint, J.L. v. Alphabet Inc., No. 3:23-cv-03440 (N.D. Cal. July 11, 2023).
- Complaint, P.M. et al. v. OpenAI LP et al., No. 3:23-cv-03199 (N.D. Cal. June 28, 2023).
- Complaint, Walters v. OpenAI LLC, No. 23-A-04860-2 (Ga. Super. Ct. June 5, 2023).
- https://www.courtlistener.com/docket/66831340/1/mobley-v-workday-inc/
- https://www.courthousenews.com/wp-content/uploads/2022/12/state-farm-racism-suit.pdf
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Regulation of AI will likely fall to various federal and state agencies, reflecting the various laws and regulations referenced in Question 3.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Use of AI is widespread across all industries, though the rate of adoption varies depending on factors such as market sector and company size. In particular, the use of generative AI powered by large language models scaled up significantly beginning in late 2022 with the launch of OpenAI’s ChatGPT. For example, AI is used in industry-agnostic areas such as:
- Automation and Efficiency: AI systems can be used to automate repetitive tasks, streamline processes, and improve operational efficiency. This includes applications such as robotic process automation, chatbots for customer service, and AI-powered analytics for data processing.
- Decision Support: AI systems are capable of analyzing vast amounts of data to provide insights that aid in decision-making processes. Businesses are leveraging AI processing power for predictive analytics, market research, risk assessment, demand forecasting, and employment decisions.
- Personalization and Customer Experience: AI enables businesses to deliver personalized experiences to customers by leveraging data-driven insights, including AI-powered recommendation engines, targeted marketing campaigns, or chatbots that offer tailored assistance.
- Communication: Natural language processing technologies are used in chatbots, virtual assistants, and other voice-activated systems and hardware to facilitate human-like interactions and customer communications.
The growth in AI adoption mirrors investor interest in AI-related startups. The median pre-money valuation for early-stage rounds of generative AI companies jumped 16% from 2022 to May 2023, whereas valuations for all other startups raising a Series A or B round dropped by nearly 24%. [1]
Footnotes:
- https://pitchbook.com/news/articles/early-stage-valuations-generative-AI-compare-VC
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
While we are far from seeing the extent of its application in this sector, early predictions are that AI tools will transform the way attorneys work, how law firms are managed and how legal decisions are made. Here are a few examples of how AI is being applied:
- Due Diligence and Document Review: AI can quickly review vast amounts of data and documents, identify key points, and draw attention to relevant provisions. AI tools can process contracts and flag clauses responsive to diligence requests or disclosure requirements. This significantly reduces the time and effort needed for legal professionals to review documents.
- Legal Research and Predictive Analysis: Related to document review, AI can sift through many cases, regulations, and rules to identify relevant precedent and clauses. AI can also analyze prior decisions and judgments to predict possible outcomes of ongoing disputes to assist in devising legal strategy.
- Contract Generation: AI tools can be used to automate the creation of legal agreements based on set parameters or letters of intent, and can flag non-standard clauses, check compliance with legal requirements, or highlight critical agreements that are due for renewal or require re-negotiation.
- Chatbots and Ideation: AI-powered chatbots can provide legal direction on simple matters, reducing the time lawyers need to spend on routine queries or producing general client communications.
- Administrative Matters: AI can automate administrative tasks such as billing and time tracking, reducing errors and freeing up more time for legal professionals to produce higher-level, complex legal work.
That being said, the potential consequences of AI misuse in the legal industry are quite high. One notable instance of AI misuse is the use of ChatGPT by a New York litigator with over three decades of litigation experience to generate fake precedent (including case citations) in litigation against Avianca Airlines. The attorney admitted to using AI and, along with a colleague, is currently facing sanctions. [1]
Further, currently available consumer AI tools may not offer sufficient security to preserve critical attorney-client privilege and confidentiality. Law firms and attorneys are likely to adopt AI tools from commercial providers that are fine-tuned specifically for legal work. In addition to any trust and verification services offered by commercial AI providers, law firms will need to adopt an ethical and responsible AI usage framework, and train individual attorneys to carefully review AI-generated outputs for relevancy, accuracy, truthfulness, and completeness.
Footnotes:
- https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
As highlighted in Questions 18 and 19, AI tools have the capability of providing material benefits across all industries, including legal services. These benefits include enhancing idea generation and creativity, increasing productivity, identifying patterns and insights, and improving quality of work product.
Key opportunities include:
- Improving Efficiency and Productivity: AI systems can automate repetitive tasks, analyze large amounts of data, and predict future trends, all of which can significantly improve efficiency and productivity in a wide range of industries.
- Enhancing Decision-Making Processes: AI can help decision-makers across all industries and roles make more informed decisions by providing sophisticated data analysis and predictive capabilities.
- Personalized Services: AI tools enable the development of more personalized services and products by using data to understand individual preferences and needs. Early AI applications have already had significant impact in areas such as e-commerce, entertainment, and education. Further, in healthcare, AI systems have the potential to transform healthcare services by improving diagnostic accuracy, personalizing treatment plans, predicting disease outbreaks, and automating administrative tasks, among other things.
- Addressing Issues at Scale: AI can be used to tackle large-scale, complex challenges. For example, it can help model climate change scenarios, optimize energy use, improve traffic management, and aid in disaster response.
- Eliminating Drudgery: AI tools open up opportunities for higher-value work by replacing routine tasks (e.g., contract analysis, due diligence, contract lifecycle management, etc.) and administrative matters (e.g., billing and time tracking).
Simultaneously, there are inherent risks associated with the prompts used as inputs and with the use of AI-generated outputs internally or externally. External use of AI-generated outputs, such as in publications or other distributions to customers, inherently carries heightened infringement risk and liability exposure. For example, businesses must carefully monitor for bias, inaccurate statements presented as fact, and IP infringement, and must comply with stated license limitations or acceptable use requirements. To guard against these risks, companies should consider implementing a system for internal review, including an approval escalation process for business teams, such as product development or marketing, that may be more reliant on higher-risk external use of AI-generated outputs.
Some key challenges include:
- Data Privacy: AI systems often need to process large amounts of data to operate effectively, which can raise significant data privacy concerns. For example, with respect to inputs used as AI prompts, companies should be wary of associated privacy risks, data leakage and security vulnerabilities.
- Cybersecurity: Further, AI technologies can be vulnerable to attacks, such as adversarial attacks, which involve manipulating an AI system’s input data to cause it to make mistakes. These could potentially have serious consequences, particularly in areas like cybersecurity or autonomous vehicles.
- Bias and Discrimination: AI systems can sometimes reflect and even amplify human biases, since they are trained on data created by humans. For instance, a biased dataset could lead to a discriminatory hiring algorithm. Moreover, AI systems, particularly deep learning models, often operate as black boxes whose underlying decision-making processes are difficult to understand. This lack of transparency can make it hard to diagnose and correct errors, and it can also create challenges in gaining public trust. Companies must carefully craft AI usage policies that guard against such bias and promote fairness, transparency, and accountability.
- Accuracy and Verification: Currently available consumer AI tools are prone to “hallucinations,” or inaccurate statements presented as fact. Businesses offering professional services, such as financial, medical, or legal services, must grapple with how to use AI tools responsibly when providing client advice. Companies must ensure that employees are properly trained to assess AI-powered outputs for relevancy, accuracy, truthfulness, and completeness.
- Liability Exposure: More generally, a business’s liability exposure when using AI tools remains unclear. Existing primary and secondary liability regimes may not apply to transformative methods of AI generation, and existing insurance policies may not cover risks associated with AI use, as discussed in Question 8.
As the AI market continues to evolve, the challenge for lawyers is forming consensus-based guidance for clients across varying industries that complies with a patchwork of applicable laws, regulations, and industry standards. Attorneys must grapple with understanding the technical process of AI modeling, fine-tuning, and application to help steer clients towards permissible uses of AI systems on a case-by-case basis.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
We should expect to see more actions filed against companies who are using web content and personal data to train models, along the lines of the class actions referenced above in Question 16. We should also expect to see more regulation and policy advancements.
United States: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in United States.