MinterEllisonRuddWatts

Navigating the legal minefield: Generative AI and its implications for businesses and directors

Generative AI, a subset of artificial intelligence that relies on large quantities of data, neural network architecture, and complex algorithms to produce original content, is rapidly transforming the business landscape. 

From content creation to product design, customer service and marketing, generative AI is proving to be a game changer. McKinsey’s latest research estimates that generative AI’s impact on productivity could add $2.6 trillion to $4.4 trillion annually in value to the global economy [1].

However, like any powerful tool, generative AI comes with its own set of risks and challenges. For example, if the data set being used is erroneous or restricted in any way, that can lead to inaccurate outputs. AI models can also ‘hallucinate’ (i.e. create and rely on false data). Further, AI models often make decisions in a ‘black box’, meaning there may be no way for users to understand exactly how the AI has reached its decisions (which, in a litigation context, creates issues around discovery obligations and proof).

In this article, we set out some precautions that businesses and directors should take to minimise the legal risks of using generative AI, with a focus on privacy and copyright risks.

Privacy

    The use of AI tools presents specific challenges for privacy. This is particularly the case for generative AI, which relies on large quantities of data. AI enables new ways to gather and combine personal information, and can make it harder to see, understand, and explain how personal information is used. It is therefore important to understand the privacy risks of using AI before implementing it. This requires some understanding of how the AI tool works, such as knowing what data sources it was trained on, and how relevant and reliable those sources are for your purposes.

    In New Zealand, there are currently no laws or regulations specific to the use of AI; the Privacy Act 2020 applies to the use of AI. On 21 September 2023, the Privacy Commissioner issued new guidance on how the Information Privacy Principles (IPPs) apply to the use of AI, which builds on the Commissioner’s initial set of expectations (published in May 2023).

    We set out a short summary of the guidance below.

  • IPPs 1-4 cover why and how personal information can be collected. This requires an understanding of the training data and processes used to develop an AI tool.

  • If you have already collected personal information and want to feed it into the AI, think about the purpose for which you originally collected the information, and whether feeding the information into AI is directly related to that purpose (IPP1).
  • In general, agencies must get personal information directly from the person it is about (IPP2) and must be transparent about the information being collected and how it will be used (IPP3). While there are some exceptions to the normal collection principles, such as the exception for “publicly available” information, it may be risky to rely on these exceptions without a good understanding of the training data and processes used for an AI tool. For example, training data scraped from the internet may include sources that require a login to access, such as social media profiles; this information may not be truly publicly available, and its use may fall outside people’s expectations of how it would be used.
  • Ensure that input and training data are collected in a way that is lawful and fair (IPP4), and relevant individuals know how and why their data is being used.
  • Proactively consider how to manage security risks and unauthorised access to personal information, such as taking cybersecurity measures (IPP5). AI tools enable new ways to access and use information, and this creates new security risks. Some AI tools can leak sensitive information. Consider setting up privacy breach response plans to ensure you can identify, contain, assess and respond to privacy breaches quickly. For more information on how to manage a data breach, see our podcast on this topic.
  • Develop procedures for how your organisation will respond to requests from individuals to access and correct their personal information (IPP6 and IPP7). Before purchasing an AI tool, consider whether you are able to practically access personal information about a person if they ask for it, and correct any personal information if required.
  • Be aware of the limitations of the tool, including gaps, biases and ‘hallucinations’, and take steps to ensure accuracy (IPP8). This includes ensuring the training data is relevant and reliable, and putting checks in place (such as human review) to ensure accuracy of output.
  • Clearly identify the purpose(s) for collecting personal information, and limit its use and disclosure to those purposes or a directly related purpose (IPPs 10 and 11). If you want to use personal information to train an AI tool, make sure that is clear at the time you collect the information. If you are sharing personal information with third-party suppliers, ensure they are not using the information for training AI tools unless that is why the information was collected. In supplier contracts and customer communications, set clear expectations about how personal information will be used and kept secure.
  • The Privacy Commissioner recommends conducting a privacy impact assessment (PIA) before using any AI tool, and seeking feedback from impacted groups.
  • As a general tip, if purchasing an off-the-shelf AI tool, we suggest asking the provider for documentation of the sources and collection of input data, as well as assurances (or a warranty) that individuals whose data is input into the AI tool have consented to the collection and use of their personal information. Carefully review the terms and conditions for how (and what) information is retained and disclosed. In general, avoid inputting personal or confidential information into the tool unless you are confident that the information is not retained or disclosed by the tool. If in doubt, strip input data of any information that enables re-identification and anonymise any data that is shared (a minimal pre-processing sketch follows below).
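    To illustrate the last point, the following is a minimal sketch (in Python, standard library only) of the kind of pre-processing that can strip obvious identifiers from text before it is sent to an external AI tool. The patterns and names are illustrative assumptions only: simple regular expressions will not catch every identifier (personal names, for example, require more sophisticated tooling), so this is a starting point rather than a complete anonymisation solution.

        import re

        # Illustrative patterns only; a real anonymisation pipeline would need
        # broader coverage (e.g. names and addresses need NLP-based tooling).
        PATTERNS = {
            "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
            "PHONE": re.compile(r"(?<!\d)(?:\+64|0)[\s-]?[2-9](?:[\s-]?\d){7,9}\b"),
            "IRD_NUMBER": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),
        }

        def redact(text: str) -> str:
            """Replace likely personal identifiers with placeholder tokens
            before the text is shared with an external AI tool."""
            for label, pattern in PATTERNS.items():
                text = pattern.sub(f"[{label}]", text)
            return text

        # Example: the email address and phone number are replaced, so the
        # prompt no longer enables re-identification from those fields.
        print(redact("Draft a reply to Jane (jane.doe@example.co.nz, 021 123 4567)."))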

Copyright

    While the use of generative AI can create various intellectual property issues, particular issues arise in relation to copyright. Whilst the usual base principles for assessing copyright (including subsistence, originality and infringement) all remain relevant, the use of generative AI raises new, untested questions, even in these otherwise relatively settled areas. These include whether copyright subsists in AI-generated content, who the author is, and the scope of protection. The legal position will vary between jurisdictions.

    AI training data 

    Where the input data is a copyright work, making an unauthorised copy of that work could amount to an infringement. Where AI input data does infringe copyright, a third party that uses that data to generate output also risks infringement if the output is considered a copy of, or a copy of a substantial part of, the input data [2].

    While untested in New Zealand, legal action is being taken against AI providers in other jurisdictions for IP infringement. Some examples of this are set out below.

  • In early 2023, Getty Images sought injunctions in the United Kingdom and United States to prevent the artificial intelligence company Stability AI from selling its AI image-generation system. This followed Stability AI generating an image which clearly showed a ‘Getty Images’ watermark. Getty has made various claims against Stability AI in both the UK and the US, including copyright infringement and trademark infringement. The claims relate to both the inputs and outputs of Stability AI. These cases, once they eventually reach trial, will address previously untested issues about the legal implications of using others’ works to train AI.
  • In September 2023, several authors in the United States (including former attorney John Grisham) initiated legal action against OpenAI alleging infringement of their original works.
  • In December 2023, the New York Times filed legal proceedings against OpenAI and Microsoft, alleging infringement of copyright in its articles.
    AI outputs

    The New Zealand Copyright Act 1994 (the Copyright Act) currently allows computer-generated works to be protected by copyright [3]. The Act provides that the owner of the copyright is the author of the work, being the person who “made the arrangements necessary” for the creation of the work [4].

    However, the current position in many overseas jurisdictions, including Australia and the United States, is that creative works require a human author to attract copyright. The Copyright Act is under review and it is unclear whether the current position will change, but we consider it unlikely that the permissive regime in New Zealand would be reduced in scope. To attract copyright under the Copyright Act, the work must be original [5]. The extent of originality depends on how much skill, labour, and effort the author has put into creating it [6]. While the threshold for originality is low [7], it is unclear whether some human skill and effort is required for an AI-generated work to attract copyright and, if so, the level of effort required.

    Another related question is who owns the copyright, that is, who “made the arrangements necessary” for the creation of the work. If a business develops and uses its own AI tool (including using internal training data), the business is likely to own the AI-generated content, provided the usual requirements for copyright subsistence are met (and subject to any third-party rights). However, if a business uses an external AI tool, it is unclear whether ownership of the output belongs to the business, the AI provider, or another contributor to the tool, or is shared between different parties. The answer may depend on the extent to which the AI output differs from the original ‘prompts’, whether the prompts themselves amount to copyright works, and the extent to which the final content is amended from the AI output. All these factors can also be varied by contractual provisions (including terms and conditions), which will be important in assessing rights.

    Who owns the copyright in AI-generated content, and any third-party rights in it, affects the scope of permitted use and whether a business can monetise the content (and any associated infringement risk). Whilst AI providers such as OpenAI grant a licence for use, input data and outputs are also subject to contract terms, so exclusivity is likely to be lost. In addition, there may still be a risk of infringement. Ownership also determines whether the business can enforce copyright in the goods or services that incorporate the content, such as where similar content appears in a competitor’s product or marketing material.

    Numerous platforms (including Adobe, Google and Microsoft) now offer indemnities to end users of their generative AI programmes against third-party copyright infringement claims, but these indemnities are subject to strict requirements and are unlikely to provide complete protection.

    Some precautions to take to minimise IP risks when using generative AI:

  • If purchasing an external AI tool, check whether the AI provider offers a robust indemnity for any infringement of IP arising from use of the tool, and the requirements that must be met for the indemnity to apply.
  • The terms of use of many AI providers assign ownership of inputs and outputs, or grant a use-only licence to users. This is likely to mean your input data is available for others to use, as is the output. As noted above, it is prudent to ask for documentation of the sources of input data. If the AI provider does not own the data, ensure that the provider has obtained the relevant consents and/or licences from any external parties who may own the data, and that your use of the AI tool does not breach those consents and/or licence conditions.
  • Avoid inputting data that is confidential, that relates to a potentially patentable invention (as the required confidentiality may be lost), or that is otherwise strategically important to the business.
  • Try to identify whether using AI-generated content means you are inadvertently incorporating open-source software, as the open-source licence terms may be inappropriate; similar issues arise for Creative Commons materials.
  • If in doubt, use internally generated and owned data to train models. Keep records of all inputs that go into the AI tool, such as prompts, which can help show that the business “made the arrangements necessary” to create the output (a minimal record-keeping sketch follows this list). However, be aware that a limited dataset could introduce bias (or other issues) into the model.
  • Be aware that, if you do not modify the AI-generated output, you may be unable to prevent others from using a similar AI output.
  • Each organisation’s IP policy will differ, and there is no one-size-fits-all approach to ensuring your IP policy mitigates the risks of using generative AI. It is important to obtain specialist IP advice in respect of such policies, alongside litigation advice should matters become contentious.
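    On the record-keeping point above, the following is a minimal sketch (in Python, standard library only) of an audit log that appends one record per generation, capturing who submitted which prompt to which tool and what came back. The field names and file location are assumptions for illustration; which records a business actually needs to keep should be settled with specialist advice.

        import json
        from datetime import datetime, timezone
        from pathlib import Path

        LOG_FILE = Path("ai_generation_log.jsonl")  # hypothetical location

        def log_generation(tool: str, user: str, prompt: str, output: str) -> None:
            """Append one timestamped record per generation, building an audit
            trail of the prompts and inputs the business supplied to the tool."""
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "tool": tool,
                "user": user,
                "prompt": prompt,
                "output": output,
            }
            with LOG_FILE.open("a", encoding="utf-8") as f:
                f.write(json.dumps(record, ensure_ascii=False) + "\n")

        # Example: record a (hypothetical) marketing prompt and its output reference.
        log_generation("example-image-tool", "j.smith",
                       "A watercolour of Wellington harbour", "output-1234.png")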

Implications for organisations, employers and directors

    Under the Companies Act, a director must exercise the care, diligence and skill that a reasonable director would exercise in the same circumstances [8]. With the increasing use of generative AI, an emerging and untested question is whether directors have an obligation to inform themselves of their organisation’s use of AI, and to ensure that AI is used only in appropriate circumstances. This raises further questions about what those circumstances might be. Where AI is informing or assisting directors’ decisions, it also raises questions about directors’ responsibility for the outcomes of those decisions.

    While these questions may be clarified by the Courts and/or the legislature over time, it is important that directors and businesses take steps to mitigate the legal risks of using (or not using) generative AI. In particular, businesses should ensure that clear employee policies are in place setting out where generative AI may be used, and that privacy statements set clear expectations for its use. We recommend being clear and upfront with customers about any use of generative AI, including how you are managing the associated privacy, IP, and other risks.

    Organisations should also keep on top of emerging regulation in this space. The European Union, United States and Australia are progressing AI-specific regulation, and New Zealand may follow suit.

    References and footnotes

[1] The economic potential of generative AI: The next productivity frontier, McKinsey & Company.
[2] Copyright Act 1994, s 29.
[3] The term “computer-generated” is defined in section 2 of the Copyright Act 1994 as meaning “the work is generated by computer in circumstances such that there is no human author of the work”.
[4] Copyright Act 1994, s 5(2)(a).
[5] Copyright Act 1994, s 14(1) and (2).
[6] Wham-O MFG Co v Lincoln Industries [1984] 1 NZLR 641 (CA) at 665.
[7] Henkel KGaA v Holdfast New Zealand Ltd [2007] 1 NZLR 577 (SC) at [38].
[8] Companies Act 1993, s 137.