Navigating the legal minefield: Generative AI and its implications for businesses and directors
Generative AI, a subset of artificial intelligence that relies on large quantities of data, neural network architecture, and complex algorithms to produce original content, is rapidly transforming the business landscape.
From content creation to product design, customer service and marketing, generative AI is proving to be a game changer. McKinsey’s latest research estimates that generative AI’s impact on productivity could add $2.6 trillion to $4.4 trillion annually in value to the global economy [1].
However, like any powerful tool, generative AI comes with its own set of risks and challenges. For example, if the data set being used is erroneous, or restricted in any way, that can lead to inaccurate outputs. AI models can ‘hallucinate’ (i.e. create and rely on false data). Further, AI models often make decisions in a ‘black box’, meaning there may be no way for users to understand exactly how the AI has made its decisions (which, in a litigation context, creates issues around discovery obligations and proof).
In this article, we set out some precautions that businesses and directors should take to minimise the legal risks of using generative AI, with a focus on privacy and copyright risks.
The use of AI tools presents specific challenges for privacy. This is particularly the case for generative AI, which relies on large quantities of data. AI enables new ways to gather and combine personal information, and can make it harder to see, understand, and explain how personal information is used. It is therefore important to understand the privacy risks of using AI before implementing it. This requires some understanding of how the AI tool works, such as knowing what data sources it was trained on, and how relevant and reliable those sources are for your purposes.
In New Zealand, there are currently no laws or regulations specific to the use of AI; however, the Privacy Act 2020 applies to the use of AI. On 21 September 2023, the Privacy Commissioner issued new guidance on how the Information Privacy Principles (IPPs) apply to the use of AI, which builds on the Commissioner’s initial set of expectations (published in May 2023).
We set out a short summary of the guidance below.
As a general tip, if purchasing an off-the-shelf AI tool, we suggest asking the provider for documentation of the sources and collection of input data, as well as assurances (or a warranty) that individuals whose data is input into the AI tool have consented to any collection and use of their personal information. Carefully review the terms and conditions for how (and what) information is retained and disclosed. In general, avoid inputting personal or confidential information into the tool, unless you are confident that the information is not retained or disclosed by the tool. If in doubt, strip input data of any information that enables re-identification and anonymise any data that is shared.
While the use of generative AI can create various intellectual property issues, particular issues arise in relation to copyright. Whilst the usual base principles in assessing copyright (including subsistence, originality and infringement) are all relevant, the use of generative AI raises new, untested questions, including in respect of these otherwise relatively settled areas. These include whether copyright subsists in AI-generated content, who the author is, and the scope of protection. The legal position will vary between jurisdictions.
AI training data
Where the input data is a copyright work, making an unauthorised copy of that work could amount to an infringement. Where AI input data does infringe copyright, a third party that uses this data to generate output also risks infringement if the output is considered a copy of, or a copy of a substantial part of, the input data [2].
While untested in New Zealand, legal action is being taken against AI providers in other jurisdictions for IP infringement. Some examples of this are set out below.
AI outputs
The New Zealand Copyright Act 1994 (the Copyright Act) currently allows computer-generated works to be protected by copyright [3]. The Act provides that the owner of the copyright is the author of the work, being the person who “made the arrangements necessary” for the creation of the work [4].
However, the current position in many overseas jurisdictions, including Australia and the United States, is that creative works require a human author to attract copyright. The Copyright Act is under review, and while it is unclear whether the current position will change, we consider it unlikely that the permissive regime in New Zealand will be reduced in scope. To attract copyright under the Copyright Act, the work must be original [5]. The extent of originality depends on how much skill, labour, and effort the author has put into creating it [6]. While the threshold for originality is low [7], it is unclear whether some human skill and effort is required for an AI-generated work to attract copyright, and if so, the level of effort required.
Another related question is who owns the copyright, being who “made the arrangements necessary” for the creation of the work. If a business develops and uses its own AI tool (including using internal training data), the business is likely to be the owner of the AI-generated content, provided the usual parameters for copyright subsistence are met (and subject to any third party rights). However, if a business uses an external AI tool, it is unclear whether ownership of the output belongs to the business, the AI provider, another contributor to the tool, or is shared between different parties. This question may depend on the extent to which the AI output is amended from the original ‘prompts’, whether the prompts themselves amount to copyright works, and the extent to which the eventual output is amended from the AI output. All of these factors may also be varied by contractual provisions (including terms and conditions), which will be important in assessing rights.
Who owns the copyright in AI-generated content, and whether any third party holds rights in it, affects the scope of permitted use and whether a business can monetise the content (and any associated infringement risk). While AI providers such as OpenAI grant a licence for use, input data and outputs are also subject to contract terms, so exclusivity is likely to be lost. In addition, there may still be a risk of infringement. It also presents issues as to whether the business can enforce copyright in the goods or services that incorporate the content, such as where similar content may appear in a competitor’s product or marketing material.
Numerous platforms (including Adobe, Google and Microsoft) now offer indemnities for the end users of their generative AI programmes against third-party copyright infringement claims, but strict requirements and conditions are imposed, and this is unlikely to provide complete protection.
Some precautions to take to minimise IP risks when using generative AI:
Each organisation’s IP policy will differ, and there is no one-size-fits-all approach to ensuring your IP policy mitigates the risks of using generative AI. It is important to obtain specialist IP advice in respect of such policies, alongside litigation advice should matters become contentious.
Under the Companies Act, a director must exercise the care, diligence and skill that a reasonable director would exercise in the same circumstances [8]. With the increasing use of generative AI, an emerging and untested question is whether directors have an obligation to inform themselves of the use of AI by their organisation, and to ensure that AI is used in appropriate circumstances. This raises further questions about what those circumstances might be. Where AI is informing or assisting in the decisions of directors, further questions arise regarding the responsibility of directors for the outcomes of those decisions.
While this question may be clarified by the Courts and/or the legislature over time, it is important that directors and businesses take steps to mitigate against the legal risks of using (or not using) generative AI. Importantly, businesses should ensure that there are clear employee policies in place setting out where generative AI may be used and that privacy statements set clear expectations for use. We recommend being clear and upfront to customers about any use of generative AI, including how you are managing the associated privacy, IP, and other risks.
Organisations should also keep on top of emerging regulation in this space. The European Union, United States and Australia are progressing with AI-specific regulation, and New Zealand may follow suit.
References and footnotes