United Kingdom: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in the United Kingdom.
-
What are your country's legal definitions of “artificial intelligence”?
The White Paper issued by the UK government in March 2023 (“A pro-innovation approach to AI regulation”) noted that “there is no general definition of AI that enjoys widespread consensus”. The government instead refers to two key characteristics: the ‘adaptivity’ of AI, whereby AI systems can infer patterns and connections in data which are not easily discernible to humans, and AI’s ability to make autonomous decisions.
-
Has your country developed a national strategy for artificial intelligence?
The UK government’s National AI Strategy (published September 2021) aims to translate the “potential of AI into better growth, prosperity and social benefits for the UK”. In July 2022, the UK also published its AI Action Plan, outlining the government’s further activities to advance the National AI Strategy.
As part of the strategy to become a “global AI superpower”, the UK hosted the world’s first AI Safety Summit in November 2023, bringing together various stakeholders to discuss the safe and responsible development of “frontier AI” (defined as AI models that can “match or exceed the capabilities present in today’s most advanced models”). At the summit, the EU alongside 28 nations agreed to the Bletchley Declaration, which recognised “the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community”. Following the summit, the UK government set up the AI Safety Institute.
The UK’s implementation of a national AI strategy still requires further development. For example, in March 2024, the National Audit Office noted that “the government lacked a coherent plan to support adoption of AI in the public sector as part of its 2021 National AI Strategy”.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
The UK has not yet implemented clear rules or guidelines on AI. The government’s White Paper does, however, set out a regulatory framework that aims to be “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”. The White Paper identifies five “values-focused cross-sectoral principles” for AI regulation: (1) safety and security; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. These principles are intended to guide businesses in designing, developing, and using AI in a responsible manner. Regulators are expected to publish their own sectoral guidance.
In February 2024, the UK government responded to consultations on the White Paper. It said that it had received “strong support” for its framework and that some regulators had already started to act in line with it, including the Competition and Markets Authority (“CMA”) and the Information Commissioner’s Office (“ICO”). The UK government also wrote to regulators including the CMA, the ICO, the Financial Conduct Authority (“FCA”), and Ofcom, requesting updates outlining their strategic approach to AI (see Question 18). Responses from these regulators were received during April 2024.
At present, the UK government does not appear to be planning to bring forward AI-specific legislation in the near future. The July 2024 briefing notes to the King’s Speech stated that the new Labour government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, but gave no associated timelines.
To the limited extent that AI is specifically regulated at present, this occurs through existing legal frameworks such as the UK’s data protection regime (see Questions 12 – 15) and existing intellectual property laws (see Questions 9 & 10).
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
As the UK does not have an overarching AI legislative framework, defective AI systems will be dealt with by general causes of action available under UK law, on a fact-specific basis in the context of the deployment and use of those systems and the nature of harm caused in each case. The potential routes for liability include contractual liability, the tort of negligence (if a duty of care is owed between parties), and product safety legislation (where the AI is integrated into a product) under the Consumer Protection Act 1987.
The Consumer Rights Act 2015 may also protect consumers where they have entered into a contract for AI-based products and services.
Finally, discriminatory outcomes from the use of AI systems may contravene the protections in the Equality Act 2010.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Civil liability rules are set out at Question 4. Criminal liability may arise from harm caused by an AI system if that harm can be attributed to a legal person. For example, in the most extreme cases, a corporate entity can be liable for corporate manslaughter under the Corporate Manslaughter and Corporate Homicide Act 2007.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
AI systems present a potentially complex nexus of liability between the different parties within the AI supply chain, ranging from developers through to corporate customers and any ultimate end-users. However, there is no current AI-specific statutory basis on which responsibility or liability for claims related to harm caused by AI is allocated between parties in the UK. As such, claims will be managed in accordance with the general rights and causes of action outlined in Questions 4 & 5.
Any contractual claims will sit with the contracting party. Similarly, any claim in negligence would sit with the party to whom a duty of care is owed, and would be brought against the party owing that duty of care. If the AI system is embedded in a product, a claim can be pursued under the Consumer Protection Act 1987 against any of the following: (i) the producer (i.e., the manufacturer); (ii) a person who holds themselves out as a producer; or (iii) the importer of the product into the UK. In any product liability claim, it is possible for the defendant to seek a contribution from a third party where that third party is also liable to the claimant for the same damage/loss. Such contribution claims must be brought within two years of judgment/settlement.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
As there is no current AI-specific statutory basis for determining claims in the UK, the burden of proof in AI-related cases will depend on the cause of action (as with general civil claims).
The most common burden of proof in civil claims in England and Wales is for the claimant to prove their case on the balance of probabilities. However, various statutory causes of action may differ. For example, under the Consumer Protection Act 1987, the claimant must prove that the product is defective and that the defect caused the damage which is the subject of the claim. Claimants should therefore seek advice as to the specific merits of an individual case, and any necessary burden of proof that needs to be met for that claim to be brought, in respect of any prospective claim involving AI.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The use of AI is insurable. Insurers are starting to offer specific cover for AI performance risk, including third-party coverage that indemnifies losses arising from the performance of AI models and the like. In addition, while the use of AI may not be specifically referenced in more generic policies, a range of liability policies – such as professional liability, directors’ and officers’, cyber or tech policies – are likely to provide cover for losses flowing from risks that include the use of AI. Property policies may also provide silent cover, although some policies contain AI exclusions.
Insurers will consider very carefully the nature of the risk(s) (and the insured’s plan for mitigation of those risks) during the underwriting process and will price accordingly. Insurers will also continue to carefully monitor the potential shift in liability (towards manufacturers, software developers etc.) and adapt their policy wordings, which set out the scope of cover provided, applicable exclusions, and rights to bring subrogated actions, as they see fit.
A recent example of how developments in technology law are affecting how insurers operate is the Automated and Electric Vehicles Act 2018. This Act extended compulsory motor vehicle insurance to cover the use of vehicles in automated mode, so that all victims of an accident caused by a fault in the automated vehicle will be covered. In such a case, the insurer is liable to pay compensation to the victims but can recover from any other person who is liable (e.g. the car manufacturer or software developer). The Act also clarified that insurers may exclude or limit their liability in respect of damage resulting from software alterations made without authorisation or from failure to install safety-critical software updates.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
In the UK, patent applications must name a human as the inventor or inventors. Although patent applications naming an AI system as the inventor have been submitted in two cases, the courts have refused to recognise AI inventorship. The Supreme Court maintained this position, ruling unanimously that a patent application naming an AI machine, rather than a natural person, as the inventor is invalid under the UK Patents Act 1977.
The UK Intellectual Property Office has recognised that developments in technology mean that AI is making significant contributions to innovation, and held a consultation to consider whether the current rule on inventorship in the UK could be improved to better support innovation and incentivise the generation of new AI-devised inventions as the capability of AI increases. The outcome of the consultation was that, for the time being, there would be no change to the rule that patent applications must name a human as the inventor or inventors.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
In the UK, images or artistic works may benefit from copyright protection to the extent they are original, i.e. the author’s own intellectual creation. The threshold for originality in the UK is low and does not require particular creativity; however, the author must have given the work their “personal touch”.
Images may, of course, be created by a human with the assistance of AI and, provided the work meets the usual threshold for originality, such a work will benefit from copyright protection like a work created using any other tool.
However, issues of copyright ownership may arise because the technology underpinning AI must be trained and improved through exposure to large datasets, including vast numbers of images available on the Internet. These images will already be protected as artistic works, with the copyright owned by a third party. Although UK copyright law permits text and data mining of copyright works for non-commercial research purposes, the commercial aspect of many AI platforms means this exception cannot necessarily be relied upon. For example, if AI is directed to create “an image in the style of David Hockney”, the AI may look to its source data and return an image similar or identical to an existing David Hockney work, calling the originality and ownership of the AI-generated image into question and putting the human creator at risk of copyright infringement.
In the UK, images generated by a computer where there is no human creator are capable of copyright protection. The “author” of a “computer-generated work” is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. However, there is a degree of ambiguity in this definition. In the case of images generated through an AI platform, it could mean that the person directing the AI through keywords or instructions is deemed to be the author. Alternatively, the creator of the AI platform itself could assert ownership, although many AI platforms clarify through their terms and conditions that ownership of any AI-generated work vests in the user, thereby passing over to the user any risk of third-party claims of infringement.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
There are four main issues for businesses to consider when using AI in the workplace. These are:
- Discriminatory outcomes: The risk of AI producing discriminatory and/or biased outcomes that are contrary to what a business wants and may expose the business to potentially expensive discrimination claims and reputational damage;
- Changes in established working practices and roles: Increased use of AI is expected to result in increased efficiency and cost savings for businesses. This is likely to lead to: employees working more with AI systems as opposed to people; opportunities for some employees to carry out higher-value or more interesting work; and, potentially, redundancies where AI performs all or part of certain existing work functions within a business;
- Managing communication with employees: Businesses will need to communicate clearly with employees to allay concerns regarding privacy and monitoring when implementing AI systems in the workplace. Employees may fear that their roles will be replaced and may choose to leave before that happens, when in fact the business may have had no plan to replace those roles;
- Changes in business processes: Businesses may become increasingly dependent on AI systems, so will need to develop and maintain adequate operational plans to address situations where these systems fail or are temporarily unavailable. Businesses will also need to be satisfied that there is sufficient human involvement and oversight of the AI systems, both at the time they are being designed and implemented, and on an ongoing basis to ensure their use remains valid, accurate and delivers appropriate outcomes both internally and for external users or customers.
Existing UK employment laws apply in the normal way in relation to the employment and treatment of personnel within a business, irrespective of whether their role interacts with or involves the use of AI.
-
What privacy issues arise from the use of artificial intelligence?
- Automated Decision Making. AI can be used to make automated decisions about individuals. Unless an exemption applies, the UK General Data Protection Regulation (“UK GDPR”) gives individuals the right not to be subject to a solely automated decision, including profiling, which produces a legal or similarly significant effect. If an exemption applies, organisations must implement suitable measures to safeguard the rights, freedoms and legitimate interests of individuals, including by providing meaningful human intervention so that individuals can contest a decision.
- Transparency. The UK GDPR requires organisations to provide individuals with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of” any automated decision-making. This can be challenging given the complexity of AI algorithms: if the information provided is too technical, individuals may struggle to derive anything meaningful from it. Organisations will need to provide the information in a clear fashion.
A further challenge arises when AI is trained using personal data scraped from the internet. Providing the information as required under Article 14 UK GDPR to data subjects in this context can be operationally challenging. Controllers often seek to rely on one of the exemptions to the right to be informed that are available under the UK GDPR and UK Data Protection Act 2018 (“DPA 2018”) such as impossibility or disproportionate effort. However, determining whether such an exemption applies is not always clear cut. Furthermore, controllers who seek to rely on an exemption will need to consider the effect of such reliance on the overall lawfulness, fairness and transparency of the processing and whether additional safeguards are required.
The ICO cautions that processing of this nature (i.e. ‘invisible processing’) creates additional risks for data subjects, as they cannot exercise control over the use of their data. In such circumstances, privacy information should still be published on the controller’s website and the controller should carry out a data protection impact assessment (“DPIA”).
- Data Protection Impact Assessments. Under the UK GDPR, a DPIA is mandatory if the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals. A DPIA’s purpose is to identify and minimise the data protection risks associated with a project. The use of AI is likely to trigger the need for a DPIA where it involves the processing of personal data. Additionally, a prior consultation with the ICO may be required if the DPIA indicates that the processing would result in a high risk to individuals which cannot be suitably mitigated. The ICO has shown a tendency to examine DPIAs closely in the context of AI systems, and has published guidance on DPIAs in the context of AI which makes it clear that the ‘vast majority’ of AI use cases will require a DPIA.
- Data Minimisation. Processing large amounts of data is central to the development and use of AI. Organisations will need to balance this need with the requirement of data minimisation under the UK GDPR. Data minimisation means that organisations must only process personal data to the extent it is adequate, relevant, and limited to what is necessary. There is a risk of function creep with AI, which would threaten the principle of data minimisation.
- Vendor Due Diligence. Most AI systems will likely be provided by a third party, which means vendor due diligence will play a crucial role in ensuring organisations can comply with their data protection obligations. The ICO cautions that assurances should be sought from AI vendors about any bias testing they have conducted; otherwise, the controller should test the model itself. Organisations will also need to ascertain the data protection roles of vendors and, where relevant, put in place compliant data processing terms with third parties that process personal data on their behalf as processors.
- Controller/processor/joint controller roles. Identifying controller, joint controller, and processor roles in the context of AI can be complex, not least because many parties are often involved in the development and deployment of AI systems. The ICO has published initial guidance and scenarios to assist with this assessment, including indicators of when an organisation may act as a controller in the context of an AI system.
- Lawful basis for training data. Most AI systems rely on publicly accessible sources for their training data. Where training data contains personal data, the processing is subject to the UK GDPR. However, it can be difficult to identify an applicable lawful basis for web scraping activities for this purpose. There are arguments that legitimate interests, typically regarded as the most flexible lawful basis, may not be available if data is processed in ways the data subject does not reasonably expect or if privacy information is not provided.
Obtaining training data via web scraping will, in the majority of cases, constitute invisible processing. The ICO has issued draft guidance for consultation specifically on how to determine whether there is a valid lawful basis for web scraping in the context of training generative AI. The ICO states that ‘five of the six lawful bases are unlikely to be available for training generative AI on web-scraped data’ and the draft guidance therefore ‘focuses on the legitimate interests lawful basis (Article 6(1)(f) of the UK GDPR).’
Since this consultation closed, the Open Rights Group lodged a complaint against Meta with the ICO in July 2024 over UK GDPR violations related to Meta’s plans to use personal data for AI model training. The complaint alleges a ‘clear intentional breach of the law’, including a lack of legitimate interest and a lack of transparency. It asks for a legally binding decision under Article 58(2) to prevent unauthorised data processing and for a prohibition on the use of personal data for AI technologies without consent. This position contradicts the ICO’s draft guidance in respect of legitimate interests. We await a final decision from the ICO, which may itself be subject to legal challenge.
- Fairness and accuracy. In accordance with the fairness principle under the UK GDPR, controllers should only process personal data in ways that people would reasonably expect and not use it in ways that have unjustified adverse effects on them. The statistical accuracy of an AI system’s outputs is an important consideration in ensuring compliance with the fairness principle. The UK GDPR mentions statistical accuracy in the context of profiling and automated decision-making at Recital 71, which states that organisations should put in place ‘appropriate mathematical and statistical procedures’ for the profiling of individuals as part of their technical measures. The ICO has published guidelines on fairness for technical specialists and compliance professionals.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
Data scraping by UK entities may be restricted on the following grounds.
Database right: depending on the geographical location where the database holding any source data was made, extraction and reutilisation of all or a substantial part of the data from that database may violate rights under the EU sui generis database right and/or its UK equivalent.
The EU sui generis database right protects data held in databases which were either (i) made in an EU member state where there has been a “substantial investment” in obtaining, verifying, or presenting the contents of the database; or (ii) made in the UK prior to 1 January 2021, i.e. the end of the Brexit transition period. Following Brexit, a similar right exists in the UK in respect of databases made in the UK.
A person resident in the UK who conducts widespread data scraping may violate the rights of an EU organisation or a UK organisation which has invested in its database.
Privacy: as a processing activity subject to the UK GDPR and the DPA 2018, data scraping is regulated by the ICO and subject to UK data protection laws, ICO guidance and applicable case law.
In August 2023 the ICO and 11 other Data Protection Authorities worldwide published a joint statement calling for the protection of personal data from unlawful data scraping taking place on social media sites. The statement sets out expectations for how social media companies should protect personal data from unlawful data scraping, which may lead to increased vigilance in this context from the operators of the social media websites.
The ICO is finalising guidance which addresses whether there is an applicable lawful basis for data scraping. A key clarification is that, in order to carry out data scraping in compliance with the UK GDPR, and for any lawful basis to be available, the relevant controller must act in compliance with the law more generally (and particularly the lawfulness principle under Article 5 UK GDPR), including any applicable website terms and conditions.
Copyright: copyright may protect the contents of a database, or individual items of source data, where the data items in question are considered copyright works under the Copyright, Designs and Patents Act 1988. Where a person resident in the UK undertakes data scraping in respect of the collection of images, photographs, articles or similar, without the permission of the owner of those data items, this may infringe copyright and leave the scraper facing legal action.
Scraping and use of the source data (to the extent protected by copyright) may be permitted in certain circumstances where the use falls within a “fair dealing” exception – particularly research for non-commercial purposes. However, in March 2023 the UK government confirmed the position that data scraped from a third-party source cannot lawfully be used for commercial research.
Breach of terms and conditions: Many database owners specifically reserve all rights in their source data, and apply terms and conditions which specifically prohibit the collection or use of any data gathered in this way.
Any UK person who scrapes data from any source – whether UK or overseas – may face a claim under one or more of the aforementioned grounds.
Competition: In May 2023 the CMA launched an initial review of AI Foundation Models. The initial report was published in 2023 and an update paper followed in April 2024. In the technical update report accompanying the update paper, the CMA considered the implications of web scraping, observing that if the ability to use web-scraped data is limited, this could benefit those holding the data or those with the resources to purchase access to it. The CMA also noted that imbalances could emerge between early movers and later entrants in the ability to train models on web-scraped data, and that developments in firms’ abilities to use web-scraped data going forward could affect the range of models available to deployers and users.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
There are no reported cases in England and Wales on the enforceability of terms of use prohibiting data scraping. However, a European Court of Justice case prior to Brexit (Ryanair Ltd v PR Aviation BV [2015]) supports the proposition that such terms would be enforceable in theory. In the absence of directly applicable reported judgments, the enforceability of any terms of use would depend on the principles of English contract law including proper incorporation of the terms.
Separately from any potential actions for breach of contract, the ICO’s position at the time of writing is that data scraping carried out in breach of website terms and conditions cannot comply with the lawfulness requirements under Article 5 UK GDPR, and therefore will be in breach of the UK GDPR. In terms of enforceability:
- An individual data subject may complain to the ICO, or bring a court claim, in respect of damage they have suffered as a result of the breach of the UK GDPR;
- The ICO of its own volition could bring enforcement action, issue fines, issue an order to cease the processing activity, or bring a court claim against the relevant controller;
- There is also potential for criminal liability under section 170 of the DPA 2018, which sets out an offence of obtaining or disclosing personal data without the consent of the controller.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes, the ICO has published the following guidance:
- Artificial intelligence and data protection – Guidance on AI and data protection | ICO;
- Explaining decisions made with AI – Explaining decisions made with AI | ICO;
- How to use AI and personal data appropriately and lawfully – how-to-use-ai-and-personal-data.pdf (ico.org.uk);
- How data protection law applies to biometric recognition – Biometric data guidance: Biometric recognition | ICO;
- Tools, including its AI and data protection risk toolkit and a toolkit for organisations considering using data analytics;
- Commentary on generative AI – Don’t be blind to AI risks in rush to see opportunity – ICO reviewing key businesses’ use of generative AI | ICO;
- Consultation series on generative AI – ICO consultation series on generative AI and data protection | ICO.
The ICO has collated the various materials it has published on AI, such as guidance and thought pieces, on its website – Our work on Artificial Intelligence | ICO.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The most significant ICO enforcement action to date has been in relation to Clearview AI. Clearview AI enables customers to upload an image of a person to the company’s app, which is then checked for a match against all images in the company’s global online database. The app provides a list of images that have similar characteristics to the photo provided by the customer, with a link to websites from which those images were sourced.
The ICO fined Clearview AI Inc £7,552,800 in May 2022 for using images of people in the UK and elsewhere that were collected from the web, stored in the company’s database and could be used for facial recognition. The individuals were not informed that their images were being collected or used in this way.
The ICO found that Clearview AI Inc breached UK data protection laws by:
- failing to use the information of people in the UK in a way that was fair and transparent;
- failing to have a lawful reason for collecting people’s information;
- failing to have a process in place to stop the data being retained indefinitely;
- failing to meet the higher data protection standards required for biometric data; and
- asking for additional personal information, including photos, when members of the public asked whether they were on the company’s database, which may have acted as a disincentive for individuals to object to their data being collected.
In October 2023, the First Tier Tribunal (Information Rights) upheld Clearview AI’s appeal against the ICO’s findings on jurisdictional grounds. While the Tribunal did not disagree with the ICO’s findings in respect of Clearview AI’s processing activities, it found that the processing was not within the remit of the UK GDPR on the basis of the UK GDPR law enforcement exemption, as Clearview AI provided its services to foreign law enforcement agencies.
In November 2023 the ICO published a statement on the judgment of the First Tier Tribunal, stating its view that the Tribunal incorrectly interpreted the UK GDPR and that ‘Clearview itself was not processing for foreign law enforcement purposes and should not be shielded from the scope of UK law on that basis.’ The ICO has sought permission to appeal the finding of the First Tier Tribunal.
The ICO has taken additional enforcement action since Clearview AI, and appears to be paying close attention to DPIAs in this context:
- Chelmer Valley High School | ICO: in July 2024 the ICO issued a reprimand to Chelmer Valley High School in respect of Article 35(1) UK GDPR. The school introduced facial recognition technology for the purposes of taking cashless canteen payments from students but failed to complete a DPIA before doing so. The ICO found that facial recognition technology is likely to result in high data protection risks and that a DPIA is essential in order to use such technology legally, so that the higher risks can be identified and managed. The school also failed to consult its data protection officer and wrongly relied on ‘opt-out’ consent from parents.
- Serco Leisure Operating Limited and relevant associated Trusts | ICO: in February 2024 the ICO issued enforcement notices against Serco Leisure and various associated entities (“Serco”) ordering them to stop using facial recognition technology and fingerprint scanning for employee attendance monitoring purposes. Serco had failed to demonstrate that its use of facial recognition technology and fingerprint scanning was necessary or proportionate when other, less intrusive, means were available. The ICO also noted that employees were not offered an alternative method of clocking in and that acceptance of facial recognition technology and fingerprint scanning had been presented as a pre-requisite to receiving payment. According to the ICO, this further increased ‘the power imbalance in the workplace.’
- UK Information Commissioner issues preliminary enforcement notice against Snap | ICO: in October 2023 the ICO issued a preliminary enforcement notice to Snap, Inc and Snap Group Limited (“Snap”) over a potential failure to adequately address privacy risks associated with Snap’s generative AI chatbot ‘My AI’. The ICO provisionally found that the DPIA Snap carried out before introducing My AI did not adequately consider the risks involved, particularly to children, whose data would be processed by Snap.
The ICO also focused on how the DPIA failed to adequately address how Snap would adhere to the requirements of the data minimisation principle, given the potential for the technology to affect the nature and volume of special category data shared by users.
In May 2024 the ICO published a statement and decision following its investigation. The ICO stated that Snap had taken ‘significant steps’ to review more thoroughly the potential risks associated with My AI and to demonstrate that it had implemented appropriate mitigations, and that, following review of the fifth version of the DPIA, the ICO was satisfied that it now complied with Article 35 UK GDPR.
-
Have your national courts already managed cases involving artificial intelligence?
In the absence of bespoke AI legislation, most court cases involving AI to date have stemmed from technology disputes (primarily contractual claims) and product claims concerning defective systems embedded in products.
In 2023, the Supreme Court considered issues around intellectual property rights created by AI systems in Thaler v Comptroller-General of Patents, Designs and Trade Marks, concluding that an artificial intelligence cannot be an inventor for the purposes of English patent law.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
There is no central regulator or authority responsible for supervising the use and development of AI in the UK.
In April 2024, in response to a request from the government, certain regulators published updates on AI regulation in their respective areas, including:
- The Bank of England, which noted that its focus is on understanding “how to support the safe and responsible adoption” of AI “from a macro-financial and prudential perspective”;
- The CMA, which referred to risks around the dominance of “a small number of the largest incumbent technology firms”;
- The Equality and Human Rights Commission (“EHRC”), which has introduced a specific focus on AI in its 2022-25 strategic plan;
- The FCA, which committed to “build an in-depth understanding of how AI is deployed in UK financial markets” and to collaborate with other regulators domestically and internationally; and
- The ICO, which noted that AI and its application in biometric technologies is one of its focus areas in 2024/25.
Other bodies have also issued guidance, including the Intellectual Property Office in respect of intellectual property issues around AI.
In the public sector, responsibility for AI sits across the Department for Science, Innovation & Technology and the Cabinet Office. In March 2024, the National Audit Office noted that there is potential for “overlap”, and that it may be unclear which of these departments has “overall ownership and accountability” for delivery.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
The US International Trade Administration values the UK AI market at $21 billion and estimates that it will be worth over $1 trillion by 2035. After the US and China, the UK is the third largest AI market in the world.
The use of AI by businesses in the UK varies, however, depending on the sector in which a business operates and the size of the business. UK government research from 2022 noted that 68% of large companies, 34% of medium-sized companies and 15% of small companies had adopted at least one AI technology. AI use has likely increased since 2022, particularly following the launch of generative AI products.
In November 2023, an AI Opportunity Forum was set up to drive AI use in the private sector; its first meeting was held in February 2024.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
AI use in the legal sector in the UK appears to be rising. In November 2023, the Solicitors Regulation Authority reported that “at the end of [2022] three quarters of the largest solicitors’ firms were using AI, nearly twice the number from just three years ago [and] over 60% of large law firms were …exploring the potential of the new generative systems, as were a third of small firms”. It also noted “anecdotal evidence suggests that the use of AI in small and medium firms is also rising”.
LawtechUK categorises legal AI into several areas:
- risk identification and prediction, which includes automating compliance tasks and predicting case outcomes;
- administration, where AI performs routine tasks like information gathering and client communication, often through chatbots;
- profiling, which involves analysing documents for clarity and prioritising cases;
- searching, which automates document discovery and precedent identification; and
- text generation, which produces legal documents and summaries.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
AI seems likely to be among the most disruptive technologies deployed in the legal sector in the coming years. Five key challenges include:
- The rapid pace of AI development, which can outstrip the creation of traditional regulations. This ‘pacing problem’ means that legal standards do not typically reflect the latest technological capabilities, presenting lawyers with a significant challenge in advising their clients on the law in this area. Law firms may also lack specific guidelines for internal AI use cases, making it difficult for lawyers and businesses to know where best to deploy AI.
- Trust in adopting AI technology. If lawyers do not trust AI and its output, they are less likely to use it, limiting its potential benefits. As AI deployment goes from small to large-scale, the issue of trust is likely to grow and will need to be managed carefully.
- Uncertainty if a ‘patchwork’ of laws is created in the UK, and a possible risk of duplication (as indicated by the National Audit Office) or of gaps in the law, particularly as the UK government’s White Paper does not anticipate a significant ‘horizontal’ oversight body to ensure consistency in regulation between individual sector regulators.
- Timing in adoption of the technology. In the legal sector, law firms and businesses may be holding off adopting AI for fear of entering the market too early. Conversely, leaving adoption too late may risk missing commercial opportunities.
- Regulators may be under-resourced. As the House of Commons Science, Innovation and Technology Committee noted in May 2024: “the AI challenge can be summed up in a single word: capacity”. The Committee said that the government’s February 2024 announcement of £10 million to support regulators was “insufficient to meet the challenge, particularly when compared to the UK revenues of leading AI developers”.
Five key opportunities include:
- New areas of legal advice. Lawyers should be well-placed to advise on the new laws and regulations (including case law) that seem likely to come into being over the coming years. UK lawyers should have the opportunity to help develop these rules in a way which can help build trust in AI within and beyond the UK.
- New business models. Businesses may be able to use AI to develop new business models, which could change how legal services are produced and priced. Alternative legal services providers, accountants and retailers already operate in the legal sector, but there should be opportunities for development and implementation of new AI-powered products and solutions.
- The opportunity to ‘go global’. UK legal businesses may be able to leverage the UK’s global reputation to use AI to create products and solutions which can be rolled out internationally, particularly in countries that have equivalent common law legal systems.
- The opportunity to ‘add value’. Lawyers may increasingly struggle to compete against AI for certain tasks. This should lead to a greater focus on specific client needs and more complex tasks where lawyers can provide personalised and bespoke support, supplemented by AI.
- Greater access to legal advice. A 2021 LawtechUK report put the unmet legal needs of SMEs and consumers in the UK at £11.4 billion. Unmet legal needs in many areas may be better served by taking advantage of AI. In a March 2024 speech, the Master of the Rolls said that “AI has great potential within the digital justice system which promises to provide quicker, cheaper and more efficient ways to resolve the millions of disputes that arise in British society every year”.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
There is unlikely to be significant change in UK regulation of AI within the next 12 months. After the July 2024 general election, the new government appears to be focusing on other priorities, as indicated in the King’s Speech and by the reported shelving of £1.3 billion of funding promised by the Conservatives for AI projects.
Instead of giving responsibility to a new single regulator, the government will probably continue to empower existing regulators to develop their own sector-specific approaches to AI regulation. However, it remains unclear how regulators will work together or how their approaches may be influenced by, or align with, broader international standards or regulation. It is also unclear when and how the UK government will regulate the most powerful artificial intelligence models.
We anticipate developments around how the UK’s AI regime interacts with the various international approaches to AI regulation. So far, the UK’s approach has differed from the EU’s, for example, but developing international regimes are nevertheless likely to influence the UK’s direction in the coming years.