The legal press often features articles on AI replacing lawyers. Until recently, however, there was much less focus on the question of how lawyers can help society live alongside AI.
AI is unique because, unlike any previous technology, it can act autonomously: it can make decisions which were not pre-programmed by any human. This distinction was recognised by the Singapore International Commercial Court in Quoine v B2C2. The first instance judge (whose observations on this point were upheld by the Court of Appeal in February 2020) described a traditional computer programme in vivid terms: ‘The algorithmic programmes in the present case are deterministic, they do and only do what they have been programmed to do… They are no different to a robot assembling a car rather than a worker on the factory floor or a kitchen blender relieving a cook of the manual act of mixing ingredients.’
By contrast, AI technology (the most popular approach at present being machine learning) is capable of independent behaviour. This was demonstrated in a series of games of the ancient board game Go, played between the AI programme AlphaGo and the multiple-time world champion Lee Sedol. On the thirty-seventh move of the second game, AlphaGo made a play which baffled those watching; even its own programmers considered that it must have made a mistake. Several hours later, it turned out to be the winning move: in essence, AlphaGo had formulated a strategy which no human had considered in the history of the game.
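To make the contrast concrete, the sketch below (a hypothetical illustration in Python, not the trading software at issue in Quoine v B2C2) compares a rule written in advance by a programmer with a rule learned from historical examples, so that the system’s eventual behaviour depends on the data it was given rather than on instructions written by any human.

```python
# Hypothetical illustration: a pre-programmed rule versus a rule learned
# from data (a one-feature 'decision stump', the simplest form of
# machine learning).

# Deterministic: the behaviour is fixed in advance by its programmer.
def preprogrammed_price(order_size):
    return 10.0 if order_size < 100 else 9.5  # rule written by a human

# Learned: the decision rule is induced from historical examples, so it was
# never written down by any programmer and changes if the data changes.
def learn_threshold(examples):
    """Choose the order-size cut-off that best separates past cases of
    (order_size, discount_given)."""
    candidates = sorted({size for size, _ in examples})
    def errors(threshold):
        return sum((size >= threshold) != discount for size, discount in examples)
    return min(candidates, key=errors)

history = [(20, False), (80, False), (150, True), (400, True)]
print(preprogrammed_price(80))   # 10.0, always, whatever has happened before
print(learn_threshold(history))  # 150 with this history; other data, other rule
```

Feed the learner a different history and it will settle on a different rule. Scaled up to millions of examples and parameters, this is the property that allowed AlphaGo to produce a move its own programmers did not anticipate.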
Existing laws and regulations are not well designed to address decisions made by AI. At present, laws generally govern choices made by humans. We, therefore, have gaps in situations where AI causes harm as a result of its independent choices, which may not have been predicted by its users or programmers. Who should be liable if a self-driving car crashes? Or an autonomous surgical robot performs an operation badly? What standard should an AI be held to? Many legal principles reflect concessions to human limitations – for example, negligence uses the concept of the ‘reasonable’ rather than the ‘perfect’ person. Are such concessions appropriate in circumstances where an AI system is demonstrably more accurate than a human (though still not perfect)?
The ethics of AI has been attracting increasing attention. This involves two questions: how should AI take difficult decisions, and are there any decisions which AI should never take? There are many high-profile examples of AI failures: from the Google Photos image recognition software which labelled photographs of some black people as gorillas, to the Apple Card, which was reported to offer men far higher credit limits than women. Some of these issues may be solved by ensuring that the data fed into an AI system is sufficiently representative of the real world. But this is by no means a panacea.
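By way of illustration only, the sketch below (with hypothetical data, attribute names and tolerance) shows the sort of simple representativeness check an organisation might run before training a system: comparing each group’s share of the training data against its share of the population on which the system will be used.

```python
# Hypothetical illustration: checking whether training data broadly reflects
# the population on which an AI system will be used.
from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data differs from their share
    of the reference population by more than `tolerance`."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Illustrative dataset that under-represents one group.
training_records = [{"gender": "male"}] * 800 + [{"gender": "female"}] * 200
print(representation_gaps(training_records, "gender",
                          {"male": 0.5, "female": 0.5}))
# Both groups are flagged: 0.8 observed against 0.5 expected, and vice versa.
```

The point is simply that ‘representative data’ can be given an operational meaning; deciding which attributes and which reference populations matter remains a judgment call, and often a legal one.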
From 2017 to mid-2019, companies and other organisations tended to publish high-level sets of desirable features for their AI, which attracted publicity but had very little practical impact. These vague standards are now being replaced, or rather substantiated, by detailed checklists. Although the trend seems to be towards accepting a risk-sensitive approach to precautions rather than one-size-fits-all rules, lawyers will need to play an important role in explaining and justifying an organisation’s decisions in this regard, both internally and to third parties.
One major technical difficulty is that AI decisions can be difficult to predict and explain. This is sometimes known as the ‘black box’ problem. Yet there is an international consensus that some form of explainability is desirable: the GDPR requires that, where certain automated decisions are made about a person, they have a right to be provided with ‘meaningful information about the logic involved’.
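What ‘meaningful information about the logic involved’ looks like will depend on the model. For a simple linear scoring model, the sketch below (with hypothetical weights and feature names) shows one way of ranking the factors that drove a particular decision; genuinely opaque models, such as deep neural networks, require more elaborate explanation techniques.

```python
# Hypothetical illustration: a per-decision breakdown for a simple linear
# credit-scoring model, ranking the features that pushed the decision.
WEIGHTS = {"income": 0.6, "existing_debt": -0.8, "years_at_address": 0.2}
BIAS = -0.1
THRESHOLD = 0.0

def explain_decision(applicant):
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "decline",
        "score": round(score, 3),
        # Features ordered by how strongly they influenced this decision.
        "main_factors": sorted(contributions.items(),
                               key=lambda item: abs(item[1]), reverse=True),
    }

print(explain_decision({"income": 0.4, "existing_debt": 0.9, "years_at_address": 0.5}))
# e.g. a 'decline', with existing_debt shown as the strongest factor
```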
Fortunately, we are starting to see emerging best practice in terms of processes which can be used to ensure that AI is robust, reliable, and trustworthy. Some of the most detailed examples to date have been produced by the UK’s ICO in its draft AI Auditing Framework, by Singapore’s Personal Data Protection Commission, and by the World Economic Forum. Hard legislation on AI is coming too: the US has signalled that it intends to support federal laws on AI, and the European Commission recently released a white paper setting out proposals for harmonised AI laws across the internal market. China is also moving towards binding legislation.
Sophisticated regulators are unlikely to be impressed by organisations which have not adopted policies to govern their AI use. Just like environmental, anti-bribery, and data privacy concerns, AI regulation is fast becoming a serious issue for legal and compliance departments, as well as company boards.
The benefits of AI use are enormous, but the dangers in terms of liability, regulatory, commercial, and public relations risk are also growing. This, in turn, presents opportunities for lawyers of all kinds to advise clients on the best way to navigate these swirling currents.
Jacob Turner is a barrister at Fountain Court Chambers, author of Robot Rules: Regulating Artificial Intelligence (2018), and a contributing author of The Law of Artificial Intelligence (2020). He has advised companies, regulators, and governments around the world on AI governance. Jacob lectures regularly on the topic, with recent engagements including Cambridge, Copenhagen and Singapore Management Universities, as well as the Dubai Judicial Institute and the European Commission. Together with Simmons & Simmons and BestPractice AI, Jacob has recently launched a new AI advisory service.