
Artificial Intelligence: Some of the Challenges Ahead

AI promises to dominate the news in 2019. This article highlights some of the challenges ahead.

Reference to Artificial Intelligence ('AI') is likely to bring to mind dystopian pictures of a robot apocalypse, as depicted in films such as I, Robot or The Matrix, or to spark debates on the possibility of machines with 'consciousness'. Yet although we are rightly concerned about AI and the future of mankind, catastrophic thinking may diminish or slow its progress and make us lose sight of its vast potential benefits.

AI is more pervasive in our daily lives than one may think. Spotify and Netflix use AI to recommend songs, films or television shows which they consider relevant to our interests based on our past activity or choices. Amazon uses it to recommend products based on past purchases or web browsing activity. Our increased reliance on online services and smartphones means that humans are interacting with AI at an ever-increasing rate.

The technology has the potential to greatly increase efficiency, to save lives and to raise the general standard of living. Its use has spread to sectors ranging from healthcare and transport to executive decision-making and policing. Yet whilst the potential benefits are vast, the race to adopt the technology with haste may lead developers and users to cut corners and disregard certain ethical and legal concerns, such as its effect on privacy, the threat it poses to cybersecurity and its potential disregard for fundamental human rights.

"If we don't change direction soon, we'll end up where we're going", (Irwin Corey). In order to take a path, we must envision a future where AI technology enhances human capabilities rather than challenges them. The following are some of the immediate hurdles (legal and ethical) which, in this author's view, must be addressed if the inevitable adoption and accommodation of AI is to take a path most beneficial to society.

1. If AI is the Hare, can Laws and Regulations be the Tortoise?

The development of technology uncovers loopholes in the law, and legislators seem to be fighting a losing battle trying to keep up. When new technology is launched, we frequently 'put new wine into old wineskins', but is this advisable? The answer depends on whether AI-related technology possesses special features which are novel and distinct from those regulated in the past. That is, does AI raise issues and concerns (societal, economic, legal and ethical) sufficient to justify dedicated regulation, or can they be tackled by existing bodies of law?

If the answer is to regulate, then in what manner should this be done? Should the technology be governed by a distinct set of laws and a separate regulatory body or agency? Any move toward regulation must be taken with care, as it is difficult to regulate uncertainties. Thus, one of the first steps must be to define what we mean by "AI". Efforts to discuss regulation of AI are wasted if the parties to the discussion are talking about different things. Too wide a definition may lead to ambiguity and loss of focus, whilst too narrow a definition risks overlooking important AI-related technology or becoming obsolete in a matter of years. [1] Furthermore, whether we think of AI as a tool, companion or weapon will determine its eventual purpose and how the law will come to deal with it.

2. Not my Fault – Liability and Responsibility in Contract and Tort

Probably the most immediate challenge to the legal system is the establishment of liability in cases of accidents: who is responsible for an accident caused by an AI-powered machine? To use a practical example which has gathered widespread attention: who is responsible where an autonomous vehicle causes an accident, such as by running over a pedestrian? In order to award damages to the injured party, the law generally requires one or more parties to be responsible (or, in Malta, at 'fault'). So, will it be:

The driver or owner of the autonomous vehicle?

The car manufacturer?

The computer programmer who designed the AI system?

The AI itself?

The thorniness of this question is further evidenced by the range of different proposals to tackle the issue. Some push for conferring "electronic personality" on the AI itself. [2] Others believe that current legal frameworks for determining liability are flexible enough and already equipped with the necessary tools to deal with AI technology.

Malta, being a mixed jurisdiction, faces particular challenges in this regard: Continental and common law principles collide and present problems that may require thinking outside the box. Champions of the common law claim that its current system of tort is flexible enough to tackle new issues of responsibility, reminding us that the law of tort has been subject to constant change over the years. [3] Malta, by contrast, relies on fault-based liability for such incidents, and its courts would have to attribute fault under existing tort and product liability law. Are these existing legal systems equipped to deal with the challenges which autonomous vehicles bring? This is probably the single biggest hurdle to be overcome before AI-related technologies in this sector are rolled out to the public. The emergence of AI relies on public trust, and without a sound legal framework providing legal remedies conducive to such trust, innovation will grind to a halt.

One must also consider the extent to which insurance companies will be involved: will they be willing to cover such vehicles? Does the law on compulsory third-party liability insurance need to be modified? And will premia skyrocket or fall as autonomous vehicles and other AI technology take over?

3. Privacy and Data Protection

In order to operate accurately and efficiently, AI-powered machines and programmes require vast amounts of data. Amassing and using data for AI purposes raises a number of privacy and surveillance concerns whilst also challenging core notions of the EU's General Data Protection Regulation ('the GDPR'). [4] In particular, AI technology raises concerns about:

Transparency and 'explainability'. Oftentimes, the code and algorithms behind AI (often referred to as 'black box' algorithms [5]) are either protected as intellectual property or trade secrets, or may even prove incomprehensible to the original programmer. In principle, this conflicts with a data subject's right to information or an explanation under the GDPR: if the programmer cannot understand the workings of his own algorithms, as they operate in a 'black box', how can one possibly expect an end user to do so?

Monitoring and profiling. Machine learning algorithms in AI systems may allow their operators to infer characteristics which are deemed sensitive data (e.g. race) from other characteristics such as address or income. Such profiling techniques may be used by vendors to target consumers' wants and interests, or to target voters during elections by trapping them in a 'filter bubble' of information pre-emptively designated as 'relevant' to the user.
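To make the profiling concern concrete, the following is a minimal sketch in Python, built on entirely synthetic data with hypothetical feature names (no real dataset, vendor or system is implied). It shows how a sensitive attribute that was never collected can nonetheless be recovered by a model from correlated 'innocuous' features, which is precisely the inference-by-proxy problem described above.

```python
# A minimal sketch of inference-by-proxy. All data is synthetic and the
# feature names (postcode district, income) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# A sensitive attribute the data controller never stores...
sensitive = rng.integers(0, 2, size=n)
# ...which nonetheless correlates with 'innocuous' features it does store.
postcode_district = sensitive * 2 + rng.integers(0, 3, size=n)
income_k = 30 + sensitive * 12 + rng.normal(0, 5, size=n)  # income in 1,000s

X = np.column_stack([postcode_district, income_k])
X_train, X_test, y_train, y_test = train_test_split(
    X, sensitive, random_state=0
)

# Trained only on the proxies, the model recovers the hidden attribute
# far better than the 50% chance level.
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Deleting the sensitive field from a dataset therefore offers little protection where strongly correlated proxies remain.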

4. Are Robots Going to Take Our Jobs?

Another challenge presented by AI is its potential threat to employment: AI-powered machines are already able to perform tasks traditionally performed by humans, sparking forecasts of the next industrial revolution. What is markedly different from the original Industrial Revolution is that technology has generally been assumed to be a tool, used, for example, to increase workforce efficiency; with AI, and particularly embodied AI, technology is becoming the workforce itself. [6]

Not only is AI threatening to destabilise the traditional workforce, but it also has the potential to displace professionals. AI-related technologies are replacing humans in the banking and financial services sectors, and AI lawyers and doctors are already amongst us. Therefore, even if this challenge is not immediate, governments must plan ahead, considering the impact it may have on the population's well-being.

To tackle this threat to jobs traditionally performed by human beings, some have proposed the introduction of a robot tax and/or a universal basic income to protect those at risk of losing their jobs. [7] An assessment of current fiscal and labour laws may therefore also be required.

5. The Public Law Sphere

The interaction with AI does not end in the private sphere. Governments around the world are increasingly making use of AI in public decision-making. In the US, some institutions use the technology to inform bail decisions or the length of prison sentences, whilst in the UK some constabularies are using machine learning in predictive policing, a picture not far removed from the one painted in Steven Spielberg's film Minority Report almost two decades ago.

Although the use of the technology in such areas may reduce the potential human bias of adjudicators and greatly increase the efficiency of public decision-making, relying on it in high-stakes areas such as criminal trials is dangerous. AI may decrease the error rate in decision-making compared to humans, but it remains prone to errors and inaccuracies. The technology is, of course, still in its infancy, yet the sooner we accept that even AI is prone to error, the sooner we can develop standards for redressing it.

There is evidence that AI, particularly through machine learning's reliance on historical data, has the potential to reinforce biases and become biased itself, compounding the problem rather than solving it. Indeed, software used in the US criminal justice system to estimate defendants' risk of recidivism has been found to erroneously label defendants of colour as 'high risk', leading to higher bail conditions and longer prison sentences. These circumstances, coupled with the lack of transparency of such 'black box' software, not only threaten the livelihood of the individuals affected by such decisions but also challenge their fundamental human rights, such as the right to a fair trial and the presumption of innocence.
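How historical data can launder bias into a model is easy to demonstrate. The following is a minimal, self-contained Python sketch on entirely synthetic data (the group labels, rates and variable names are illustrative assumptions, not drawn from any real system): both groups reoffend at the same true rate, but one group's offences are recorded more often, and a model trained on those records inherits the distortion.

```python
# A minimal sketch of bias amplification from historical records.
# All data is synthetic and the rates are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)      # 0 = group A, 1 = group B
true_reoffend = rng.random(n) < 0.30    # identical true rate for both groups

# Biased record-keeping: group B's offences are detected and recorded
# more often than group A's.
detection_rate = np.where(group == 1, 0.9, 0.5)
recorded = true_reoffend & (rng.random(n) < detection_rate)

# A model trained on the recorded labels scores group B as higher risk,
# even though the underlying behaviour of the two groups is identical.
model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted risk: group A {risk_a:.2f}, group B {risk_b:.2f}")
```

Because the model never sees true behaviour, only the records, the bias is indistinguishable from signal; auditing the training data, and not merely the model, is therefore essential.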

Conclusion

This piece merely scratches the surface in a brief and digestible fashion. Besides challenging the core principles of privacy and data protection, raising concerns of a mass displacement of jobs, and questioning age-old legal practices regarding responsibility and liability for damage, AI-powered technology arguably also has the potential to transform our notion of warfare, challenge the core principles of our fundamental human rights and rattle core notions of democracy.

Even if dedicated regulation is not yet required or warranted, the challenges outlined above show that, at the very least, AI demands discussion and an investment of time and resources because, together with ancillary technology, it is likely to become a dominant aspect of our lives. The EU has recognised the technology's potential and is dedicating ever more resources and research to the area: at the close of 2018, the European Commission, together with the Member States, published a Coordinated Plan on Artificial Intelligence to promote the common development of AI in Europe, with the ultimate aim of an ethical, trustworthy and human-centric AI.

Following its success in cementing its place as the 'Blockchain Island', Malta has also recognised this opportunity, aiming to fill the void and, as asserted by Malta's Junior Minister for Financial Services, Digital Economy and Innovation, to "position Malta amongst the top 10 countries in the world with an Artificial Intelligence Policy." This has sparked various initiatives on the Island, including the launch of Malta.ai, the Maltese government's vision for AI, and the hosting of events such as the AI in Business Malta Summit and the Malta AI & Blockchain Summit 2019.

Footnotes:

[1] Andrea Bertolini, 'The legal issues of robotics' (Robohub, 6 April 2017)

[2] Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics [2016] 2015/2103(INL), para. 59(f)

[3] F. Patrick Hubbard, Chapter 2 in Robot Law (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016)

[4] Big Brother Watch and Others v. the United Kingdom (App. no. 58170/13), European Court of Human Rights (First Section), 13 September 2018

[5] Frank Pasquale, 'The Black Box Society: The Secret Algorithms That Control Money and Information' (Harvard University Press, 2015)

[6] Martin Ford, 'The Rise of the Robots: Technology and the Threat of Mass Unemployment' (Oneworld Publications, 2015)

[7] See Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics [2016] 2015/2103(INL), para. 44