Legal technology already touches much of the work of the arbitration practitioner. Whether by streamlining document review, assisting with legal research or enhancing communication through video conferencing, technology has had a significant impact on how international arbitrations are conducted compared to a decade ago.
These developments, however, have been largely incremental, and artificial intelligence (AI), in its fullest sense, has yet to have its breakthrough moment.
There has been considerable discussion of the possible applications of AI in arbitration, from assisting with (or perhaps conducting) arbitrator selection, to accurately predicting the outcome of cases. Arguably the most revolutionary use of AI, though, would be for machines to replace humans as the decision makers in arbitration matters. This would transform arbitration as we know it, but it raises the questions of whether such a change is possible or desirable and, if it is, what kind of legal issues it might create.
Technical limitations
AI is already able to perform many tasks better and/or more efficiently than humans, given its ability to review and analyse vast quantities of factual information. In the medical field, for example, AI can now diagnose certain conditions with an accuracy rate far in excess of that achieved by a human doctor. Clear parallels can be drawn with the legal field, which similarly relies on applying an ever-growing body of knowledge. This should become all the more true if natural language processing continues to develop as expected and machines become able to digest complex legal drafting more accurately, making AI readily deployable in areas beyond the already familiar document review process.
This comparison is often challenged, though, on the basis that international arbitrations tend to be particularly complex and tactical, with non-repetitive fact patterns and diverging decisions based on the application of different laws, making it difficult to assemble useful training data for AI applications. Likewise, there is uncertainty over AI's ability to weigh evidence and reach appropriate conclusions on disputed facts. How will AI cope, for example, when assessing the credibility of conflicting witness testimony regarding an alleged oral agreement?

Another significant obstacle to using AI to adjudicate disputes is that, for the most part, it is still unable to provide reasoning for its decisions. Machine decision-making can be so complex as to be opaque, yet transparency in the decision-making process is essential, not least when it comes to enforcement. Moreover, the use of AI carries risks of its own: bias, hacking and the amplification of human error are all real possibilities. Indeed, it is extremely unlikely that parties will agree to enter an AI adjudication process and accept a decision given by a machine if they cannot understand how or why that decision was made, or if it may have been compromised in some way.
But what about the law?
One of the oft-cited barriers to the use of AI in international arbitration is the general confidentiality of its proceedings. This is particularly apparent when considering AI as a decision maker, since to achieve something that closely resembles human decision-making, AI would need to be 'fed' huge amounts of data on previous cases and awards. As it stands, however, such information is not readily available in the arbitration sphere or, where it is (for example, in investment treaty cases or analogous litigation), it does not exist in sufficient quantity to create a robust data set.
Whether this can be overcome may come down to the arbitration community’s appetite for AI in proceedings. Faced with increasing pressure from commercial litigation, where AI is already making significant inroads, we may see a shift towards publicly available awards, or at least disclosure of anonymised data to certain bodies (subject to confidentiality agreements), as a step on the road towards AI as the ultimate decision maker.
Another complication is that existing arbitration legislation was not drafted with the possibility of an AI tribunal in mind. For instance, certain national laws, such as those in France and the Netherlands, specifically require that an arbitrator be a natural person. Of course, domestic legislation can be changed, but that takes time: clearly no one wants an award rendered by cutting-edge technology only to find it incapable of enforcement, either domestically or under the 1958 New York Convention, because the relevant national system is not yet ready for it. That said, if blockchain enables self-contained dispute mechanisms within smart contracts, there may be situations where enforcement before national courts becomes unnecessary, but this will apply to only a proportion of disputes.
What then about other entrenched considerations, such as the requirement for due process? As it stands, we typically understand this to mean that humans, capable of weighing equitable considerations, determine a case. Can a machine really achieve justice if it is unable to show empathy or respond flexibly to changes in the proceedings? A related issue arose in the US criminal case of Loomis, where the Supreme Court of Wisconsin held that a trial court's use of algorithmic software in sentencing to assess the defendant's likelihood of reoffending did not violate due process, since the software was used as an aid to the judge's decision-making rather than a substitute for it. However, even if AI is used only to guide (rather than replace) arbitrators, perhaps in contained areas such as damages or costs, where should the line be drawn between what is the AI's decision and what is the arbitrators'? This issue could become particularly acute as the technology grows more sophisticated and arbitrators become hesitant to go against its recommendations.
Trust is key
Despite the promise shown by AI, it seems unlikely we will see it adjudicating complex disputes any time soon. AI can and does, however, already enhance the work of arbitrators, providing them with tools to reach better and more efficient decisions.
Trust will likely be the key factor in determining the extent of the role that AI ultimately plays in the arbitration arena. Even if all of the above issues are overcome, there will likely still be those who, for a variety of reasons, prefer human control. This scepticism is consistent with a recent survey by Bitkom, in which only 10% of respondents said they would prefer a decision given by an AI judge.
That said, as technology continues to develop and demonstrate what it can do, AI is likely to gain wider acceptance and play a greater role in the arbitration process, particularly as the next generation of arbitrators, counsel and clients are introduced to it, both in education and in the early stages of their careers. When we consider the use of auto-piloted aircraft and autonomous cars, it is clear that what might seem difficult to envisage now could become an accepted part of arbitration practice in the future.