Karanovic & Partners in cooperation with local lawyers
Artificial intelligence is a burning topic in many sectors today, and the legal industry is no exception. At the World Services Group’s annual employment law conference held in February, AI was heavily debated, along with its impact not only on the legal profession, lawyers, clients and the way business is done, but also on our traditional understanding of concepts such as “law” or “justice”.
According to expert predictions, with current software developments and the rise of intelligent systems, certain jobs – bank clerks, cash-register operators and the like – will disappear within a few decades. It may be difficult to imagine, but it is actually not that far from reality.
Artificial intelligence and employment
Software is already used by HR professionals in the recruitment process and in later stages – certain interviews are conducted with no humans present apart from the interviewee, with intelligent software asking the questions. Whether through various backdoors or through the main gates, technology is replacing the human workforce. So, what are the consequences?
Certain jobs that seem necessary now will become obsolete – what will happen to the people doing them? If they become redundant, the system will need to find them new jobs or enable them to re-qualify in a timely manner. There are, however, many positive aspects to this, such as the development of new professions and industries. In a sense, even though some jobs will disappear, new ones will emerge to replace them. Of course, there will always be jobs that machines are not sophisticated enough to perform and that will always need a “human touch”. On the other hand, when it comes to working in dangerous environments, machines clearly have the advantage.
Further, from an ethical viewpoint, the subject of discrimination in the relations between humans and their machine interviewers is quite interesting, especially in the recruitment process. Could the input “fed” to the programme about what makes an acceptable candidate accidentally produce discriminatory outcomes? Artificial intelligence still requires humans to operate it and feed it information, so can a machine really be more objective than a human?
Other legal repercussions of the development of AI are quite substantial. If, for one reason or another, a machine fails to perform adequately, who is to blame? Traditionally, the party held responsible would be the user or the owner. Now, perhaps the one to blame is the developer who built the AI?
Due diligence software
The legal profession is no exception to these developments. A growing number of international law firms use AI to conduct due diligence exercises, partially or in toto.
For example, in the most limited use of software for due diligence, the programme is given the contents of a virtual data room and, through a combination of keywords, it eliminates unnecessary or irrelevant documentation. The next phase in more developed DD software involves the programme processing the relevant documentation by recognising certain word combinations or pieces of information and pairing them with its own legal terminology, based on the laws and rulings it was fed. The AI can also compare information, fill in parts of the report or assist in some other way.
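To make the most limited use described above concrete, the sketch below filters data-room documents by keyword so that only potentially relevant ones reach a lawyer. It is a minimal illustration, not a description of any particular vendor’s tool; the folder name, file format and keyword list are hypothetical placeholders.

```python
# Minimal keyword triage of a (hypothetical) virtual data room.
# The folder path, file format and keyword list are placeholders.
from pathlib import Path

KEYWORDS = {"change of control", "termination", "indemnity", "governing law"}

def is_relevant(text: str) -> bool:
    """Keep a document if it mentions any of the target clauses."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

def triage(data_room: Path) -> list[Path]:
    """Return the documents that deserve a lawyer's attention."""
    return [
        doc for doc in data_room.glob("**/*.txt")
        if is_relevant(doc.read_text(errors="ignore"))
    ]

if __name__ == "__main__":
    for doc in triage(Path("virtual_data_room")):
        print(doc)
```

The more developed tools described above go further than this, matching recognised clauses to legal concepts and drafting parts of the report, but the basic triage step works on the same principle.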
Since most clients these days request a fast turnaround and a due diligence increasingly needs to be conducted within a few days, international law firms decide to use this kind of software to speed things up and reduce costs; it makes things a lot easier, since long and arduous work can sometimes be resolved in hours.
Online dispute resolution and AI judges
A topic which excites considerable discussion is the development of online dispute resolution. In cases where there is no need for hearings and disputes are handled through written submissions, current technology allows the parties’ submissions simply to be uploaded and read, after which a ruling can be reached without the parties appearing physically before the authorities. Most courts are not yet advanced enough to use this method, but there is a tendency to implement online submissions in more flexible forms of arbitration or mediation. Technically, it is a very small step from filing submissions in hard copy to uploading them.
This brings us to perhaps the most controversial application of AI in the legal industry – software passing judgements and rulings. The majority of lawyers would say that this is the last area they would expect to be handled by software. Most would say that it requires a sense of justice – a characteristically human concept.
However, it may come as a surprise that software of this kind is currently being tested by some judges and professionals in the EU. Most notably, this option is being considered for collective lawsuits, where numerous claims arise from the same factual background and judges are, in practice, copy-pasting their decisions. AI can do this faster and more efficiently, saving judges’ time for more complex decision-making. This appears to mark only the beginning of a silent AI revolution in the legal segment, since we may not be that far away from AI making decisions in other typical cases as well.
An artificial intelligence judge has accurately predicted most verdicts of the European Court of Human Rights. Scientists built an AI that was able to look at the legal evidence and the ethical questions involved and decide how a case should be resolved – and, according to its creators, it predicted the outcomes with 79 per cent accuracy. The algorithm looked at data sets made up of 584 cases relating to torture and degrading treatment, fair trials and privacy. The computer was able to look through that information and reach its own decision, which lined up with those made by Europe’s most senior judges in the large majority of cases.
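Stripped of the headlines, this is supervised text classification: a model is trained on the text of past judgments labelled with their outcomes and is then asked to predict the outcomes of unseen cases. The sketch below shows the general idea using bag-of-words features and a linear classifier; the case texts and labels are hypothetical placeholders, and the actual study’s features and model may well differ.

```python
# Verdict prediction as supervised text classification (general idea only).
# The case texts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# "Facts" sections of past judgments, labelled 1 for a finding of violation.
case_texts = [
    "The applicant was held in an overcrowded cell without ventilation.",
    "The search of the applicant's home was authorised by a court order.",
    "The applicant was denied access to a lawyer during questioning.",
    "The proceedings were concluded within a reasonable time.",
]
violation = [1, 0, 1, 0]

# Word n-gram features feeding a linear support vector classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(case_texts, violation)

# Predict the outcome of an unseen (hypothetical) case.
print(model.predict(["The applicant was detained for months without charge."]))
```

On a realistically sized corpus, accuracy would be estimated by testing the model on cases it has never seen, rather than by a single prediction as in this toy example.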
Paradigm shift
Would such a use of AI make law more neutral and unbiased? Is justice dealt out by a machine more objective than human justice? Of course, this raises the question of whether we want law to be objective in that way, or whether we want to look at the specific circumstances of each unique case from a human perspective. Then again, some would argue against AI objectivity by pointing out that humans feed it information, so it can never reach an ultimately objective approach.
The idea is that we still need to feed the software with court rulings, precedent law and other information from which it would develop syllogisms, reach decisions and propose further actions. So, while inputting that information, certain discriminatory ideas could be included accidentally, allowing for discriminatory rulings. According to some researchers, for example, Republican judges are more inclined towards stricter punishments than their more liberal counterparts. Also, according to simple statistics, judges reach harsher verdicts before lunch than after. And, as much as judges try to be objective, there are indications that courts are generally more forgiving towards women, or more inclined to provide them with certain rights – especially mothers. So, despite our attempts at neutrality, given a large enough sample, certain patterns emerge that the AI can pick up and continue in its decision-making process.
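As a toy demonstration of that point, the sketch below trains a simple model on synthetic “historical rulings” that were deliberately nudged by an irrelevant circumstance – whether the case was decided before lunch. The data, features and threshold are entirely made up; the point is only that the model assigns weight to the irrelevant feature because the pattern is present in the data it learns from.

```python
# Toy demonstration: a model trained on biased rulings learns the bias.
# All data here are synthetic; the "before lunch" effect is deliberately
# built into the historical decisions to mimic the pattern described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
severity = rng.uniform(0, 1, n)        # seriousness of the offence
before_lunch = rng.integers(0, 2, n)   # legally irrelevant circumstance

# Historical rulings: driven mostly by severity, but nudged by timing.
harsh = (severity + 0.3 * before_lunch + rng.normal(0, 0.1, n)) > 0.8

features = np.column_stack([severity, before_lunch])
model = LogisticRegression().fit(features, harsh)

# A non-zero weight here means the model has picked up the irrelevant pattern
# and will carry it forward into its own "rulings".
print("weight on 'before lunch':", round(model.coef_[0][1], 2))
```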
So, will machine justice be more just than
human justice? And if so, would we want such a thing?
The AI judge in our example was wrong in around 20 per cent of rulings, but its logic and conclusions were mostly correct and similar to those of its human counterparts. Taking that into consideration, and given the cost-effectiveness of software compared to human lawyers, the final word could be left to the clients: whether to hire lawyers or to opt for the more affordable solution, even though it is not ideally just.
Conclusion
Until recently, the legal profession was much the same as it was two thousand years ago. Since the times of Ancient Rome, we have had the prosecution, the defence and arbiters who consider the arguments and then pass judgement.
We are now witnessing a paradigm shift – the ideas of law and justice are being reconsidered. How we adapt to emerging technologies, new client expectations and industry trends will determine the place of lawyers in this brave new world.