
23.04.2019 Cristiana Gonçalves Correia (Alumna ML)

Traditional Practitioners and the Last Crusade (or the impacts of Artificial Intelligence in International Arbitration)

Much has been said and written in the last couple of years about Artificial Intelligence and its possible impact on legal practice. Experts seem, however, far from reaching a consensus on the effects that the so-called “next industrial revolution” may have on the legal business. Will lawyers be replaced by robots? What about arbitrators? Are traditional practitioners about to go on their Last Crusade?

The term “Artificial Intelligence” is said to have been coined in 1956 when a computer scientist named John McCarthy invited a group of researchers with different expertise to study and discuss the existing conceptions around “thinking machines”. The study was to be carried out “on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”[1].

Modern definitions emphasise that Artificial Intelligence is a sub-field of computer science that focuses on how machines can imitate human intelligence and perform tasks that would normally require it.

Richard and Daniel Susskind have identified some of the most common biases that hamper the discussion around the impacts of automation on legal practice (and, thus, also on arbitration practice)[2]. We shall highlight three: the “status quo bias”, “technological myopia” and the “AI fallacy”. One error practitioners often make is to list characteristics of their work that are not compatible with automation, usually by reference to complex cases (the “status quo bias”). Another common mistake is to look at existing systems and conclude that only a small percentage of a lawyer’s job can ever be automated, thus underestimating the potential of emerging technologies (“technological myopia”). Finally, it is also a common blunder to assume that the only way to develop systems that perform lawyers’ tasks (at least at a higher level, i.e., senior lawyers’ tasks) is to replicate the thinking processes of human beings and, accordingly, to argue that computers cannot replace human beings because they are unable to think or feel (the “AI fallacy”).

The question of whether or not robots may lay waste to the legal profession as it exists, with its canons and specific challenges, cannot be answered in the abstract. It is common ground among experts that some tasks are more likely than others to be (correctly) performed by machines[3]. Document review and legal research, for instance, can easily be undertaken by a machine, most likely with better results (in terms of time and cost efficiency, and also accuracy). Drafting legal pleadings and cross-examining witnesses, on the contrary, are less likely to be performed through IT applications and software, at least in the near future. In other words, lawyers’ tasks that involve the processing of information and that are “routine” — normally completed by junior lawyers and paralegals — are the easiest to automate, which may have a great impact on the training and professional development of younger practitioners. Administrative secretaries, usually entrusted with supporting activities such as drafting documents of an organisational nature (procedural orders, the procedural timetable, minutes of the hearings, and so forth), maintaining the case file and forwarding documents on behalf of the arbitral tribunal, are also more likely to be displaced by computers than arbitrators themselves.

However, as many have pointed out, the discussions should rather focus on how cutting-edge technology may change (and is already changing) the work of lawyers and arbitrators and on whether or not the public is ready and willing to welcome such change[4]. Obviously, even if a particular task can be automated, this does not necessarily mean that the clients will be comfortable with that form of automation and, therefore, that it will succeed in the marketplace. In fact, clients may be happy to accept risk assessments being performed by machines, based on the existing case law on the matter in a certain country, but will they agree to have their disputes settled by robots whose decisions may lack emotional processing?

Unsurprisingly, among the challenges that may arise from the use of new technologies in arbitration, especially in the decision-making process, some have identified issues of due process (usually understood as the right to a fair trial, which includes the right to be treated equally, the right to a reasonable opportunity to assert or defend one’s rights and the right to have a decision rendered by an independent and impartial judge or arbitrator within a reasonable timeframe[5])[6]. Since robots’ capacity to render decisions that combine the strict application of the law with considerations of justice and ethics is yet to be proven, the conformity of a non-human decision with due process is, at the very least, arguable. In reality, human features such as sensitivity, empathy and emotion are often as important to a fair trial as the technical expertise and experience of the decision-maker. In other words, overthinking may, in general, cost users extra money and time, but it is normally key to a better award[7]. Also, machines would have to be programmed to be able to decide, possibly through the processing of great amounts of judicial decisions and arbitral awards rendered in similar cases. But is this compatible with the requirements of independence and impartiality of the decision-maker? And what if robots become more cognitively capable than humans in the next few years[8]?

Hence, it is too early to pronounce that lawyers and arbitrators will be replaced by robots. However, it is time for lawyers and arbitrators to change their attitude with a view to maximising the use of technology. We, too, tend to believe that human beings will remain at the centre of legal practice for many years, although probably with different garments and instruments. That is to say: cyborg lawyers[9] and arbitrators (i.e., human experts whose abilities are extended by technology), rather than robot lawyers and decision-makers, are taking over.



[1] Bernard Marr, “The Key Definitions Of Artificial Intelligence (AI) That Explain Its Importance”, Forbes (14.02.2018).

[2] The Future of the Professions: How Technology will transform the Work of Human Experts, 2015, pp. 43-45.

[3] Dana Remus / Frank Levy, “Can Robots be Lawyers: Computers, Lawyers, and the Practice of Law”, The Georgetown Journal Of Legal Ethics, Vol. 30:501, 2017, pp. 501 and ff.

[4] Jason Borenstein, “Robots and the changing workforce”, AI & Soc (2011) 26:87–93, pp. 87 and ff.

[5] See, for instance, Articles 1 and 3 of the ALI / UNIDROIT Principles of Transnational Civil Procedure, available at https://www.unidroit.org/instruments/transnational-civil-procedure.

[6] See the hoganlovells.com interview with Winston Maxwell, Laurent Gouiffès and Gauthier Vannieuwenhuyse, “The future of arbitration: New technologies are making a big impact — and AI robots may take on “human” roles” (21.02.2018), available at https://www.hoganlovells.com/publications/the-future-of-arbitration-ai-robots-may-take-on-human-roles.

[7] José María de la Jara, Daniela Palma, Alejandra Infantes, “Machine Arbitrator: Are We Ready?”, in Kluwer Arbitration Blog (04.05.2017).

[8] The question was raised by Allan Smith in 2014 – see “Mankind Is Getting Ready To Turn Over Most Decisions To Robots”, Business Insider (03.08.2014).

[9] Q&A: Max Paterson (Settify principal and CEO), Law Management Hub (30.08.2017), available at http://www.lmhub.com.au/qa-max-paterson-we-believe-that-expert-human-lawyers-will-always-be-at-the-centre-of-legal-practice/.

[Text originally published in Lisbon Arbitration]