Are we ready to accept an AI judge?

September 24, 2018

Nowadays we live in an automated world full of algorithms that shape our actions and affect our decisions through the predictive models of Artificial Intelligence (AI). We might feel fine being “helped” to decide what to buy or where to go, but if we were suddenly judged by a robot in court, we might think twice.

 


Artificial intelligence is now present in many aspects of our lives. For the last decade, machines have been running algorithms that make decisions for us about what we should buy, eat, listen to, watch, or even date.

The term Artificial Intelligence (AI) was first coined in 1956 and can be understood as the ability of computers to process large amounts of data and recognize patterns in order to perform human-like tasks through automated reasoning (Goodnight n.pag).

Most of the time we voluntarily use AI in our daily lives to help us make “better decisions” by recommending different options. But what happens if, instead of nudging us toward the best decision, it starts deciding whether or not we are guilty of a crime? It sounds like a stolen idea for a Black Mirror script, and like many of those episodes, it raises ethical questions that we cannot ignore.

In 2016, researchers from University College London, the University of Pennsylvania, and the University of Sheffield developed software that acts like a real judge: it analyses legal texts and reaches its own decision. According to The Guardian, the AI “judge” predicted the verdicts of the European Court of Human Rights with 79% accuracy across 584 cases involving torture, degrading treatment, and privacy (Johnston n.pag).

This might come as no surprise to Reed C. Lawlor, who in 1963 predicted that computer algorithms would be used to decide verdicts: “If in fact judges and courts are consistent in the decisions that they render, their behaviour can be described in mathematical form and incorporated in computers that can then be used to predict future decisions of the same judges or courts with a high degree of reliability” (341).


How does the algorithm work?

The main purpose of the system is to extract patterns that correlate with specific outcomes in order to identify potential violations of a particular Article of the European Convention on Human Rights. It does this by analysing the textual evidence, the relevant applicable law, and the arguments presented by both parties involved (Aletras et al. 2).
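To make this concrete, here is a minimal sketch of the kind of text-classification pipeline the study describes: n-gram features extracted from the text of a case, fed to a linear Support Vector Machine that learns which patterns correlate with violation or non-violation outcomes. The toy case texts and the scikit-learn tooling are illustrative assumptions, not the authors’ actual code (their features also included topics drawn from specific sections of each judgment).

```python
# Illustrative sketch (not the authors' code): classify case texts as
# violation (1) or non-violation (0) from n-gram features, in the spirit
# of the SVM-based approach the paper describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy corpus: the text of each case and the court's outcome.
cases = [
    "the applicant was detained for months without access to a lawyer",
    "the search of the premises was carried out under a valid warrant",
]
outcomes = [1, 0]  # 1 = violation of the Article, 0 = no violation

# Bag-of-n-grams features (unigrams to trigrams) feeding a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(cases, outcomes)

# Predict the outcome of an unseen case from its text alone.
print(model.predict(["the applicant alleges degrading treatment in custody"]))
```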

The following formula was used to measure the system’s accuracy over the 584 cases analysed (10):

Accuracy = (TV + TNV) / (V + NV),

where TV and TNV are the numbers of cases correctly classified as violation and non-violation respectively, and V and NV are the total numbers of violation and non-violation cases, respectively, according to the judges.
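As a quick back-of-the-envelope check, with hypothetical counts chosen only to match the reported 79% figure over 584 cases:

```python
def accuracy(tv: int, tnv: int, v: int, nv: int) -> float:
    """(TV + TNV) / (V + NV): correctly classified cases over all cases."""
    return (tv + tnv) / (v + nv)

# Hypothetical split: 230 of 292 violation cases and 231 of 292
# non-violation cases classified correctly -> 461 / 584.
print(accuracy(tv=230, tnv=231, v=292, nv=292))  # ~0.789, i.e. about 79%
```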

The main finding of the study is that the judges’ decisions are strongly correlated with the non-legal elements (the facts) of the cases (11). This means that when judges decide hard cases they tend to weigh the circumstances of the case more than the legal ones (i.e., the laws), which bears on the debate between legal formalism and legal realism. The former holds that “legal reasoning should determine all specific actions required by the law based only on objective facts” (Leiter 1144), while the latter holds that decisions “fall into patterns correlated with the underlying factual scenarios” (1148).

The ethical paradox

From this research we might conclude that the software used to analyse these court cases takes into consideration external facts that vary from case to case, rather than sticking strictly to the law and making a more objective and fair decision.

The researchers also found that some of the cases analysed by the AI judge were misclassified because violation and non-violation cases shared certain similarities (14). This lack of a clear mapping between specific words or topics and the violation or non-violation outcome directly affects the decisions the AI judge makes. To approach an “objective position”, this classification would have to be clarified across all cases, so that the system has access to a larger history of patterns and can therefore make more representative decisions.

The fact that experts are developing AI to aim for a more objective and neutral perspective, in order to make unbiased decisions, can be described as a paradox: because the algorithm is trained on data that represents humans, that data is also a direct reflection of human biases and bigotries (Corder n.pag).

Transparency issues

When considering the use of AI in complicated legal cases, another negative aspect arises: what happens when the parties involved want to appeal a verdict?

Even though AI tries, in some ways, to simulate the human brain (Velik 2), a neural network presents unknowns: what the neurons between the input and output layers do during the mapping process is largely opaque (Yildirim and Beachell n.pag).

Since a neural network creates connections on its own that cannot always be explained by humans, it may be difficult to explain the software’s decision-making process (Tashea n.pag). In this sense, algorithmic processes are at a disadvantage compared to traditional decision-making, where humans can articulate their rationale when necessary, limited only by their willingness to give an explanation and the questioner’s capacity to understand it (Mittelstadt 7).

Even though an AI judge might become a valuable tool for highlighting which cases are most likely to involve violations (Griffin n.pag), the ethical and transparency issues will need to be carefully considered. Furthermore, judges might also have to look for a second job: by 2036, more than 100,000 jobs in the legal sector are expected to be automated (“Deloitte Insight: Over 100,000 Legal Roles to be Automated” 2016).

 

References

 

Aletras, Nikolaos, et al. “Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective.” PeerJ Computer Science 2: e93 (2016): 1-19. September 2018. https://peerj.com/articles/cs-93/

Corder, D. “Ethical Algorithms: How to Make Moral Machine Learning.” Medium. 15 January 2018. 18 September 2018. https://medium.com/qdivision/ethical-algorithms-how-to-make-moral-machine-learning-e686a8ad5793

“Deloitte Insight: Over 100,000 Legal Roles to be Automated.” Legal Technology. 2016. 18 September 2018. https://www.legaltechnology.com/latest-news/deloitte-insight-100000-legal-roles-to-be-automated/

Griffin, A. “Robot Judges Could Soon Be Helping with Court Cases.” Independent. 2016. 19 September 2018. https://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-judge-robot-european-court-of-human-rights-law-verdicts-artificial-intelligence-a7377351.html

Goodnight, Jim. “Artificial Intelligence: What It Is and Why It Matters.” SAS: The Power to Know. n.d. SAS Institute Inc. 22 September 2018. https://www.sas.com/en_id/insights/analytics/what-is-artificial-intelligence.html

Johnston, Chris. “Artificial Intelligence ‘Judge’ Developed by UCL Computer Scientists.” The Guardian. 24 October 2016. https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists

Lawlor, Reed C. “What Computers Can Do: Analysis and Prediction of Judicial Decisions.” American Bar Association Journal 49.4 (April 1963): 337-344. 20 September 2018.

Leiter, B. “Review: Positivism, Formalism, Realism.” Columbia Law Review 99.4 (May 1999): 1138-1164. 20 September 2018. https://www.jstor.org/stable/1123484

Mittelstadt, B. D., et al. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 1-21. 19 September 2018. http://journals.sagepub.com/doi/pdf/10.1177/2053951716679679

Tashea, J. “Courts Are Using AI to Sentence Criminals. That Must Stop Now.” Wired. 2017. 22 September 2018. https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/

Velik, R. “AI Reloaded: Objectives, Potentials, and Challenges of the Novel Field of Brain-Like Artificial Intelligence.” BRAIN: Broad Research in Artificial Intelligence and Neuroscience 3.3 (October 2012): 25-54. 18 September 2018. https://www.researchgate.net/publication/236130507_AI_Reloaded_Objectives_Potentials_and_Challenges_of_the_Novel_Field_of_Brain-Like_Artificial_Intelligence

Yildirim, S., and Beachell, R. “Does the Human Brain Have Algorithms?” IC-AI. 2006. Semantic Scholar. 19 September 2018. https://pdfs.semanticscholar.org/16cf/a47cb22e99d1cd190e2538343f8024f9a0d4.pdf
