Impact of artificial intelligence on effective judicial protection
Posted: 26 June, 2023 | Filed under: Jackeline Maribel Payé Salazar | Tags: access to justice, AI technology, AI-based programmes, algorithms, Artificial intelligence, automated court decisions, challenges, civil jurisdiction, COMPAS software, human rights, judge-robot systems, judicial protection, justice administration systems, predictable justice, regulatory frameworks
Author: Jackeline Maribel Payé Salazar
Lawyer
Introduction
Artificial intelligence (AI) makes a significant contribution to achieving timely and predictable justice. However, it is necessary to analyse the challenges that its use poses for the right to effective judicial protection. This right includes not only the right of people to access the courts and to obtain a judicial decision within a reasonable time, but also the right to obtain a duly motivated decision, meaning that judicial decisions must state the reasons on which they are based. In this sense, the author asks: is it possible to sufficiently guarantee the right to effective judicial protection if we use expert systems based on AI? What are the benefits of AI in the justice administration system? What is the “dark side” of AI? What are its limits, from the perspective of the right to effective judicial protection?
This analysis matters because academia cannot remain indifferent to the challenges posed by the incorporation of AI into the preparation of judicial decisions. Legal predictability systems and the emergence of judge-robot systems force us to reflect on the judicial guarantees of the justice administration systems of any country in the world.
AI in the administration of justice
At present, the justice administration systems of the African Union states do not yet have an AI-based programme that produces automated court decisions; however, significant steps have been taken in the use of AI-based programmes that analyse and identify specific information at a level that a human being could not. In Nigeria, Grace InfoTech Limited has created “Timi”, a chatbot that understands the rules of civil procedure in Lagos State. Likewise, in South Africa, Bowmans LP (which has a geographical spread across several eastern and southern African nations, including Kenya, South Africa, Tanzania and Uganda), in collaboration with the law firm Udo Udoma & Belo-Osagie, has adopted an AI solution known as “Kira”, which automatically identifies and extracts information from contracts. Bowmans is the first law firm in Africa to adopt AI technology.
However, in this paper we will specifically refer to those systems that are intended to be used by courts of law for the issuance of judicial decisions. For this we will first ask the following questions: Is the use of AI in the administration of justice advantageous? What are the benefits and risks of implementing predictive justice based on the use of AI in the administration of justice?
The benefits are by no means negligible, but the risks cannot go unnoticed either. Among the benefits are: i) greater information and transparency about the functioning of the justice system; ii) uniformity in the application of the law and the absence of biases that judges may have (reduction of the margin of arbitrariness); iii) labour savings and reduction of response time; iv) improvement of the link between the justice system, its component institutions and the citizenry, improving the level of access to justice.
The risks include the following: (i) petrification of the legal system and of jurisprudence; (ii) the appearance of biases in the algorithms (if the system is fed erroneous or limited data, both its analysis and the solution it proposes will be flawed); (iii) limited information handling (AI systems do not perceive information that is observable by the human eye); (iv) the defencelessness of litigants (judicial decisions issued by automated systems are not always reasoned in law); and (v) the detachment of justice from the concrete case (judicial decisions do not rest solely on a logical link, the legal syllogism, but also on the social context, the political situation, and the religious foundations and ethics of the judge).
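Risk (ii), bias inherited from flawed data, can be illustrated with a deliberately simple sketch (all figures and group labels here are hypothetical): a predictive system trained on historical records will faithfully reproduce any distortion in how those records were collected, rather than the underlying conduct.

```python
# Illustrative sketch with hypothetical data: a naive predictive system
# that scores defendants by group-level historical rates reproduces the
# bias present in its training data ("garbage in, garbage out").

# Hypothetical "historical" records: (group, reoffended). Group B is
# over-represented in this sample because of skewed data collection.
history = [("A", False)] * 80 + [("A", True)] * 20 + \
          [("B", False)] * 40 + [("B", True)] * 40

def learned_risk(group):
    """Naive model: predicted risk = observed reoffence rate for the group."""
    total = sum(1 for g, _ in history if g == group)
    reoffended = sum(1 for g, r in history if g == group and r)
    return reoffended / total

print(learned_risk("A"))  # 0.2
print(learned_risk("B"))  # 0.5: reflects how the data was gathered, not conduct
```

Whatever the system then "decides" about an individual from group B merely echoes the distortion in its input data, which is why the quality and representativeness of training data are a legal as much as a technical question.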

That said, it is pertinent to bring up a case in which the COMPAS system, a programme used in some states in the United States, was applied, and which later caused controversy. This is the case of Mr. Eric Loomis, who in 2013 was stopped by the Wisconsin State Police while driving a vehicle involved in a recent shooting. He was accused of fleeing from police and using a vehicle without the owner’s permission. Mr. Loomis pleaded guilty to both offences in the hope that he would be spared prison time. However, the prosecutor in the case presented a COMPAS report, which concluded that Mr. Loomis posed a “high risk to the community”. In view of this, the judge sentenced the defendant to six years’ imprisonment and a further five years’ probation. The defence appealed the sentence, claiming that the right to due process had been violated because it could not dispute the methods used by the COMPAS software, given that the algorithm was secret and known only to the company that had developed it.
In response, the Wisconsin Supreme Court dismissed the appeal, arguing that the software had relied only on the usual factors for measuring future criminal dangerousness, such as flight from the police and prior criminal history, and that the defendant’s right to due process was not violated merely because he did not have access to an adequate explanation of the computer processing of the algorithm.
In contrast to the criterion adopted by the Supreme Court of Wisconsin, we consider that the entire process carried out by an algorithm in the administration of justice must be open, auditable and traceable, and must not rely on opaque “black box” techniques such as deep learning, in order to guarantee the rights to defence, effective judicial protection and due process. It is also important to note that responsibility for any infringement of human rights must remain attributable to human beings. Herein lies the importance of adopting a normative framework regulating the use of AI in the administration of justice.
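What "open, auditable and traceable" means in practice can be sketched in a few lines (the factors and weights below are purely hypothetical, not those of any real system): every element that contributes to the score is recorded in an audit trail that the parties and the court can inspect and dispute, in contrast to a secret algorithm.

```python
# Minimal sketch of an auditable, traceable scoring rule with hypothetical
# factors and weights: each factor's contribution to the final score is
# logged, so the defence can see exactly how the result was reached.
from dataclasses import dataclass, field

@dataclass
class AuditableScore:
    trail: list = field(default_factory=list)  # full audit trail, open to the parties
    score: float = 0.0

    def add(self, factor, weight, present):
        # Record the factor, its weight, whether it applied, and its contribution.
        contribution = weight if present else 0.0
        self.score += contribution
        self.trail.append((factor, weight, present, contribution))

s = AuditableScore()
s.add("prior convictions", 2.0, True)
s.add("fled from police", 1.5, True)
s.add("stable employment", -1.0, False)
print(s.score)          # 3.5
for entry in s.trail:   # each recorded step can be challenged in court
    print(entry)
```

The point of the design is not the particular weights but the traceability: because every step is recorded, a litigant can contest any individual factor, which is precisely what the secrecy of the COMPAS algorithm prevented.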
AI and its impact on effective judicial protection
Although the benefits of AI systems mentioned in the previous section contribute to the realisation of effective judicial protection, insofar as they allow for prompt and predictable justice, their misuse can nonetheless lead to the violation of the rights to defence and to the due motivation of judicial decisions, both of which also fall within the scope of protection of effective judicial protection.
The inclusion of any AI tool in judicial proceedings, and specifically of judicial prediction techniques, must always be carried out with respect for the fundamental rights of the parties. C. San Miguel points out that the safeguarding of procedural rights and guarantees must take precedence over the efficiency of the administration of justice; otherwise we would place the parties in a procedural context that is essentially inquisitorial and incriminating, without any possibility of defence.
It is therefore necessary to regulate the use of AI in the justice administration system in a manner compatible with effective judicial protection. This means establishing regulatory parameters for the application of AI systems: (i) in which fields of law and in which phases of the judicial process these automated systems may be used; (ii) what role they will play in the process, whether merely assistive, supporting the judge, or decisive, and what the judge’s role will then be; (iii) how the rights of defence and the due motivation of decisions will be guaranteed; and (iv) who will assume responsibility for the errors that automated systems may make.
David Martínez, professor of Law and Political Science at the Universitat Oberta de Catalunya (UOC), pointed out in an article published in La Vanguardia that the use of AI in the administration of justice is viable in simple cases. For Mario Adaro, a judge of the Supreme Court of Justice of the province of Mendoza who applies Prometea on a daily basis, judgments in tax cases are serialised: “of large volume, where decisions are groupable in clear sets and everything is quite mechanical and predictable”. He pointed out that, by using AI for these types of cases, Prometea significantly reduces the number of errors in data loading, typing and redundancy.
In the same vein, some point out that the configuration of the admission phase of the contentious-administrative appeal offers an appropriate field for the successful use of automated systems that produce a decision on the admissibility or inadmissibility of the appeal, but that the decision phase is too complex for the use of such systems. Others point out that these systems cannot be used in the criminal or family courts, because, in addition to objective indicators, they contain a list of subjective rights that require interpretation by the judge. On the other hand, they indicate that AI could be applied in matters related to business law in which there are economic infringements or unfair competition, or in areas such as taxation (accounting cases), civil jurisdiction (debts, traffic fines, insurance companies) or trademarks and patents.
Consequently, it is of vital importance to delimit the scope of application of AI systems in the administration of justice, in order to avoid infringing procedural guarantees, such as effective judicial protection and the right to due process, and thus prevent AI systems from acting on their own, in a space where their results are not questioned or reviewed. Human rights must constitute an insurmountable limit to the application of automated systems.
Conclusions
The benefits of AI systems in the administration of justice are invaluable, as they contribute to judicial decongestion and to prompt and predictable justice, but this technology and its application in the delivery of justice must remain under human control. This necessitates the normative regulation of the use of AI in the justice system: AI cannot be a lawless zone. We need international and national regulatory frameworks that ensure that the use of AI benefits the delivery of justice without colliding with human rights. As the use of AI applications in the justice system is imminent, the African Union should consider adopting a normative instrument to guide member states and ensure the efficient delivery of justice while protecting human rights.
The use of automated systems in the administration of justice without an adequate normative delimitation may not be compatible with fundamental rights, such as the right to defence and to the due motivation of judicial decisions, which are essential elements of effective judicial protection.
Bibliography
- Aghemo, Raffaella. “The challenges of Artificial Intelligence systems in the Nigerian legal system”. DataDrivenInvestor. https://medium.datadriveninvestor.com/the-challenges-of-artificial-intelligence-systems-in-the-nigerian-legal-system-5bc8bb0a26bb
- Berchi, Mauro (4 March 2020). “La Inteligencia Artificial se asoma a la justicia, pero despierta dudas éticas”. El País. https://elpais.com/retina/2020/03/03/innovacion/1583236735_793682.html
- Castellanos Claramunt, Jorge and Montero Caro, María. 2020. “Perspectiva constitucional de las garantías de aplicación de la IA: la ineludible protección de los derechos fundamentales”. Revista Ius et Scientia, vol. 6, No. 2, pp. 72-82.
- Muñoz Rodríguez, Ana Belén. 2020. “El Impacto De La Inteligencia Artificial En El Proceso Penal”. Anuario de la Facultad de Derecho, Universidad de Extremadura, No. 36 (December), pp. 695-728. https://doi.org/10.17398/2695-7728.36.695
- San Miguel Caso, Cristina. 2021. “La aplicación de la Inteligencia Artificial en el proceso: ¿un nuevo reto para las garantías procesales?”. Revista Ius et Scientia, vol. 7, No. 1, pp. 286-303. https://revistascientificas.us.es/index.php/ies/article/view/16115/15131
- Suarez Manrique, Wilson and León Vargas, Georgina. 2020. “Inteligencia Artificial y su aplicación en la administración de justicia”. Revista Jurídica Mario Alario D´Filippo, vol. 11, No. 21, pp. 71-83.
- Ruibal Pereira, Luz. “El necesario debate sobre la utilización de la Inteligencia Artificial en la justicia contencioso-administrativa”. Administrativando Abogados, 21 May 2021. https://administrativando.es/el-necesario-debate-sobre-la-utilizacion-de-la-inteligencia-artificial-en-la-justicia-contencioso-administrativa/
About the Author:
Jackeline Payé is a lawyer at the General Secretariat of the National Elections Jury and a Master’s candidate in Constitutional Law and Human Rights at the National University of San Marcos. She holds a Master’s degree in Governance and Human Rights from the Autonomous University of Madrid and is a graduate of the National University of San Agustín.
