Challenges of Digitalisation in the Judicial System

  

Introduction

British researchers have published a study examining an algorithm that can predict the outcome of rulings by the European Court of Human Rights (Aletras et al., 2016). The forecasts made by the automated system coincided with the actual outcomes of almost 600 judgments of the European Court of Human Rights in 80% of cases.
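
In studies of this kind, outcome prediction is treated as text classification over the language of the case. The following minimal Python sketch illustrates that general idea, broadly in the spirit of the n-gram-plus-linear-classifier setup reported by Aletras et al. (2016); the inline "cases", their labels, and the test sentence are invented placeholders, not real judgments.

```python
# Outcome prediction as text classification: n-gram features from
# case text feed a linear model that outputs a violation probability.
# All "cases" below are invented placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

cases = [
    ("applicant detained without judicial review for months", 1),
    ("proceedings concluded within a reasonable time", 0),
    ("no effective remedy available against the detention", 1),
    ("tribunal was independent and impartial throughout", 0),
]
texts, labels = zip(*cases)

vectoriser = TfidfVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vectoriser.fit_transform(texts)
model = LogisticRegression().fit(X, list(labels))

new_case = ["applicant had no remedy and was detained without review"]
prob = model.predict_proba(vectoriser.transform(new_case))[0, 1]
print(f"predicted probability of a violation: {prob:.2f}")
```

Such a model matches surface patterns in the text; as discussed below, it predicts outcomes without producing anything resembling legal reasoning.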

Artificial intelligence has evolved across many industries around the world, and attempts have been made to develop algorithms tailored to the judicial system. There are also many examples of the implementation of algorithms in judicial systems in Europe. In the United Kingdom, the judiciary is developing an automated online tool for small claims in civil matters, known as Online Dispute Resolution.

In the United States, the prediction algorithm COMPAS is used primarily in the criminal justice field, by state courts and probation services. The system helps to determine whether a person is eligible for a suspended sentence or whether a custodial measure is still required. The United States is also developing a method of predicting judicial decisions based on the weight of evidence, in order to determine whether prejudice exists (the Classification module). This forms the basis for launching a debate on the introduction of artificial intelligence into the judicial system, including in Latvia.

Initially, there is a need to determine how to classify artificial intelligence in the judicial system. Two types of algorithms can be developed: algorithms that support decision-making, or "supporting algorithms", and algorithms that make decisions themselves, or "decision algorithms".

Supporting algorithms aim to assist the decision-making process by facilitating the judicial process and improving its efficiency. In contrast, decision algorithms would aim to provide an automated court system. It is this latter system that attracts criticism, calling into question the ability of artificial intelligence to respect fundamental human rights.

Artificial Intelligence as an Auxiliary Tool for the Judicial Process

The European Convention on Human Rights (ECHR) provides for the right to a fair trial in Article 6(1):

“In the determination of his civil rights and obligations or of any criminal charge against him, everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal established by law.”

The Charter of Fundamental Rights of the European Union provides for the right to an effective remedy and to a fair trial in Article 47:

“Everyone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy before a tribunal in compliance with the conditions laid down in this Article. Everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal previously established by law. Everyone shall have the possibility of being advised, defended and represented.”

These requirements can be supported by the use of algorithms, granting the courts more tools to resolve litigation. Artificial intelligence could be used as an auxiliary tool for the judicial process through pre-procedural assistance, by supporting judges in examining the admissibility of an action, or by assisting judges during the judicial process itself.

Pre-procedural assistance allows litigants to evaluate the merits of their action by entering information about their situation into the algorithm. Based on a large amount of judicial information (legislation, case law), the algorithm can indicate the probability of winning the case. This information, presented through statistics and graphs, promotes greater legal certainty: litigants can assess whether it is worthwhile to go before a judge or whether an alternative dispute resolution process would be more suitable. It also supports the reasonable-time requirement: by anticipating a potential decision before bringing an action, litigants can help limit the congestion of the courts.
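
A hedged sketch of such a pre-procedural estimate is given below: it computes the success rate among comparable precedents together with a rough confidence margin. The retrieval step and the outcome data are assumptions made for illustration, not a description of any deployed tool.

```python
# Pre-procedural assistance sketch: estimate the chance of success
# from the outcomes of comparable past cases. The list of matching
# precedents is invented; a real tool would retrieve them from a
# case-law database.
from statistics import NormalDist

def win_probability(outcomes: list[int]) -> tuple[float, float]:
    """Return (success-rate estimate, 95% margin of error)."""
    n = len(outcomes)
    p = sum(outcomes) / n
    z = NormalDist().inv_cdf(0.975)        # ~1.96
    margin = z * (p * (1 - p) / n) ** 0.5  # normal approximation
    return p, margin

# 1 = the litigant's side prevailed in the precedent, 0 = it lost
matching_precedents = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
p, m = win_probability(matching_precedents)
print(f"estimated chance of success: {p:.0%} ± {m:.0%}")
```

Even at this toy scale, the wide margin of error shows why such figures are guidance for the litigant rather than a prediction of the court's decision.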

In examining the admissibility of an action, algorithms can help judges assess limitation periods or consider whether a party may benefit from legal aid during the trial. The European Court of Human Rights has already considered this option and concluded that an examination procedure assessing the chances of a potential appeal is acceptable under the ECHR (ECHR, 28 January 2003, 34763/02, Burg and Others v. France). Moreover, a State can also reject an application for legal aid on the ground that the claim lacks a legal basis (ECHR, 26 February 2002, 46800/99, Del Sol v. France). Given these possibilities afforded to States to limit access to a court, algorithms can promote legal certainty and the good administration of justice (ECHR, Grand Chamber, 3 December 2009, 8917/05, Kart v. Turkey). Algorithms can also assist judges in their case-law research by giving them an easier way to find similar cases. Thus, with regard to the reasonable-time requirement and the ability to respond efficiently to a claim, they appear to benefit both parties and judges.
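
As an illustration of the research-assistance use, the sketch below ranks past judgments by textual similarity to a pending case using TF-IDF vectors and cosine similarity, a standard retrieval technique; the miniature corpus is invented, and real systems are of course far richer.

```python
# Case-law retrieval sketch: rank past judgments by textual
# similarity to the pending case. Corpus entries are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "dismissal of employee during sick leave, compensation claimed",
    "contract for sale of goods, late delivery, damages sought",
    "unfair dismissal, procedural violations by the employer",
]
pending_case = "employee dismissed while on sick leave seeks damages"

vectoriser = TfidfVectorizer()
vectors = vectoriser.fit_transform(corpus + [pending_case])

# the last row is the pending case; compare it with every past judgment
scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. (similarity {scores[idx]:.2f}) {corpus[idx]}")
```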

Artificial Intelligence as a Significant Risk for the Judicial Process

When analysing the nature of decision-making algorithms in terms of their potential functionality and impact on the judiciary, a number of potential risks can be identified. The existence of these risks is a major concern for the introduction of decision-making algorithms into the judicial system. At the moment, there is no certainty that definite solutions to these risks exist. Therefore, in the author's opinion, it is important to outline them and to open a further discussion among legal scholars, judges and experts.

At first sight, algorithms might seem to strengthen objectivity and legal certainty by relying on objective and reliable criteria (ECHR, 30 July 2015, 30123/10, Ferreira Santos Pardal v. Portugal). Nevertheless, algorithms remain a mathematical translation of human language, and this translation, performed through human intervention, can be criticised with regard to its objectivity and neutrality. Moreover, in collecting information on case law, the machine does not consider the reasoning of the judges but only the facts and the final decision. The data are also treated identically regardless of which court the case law comes from, although the origin of a decision has an impact on its weight. This lack of precision, and the subjectivity of algorithms, can therefore be questioned in the light of the neutrality required to guarantee a fair trial.

There is also a risk to the independence of judges (Article 6(1) ECHR; Article 47 ECFR; ECHR, 6 October 2011, 23465/03, Agrokompleks v. Ukraine). Judges can be influenced by algorithms in their decisions; in fact, there is a risk of "performativity" ("performativité", Garapon, 2017). The distance between presenting case law and deciding a case is slight; by proposing a solution based on case law and thereby influencing the judge's decision, the algorithm would call into question the courts' independence of choice, leading to a uniformisation of jurisprudence and preventing any evolution of case law.

The risk to the impartiality of judges is worth mentioning as well. Parties cannot be subjected to an arbitrary decision (ECHR, 27 May 2010, 18811/02, Hohenzollern v. Romania). By consulting the algorithm, however, judges risk forming an opinion about the case before the hearings, which would endanger their impartiality. The logic of control and comparison that algorithms establish could put serious pressure on judges' decisions. The motivation requirement would limit this risk by obliging judges to base their decision on the particular facts and on the arguments of the parties to the case (ECHR, 28 June 2007, 76240/01, Wagner and J.M.W.L. v. Luxembourg).

An automated decision-making process is also questionable with regard to the right to an effective remedy. Algorithms that help determine the admissibility of a case can lead to access to a court being refused on an unclear basis (Article 13 ECHR; Article 47 ECFR). The risk of a biased algorithm that systematically discriminates against one group in society must also be considered: decisions produced by an algorithm trained, for example, on police data can create discrimination (Article 14 ECHR; Article 21 ECFR).
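
One simple way such bias can be surfaced is to compare the rate of adverse outcomes across groups. The sketch below applies a demographic-parity style check using the common "four-fifths" disparity threshold; the records, the groups and the threshold are illustrative assumptions, not a legal test of discrimination.

```python
# Bias audit sketch: compare adverse-decision rates across groups.
# Records and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

# (group, adverse_decision) pairs, invented for the example
decisions = [("A", 1), ("A", 0), ("A", 1), ("A", 1),
             ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

totals, adverse = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    adverse[group] += outcome

rates = {g: adverse[g] / totals[g] for g in totals}
print("adverse-decision rate per group:", rates)

ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:  # "four-fifths rule" style threshold
    print(f"disparity ratio {ratio:.2f}: possible systematic bias")
```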

Algorithms cannot respect the requirement that judicial decisions be motivated, that is, supported by a statement of reasons. This lack of motivation is revealed by three elements:

The decision proposed by an algorithm merely gives the answer to the question asked (guilty or not, for example) and not the reasoning leading to that decision. There is a lack of motivation in the decision itself.

The lack of motivation is also observed in the process by which an algorithm produces a decision. The algorithm considers only the facts of the case, the law, and the final positions taken in past decisions, without considering the motivation of those past decisions. Motivation is thus not treated as underlying data on which to build a decision, because it is not analysed at any point in the decision-making process. In fact, the algorithm is limited to recognising keywords in past cases and reapplying the past decisions that correspond to the current situation; a minimal sketch of this mechanical process follows this list. Such a process does not allow an algorithm to weigh the facts subjectively or to produce its own reasoning. In short, this absence of personal motivation means that algorithms do not stimulate the evolution of case law. Such evolution is possible with a human, and therefore subjective, perspective that takes the social context into consideration, whereas algorithms in courts are limited to the mechanical application of the law and the mechanical use of case law, a system that leaves no margin for subjective consideration compared with human judges.

The objective of algorithms is only to reproduce behaviour, not to reproduce reasoning. The aim is to produce a decision on a statistical basis without any justification or statement of reasons. This point is linked to the previous one: because the process does not consider the motivation in the case law, the answer given cannot contain any motivation or justification. The process itself is flawed by a global lack of motivation, creating a mechanical process that leads to a mechanical decision. Motivation exists neither in the process nor in the decision itself.
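
To make this criticism concrete, the sketch below mimics the mechanical process just described: the most similar past case, measured by keyword overlap, is found and its outcome is simply copied, with no statement of reasons produced at any point. The cases, keywords and outcomes are invented for the example.

```python
# Mechanical reapplication of precedent: copy the outcome of the
# past case sharing the most keywords. No reasoning is produced.
past_cases = [
    ({"theft", "first", "offence", "minor", "value"}, "suspended sentence"),
    ({"theft", "repeat", "offence", "violence"}, "custodial sentence"),
]

def decide(case_keywords: set[str]) -> str:
    # pick the precedent with the largest keyword overlap
    best = max(past_cases, key=lambda prec: len(case_keywords & prec[0]))
    return best[1]  # the verdict is copied; no motivation accompanies it

print(decide({"theft", "repeat", "offence"}))  # -> custodial sentence
```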

It should also be borne in mind that artificial intelligence bears no responsibility. It can therefore be asked whether the relationship of trust between a person and a judge can be transferred to artificial intelligence without the above guarantees. The lack of accountability of artificial intelligence can be a reason to question the legitimacy of its decisions. In addition, such a system could lead to the creation of an anarchic society, disrupting the foundations of the social contract between the state and the individual.

Additionally, the question arises whether a robot-judge can be defined as a tribunal. The ECHR defined a tribunal in its judgment of 22 June 2000, 32492/96, Coëme v. Belgium, point 99:

A tribunal “is characterised in the substantive sense of the term by its judicial function, that is to say determining matters within its competence on the basis of rules of law and after proceedings conducted in a prescribed manner”. It must also satisfy a series of other conditions, including the independence of its members and the length of their terms of office, impartiality and the existence of procedural safeguards.

This definition becomes even more interesting in light of the ambition to create a robot-judge replacing human judges. Under the definition of a tribunal cited above, such a robot-judge would not satisfy the criteria set by the ECHR, especially the motivation requirement.

Therefore, judicial decisions in which artificial intelligence replaces a judge should be made only in standardised judicial proceedings where a judge may review the AI's decision. This correlates with guaranteeing the independence of the judge. Artificial intelligence works with data and makes decisions based on algorithms and probability theory, and the systems powering it are programmed by information technology specialists. The debate remains over who should take responsibility for the AI's decisions. In particular, if the algorithm is not transparent and accessible, it would be rather difficult to ensure that users are able to control the choices made or to review a decision issued solely by artificial intelligence.

Meanwhile, very little attention is paid to the possible use of artificial intelligence in private international law, where the software should be able to orient itself in different legal systems and avoid situations where preference is systematically given to the law of one state over the law of other states. Even where an artificial intelligence tool is used in a supportive capacity, the objectivity of relying on the data it produces remains questionable if there is no possibility of verifying the algorithm of the programme providing the decision.

Artificial Intelligence and Ethical Aspects

With the introduction of artificial intelligence tools in the judicial system, the question of ethics is very important and requires deeper research. The ethical aspects of artificial intelligence are closely linked with the right to a fair trial and have become topical not only in the judicial system but across the entire spectrum of industries and social activities. As a result of the rapid development of industries where artificial intelligence is used, not only the right to a fair trial but also the rights to privacy and equal treatment have become important.

The advantages of artificial intelligence tools in improving the efficiency of judicial systems have been widely discussed, but it is also important to be aware that high technologies require specific knowledge and that the algorithms chosen for processing information still create significant risks for the respect of fundamental rights in courts' decision-making processes.

Paragraph 1 of Article 17 of the Law on Judicial Power of Latvia states the obligation of the court, when examining any case, to establish the objective truth. All procedural laws governing courts' decision-making in different branches of law similarly include the principle that the court shall assess evidence according to its own convictions, based on evidence that has been thoroughly, completely and objectively examined, and according to judicial consciousness grounded in the principles of logic, scientific findings and principles of justice (Article 154 of the Administrative Procedure Law, Article 97 of the Civil Procedure Law, Article 94 of the Law on Administrative Liability).

Judicial consciousness and the principle of justice are tightly linked to moral and ethical aspects. These are reasoning categories which artificial intelligence does not possess in such a form. Society is not homogeneous, and each individual guides their actions not only by the prescriptions included in written legal norms but also by adjusting their actions to particular legal situations. The behaviour of an individual is influenced by generally accepted moral norms and other values characteristic of a particular society. However, each personality has a different level of judicial consciousness, which can be influenced by the perception of written legal norms and also by the denial of norms commonly accepted in society.

Therefore, even in standard cases, aspects of an individual approach emerge between the judge and the parties to the case, which help the judge to establish the objective truth. The ability to understand the merits of the case and to hear and assess the considerations of the parties in depth is one of the key professional competencies of a judge. Artificial intelligence lacks these competencies, yet they are very important for ensuring justice in the courts' decision-making process. It is essential that the party that has lost the case can accept the outcome, because this builds trust in the judicial power. That trust is largely formed by how the court has motivated its judgment and indicated the considerations to which preference was given, and also by how communication with society was carried out to explain the essence of the judgment. Consequently, the question arises whether a judgment made by artificial intelligence will be perceived as fair in all cases. Fairness as a philosophical category should be distinguished from the judicial category of access to a fair trial. However, a strict and technical application of laws cannot always lead to the establishment of the objective truth.

Scientists developing automated data processing systems have so far relied on the assumption that the analysis of court judgments can be a sufficient basis for elaborating algorithms that reliably predict the outcome of a case (Aletras et al., 2016). Namely, artificial intelligence software presumes that reliable prediction of judges' activity depends on a scientific understanding of the ways in which the law and the facts affect the relevant decision-making. However, the question whether it will be possible to integrate the basic principles of the natural law school into the conclusions of artificial intelligence remains open. In most cases, current algorithms are based on the basic principles of legal positivism. It can therefore be challenging for artificial intelligence software to depart from the written legal norms enacted by the state and to base the outcome of the case on general principles of law. Such situations can arise even in apparently standard cases, for example, in cases of parental responsibility (the best interests of the child as a primary consideration), in cases concerning social rights (the concept of human dignity), and others. In a significant number of cases, in order to achieve a result that is fair and corresponds to judicial consciousness, the court must also evaluate irrational and emotional aspects during the proceedings.

In 2018, the European Commission for the Efficiency of Justice (CEPEJ) adopted the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (Council of Europe, 2018).

Acknowledging the increasing importance of artificial intelligence in modern societies and the expected benefits when it is fully used at the service of the efficiency and quality of justice, the CEPEJ adopted five fundamental principles:

Principle of respect for fundamental rights: ensure that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights;

Principle of non-discrimination: specifically prevent the development or intensification of any discrimination between individuals or groups of individuals;

Principle of quality and security: with regard to the processing of judicial decisions and data, use certified sources and intangible data with models elaborated in a multi-disciplinary manner, in a secure technological environment;

Principle of transparency, impartiality and fairness: make data processing methods accessible and understandable, authorise external audits;

Principle “under user control”: preclude a prescriptive approach and ensure that users are informed actors and in control of the choices made.

Regarding respect for fundamental rights, it is admissible to use artificial intelligence tools for resolving disputes or for assistance in judicial decision-making only when this does not undermine the individual's right to a fair trial, especially the principle of equality of arms and the right to be heard. The methods used to develop artificial intelligence tools should also not reproduce or aggravate discrimination. Therefore, particular care must be taken in both the development and deployment phases of software whose processing is based on sensitive data. At the same time, designers of machine learning models should draw widely on the expertise and knowledge of relevant justice system professionals and of researchers in law and the social sciences, including sociology and philosophy.

Currently, there are cases in practice which show that it is too early to rely completely on conclusions drawn by artificial intelligence tools (Contini & Lanzara, 2017). For example, in the United Kingdom, the financial capacity of spouses in maintenance proceedings was determined by artificial intelligence software to enable the courts to decide on the amount of maintenance. The spouses were required to fill in a form regarding their income, and as a result of a mistake in the calculation system that went unnoticed, incorrect income calculations were made in several thousand cases. Debts that the spouses had indicated were added to their assets instead of being deducted, and incorrect court decisions on maintenance amounts were taken as a result.
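
The reported failure amounts to a sign error in a simple formula. Since the actual form and its code are not public, the following minimal sketch, with invented figures, only illustrates the kind of mistake described:

```python
# The disposable-assets figure should subtract debts; the faulty
# version adds them, inflating the apparent financial capacity.
# All figures are invented for the example.
def assets_correct(income: float, debts: float) -> float:
    return income - debts  # debts reduce financial capacity

def assets_faulty(income: float, debts: float) -> float:
    return income + debts  # the reported bug: debts added instead

income, debts = 30_000.0, 12_000.0
print(assets_correct(income, debts))  # 18000.0
print(assets_faulty(income, debts))   # 42000.0 -> maintenance set too high
```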

Therefore, artificial intelligence tools can be viewed as providing support to the courts in decision-making, and they are most efficient when a large amount of information needs to be structured. Nevertheless, their drawing of conclusions and provision of assessments remains very questionable.

Another negative example follows from the analysis of an artificial intelligence system that uses ethnicity data to profile people and "predict" future criminality. In the Netherlands, the Top 600 list includes the persons considered most at risk of committing high-impact crime. One in three people included in the Top 600 list is of Moroccan descent and is followed and harassed by the police (Fair Trials, n.d.).

Discrimination has a long history, and the threat of a breach of fundamental rights can also exist on digital platforms. Without careful analysis of the data provided to machine learning systems, inequality can become part of the logic of everyday algorithmic systems. The ability of AI to produce useful outputs therefore depends on the quality of the data.

Another case was pointed out by journalists in 2016, when it was revealed that Amazon's same-day delivery service was unavailable in postal codes covering predominantly black neighbourhoods. Amazon promised to close the gaps, but the case reminds us how systemic inequality can be produced by machine intelligence (Crawford, 2016).

The notion that existing artificial intelligence systems produce their results by engaging in a synthetic computer cognition that matches or surpasses human-level thinking is incorrect. In reality, modern artificial intelligence systems are not reasoning tools with intellectual abilities (Surden et al., 2014). Therefore, artificial intelligence, at its current stage of development, cannot fully protect fundamental rights. Artificial intelligence tools are able to analyse structured information based on facts. However, when applying legal norms that require taking into consideration the principle of equality, the principle of proportionality or the principle of the reasonable application of legal provisions, the judge draws on personal convictions and life experience. Artificial intelligence lacks such aspects of reasoning and information processing.

In the light of scientific progress, artificial intelligence should become a support for a country's legal system, and it should not come into contradiction with fundamental rights, the principles of a democratic state, or the rule of law. Therefore, science faces many challenges in ensuring that artificial intelligence is responsible, fair, traceable, trustworthy and controllable.

On 21 April 2021, the European Commission unveiled a Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (Artificial Intelligence Act). This Proposal for a Regulation is the result of the European Commission's political commitment to put forward legislation for a coordinated European approach to the human and ethical implications of artificial intelligence (COM(2021) 206 final, 2021/0106 (COD)). After announcing its political commitments, the European Commission had already published the White Paper on Artificial Intelligence – A European Approach to Excellence and Trust (COM(2020) 65 final) on 19 February 2020. The White Paper sets out policy options on how to achieve the twin objective of promoting the uptake of artificial intelligence and addressing the risks associated with certain uses of such technology. Following the publication of the White Paper, the Commission launched a broad stakeholder consultation, which was met with great interest by a large number of stakeholders, who were largely supportive of regulatory intervention to address the challenges and concerns raised by the increasing use of artificial intelligence.

Therefore, taking all this into account, the Proposal for a Regulation aims to implement the White Paper's second objective, i.e., to develop an ecosystem of trust by proposing a legal framework for trustworthy artificial intelligence. The Proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace solutions based on artificial intelligence, while encouraging businesses to develop them.

Conclusions

Artificial intelligence in the judicial system could be considered an advantage. However, its functioning would be permissible only within a clear legal framework, in order to prevent any possible infringement of fundamental rights, in particular the right to a fair trial.

Artificial intelligence, at its current stage of development, cannot fully protect fundamental rights. It lacks such aspects of reasoning and information processing as consideration of the principle of equality, the principle of proportionality and the principle of the reasonable application of legal provisions.