Edited by Monika Zalnieriute, Law Institute of the Lithuanian Centre for Social Sciences, and Agne Limante, Law Institute of the Lithuanian Centre for Social Sciences
By 2024, collaboration between Japan’s government and the private sector had deepened to promote IT integration into the judiciary. In civil litigation, legal reforms have driven progress, and AI-supported legal tech is streamlining time-consuming tasks. Academia is also developing AI-based legal reasoning tools. However, criminal trials remain largely untouched by AI, due to Japan’s conservative legal culture and the judges’ reliance on precedent. Public expectations for fairness coexist with concerns over AI’s lack of empathy. The issue is especially sensitive in the context of Japan’s death penalty system. Japan now faces a critical juncture in balancing innovation and tradition in its judicial use of AI.
The challenges courts face in dealing with the demand for justice in the digital age have increased considerably in the last thirty years. Courts have always been under the spotlight as the traditional institutional mechanism for protecting rights and ensuring the rule of law, but they have been increasingly confronted with limited resources and expertise and an overwhelming judicial workload. Digitalisation and automation have been seen by political decision-makers as an opportunity to devise new strategies and tools that ease judicial activity. This chapter argues that the increasing digitalisation of justice has produced two constitutional trends: an increasing internalisation of AI and digital technologies into the judicial field, and an externalisation of judicial functions to private actors and administrative authorities which also implement AI technologies. Both internalisation and externalisation raise constitutional challenges for judicial activities, touching the core of digital constitutionalism, primarily the protection of rights and the limits of power in the digital age.
Accountability is a foundational judicial value and a tenet of the rule of law. Drawing on contemporary examples from the UK, EU, USA, Latin America, Taiwan, and China, this chapter examines how artificial intelligence (AI) is being used to assist judicial decision-making at varying stages – ranging from case-sorting tools and legal research aids to fully automated ‘smart courts’. By categorising these judicial uses by level of AI intervention, the chapter interrogates two common claims: (1) that greater AI involvement increases threats to judicial accountability, and (2) that judicial oversight ensures such accountability is preserved. Contrary to these common claims, we argue that accountability is compromised at all levels of AI integration. This occurs because AI systems: (1) undermine transparency and open justice; (2) erode judicial independence and reasoning by amplifying cognitive biases; and (3) hinder appellate review, thus limiting opportunities to contest decisions. While governments often assert that judicial supervision and discretion are sufficient safeguards, the chapter argues that such protections are increasingly ineffective amid pervasive and elusive AI systems.
The criminal justice system is becoming automated. At every stage, from policing to evidence to parole, AI tools and other technologies guide outcomes. Debates over the pros and cons of these technologies have overlooked a crucial issue: ownership. Developers often claim that details about how their tools work are trade secrets and refuse to disclose that information to criminal defendants or their attorneys. The introduction of intellectual property claims into the criminal justice system raises under-theorised tensions between life, liberty, and property interests. This chapter argues that trade secrets should not be privileged in criminal proceedings. A criminal trade secret privilege is ahistorical, harmful to defendants, and unnecessary to protect the interests of the secret holder. Meanwhile, compared to substantive trade secret law, the privilege overprotects intellectual property. Further, privileging trade secrets in criminal proceedings fails to serve the theoretical purpose of either trade secret law or privilege law. The trade secret inquiry sheds new light on how evidence rules do, and should, function differently in civil and criminal cases.
Judicial independence is an essential part of democracies, based on the division of powers, the rule of law, and respect for fundamental rights. In its most simplified version, judicial independence relies on freedom from (and resilience to) the external and internal influences and pressures that courts as institutions and judges as justice professionals are constantly subject to. Introducing AI into the judicial system could affect judicial independence from a wide range of angles: judicial independence can be compromised and shaped by AI systems, in particular if these systems have been developed by the private sector and/or designed by the legislative or executive powers. Furthermore, AI systems can do this in much less perceptible ways that are difficult to detect and complicated to prove, for instance through the experts that courts rely on when a case requires specific knowledge or expertise. This chapter focuses on identifying these threats and addressing them in a constructive and solution-oriented manner, without compromising the potential of AI for the justice system.
This chapter focuses on the legal framework for the use of AI in courts in Croatia and Slovenia, which results from their legal traditions as well as their membership in the Council of Europe and the EU. It also aims to discuss AI systems, either operational or in development, in both countries, and to evaluate their impact on fundamental rights and ethics. The findings demonstrate that while both countries are experiencing a slow but steady introduction of AI initiatives, in Slovenia this is happening without pre-existing or rigorous regulatory oversight.
Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This chapter argues that answers to this question have been excessively focused on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.
Artificial intelligence (AI) has started to make its way into Spanish court practice, especially in criminal justice. This trend has been accompanied by two new regulations: the EU AI Act, the world’s first comprehensive law on the topic, and the Spanish Royal Decree-Law No. 6 of 2023. Several AI-based tools are already used by Spanish courts, and their application has proved highly beneficial, in particular in certain areas of criminal justice. Nevertheless, the use of AI can pose serious problems, as it may conflict with various fundamental rights of the accused. Its use should therefore be approached with great caution.
Artificial intelligence (AI) increasingly intersects with judicial processes, raising new challenges for courts and judges. One significant concern linked to this development is the ability of judges and court personnel to understand, evaluate, and critically engage with AI systems. The EU Artificial Intelligence Act adopted in 2024 addresses this directly, requiring public bodies using AI to ensure their staff possess a ‘sufficient level of AI literacy’. This chapter argues that enhancing AI competence among judges and court personnel is essential to safeguarding the right to a fair trial, legal certainty, and the rule of law in an increasingly digitalised legal environment. After providing a brief overview of AI literacy obligations in the EU AI Act, the chapter offers insights into how national judicial training institutions could integrate AI literacy into their curricula.
This chapter focuses on AI and its impact on transparency in judicial decision-making. Transparency is one of the core values of the rule of law, and is essential for maintaining trust in, and the accountability of, the judiciary and the justice system as a whole. Drawing upon semi-structured expert interviews with members of the judiciary and the legal profession, case law, and real-life examples of AI tools, the chapter considers four questions: why transparency matters in the context of judicial decision-making; what information judges must have and communicate to satisfy the demands of transparency; whether they have access to this information; and, if not, what we might do about this deficit. We argue that two complementary solutions can strengthen judicial transparency in the age of AI: (1) a regulatory framework that mandates disclosure of specific information pertaining to the code and variables used in AI tools; and (2) robust use of the due process duty to provide adequate reasons for a judicial decision that depends upon the output of a predictive tool. These steps are essential to reconcile judicial use of AI with the need for transparency as a foundational aspect of justice and the rule of law.
Say an AI program passes a Turing test because it can converse in a way indistinguishable from a human. And say that its developers can then teach it to converse – and even present an extended persuasive argument – in a way indistinguishable from the sort of human we call a ‘lawyer’. The program could thus become an AI brief-writer, capable of regularly winning brief-writing competitions against human lawyers. If and when that happens, this chapter argues, the same technology can be used to create AI judges, judges that we should accept as no less reliable than human judges, and more cost-effective. If the software can create persuasive opinions, capable of regularly winning opinion-writing competitions against human judges, we should accept it as a judge, even if the opinions do not stem from human judgment.
The judiciary must reflect the diversity of the population it serves to ensure justice is both impartial and perceived as fair. This chapter examines how AI in courts influences judicial diversity and legitimacy. While AI can uncover unconscious biases and enhance case analysis, judicial diversity remains essential to prevent AI from reinforcing existing prejudices. The chapter also explores identity awareness and institutional legitimacy. Like other democratic institutions, courts using AI must uphold representativeness. AI can foster collaborative constitutionalism by incorporating diverse perspectives in constitutional debates, helping address concerns about judicial legitimacy when unelected judges overturn decisions by elected representatives. Finally, the chapter considers how judges’ engagement with AI-driven social media affects transparency and public trust. As these technologies shape perceptions of the judiciary, they must be carefully managed to support judicial diversity and legitimacy. This is particularly important for judges from diverse backgrounds, who face greater risks of digital harassment, potentially undermining institutional trust and judicial integrity.
This chapter examines the adoption of artificial intelligence (AI) tools and digital solutions within the Estonian justice system, highlighting its pioneering approach to digital transformation following independence in 1991. The initial section explores the factors behind Estonia’s rapid digital transition, emphasising the centralisation of support services and the development of a unified public-sector digital infrastructure that has significantly influenced court operations. The Estonian judiciary employs integrated information and communications technology systems that utilise shared data storage, enabling efficient digital and remote court proceedings. These advantages were particularly evident during the Covid-19 pandemic. While AI and machine-driven decisions are restricted to support functions, excluding substantive judicial roles (no ‘robo-judges’), efforts are ongoing to enhance data-driven practices and automation in court proceedings. However, a key legal challenge lies in aligning digital court processes with the constitutional mandate for public justice.
This concluding chapter affirms that the integration of AI into courts is no longer a question of if, but how. Courts, as constitutional institutions, face profound normative questions: how does AI affect transparency, impartiality, and public trust? While AI improves court operations in many jurisdictions, it also risks eroding judicial values and the rule of law. Global examples show diverse adoption paths, yet shared challenges, such as AI opacity, lack of judicial AI literacy, and accountability gaps, demand coordinated oversight. Ultimately, a human-centred approach to judicial AI is essential. Rather than rejecting AI or accepting it uncritically, the authors advocate a balanced path that preserves the human and interpretive role of judging.
Judicial systems, long considered the most tradition-bound of public institutions, are at a critical juncture. From Strasbourg to São Paulo, from Delhi to Wellington, courts around the globe confront a shared challenge: how to navigate justice in an era redefined by artificial intelligence (AI). The question is no longer whether courts will engage with AI – but how, and on whose terms. This Handbook is the first global and comparative volume that systematically examines the use of AI and digital technologies in courts. It provides an interdisciplinary and cross-jurisdictional perspective on how judicial institutions are responding to the opportunities and risks posed by AI – from e-filing systems and predictive tools to ‘robo-judging’ and AI-supported decision-making. The forty-five contributions of the Handbook are arranged across ‘Part I: AI and Courts: Context and Normative Positions’, ‘Part II: AI and Courts: Disciplinary Perspectives’, ‘Part III: AI & Tech Challenges to Judicial Values’, and ‘Part IV: AI in Courts across the Globe: Jurisdictional Perspectives’, with each part offering a distinct analytical lens on justice and judging in the age of AI. The Handbook examines not just what AI can do for courts, but also what courts must do to ensure AI enhances, rather than erodes, their fundamental role in democratic societies.
By design, the judiciary is meant to be independent from politics and thereby free from factional pressures. The power to review legislative, executive, and state government actions for constitutionality is essential to controlling abuses of power and democratic excesses that infringe on individual rights. While the federal courts have generally performed these responsibilities well, the politicization of judicial appointments, combined with liberal standing requirements and reliance on an assortment of balancing tests that require policy judgments, has invited factional pressures in the form of lawsuits. At the same time, a presumption of constitutionality has served to counter the Framers’ constraints on democratic excess and the abuse of power.
This chapter surveys developments related to the use of generative AI in courts in the United States. It discusses a range of current uses of generative AI by judges, lawyers, and ordinary citizens, and explains commonly cited concerns that these uses raise, such as worries about inaccuracy and bias, as well as newly emerging concerns. The chapter also surveys efforts to regulate these tools in the US, such as judicial bans and requirements of disclosure and certification.
Artificial intelligence (AI) systems are used in court to analyse legal data, cite case law, evaluate and generate evidence, or support judges with prediction. As technological advancements enter the courtroom, assessing their impact on core judicial values is crucial. This chapter asks whether AI undermines procedural fairness in judicial decision-making. To address this question, it first presents procedural fairness as a normative concept studied across different disciplines. It shows why procedural fairness matters and what values fair procedures aim to foster. Drawing on case studies and scholarly work, it then illustrates how AI systems may impair these values. It then investigates how regulatory attempts and ethical frameworks for AI in judicial systems aim to address the resulting issues by analysing fundamental principles of technology regulation. The main argument of the chapter is that AI regulation must be complemented by specific procedural rules tailored to the judicial domain. In the age of AI, fair procedures should realise participation, increase trust, preserve neutrality, and provide mechanisms to detect errors in AI systems.
Impartiality, broadly meaning the absence of bias and according equal treatment before the law, is a foundational element of judicial decision-making around the world. In this chapter, we consider how the goal of judicial impartiality may be either enhanced and supported or undermined by the use of artificial intelligence. Key developments in legal AI include innovations directed toward courts and decision-makers. These may be process-driven – for example, triaging or decision-supporting systems; in the case of pre-trial processes, judges may need to manage technology-facilitated document discovery. AI systems may also be involved in the production of evidence submitted to the court. Finally, courts and judges themselves may be the subjects of AI tools, such as those which identify patterns in decision-making. As this chapter explores, these different uses all have implications for the way that judicial impartiality is enacted and tested.
There is deep scepticism concerning the idea that AI should be used in the making of judicial decisions. There are normative risks, such as inaccuracy and a lack of explainability and accountability, and there are sociological risks to public trust in the judicial system. Prominent legal instruments such as the EU AI Act, the Vilnius Convention, and the General Data Protection Regulation (GDPR) seek to set some clear guardrails around the use of AI in judicial decision-making, but face two problems. First, they underappreciate the Collingridge dilemma, in which premature intervention risks over-regulation, while belated intervention risks under-regulation. Second, there is a misplaced faith in the power of legal obligations to provide sufficient (and enforceable) guidance. This chapter asks what model of governance should be adopted for the use of AI in courts. In doing so, it surveys the current status and evolution of AI technology in courts, examines how we should evaluate risks, and considers competing governance models. It argues that a model of anticipatory governance, often suited to long-term and complex problems, should be adopted, and discusses some of the implications of doing so.