Edited by
Monika Zalnieriute, Law Institute of the Lithuanian Centre for Social Sciences, Agne Limante, Law Institute of the Lithuanian Centre for Social Sciences
Failure to deliver a fair trial within a reasonable time is the most common violation found by the European Court of Human Rights (ECtHR) as almost half of all its judgments include a violation of Article 6. If the ECtHR were subject to its own jurisdiction, however, it, too, would be in violation of Article 6 in a sizable portion of its judgments. Therefore, both reports by the Court itself and academic literature have urged the Court to increase digitalisation and employ new technologies, including AI, in its procedures. Historically, the Court has employed an ambivalent approach to new technology, incorporating it in its caseload management, but insisting on the use of fax and physical mail in its communications with applicants. There are indicators, such as allowing electronic applications from Ukraine due to the suspension of physical mail during the war with Russia, that the Court may be abandoning this ambivalence. This chapter accounts for the current and potential use of AI at the ECtHR in each of the steps in its adjudication, evaluating the potential of existing AI technologies and the risks involved, considering the procedures and divisions of labour at the ECtHR.
By 2024, collaboration between Japan’s government and the private sector had deepened to promote IT integration into the judiciary. In civil litigation, legal reforms have driven progress, and AI-supported legal tech is streamlining time-consuming tasks. Academia is also developing AI-based legal reasoning tools. However, criminal trials remain largely untouched by AI, due to Japan’s conservative legal culture and the judges’ reliance on precedent. Public expectations for fairness coexist with concerns over AI’s lack of empathy. The issue is especially sensitive in the context of Japan’s death penalty system. Japan now faces a critical juncture in balancing innovation and tradition in its judicial use of AI.
The challenges courts face in meeting the demand for justice in the digital age have increased considerably over the last thirty years. Courts have always been under the spotlight as the traditional institutional mechanism for protecting rights and ensuring the rule of law, but they are increasingly confronted with limited resources and expertise and an overwhelming judicial workload. Political decision-makers have seen digitalisation and automation as an opportunity to devise new strategies and tools that ease judicial activity. This chapter argues that the increasing digitalisation of justice has produced two constitutional trends: an increasing internalisation of AI and digital technologies into the judicial field, and an externalisation of judicial functions to private actors and administrative authorities that also deploy AI technologies. Both internalisation and externalisation raise constitutional challenges for judicial activity, touching the core of digital constitutionalism: the protection of rights and the limits of power in the digital age.
This chapter explores the transformative impact of the digital sphere and artificial intelligence (AI) on environmental communication. The rise of digital platforms has significantly influenced how environmental issues are communicated, promoting awareness, fostering engagement and mobilising action. The first part of the chapter discusses the role of social media and influencers in shaping environmental discourses and collective identities. The second part examines the opportunities and challenges posed by AI, highlighting its potential to analyse large datasets and personalise engagement while also addressing issues pertaining to reliability and the spread of AI-enabled misinformation and fake news. The environmental costs associated with AI technologies, such as high energy and resource consumption, are also explored. The chapter underscores the dual nature of digital technologies, emphasising the need for critical engagement to ensure that technological innovations support environmental justice without exacerbating existing problems.
Accountability is a foundational judicial value and a tenet of the rule of law. Drawing on contemporary examples from the UK, EU, USA, Latin America, Taiwan, and China, this chapter examines how artificial intelligence (AI) is being used to assist judicial decision-making at varying stages – ranging from case-sorting tools and legal research aids to fully automated ‘smart courts’. By categorising these judicial uses by level of AI intervention, the chapter interrogates two common claims: (1) that greater AI involvement increases threats to judicial accountability, and (2) that judicial oversight ensures such accountability is preserved. Contrary to these common claims, we argue that accountability is compromised at all levels of AI integration. This occurs because AI systems: (1) obscure transparency and open justice; (2) erode judicial independence and reasoning by amplifying cognitive biases; and (3) hinder appellate review, thus limiting opportunities to contest decisions. While governments often assert that judicial supervision and discretion are sufficient safeguards, the chapter argues that such protections are increasingly ineffective amid pervasive and elusive AI systems.
AI applications are increasingly deployed in the judiciary for a wide array of tasks, denoted as ‘judicial AI’. The implications for the legal system are vast. In this chapter, I focus on the effects of judicial AI on the rule of law, given the judiciary’s essential role in safeguarding this value. After examining what is meant by the rule of law, three sets of questions guide my analysis. First, how does the turn from text-driven to code- and data-driven legal interpretation affect the nature of law? Is there a risk that instead of fostering the rule of law, this leads to algorithmic rule by law? Second, since AI applications are designed by human beings, delegating judicial tasks to AI implies a delegation to the coders developing it. To what extent can this result in a rule of coders? And finally, what impact does judicial AI have on the separation of powers, given that the executive and legislative branches of power control the judiciary’s resources? Can it undermine the judiciary’s ability to check and balance the other branches of power? The answers to these questions force me to conclude that many concerns must be addressed prior to judicial AI’s wide-scale adoption.
Judicial independence is an essential part of democracies, based on the division of powers, the rule of law, and respect for fundamental rights. In its most simplified version, judicial independence relies on freedom from (and resilience to) the external and internal influences and pressures to which courts as institutions and judges as justice professionals are constantly subject. Introducing AI into the judicial system could affect judicial independence from a wide spectrum of angles: judicial independence can be compromised and shaped by AI systems, in particular if these systems have been developed by the private sector and/or designed by the legislative or executive powers. Furthermore, AI systems can exert this influence in much less perceptible ways that are difficult to detect and complicated to prove, for instance through the experts that courts rely on when a case requires specific knowledge or expertise. This chapter focuses on identifying these threats and addressing them in a constructive and solution-oriented manner, without compromising the potential of AI for the justice system.
Integrating algorithmic tools into judicial systems prompts critical questions on public trust, due process, and fairness, alongside inherent risks of the pursuit of ‘technical fix’. In response to growing demands for transparency and consistency, Taiwan has introduced algorithmic and AI-powered sentencing tools, representing significant steps toward reforming sentencing practices and improving judicial accountability. However, their implementation has encountered formidable challenges, including low adoption rates, judicial misunderstandings, algorithmic biases, and insufficient regulatory frameworks. This chapter explores these issues within Taiwan’s historical and legal context, providing an in-depth analysis of empirical data and judicial practices. By situating Taiwan’s experience within the global discourse on AI in judicial systems, the chapter illuminates the complexities of integrating AI into a civil law tradition while striving to maintain judicial independence. Taiwan’s approach offers insights for jurisdictions worldwide, contributing to broader discussions on leveraging AI to enhance justice without compromising foundational legal principles and values.
This chapter focuses on the legal framework for the use of AI in courts in Croatia and Slovenia, which results from their legal traditions as well as their membership in the Council of Europe and the EU. It also aims to discuss AI systems, either operational or in development, in both countries, and to evaluate their impact on fundamental rights and ethics. The findings demonstrate that while both countries experience a slow but gradual introduction of AI initiatives, in Slovenia this is happening without pre-existing or rigorous regulatory oversight.
This chapter examines how artificial intelligence (AI) can address inefficiencies in India’s judicial system, focusing on Protection of Children from Sexual Offences (POCSO) cases. Analysis of 220,000 cases reveals significant regional disparities in processing and outcomes, reflecting broader systemic challenges. Despite digital infrastructure investments, we identify a disconnect between data collection and data-driven decision-making. We propose an AI-powered dashboard to provide real-time case tracking, identify bottlenecks, and improve resource allocation. While implementation faces challenges related to data quality and privacy, successful deployment could serve as a model for judicial reform in India and globally.
Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This chapter argues that answers to this question have been excessively focused on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.
Artificial intelligence (AI) has started to make its way into Spanish court practice, especially in criminal justice. This trend has been accompanied by two new regulations: the EU AI Act, the world’s first comprehensive law on the topic, and the Spanish Royal Decree-Law No. 6 of 2023. At present, several AI-based tools are already used by Spanish courts, and their application proves highly beneficial, in particular in certain areas of criminal justice. Nevertheless, AI use can pose serious problems arising from conflicts with the fundamental rights of the accused. Therefore, its use should be considered with great caution.
Artificial intelligence (AI) increasingly intersects with judicial processes, raising new challenges for courts and judges. One significant concern linked to this development is the ability of judges and court personnel to understand, evaluate, and critically engage with AI systems. The EU Artificial Intelligence Act adopted in 2024 addresses this directly, requiring public bodies using AI to ensure their staff possess a ‘sufficient level of AI literacy’. This chapter argues that enhancing AI competence among judges and court personnel is essential to safeguarding the right to a fair trial, legal certainty, and the rule of law in an increasingly digitalised legal environment. After providing a brief overview of AI literacy obligations in the EU AI Act, the chapter offers insights into how national judicial training institutions could integrate AI literacy into their curricula.
The integration of artificial intelligence (AI) into judicial decision-making presents both opportunities and challenges, particularly in balancing legal certainty and judicial discretion. While AI-driven tools are designed to enhance consistency and efficiency, their growing influence may subtly reshape judicial reasoning, potentially narrowing judicial discretion. This chapter examines how algorithmic recommendations – rather than merely assisting adjudication – can become dominant reference points, steering judicial outcomes toward standardisation over case-specific interpretation. Drawing on empirical psychological research, behavioural law and economics, and the works of Richard Posner, Aharon Barak, and other legal theorists, the chapter explores the psychological mechanisms underlying this shift, particularly the phenomena known as ‘automation bias’ and the ‘anchoring effect’, which may unconsciously influence judicial decision-making. The analysis highlights these psychological and structural challenges, inviting reflection on how AI-driven legal certainty affects judicial discretion and the space for individualised legal reasoning in modern adjudication.
This chapter focuses on AI and its impact on transparency in judicial decision-making. Transparency is one of the core values of the rule of law, and essential for maintaining the trust and accountability of the judiciary and the justice system as a whole. Drawing upon semi-structured expert interviews with members of the judiciary and the legal profession, case law, and real-life examples of AI tools, the chapter considers four questions: why transparency matters in the context of judicial decision-making; the information that judges must have and communicate to satisfy the demands of transparency; whether they have access to this information; and, if not, what we might do about this deficit. We argue that two complementary solutions can strengthen judicial transparency in the age of AI: (1) a regulatory framework that mandates disclosure of specific information pertaining to the code and variables used in AI tools; and (2) robust use of the due process duty to provide adequate reasons for a judicial decision that depends upon the output of a predictive tool. These steps are essential to reconcile judicial use of AI with the need for transparency, as a foundational aspect of justice and the rule of law.
Say an AI program passes a Turing test because it can converse in a way indistinguishable from a human. And say that its developers can then teach it to converse – and even present an extended persuasive argument – in a way indistinguishable from the sort of human we call a ‘lawyer’. The program could thus become an AI brief-writer, capable of regularly winning brief-writing competitions against human lawyers. If and when that happens, this chapter argues, the same technology can be used to create AI judges, judges that we should accept as no less reliable than human judges, and more cost-effective. If the software can create persuasive opinions, capable of regularly winning opinion-writing competitions against human judges, we should accept it as a judge, even if the opinions do not stem from human judgment.
This chapter provides a comprehensive overview of the history and developments of AI in courts. In particular, through the lens of legal informatics, we explore four phases in the development and evolution of AI in courts: judicial information retrieval, human-made models of judicial reasoning, machine learning for judicial prediction, and large language models for courts. For each of these, we explore the opportunities and challenges in their implementation and adoption within the judicial system.
The judiciary must reflect the diversity of the population it serves to ensure justice is both impartial and perceived as fair. This chapter examines how AI in courts influences judicial diversity and legitimacy. While AI can uncover unconscious biases and enhance case analysis, judicial diversity remains essential to prevent AI from reinforcing existing prejudices. The chapter also explores identity awareness and institutional legitimacy. Like other democratic institutions, courts using AI must uphold representativeness. AI can foster collaborative constitutionalism by incorporating diverse perspectives in constitutional debates, helping address concerns about judicial legitimacy when unelected judges overturn decisions by elected representatives. Finally, the chapter considers how judges’ engagement with AI-driven social media affects transparency and public trust. As these technologies shape perceptions of the judiciary, they must be carefully managed to support judicial diversity and legitimacy. This is particularly important for judges from diverse backgrounds, who face greater risks of digital harassment, potentially undermining institutional trust and judicial integrity.
This chapter introduces the EU AI Act and examines how it will apply to artificial intelligence (AI) used by judicial authorities. The chapter gives an overview of key regulatory concepts of the EU AI Act and discusses its risk classification, particularly which AI systems used by judicial authorities would fall under the category of high-risk AI systems and which other provisions could be relevant for the use of AI by judicial authorities. The chapter investigates practical examples of how the provisions are expected to apply in practice and which obligations follow for judicial authorities, including which exemptions could apply. The author also provides context and rationale for the relevant provisions and their evolution during the legislative process.
This chapter examines the adoption of artificial intelligence (AI) tools and digital solutions within the Estonian justice system, highlighting its pioneering approach to digital transformation following independence in 1991. The initial section explores the factors behind Estonia’s rapid digital transition, emphasising the centralisation of support services and the development of a unified public-sector digital infrastructure that has significantly influenced court operations. The Estonian judiciary employs integrated information and communications technology systems that utilise shared data storage, enabling efficient digital and remote court proceedings. These advantages were particularly evident during the Covid-19 pandemic. While AI and machine-driven decisions are restricted to support functions, excluding substantive judicial roles (no ‘robo-judges’), efforts are ongoing to enhance data-driven practices and automation in court proceedings. However, a key legal challenge lies in aligning digital court processes with the constitutional mandate for public justice.