Painful as it is for a Remain campaigner like me to admit, the EU has always been dire when it comes to policies for supporting innovation and technology. Even more painfully, things have worsened over the past ten years. Longstanding structural weaknesses in EU innovation policy date from well before the Brexit referendum in 2016. The European Union had dismally failed to create a regulatory environment conducive to technological innovation. As I found whenever I visited Brussels as a No. 10 adviser under David Cameron, the policy instincts of European Commission officials were overwhelmingly rooted in market stability and risk avoidance – values that, while defensible in themselves, often produced unintended consequences for fast-moving sectors such as digital technology and life sciences. Take the EU’s data privacy rules, which were debated and developed for years before being finally implemented in 2018. As Cameron’s team repeatedly warned at the time, the compliance costs fell disproportionately on small and early-stage firms. Even before fines or litigation, the administrative burden for smaller organisations typically ran into tens of thousands of pounds.
Edited by
Monika Zalnieriute, Law Institute of the Lithuanian Centre for Social Sciences, and Agne Limante, Law Institute of the Lithuanian Centre for Social Sciences
The challenges courts face in dealing with the demand for justice in the digital age have increased considerably in the last thirty years. These actors have always been under the spotlight as the traditional institutional mechanism to protect rights and ensure the rule of law, but have been increasingly confronted with limited resources and expertise, and an overwhelming judicial workload. Digitalisation and automation have been seen by political decision-makers as an opportunity to devise new strategies and tools that ease judicial activity. This chapter argues that the increasing digitalisation of justice has resulted in two constitutional trends: an increasing internalisation of AI and digital technologies into the judicial field, and an externalisation of judicial functions to private actors and administrative authorities which also implement AI technologies. Both internalisation and externalisation raise constitutional challenges for judicial activities, touching the core of digital constitutionalism, primarily the protection of rights and the limits of power in the digital age.
Accountability is a foundational judicial value and a tenet of the rule of law. Drawing on contemporary examples from the UK, EU, USA, Latin America, Taiwan, and China, this chapter examines how artificial intelligence (AI) is being used to assist judicial decision-making at varying stages – ranging from case-sorting tools and legal research aids to fully automated ‘smart courts’. By categorising these judicial uses by level of AI intervention, the chapter interrogates two common claims: (1) that greater AI involvement increases threats to judicial accountability, and (2) that judicial oversight ensures such accountability is preserved. Contrary to these common claims, we argue that accountability is compromised at all levels of AI integration. This occurs because AI systems: (1) obscure transparency and open justice; (2) erode judicial independence and reasoning by amplifying cognitive biases; and (3) hinder appellate review, thus limiting opportunities to contest decisions. While governments often assert that judicial supervision and discretion are sufficient safeguards, the chapter argues that such protections are increasingly ineffective amid pervasive and elusive AI systems.
The criminal justice system is becoming automated. At every stage, from policing to evidence to parole, AI tools and other technologies guide outcomes. Debates over the pros and cons of these technologies have overlooked a crucial issue: ownership. Developers often claim that details about how their tools work are trade secrets and refuse to disclose that information to criminal defendants or their attorneys. The introduction of intellectual property claims into the criminal justice system raises under-theorised tensions between life, liberty, and property interests. This chapter argues that trade secrets should not be privileged in criminal proceedings. A criminal trade secret privilege is ahistorical, harmful to defendants, and unnecessary to protect the interests of the secret holder. Meanwhile, compared to substantive trade secret law, the privilege overprotects intellectual property. Further, privileging trade secrets in criminal proceedings fails to serve the theoretical purpose of either trade secret law or privilege law. The trade secret inquiry sheds new light on how evidence rules do, and should, function differently in civil and criminal cases.
Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This chapter argues that answers to this question have been excessively focused on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.
This chapter focuses on AI and its impact on transparency in judicial decision-making. Transparency is one of the core values of the rule of law, and essential for maintaining the trust and accountability of the judiciary and justice system as a whole. Drawing upon semi-structured expert interviews with members of judiciary and legal profession, case law and real-life examples of AI tools, the chapter considers four questions: why transparency matters in the context of judicial decision-making; the information that judges must have and communicate to satisfy the demands of transparency; whether they have access to this information; and, if not, what we might do about this deficit. We argue that two complementary solutions can strengthen judicial transparency in the age of AI: (1) a regulatory framework that mandates disclosure of specific information pertaining to the code and variables used in AI tools; and (2) robust use of the due process duty to provide adequate reasons for a judicial decision that depends upon the output of a predictive tool. These steps are essential to reconcile judicial use of AI with the need for transparency, as a foundational aspect of justice and rule of law.
Say an AI program passes a Turing test because it can converse in a way indistinguishable from a human. And say that its developers can then teach it to converse – and even present an extended persuasive argument – in a way indistinguishable from the sort of human we call a ‘lawyer’. The program could thus become an AI brief-writer, capable of regularly winning brief-writing competitions against human lawyers. If and when that happens, this chapter argues, the same technology can be used to create AI judges, judges that we should accept as no less reliable than human judges, and more cost-effective. If the software can create persuasive opinions, capable of regularly winning opinion-writing competitions against human judges, we should accept it as a judge, even if the opinions do not stem from human judgment.
The judiciary must reflect the diversity of the population it serves to ensure justice is both impartial and perceived as fair. This chapter examines how AI in courts influences judicial diversity and legitimacy. While AI can uncover unconscious biases and enhance case analysis, judicial diversity remains essential to prevent AI from reinforcing existing prejudices. The chapter also explores identity awareness and institutional legitimacy. Like other democratic institutions, courts using AI must uphold representativeness. AI can foster collaborative constitutionalism by incorporating diverse perspectives in constitutional debates, helping address concerns about judicial legitimacy when unelected judges overturn decisions by elected representatives. Finally, the chapter considers how judges’ engagement with AI-driven social media affects transparency and public trust. As these technologies shape perceptions of the judiciary, they must be carefully managed to support judicial diversity and legitimacy. This is particularly important for judges from diverse backgrounds, who face greater risks of digital harassment, potentially undermining institutional trust and judicial integrity.
This concluding chapter affirms that the integration of AI into courts is no longer a question of if, but how. Courts, as constitutional institutions, face profound normative questions: how does AI affect transparency, impartiality, and public trust? While AI improves court operations in many jurisdictions, it also risks eroding judicial values and the rule of law. Global examples show diverse adoption paths, yet shared challenges, such as AI opacity, lack of judicial AI literacy, and accountability gaps, demand coordinated oversight. Ultimately, a human-centred approach to judicial AI is essential. Rather than rejecting AI or accepting it uncritically, the authors advocate a balanced path that preserves the human and interpretive role of judging.
Judicial systems, long considered the most tradition-bound of public institutions, are at a critical juncture. From Strasbourg to São Paulo, from Delhi to Wellington, courts around the globe confront a shared challenge: how to navigate justice in an era redefined by artificial intelligence (AI). The question is no longer whether courts will engage with AI – but how, and on whose terms. This Handbook is the first global and comparative volume that systematically examines the use of AI and digital technologies in courts. It provides an interdisciplinary and cross-jurisdictional perspective on how judicial institutions are responding to the opportunities and risks posed by AI – from e-filing systems and predictive tools to ‘robo-judging’ and AI-supported decision-making. The forty-five contributions of the Handbook are arranged across ‘Part I: AI and Courts: Context and Normative Positions’, ‘Part II: AI and Courts: Disciplinary Perspectives’, ‘Part III: AI & Tech Challenges to Judicial Values’, and ‘Part IV: AI in Courts across the Globe: Jurisdictional Perspectives’, with each part offering a distinct analytical lens on justice and judging in the age of AI. The Handbook examines not just what AI can do for courts, but also what courts must do to ensure AI enhances, rather than erodes, their fundamental role in democratic societies.
In recent years, the Brazilian judiciary has announced over 100 initiatives using artificial intelligence (AI) systems, while a mounting number of judicial decisions shape how AI can be used in the country. The chapter analyses how AI impacts the Brazilian judiciary. First, we introduce the Brazilian legal system and explore a selection of AI initiatives to expose their benefits and shortcomings. Then we proceed to examine the policy for the usage of AI within courts in the country, focusing on the recent Resolution of the Brazilian National Council of Justice, aimed at regulating the judiciary’s production and use of AI systems. Last, we argue that the integration of AI within Brazil’s judiciary has the potential to enhance procedural efficiency and innovation. Yet, we emphasise that the guarantee of transparency, accountability, legal certainty, and digital sovereignty largely depends on the adoption and coherent implementation of a new AI Regulatory Framework and a new AI Strategy.
The evolution of AI presents both challenges and opportunities for courts. To date, most discussion and examination of AI and courts has focused on decision-making tools, reflecting a broader trend in discourse around courts that unduly centres on courts’ adjudicatory functions. Yet courts perform a far wider set of functions and societal roles. This chapter examines the current and potential uses of AI, questions of suitability and ethics, and the challenges and opportunities that arise through this broader consideration of what it is courts actually do, beyond determining disputes. While AI may enhance access to justice, reduce costs, save time, and potentially improve the quality of justice, significant challenges arise, including the potential erosion of judicial respect, inaccuracy, and concerns for the separation of powers. Crucially, court users and the public ought to be more widely consulted in how AI is developed and deployed for courts to achieve better, fairer, and more effective justice.
Impartiality, broadly meaning the absence of bias and according equal treatment before the law, is a foundational element of judicial decision-making around the world. In this chapter, we consider how the goal of judicial impartiality may be either enhanced and supported or undermined by the use of artificial intelligence. Key developments in legal AI include innovations directed toward courts and decision-makers. These may be process-driven – for example, triaging or decision-supporting systems; in the case of pre-trial processes, judges may need to manage technology-facilitated document discovery. AI systems may also be involved in the production of evidence submitted to the court. Finally, courts and judges themselves may be the subjects of AI tools, such as those which identify patterns in decision-making. As this chapter explores, these different uses all have implications for the way that judicial impartiality is enacted and tested.
This chapter explores the implications of AI for human judges through the lens of judicial professional competence. It draws on Australasian experience to make two universal arguments: to include competence on the front bench of judicial regulatory values, and to embed digital literacy in the definition and pursuit of judicial competence. There is a deep-rooted, but increasingly problematic, assumption in common law jurisdictions that judges emerge ready-made from the ranks of senior lawyers. The breadth and complexity of potential judicial engagement with AI poses a profound challenge to this assumption. Even in ‘career’ judiciaries, traditional markers of competence for judicial work do not reliably translate to competence for AI. While other dimensions of modern judicial competence, like cross-cultural skills, may be seen to raise similar concerns, AI-related risks and opportunities are proving unique in the speed at which they emerge and evolve. There is an urgent need for more open discussion about equipping future (and current) judicial cohorts to meet this challenge.
Research on judicial use of AI has mainly focused on general attitudes toward algorithmic decision-making, leaving open the question of how policy choices shape public perceptions of the courts. This chapter addresses this gap through a comparative analysis of judicial AI policies across four major jurisdictions: the EU, UK, US, and China. We identify three key dimensions along which these approaches differ: the choice between hard and soft law, transparency requirements, and restrictions on substantive versus administrative use. Drawing on insights from rational choice theory and behavioural economics, we analyse how each regulatory choice might influence public trust and legitimacy. Our analysis suggests that the effectiveness of different approaches likely depends on institutional fit, including the pre-existing legal culture, levels of trust in courts and technology, and broader societal attitudes toward automation. These findings help explain the emergence of divergent regulatory approaches across jurisdictions and offer insights for policy-makers seeking to maintain public confidence in the courts while integrating AI into judicial systems.
Technology has been the bedrock of human existence from time immemorial, as every aspect of human life depends on one form of technology or another. The desire to improve the quality of life has driven people to invent and innovate continually. The global economy has become a knowledge economy, and its bedrock is intellectual property and allied rights. A country’s realisation of its vision of industrialisation and attendant sustainable development depends on its efforts, at each era and stage of technology, to develop its frontiers of intellectual property to meet the dynamics of technological change. With increased technology and innovation employed in manufacturing, agriculture, and transportation comes environmental pollution and degradation. Many traditional societies in Nigeria fostered strong belief systems and social norms which encouraged or even enforced limits to the exploitation of biological resources. These traditional practices are being eroded by several factors. This chapter appraises intellectual property rights, traditional knowledge systems, and Islamic law perspectives on the protection and preservation of the environment.
Adjudication in the various courts in Nigeria has long struggled with case backlogs, slow legal service delivery, limited access to justice (particularly in rural areas), overburdened courts, and insufficient legal resources. However, Covid-19 led to the adoption of digital technology in the filing and service of court processes, and to the speedier dispensation of justice through virtual court sittings. Technologies and artificial intelligence (AI) are the driving force behind the digital administration of law and justice, and courts in Nigeria stand on the cusp of a potential technological breakthrough. However, this is not without challenges, such as inadequate infrastructure, poor funding, and a lack of capacity building for judges to understand these complexities. This chapter examines the development of an AI-driven court system and efforts to put in place a national AI policy for the justice sector in Nigeria.
Until the Japanese attack on Pearl Harbor, America did not want war, with the 1930s marked by strong isolationism and an emphasis on defense. However, in December 1941, it wasn't defensive aircraft the Army Air Corps had been steadily procuring, but offensive long-range heavy bombers, whilst US pursuit planes were decidedly inferior to their European counterparts. In this new history of the development of American air power, Phillip Meilinger dispels the notion that young air zealots pushed for a bomber-heavy force, revealing instead the technological, economic and bureaucratic forces which shaped the air force. He examines the role of scientists and engineers, developments in commercial aviation, and conflicting priorities of the Army and Air Corps, as well as how these were in turn influenced by America's political leaders. Building an Air Force is essential for understanding a conflict in which whoever controlled the skies controlled the land and seas beneath.
A thorough understanding of the fundamental aspects of radiologic image formation is key to assessing the appropriateness, advantages, limitations and potential risks in the imaging evaluation of child abuse. This chapter reviews two of the most frequently used imaging modalities that utilize ionizing radiation: planar digital radiography and CT. It is accompanied by a summary of the lesser-used techniques of x-ray fluoroscopy and nuclear medicine (planar gamma camera imaging, single photon emission CT, positron emission tomography). The purpose of this work is to offer the reader, whether radiologist, nonradiologist physician or allied health provider (medical radiation technologist, nurse, etc.), a sufficient accounting of the physical principles, technology and radiation dose considerations of these imaging choices to supplement their clinical expertise in making imaging decisions for their patients. Special attention is devoted to core concepts of radiation dose and its practical and contextual considerations. Familiarity with typical dose estimates across relevant patient sizes and ages is essential for planning and relative risk assessment. Communicating radiation risk in the context of benefit remains a core responsibility of all associated with medical imaging, one that should be embraced, and not feared, by the clinical team.