{"id":64321,"date":"2025-09-12T12:20:24","date_gmt":"2025-09-12T11:20:24","guid":{"rendered":"https:\/\/www.cambridge.org\/core\/blog\/?p=64321"},"modified":"2025-09-12T12:20:24","modified_gmt":"2025-09-12T11:20:24","slug":"when-justice-becomes-code-rebuilding-trust-when-ai-enters-courts","status":"publish","type":"post","link":"https:\/\/www.cambridge.org\/core\/blog\/2025\/09\/12\/when-justice-becomes-code-rebuilding-trust-when-ai-enters-courts\/","title":{"rendered":"When Justice Becomes Code: Rebuilding Trust when AI enters Courts"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div>\n<p>Artificial intelligence (AI) is making its way into courtrooms worldwide, promising efficiency and consistency in judicial decision-making. From predicting case outcomes to drafting legal documents, AI tools are reshaping the way courts operate. But as these technologies become increasingly embedded in judicial systems, an urgent question arises: <strong><em>Can we trust a justice system that is governed, in part, by code?<\/em><\/strong><\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>From Rule of Law to Rule of Code?<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1240\" height=\"660\" src=\"https:\/\/www.cambridge.org\/core\/blog\/wp-content\/uploads\/2025\/09\/When-Justice-Becomes-Code-Blog.jpg\" alt=\"\" class=\"wp-image-64323\" style=\"width:286px;height:auto\" srcset=\"https:\/\/www.cambridge.org\/core\/blog\/wp-content\/uploads\/2025\/09\/When-Justice-Becomes-Code-Blog.jpg 1240w, https:\/\/www.cambridge.org\/core\/blog\/wp-content\/uploads\/2025\/09\/When-Justice-Becomes-Code-Blog-420x224.jpg 420w, https:\/\/www.cambridge.org\/core\/blog\/wp-content\/uploads\/2025\/09\/When-Justice-Becomes-Code-Blog-768x409.jpg 768w\" sizes=\"auto, (max-width: 1240px) 100vw, 1240px\" \/><figcaption 
class=\"wp-element-caption\"><sub><sup>Unsplash\/Markus Spiske.<\/sup><\/sub><\/figcaption><\/figure><\/div>\n\n\n<p>This shift is part of a larger phenomenon known as algocracy, or governance by algorithms\u2014a concept explored by <a href=\"https:\/\/philarchive.org\/rec\/DANTAE-2\" target=\"_blank\" rel=\"noopener\" title=\"\">John Danaher<\/a>, who warns that when public decision-making is delegated to opaque computational systems, democratic legitimacy may erode. The concern is not just theoretical. Real-world cases from the Netherlands (<a href=\"https:\/\/doi.org\/10.1017\/dap.2022.39\" target=\"_blank\" rel=\"noopener\" title=\"\">SyRI<\/a>) and Denmark (<a href=\"https:\/\/www.computerweekly.com\/news\/366615893\/Denmarks-AI-powered-welfare-system-fuels-mass-surveillance\" target=\"_blank\" rel=\"noopener\" title=\"\">Udbetaling Danmark<\/a>) illustrate how algorithmic systems can automate fraud detection and welfare decisions with limited human oversight, often at the cost of fairness, privacy, and transparency.<\/p>\n\n\n\n<p>In judicial contexts, where decisions carry personal consequences, the implications of algorithmic governance are especially critical. Courts are more than service providers\u2014they are symbols of justice, expected to exercise discernment, impartiality, and empathy. These human qualities are difficult to replicate, let alone guarantee, in algorithmic systems.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>The Crisis of Trust in Algorithmic Justice<\/strong><\/p>\n\n\n\n<p>Trust is <a href=\"https:\/\/www.jstor.org\/stable\/10.1086\/667838?seq=1\" target=\"_blank\" rel=\"noopener\" title=\"\">subjective<\/a>: it reflects the AI user\u2019s willingness to be vulnerable to its decision-making within the judicial system. Trust in the judiciary is further built on principles of fairness, transparency, and human judgment. 
According to <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/eulj.12436\" target=\"_blank\" rel=\"noopener\" title=\"\">Popelier et al. (2022),<\/a> trust in multilevel judicial systems is not just about competence; it also hinges on perceptions of integrity and benevolence. As such, trust is directly related to what individuals anticipate from a tribunal\u2019s administration of justice. Most often, when individuals enter a courtroom, they expect a fair trial in which their case is carefully considered by a human judge who understands the nuances of their situation.<\/p>\n\n\n\n<p>This expectation is unsettled when AI intervenes in judicial decisions. Furthermore, beyond <strong>individual trust,<\/strong> there is also <strong>societal trust,<\/strong> which represents a collective faith in the judicial system <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s12027-024-00795-8\" target=\"_blank\" rel=\"noopener\" title=\"\">as a symbol of justice and equality<\/a>, as well as an <strong>organizational trust<\/strong> that is tied to a perception of <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/10439463.2023.2267733\" target=\"_blank\" rel=\"noopener\" title=\"\">procedural fairness<\/a>. Without public trust, AI systems undermine the principle of transparency: they <a href=\"https:\/\/www.frontiersin.org\/journals\/human-dynamics\/articles\/10.3389\/fhumd.2024.1421273\/full\" target=\"_blank\" rel=\"noopener\" title=\"\">alienate<\/a> citizens from judicial decision-making and leave algorithmic justice open to widely diverging societal interpretations.<\/p>\n\n\n\n<p>AI tools often operate as \u201cblack boxes\u201d: they provide outputs without clear explanations. This explainability gap undermines procedural justice: if people cannot understand or contest a decision, they may feel that their legal rights have been bypassed. 
Scholars like <a href=\"https:\/\/academic.oup.com\/ijlit\/article\/doi\/10.1093\/ijlit\/eaae024\/7877312\" target=\"_blank\" rel=\"noopener\" title=\"\">Kinchin (2024)<\/a> argue that fully automated systems risk creating a procedural void in which individuals are unable to meaningfully participate in the adjudication of their own cases.<\/p>\n\n\n\n<div style=\"height:14px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Bias by Design<\/strong><\/p>\n\n\n\n<p>Far from neutral, AI systems are shaped by the data they are trained on. When historical legal data reflects systemic bias based on race, gender, or socioeconomic status, algorithms trained on this data can replicate and reinforce injustice. Studies of AI risk assessment tools in criminal justice systems, particularly in the U.S., have shown how these tools can disproportionately penalize marginalized communities. As <a href=\"https:\/\/www.amnesty.nl\/content\/uploads\/2021\/10\/20211014_FINAL_Xenophobic-Machines.pdf\" target=\"_blank\" rel=\"noopener\" title=\"\">Amnesty International (2021)<\/a> has noted, the use of AI in public administration without safeguards may result in discriminatory outcomes that violate fundamental rights.<\/p>\n\n\n\n<div style=\"height:21px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Towards Trustworthy Judicial AI<\/strong><\/p>\n\n\n\n<p><em>What, then, can be done? <\/em>Rather than rejecting AI outright, scholars and practitioners are advocating for a trust-centered approach. 
Any attempt at algorithmic governance in the judiciary should call for:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Meaningful Transparency: Clear disclosure of how AI tools work and their role in judicial decisions;<\/li>\n\n\n\n<li>Responsibility: Human oversight mechanisms that prevent AI from supplanting moral and legal judgment;<\/li>\n\n\n\n<li>Understanding: Education and training for judges and legal staff to use AI responsibly;<\/li>\n\n\n\n<li>Social Justice: Mitigating bias and ensuring inclusivity in AI design and deployment;<\/li>\n\n\n\n<li>Trustworthy Oversight: Independent bodies to audit, evaluate, and guide the use of AI in courts.<\/li>\n<\/ul>\n\n\n\n<p>AI can and should play a role in modern judicial systems, particularly when used to enhance access to justice, reduce backlogs, or improve consistency. But its integration must be strategic, cautious, and always subordinate to human judgment.<\/p>\n\n\n\n<p>The question is not whether algorithms will shape the future of law\u2014they already do. The real question is whether our legal institutions will be equipped to govern that change in a way that preserves public trust. The stakes are high. 
As courts move deeper into algorithmic territory, the legitimacy of judicial systems may hinge not only on their rulings, but on their ability to explain, justify, and be held accountable for decisions, whether made by human hands or digital code.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.rug.nl\/staff\/m.l.florez.rojas\/\"><em>Lorena Fl\u00f3rez Rojas<\/em><\/a><em> is an Assistant Professor of Law at the University of Groningen, the Netherlands, and part of the Area 5 Editorial Committee of the Data &amp; Policy journal and Data for Policy Conference, focused on <\/em><a href=\"https:\/\/www.cambridge.org\/core\/journals\/data-and-policy\/information\/author-instructions\/preparing-your-materials#areas\"><em>Algorithmic Governance<\/em><\/a><em>.<\/em><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>For discussion of the landmark SyRI case involving algorithmic decision-making within the Dutch public sector, see the following article in <a href=\"https:\/\/www.cambridge.org\/core\/journals\/data-and-policy\" target=\"_blank\" rel=\"noopener\" title=\"\">Data &amp; Policy<\/a>:<\/em> <em>Wieringa M. \u201c<a href=\"https:\/\/www.cambridge.org\/core\/journals\/data-and-policy\/article\/hey-syri-tell-me-about-algorithmic-accountability-lessons-from-a-landmark-case\/22A3086554B0486BB4BBAF2D5A33A3D0\" target=\"_blank\" rel=\"noopener\" title=\"\">Hey SyRI, tell me about algorithmic accountability\u201d: Lessons from a landmark case<\/a>. Data &amp; Policy. 2023;5:e2.<\/em><\/p>\n<\/blockquote>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence (AI) is making its way into courtrooms worldwide, promising efficiency and consistency in judicial decision-making. 
From predicting case outcomes to drafting legal documents, AI tools are reshaping the way courts operate.<\/p>\n","protected":false},"author":821,"featured_media":64358,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1593,9,7],"tags":[2484,11893,11823,11821,11892,2413,83,11891,1709,4659],"coauthors":[11889],"class_list":["post-64321","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-science","category-science-technology","category-social-sciences","tag-ai","tag-algorithms","tag-dap","tag-dataandpolicy","tag-dataforpolicy","tag-justice","tag-law-2","tag-llm","tag-policy","tag-policymaking"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/posts\/64321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/users\/821"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/comments?post=64321"}],"version-history":[{"count":22,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/posts\/64321\/revisions"}],"predecessor-version":[{"id":64360,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/posts\/64321\/revisions\/64360"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/media\/64358"}],"wp:attachment":[{"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/media?parent=64321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/categories?post=64321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cambridge.org
\/core\/blog\/wp-json\/wp\/v2\/tags?post=64321"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.cambridge.org\/core\/blog\/wp-json\/wp\/v2\/coauthors?post=64321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}