When Justice Becomes Code: Rebuilding Trust When AI Enters Courts
Artificial intelligence (AI) is making its way into courtrooms worldwide, promising efficiency and consistency in judicial decision-making. From predicting case outcomes to drafting legal documents, AI tools are reshaping the way courts operate. But as these technologies become increasingly embedded in judicial systems, an urgent question arises: Can we trust a justice system that is governed, in part, by code?
From Rule of Law to Rule of Code?

This shift from the rule of law toward the rule of code is part of a larger phenomenon known as algocracy, or governance by algorithms—a concept explored by John Danaher, who warns that when public decision-making is delegated to opaque computational systems, democratic legitimacy may erode. The concern is not just theoretical. Real-world cases from the Netherlands (SyRI) and Denmark (Udbetaling Danmark) illustrate how algorithmic systems can automate fraud detection and welfare decisions with limited human oversight, often at the cost of fairness, privacy, and transparency.
In judicial contexts, where decisions carry personal consequences, the implications of algorithmic governance are especially critical. Courts are more than service providers—they are symbols of justice, expected to exercise discernment, impartiality, and empathy. These human qualities are difficult to replicate, let alone guarantee, in algorithmic systems.
The Crisis of Trust in Algorithmic Justice
Trust is subjective: it reflects an AI user's willingness to be vulnerable to the system's decision-making within the judiciary. Trust in the judiciary is further built on principles of fairness, transparency, and human judgment. According to Popelier et al. (2022), trust in multilevel judicial systems is not just about competence; it also hinges on perceptions of integrity and benevolence. Trust is therefore directly related to what individuals anticipate from a tribunal's administration of justice. In most cases, when individuals enter a courtroom, they expect a fair trial in which their case is carefully considered by a human judge who understands the nuances of their situation.
That expectation is disrupted when AI intervenes in judicial decisions. Beyond individual trust, there is also societal trust, a collective faith in the judicial system as a symbol of justice and equality, and organizational trust, which is tied to perceptions of procedural fairness. Without public trust, AI systems undermine transparency: they alienate citizens from judicial decision-making and leave algorithmic justice open to widely divergent societal interpretations.
AI tools often operate as "black boxes": they produce outputs without clear explanations. This explainability gap undermines procedural justice, because if people cannot understand or contest a decision, they may feel that their legal rights have been bypassed. Scholars such as Kinchin (2024) argue that fully automated systems risk creating a procedural void in which individuals are unable to meaningfully participate in the adjudication of their own cases.
Bias by Design
Far from neutral, AI systems are shaped by the data they are trained on. When historical legal data reflects systemic bias based on race, gender, or socioeconomic status, algorithms trained on that data can replicate and reinforce injustice. Studies of AI risk-assessment tools in criminal justice systems, particularly in the U.S., have shown how these tools can disproportionately penalize marginalized communities. As Amnesty International (2021) has noted, the use of AI in public administration without safeguards may result in discriminatory outcomes that violate fundamental rights.
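To make the idea of disparate impact concrete, the short sketch below shows one way an auditor might compare false positive rates (non-reoffending defendants wrongly flagged as high risk) across demographic groups. It is illustrative only: the function, field names, and synthetic records are assumptions for this example, not a description of any deployed risk-assessment tool.

```python
# Illustrative audit of a hypothetical risk-assessment tool.
# All records are synthetic; no real tool or dataset is represented.

def false_positive_rate(records, group):
    """Share of non-reoffending defendants in `group` flagged as high risk."""
    relevant = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not relevant:
        return 0.0
    flagged = [r for r in relevant if r["high_risk"]]
    return len(flagged) / len(relevant)

# Each synthetic record: demographic group, whether the tool flagged the
# person as high risk, and whether they actually reoffended.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(f"Group {g}: false positive rate = {false_positive_rate(records, g):.2f}")
```

A persistent gap between these rates is one signal, among many, that a tool burdens one group more than another; independent oversight bodies of the kind discussed below would run such audits routinely.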
Towards Trustworthy Judicial AI
What, then, can be done? Rather than rejecting AI outright, scholars and practitioners are advocating a trust-centered approach. Any attempt at algorithmic governance in the judiciary should call for:
- Meaningful Transparency: Clear disclosure of how AI tools work and their role in judicial decisions;
- Responsibility: Human oversight mechanisms that prevent AI from supplanting moral and legal judgment;
- Understanding: Education and training for judges and legal staff to use AI responsibly;
- Social Justice: Mitigating bias and ensuring inclusivity in AI design and deployment;
- Trustworthy Oversight: Independent bodies to audit, evaluate, and guide the use of AI in courts.
AI can and should play a role in modern judicial systems, particularly when used to enhance access to justice, reduce backlogs, or improve consistency. But its integration must be strategic, cautious, and always subordinate to human judgment.
The question is not whether algorithms will shape the future of law; they already do. The real question is whether our legal institutions will be equipped to govern that change in a way that preserves public trust. The stakes are high. As courts move deeper into algorithmic territory, the legitimacy of judicial systems may hinge not only on their rulings, but on their ability to explain, justify, and be held accountable for decisions, whether made by human hands or digital code.
Lorena Flórez Rojas is an Assistant Professor of Law at the University of Groningen, the Netherlands, and part of the Area 5 Editorial Committee of the Data & Policy journal and the Data for Policy Conference, focused on Algorithmic Governance.
For discussion of the landmark SyRI case involving algorithmic decision-making within the Dutch public sector, see the following article in Data & Policy: Wieringa M. “Hey SyRI, tell me about algorithmic accountability”: Lessons from a landmark case. Data & Policy. 2023;5:e2.