As judges increasingly rely on algorithmic and AI systems in their decision-making process, concerns have been raised about their impact on the judicial duty to state reasons – a duty that is essential to the right to a fair trial and the rule of law. This paper examines whether existing legal frameworks adequately safeguard this duty and its underlying normative goals in the age of automation, and explores how best to protect it. Central to the analysis is the question of whether the duty to state reasons should itself be redefined to impose stricter reasoning requirements on judges using AI, requiring them to articulate both ‘pragmatic’ and technical explanations of the systems’ role in their decisions. While this approach may advance the duty’s normative goals of legitimacy, transparency and accountability, the paper also considers the potential drawbacks of imposing a more robust reasoning obligation in AI-assisted judicial processes.