Amid the rapidly evolving landscape of artificial intelligence (AI) regulation, a significant concern has emerged regarding the predominant focus on preemptive measures aimed at preventing or mitigating potential AI-related harms. While these preemptive measures are undeniably important, they must be complemented by effective redress mechanisms to address and remedy materialized harms. This paper highlights the crucial role of redress in empowering individuals to challenge and rectify the adverse effects of AI systems, emphasizing that access to redress is, in fact, access to justice. We critically evaluate whether current AI governance frameworks sufficiently address the need for remedies for AI-related harms, arguing that they fall short in protecting individuals’ rights. To address this gap, we outline four key steps in the redress process: (1) initiating the redress process, (2) determining appropriate avenues for redress, (3) collecting evidence to support claims, and (4) receiving and responding to decisions. Each step presents distinct challenges and requirements, which we explore in detail and illustrate with real-world examples. Our findings underscore the urgent need to integrate robust redress pathways into AI governance frameworks to safeguard individual rights as AI technologies become increasingly embedded in society.