How can we build and govern trustworthy AI? Operationalizing Responsible AI brings together leading scholars and practitioners to address this urgent question. Each chapter explores a key dimension of responsibility (fairness, explainability, psychological safety, accountability, consent, transparency, auditability, and contextualization), defining what it means, why it matters, and how it can be achieved in practice. Through interdisciplinary perspectives and real-world examples, the book bridges ethical principles, legal frameworks such as the EU AI Act, and technical approaches including explainable AI and audit methodologies. Written for researchers, policymakers, and professionals, the book offers both conceptual clarity and practical guidance for advancing Responsible AI that is fair, transparent, and aligned with human values.
This section outlines the accessibility features of this content, including support for screen readers, full keyboard navigation, and high-contrast display options.
Accessibility compliance for the PDF of this book is currently unknown and may be updated in the future.