6 - Transparency
from Part II - Tools
Published online by Cambridge University Press: 15 July 2021
Summary
Transparency has been embraced as a means of limiting the risks associated with AI. This chapter considers the manner in which transparency and the related concept of ‘explainability’ are being elaborated, notably the ‘right to explanation’ in the European Union and a move towards explainable AI (XAI) among developers. These are more promising than the arguments for legal personality, but the limits of transparency are already beginning to show as AI systems demonstrate abilities that even their programmers struggle to understand. That is leading regulators to cede ground and settle for explanations of adverse decisions rather than transparency of decision-making processes themselves. Such a backward-looking approach relies on individuals knowing that they have been harmed – which will not always be the case – and should be supplemented with forward-looking mechanisms like impact assessments, audits, and an ombudsperson.
Type: Chapter
We, the Robots? Regulating Artificial Intelligence and the Limits of the Law, pp. 144–170. Publisher: Cambridge University Press. Print publication year: 2021.