
Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach

Published online by Cambridge University Press:  05 May 2021

Tae Wan Kim
Affiliation:
Carnegie Mellon University
Bryan R. Routledge
Affiliation:
Carnegie Mellon University

Abstract

Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction. Such a view is insufficient, especially when data are used in a secondary, noncontextual, and unpredictable manner—which is the inescapable nature of advanced artificial intelligence systems. We submit that an alternative view of informed consent—as an assurance of trust for incomplete transactions—allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.

Information

Type: Article

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s) 2021. Published by Cambridge University Press on behalf of the Society for Business Ethics
Figure 1: Schematic of a Decision Algorithm Highlighting Individual i Who Provides Input xi and Experiences Outcome yi

Table 1: Contours of a Right to Explanation

Figure 2: Schematic of a Decision Algorithm Including Individuals i, j, and all. Note: Also included is company a. Inputs are denoted x (characteristics, payments, etc.), and outputs (decisions) are denoted y.

Table 2: Three Different Types of Data Subjects and Their Rights