
Understanding, Idealization, and Explainable AI

Published online by Cambridge University Press: 03 November 2022

Will Fleisher*
Affiliation:
Georgetown University, Washington, DC, USA

Abstract

Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. I argue for a unified account of these key concepts that treats the concept of understanding as fundamental. This allows resources from the philosophy of science and the epistemology of understanding to help guide opacity alleviation efforts. A first significant benefit of this understanding account is that it defuses one of the primary, in-principle objections to post hoc explainable AI (XAI) methods. This “rationalization objection” argues that XAI methods provide mere rationalizations rather than genuine explanations. This is because XAI methods involve using a separate “explanation” system to approximate the original black box system. These explanation systems function in a completely different way than the original system, yet XAI methods make inferences about the original system based on the behavior of the explanation system. I argue that, if we conceive of XAI methods as idealized scientific models, this rationalization worry is dissolved. Idealized scientific models misrepresent their target phenomena, yet are capable of providing significant and genuine understanding of their targets.
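To make the post hoc strategy concrete, the following is a minimal sketch of a LIME-style local surrogate in Python (assuming numpy and scikit-learn; the black_box_predict argument is a hypothetical stand-in for any opaque classifier's scoring function, and the sampling scale and proximity kernel are illustrative choices, not a fixed part of the method):

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box_predict, x, n_samples=500, scale=0.3, seed=0):
    """Fit a simple linear 'explanation' model to a black box's
    input-output behavior in the neighborhood of a single input x."""
    rng = np.random.default_rng(seed)
    # Sample perturbed inputs around the instance being explained.
    X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Query the opaque system; only its outputs are observed.
    y_local = black_box_predict(X_local)
    # Weight samples by proximity to x with an exponential kernel.
    distances = np.linalg.norm(X_local - x, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    # The interpretable surrogate: a weighted ridge regression.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_local, y_local, sample_weight=weights)
    return surrogate.coef_  # per-feature attributions near x

The surrogate's coefficients are read as feature attributions for the black box's decision at x, even though nothing guarantees the linear surrogate shares the original system's internal mechanism. That gap is exactly what the rationalization objection targets, and what the idealization account reinterprets.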

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Figure 1. A unit or artificial neuron. From “Artificial Neuron Model” by Chrislb. Wikimedia Commons (https://commons.wikimedia.org/wiki/File:ArtificialNeuronModel.png). CC BY-SA 3.0.


Figure 2. A simple Deep Neural Network structure. From “Artificial Neural Network” by Cburnett. Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Artificial_neural_network.svg). CC BY-SA 3.0.


Figure 3. Illustrating LIME in action (image amended from Ribeiro et al. 2016).
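Figures 1 and 2 depict the architecture whose complexity generates the opacity at issue. As a rough numpy sketch (the layer sizes and sigmoid activation are arbitrary illustrative choices), a unit computes a nonlinear activation of a weighted sum of its inputs, and a deep network composes layers of such units:

import numpy as np

def unit(x, w, b):
    # A single artificial neuron (Figure 1): a weighted sum of
    # inputs passed through a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def forward(x, layers):
    # A small feedforward network (Figure 2): each layer applies
    # a weight matrix, a bias vector, and the activation elementwise.
    a = x
    for W, b in layers:
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))
    return a

# Arbitrary example: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
print(forward(np.array([0.5, -1.0, 2.0]), layers))

Even in this toy network, every output depends on every weight; trained systems with millions of such parameters are opaque for just this reason.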