
Argumentation and explainable artificial intelligence: a survey

Published online by Cambridge University Press:  05 April 2021

Alexandros Vassiliades
Affiliation:
School of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; e-mail: valexande@csd.auth.gr, nbassili@csd.auth.gr Institute of Computer Science, Foundation for Research and Technology - Hellas, 70013, Heraklion, Greece; e-mail: patkos@ics.forth.gr
Nick Bassiliades
Affiliation:
School of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; e-mail: valexande@csd.auth.gr, nbassili@csd.auth.gr
Theodore Patkos
Affiliation:
Institute of Computer Science, Foundation for Research and Technology - Hellas, 70013, Heraklion, Greece; e-mail: patkos@ics.forth.gr

Abstract

Argumentation and eXplainable Artificial Intelligence (XAI) are closely related, as in recent years Argumentation has been used to provide Explainability to AI. Argumentation can show step by step how an AI system reaches a decision; it can provide reasoning over uncertainty and can find solutions when conflicting information is faced. In this survey, we elaborate on the combined topics of Argumentation and XAI by reviewing the important methods and studies, as well as implementations that use Argumentation to provide Explainability in AI. More specifically, we show how Argumentation can enable Explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues. Subsequently, we elaborate on how Argumentation can help in constructing explainable systems in various application domains, such as Medical Informatics, Law, the Semantic Web, Security, Robotics, and some general-purpose systems. Finally, we present approaches that combine Machine Learning and Argumentation Theory, toward more interpretable predictive models.
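The step-by-step reasoning the abstract refers to is typically formalized via abstract argumentation in the style of Dung: arguments attack one another, and the "grounded extension" collects exactly the arguments that can be defended, computed by iterating a characteristic function. A minimal sketch of that computation is shown below; the argument names and the attack relation in the example are invented for illustration and do not come from the survey.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function F(S) = {a | S defends a} from the empty set until a
    fixed point is reached."""
    attacks = set(attacks)
    # Precompute, for each argument, the set of arguments attacking it.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if every one of
        # its attackers is counter-attacked by some member of `extension`.
        acceptable = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Hypothetical example: a attacks b, b attacks c.
# a is unattacked, and a defends c against b, so {a, c} is grounded.
result = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

Each iteration of the loop corresponds to one "step" of the explanation: first the unattacked arguments are accepted, then the arguments they defend, and so on, which is what makes the reasoning chain inspectable.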

Information

Type
Review
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

Figure 1. Google searches with XAI (Adadi & Berrada 2018)


Table 1. Overview of argumentation systems for XAI