
Generating Global and Local Explanations for Tree-Ensemble Learning Methods by Answer Set Programming

Published online by Cambridge University Press:  13 November 2024

AKIHIRO TAKEMURA
Affiliation:
The Graduate University for Advanced Studies, SOKENDAI, Chiyoda-ku, Tokyo, Japan; National Institute of Informatics, Chiyoda-ku, Tokyo, Japan (e-mail: atakemura@nii.ac.jp)
KATSUMI INOUE
Affiliation:
National Institute of Informatics, Chiyoda-ku, Tokyo, Japan; The Graduate University for Advanced Studies, SOKENDAI, Chiyoda-ku, Tokyo, Japan (e-mail: inoue@nii.ac.jp)

Abstract

We propose a method for generating rule sets as global and local explanations for tree-ensemble learning methods using answer set programming (ASP). To this end, we adopt a decompositional approach where the split structures of the base decision trees are exploited in the construction of rules, which in turn are assessed using pattern mining methods encoded in ASP to extract explanatory rules. For global explanations, candidate rules are chosen from the entire trained tree-ensemble models, whereas for local explanations, candidate rules are selected by only considering rules that are relevant to the particular predicted instance. We show how user-defined constraints and preferences can be represented declaratively in ASP to allow for transparent and flexible rule set generation, and how rules can be used as explanations to help the user better understand the models. Experimental evaluation with real-world datasets and popular tree-ensemble algorithms demonstrates that our approach is applicable to a wide range of classification tasks.
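The decompositional step described in the abstract — turning the split structure of the base decision trees into candidate rules — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the toy two-tree ensemble (echoing the setup of Fig. 2), the dictionary-based tree encoding, and the `extract_rules` helper are all assumptions made for the example.

```python
# Hedged sketch of decompositional candidate-rule extraction: every
# non-root node of each base decision tree yields one candidate rule,
# formed by conjoining the split conditions on the path from the root
# to that node.

def extract_rules(node, path):
    """Return candidate rules (tuples of conditions) for all non-root nodes."""
    rules = []
    if path:                        # every non-root node contributes a rule
        rules.append(tuple(path))
    if "split" in node:             # internal node: recurse along both edges
        feat, thr = node["split"]
        rules += extract_rules(node["left"],  path + [f"{feat} < {thr}"])
        rules += extract_rules(node["right"], path + [f"{feat} >= {thr}"])
    return rules

# A toy two-tree ensemble (structure assumed for illustration).
tree1 = {"split": ("x1", 5),
         "left":  {"leaf": 0},
         "right": {"split": ("x2", 3),
                   "left":  {"leaf": 0},
                   "right": {"leaf": 1}}}
tree2 = {"split": ("x2", 4), "left": {"leaf": 1}, "right": {"leaf": 0}}

candidates = [r for t in (tree1, tree2) for r in extract_rules(t, [])]
for r in candidates:
    print(" AND ".join(r))
```

In the paper's framework, a candidate set built this way would then be filtered and ranked by the ASP-encoded pattern-mining step, under user-defined constraints and preferences.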

Information

Type
Original Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Fig 1. Overview of our framework.


Fig 2. A simple tree ensemble consisting of two decision trees. The rule associated with each node is the conjunction of the conditions attached to the nodes on the path from the root to that node.


Algorithm 1 Construct candidate rule set R


Table 1. List of predicates representing a rule in ASP


Table 2. List of minimum and maximum values for the bounds used in defining $\texttt{invalid/1}$


Table 3. Examples of global and local explanations from the adult dataset (LightGBM + ASP)


Table 4. Dataset description, selected hyperparameters and candidate rule counts


Table 5. Size of rule sets, total and average number of conditions in rules


Table 6. Average ratio of rule-based classifiers' performance vs. original tree ensembles, averaged over 5 folds (global explanations)


Table 7. Fidelity metrics, averaged over 5 folds. (Global explanations)


Table 8. Average ratio of rule-based classifiers' precision vs. original tree ensembles, averaged over 5 folds (global explanations)


Table 9. Average running time of generating global explanations, averaged over 5 folds. (Global explanations)


Table 10. Decision tree local explanation metrics


Table 11. Random forest local explanation metrics


Table 12. LightGBM local explanation metrics


Table A1. Base model accuracy and F1-scores, averaged over 5 folds


Table A2. Base model precision and recall, averaged over 5 folds


Table B1. Search space definition for hyperparameter optimization