
Inductive Learning for Possibilistic Logic Programs Under Stable Models

Published online by Cambridge University Press:  18 December 2025

HONGBO HU
Affiliation:
Multi-dimensional Data Perception and Intelligent Recognition Chongqing Engineering Research Center, Chongqing University of Arts and Sciences, Chongqing 402160, China; College of Computer Science and Technology, Guizhou University, Guiyang 550025, China (e-mail: gs.hbhu19@gzu.edu.cn)
YISONG WANG
Affiliation:
State Key Laboratory of Public Big Data; Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province; College of Computer Science and Technology; Institute for Artificial Intelligence, Guizhou University, Guiyang, 550025, China (e-mail: yswang@gzu.edu.cn)
YI HUANG
Affiliation:
Multi-dimensional Data Perception and Intelligent Recognition Chongqing Engineering Research Center, Chongqing University of Arts and Sciences, Chongqing 402160, China (e-mail: cqhy@21cn.com)
KEWEN WANG
Affiliation:
School of Information and Communication Technology, Griffith University, Nathan, QLD 4111, Australia (e-mail: k.wang@griffith.edu.au)

Abstract

Possibilistic logic programs (poss-programs) under stable models are a major variant of answer set programming. While their semantics (possibilistic stable models) and properties have been well investigated, the problem of inductive reasoning has not yet been addressed. This paper presents an approach to extracting poss-programs from a background program and examples (parts of intended possibilistic stable models). To this end, the notion of an induction task is first formally defined, its properties are investigated, and two algorithms, ilpsm and ilpsmmin, for computing induction solutions are presented. An implementation of ilpsmmin is also provided, and experimental results show that, when the inputs are ordinary logic programs, the prototype outperforms a major inductive learning system for normal logic programs from stable models on randomly generated datasets.
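As a minimal illustration (not taken from the article; the atoms, rules and degrees below are hypothetical, and we assume the standard possibilistic answer set semantics in which each rule carries a necessity degree in $(0,1]$), an induction task $\langle \overline {B}, E^+, E^- \rangle$ might look as follows:

$\overline {B} = \{(a \leftarrow \mathit{not}\ b,\ 0.9)\}$, \quad $E^+ = \{(c, 0.7)\}$, \quad $E^- = \{b\}$.

A candidate hypothesis $H = \{(c \leftarrow a,\ 0.7)\}$ gives $\overline {B} \cup H$ the classical stable model $\{a, c\}$ with necessity degrees $N(a) = 0.9$ and $N(c) = \min(0.7, N(a)) = 0.7$; the resulting possibilistic stable model $\{(a, 0.9), (c, 0.7)\}$ covers the positive example and excludes the negative one, so $H$ would count as a solution under this reading. The precise notions of examples and coverage used in the paper may differ.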

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press
Algorithm 1 Existence${\langle {\overline {B}, E^+, E^-}\rangle }$

Algorithm 2 ilpsm$(\overline {B}, E^+, E^-)$

Algorithm 3 ilpsmmin$(\overline {B}, E^+, E^-)$

Fig 1. The architecture of ilpsmmin.

Table 1. A comparison of ilpsmmin and ilasp4 against three benchmark datasets. The id of each induction task set is of the form $D\_M\_L\_U$, where $D$ is the name of the dataset and $M$ is the number of induction tasks with $L \leq \vert {\mathcal{A}} \vert \leq U$. Cnt(TO) (resp., Cnt(OOM)) is the number of induction tasks on which the program runs out of CPU time (resp., memory).

Fig 2. Runtime of algorithm ilpsmmin on induction tasks of different scales. The coordinates of the highest and lowest points are labeled in both figures. The $\vert E^- \vert$-axis indicates the scale of the negative examples.