
Symbolic Parameter Learning in Probabilistic Answer Set Programming

Published online by Cambridge University Press:  15 January 2025

DAMIANO AZZOLINI
Affiliation:
Department of Environmental and Prevention Sciences, University of Ferrara, Ferrara, Italy (e-mail: damiano.azzolini@unife.it)
ELISABETTA GENTILI
Affiliation:
Department of Engineering, University of Ferrara, Ferrara, Italy (e-mail: elisabetta.gentili1@unife.it)
FABRIZIO RIGUZZI
Affiliation:
Department of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy (e-mail: fabrizio.riguzzi@unife.it)

Abstract

Parameter learning is a crucial task in the field of Statistical Relational Artificial Intelligence: given a probabilistic logic program and a set of observations in the form of interpretations, the goal is to learn the probabilities of the facts in the program such that the probabilities of the interpretations are maximized. In this paper, we propose two algorithms to solve such a task within the formalism of Probabilistic Answer Set Programming, both based on the extraction of symbolic equations representing the probabilities of the interpretations. The first solves the task using an off-the-shelf constrained optimization solver while the second is based on an implementation of the Expectation Maximization algorithm. Empirical results show that our proposals often outperform existing approaches based on projected answer set enumeration in terms of quality of the solution and in terms of execution time.
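As a hedged illustration of the constrained-optimization approach described above (not the authors' implementation), the sketch below assumes a toy program with two learnable facts whose interpretation probabilities have already been extracted as symbolic equations in the fact probabilities $p_a$ and $p_b$; the negative log-likelihood is then minimized with SciPy's SLSQP under box constraints keeping each probability in $[0,1]$:

```python
# Illustrative sketch only: the interpretation probabilities below
# (pa*pb and pa*(1-pb)) are assumed for this toy example, standing in
# for the symbolic equations the paper extracts from a PASP program.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(p):
    pa, pb = p
    # Probabilities of the two observed interpretations as functions
    # of the learnable fact probabilities.
    probs = [pa * pb, pa * (1 - pb)]
    eps = 1e-12  # guard against log(0) at the boundary
    return -sum(np.log(max(q, eps)) for q in probs)

# SLSQP with bounds keeps each learned probability in [0, 1].
res = minimize(neg_log_likelihood, x0=[0.5, 0.5],
               method="SLSQP", bounds=[(0, 1), (0, 1)])
pa, pb = res.x
```

For this toy likelihood the optimum is $p_a = 1$ (fact $a$ is true in both interpretations) and $p_b = 0.5$ (fact $b$ is true in one of two). COBYLA could be substituted via `method="COBYLA"`, as in the experiments reported below.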

Information

Type
Original Article
Creative Commons
Creative Common License - CCCreative Common License - BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Table 1. Worlds and their probabilities for Example 1. The second, third, and fourth columns contain 0 or 1 if the corresponding probabilistic fact is respectively false or true in the considered world. The LP/UP column indicates whether the considered world contributes to the lower (LP) or upper (UP) bound (or does not contribute, marked with a dash) for the probability of the query $path(1,4)$


Algorithm 1 Function LearningOPT: solving the parameter learning task targeting the upper probability with constrained optimization in a PASP $P$ with learnable probabilistic facts $\Pi$ and interpretations $I$.


Algorithm 2 Function LearningEM: solving the parameter learning task targeting the upper probability with Expectation Maximization in a PASP $P$ with learnable probabilistic facts $\Pi$ and with interpretations $I$.
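To give intuition for the Expectation Maximization variant (a hedged toy sketch, not the paper's LearningEM), consider learning the probability $p$ of a single probabilistic fact from interpretations in which the fact is observed true, observed false, or unobserved. The E-step fills in the expected truth value of the fact in each interpretation under the current $p$; the M-step sets $p$ to the mean of these expected counts:

```python
# Toy EM sketch for one learnable fact (illustrative assumption, not
# the paper's algorithm, which handles full PASP programs).
def em_single_fact(observations, p=0.5, iters=100):
    """observations: list of True, False, or None (fact unobserved)."""
    for _ in range(iters):
        # E-step: expected truth value of the fact per interpretation.
        expected = [p if o is None else float(o) for o in observations]
        # M-step: maximum-likelihood update from the expected counts.
        p = sum(expected) / len(expected)
    return p

p_hat = em_single_fact([True, True, False, None])
```

Here the fixed point satisfies $p = (2 + p)/4$, i.e. $p = 2/3$: the unobserved interpretation contributes its expected count $p$ rather than a hard 0 or 1.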


Fig. 1. Execution times for EM, constrained optimization solved with COBYLA and SLSQP, and PASTA, as the number of interpretations increases. For path15 and coloring5, the line for EM is missing since it cannot solve any instance. The initial probabilities for the learnable facts are set to 0.5.


Table 2. Final log-likelihood (LL) values for the tested algorithms on six selected instances with the initial probability of the learnable facts set to 0.5. The column # int. contains the number of interpretations considered, the column EM contains the results obtained with expectation maximization, the columns C. COBYLA and C. SLSQP stand for constrained optimization solved with, respectively, COBYLA and SLSQP, and the column PASTA contains the results obtained with the PASTA solver


Fig. 2. Execution times of constrained optimization solved with COBYLA and SLSQP on coloring5, path15, shop12, and smoke6 with 0.1, 0.5, and 0.9 as initial values for the learnable facts.