
Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation

Published online by Cambridge University Press:  15 January 2025

FIEKE HILLERSTRÖM
Affiliation:
TNO, Netherlands (e-mail: fieke.hillerstrom@tno.nl)
GERTJAN BURGHOUTS
Affiliation:
TNO, Netherlands (e-mail: gertjan.burghouts@tno.nl)

Abstract

Many inductive logic programming (ILP) methods are incapable of learning programs from probabilistic background knowledge, for example, knowledge derived from sensory data or from neural networks that output probabilities. We propose Propper, which handles flawed and probabilistic background knowledge by extending ILP with a combination of neurosymbolic inference, a continuous criterion for hypothesis selection (binary cross-entropy), and a relaxation of the hypothesis constrainer (NoisyCombo). For relational patterns in noisy images, Propper can learn programs from as few as 8 examples. It outperforms binary ILP and statistical models such as a graph neural network.
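To make the continuous selection criterion concrete, the sketch below illustrates scoring candidate programs with binary cross-entropy, as described in the abstract. It is a minimal sketch, not the authors' implementation: the function name `infer_prob` is a hypothetical stand-in for a neurosymbolic inference call that returns the probability that a candidate program entails an example given probabilistic background facts (e.g. detector scores); it is not part of any real Popper or Propper API.

```python
# Minimal sketch (not the authors' implementation): selecting among
# candidate hypotheses by binary cross-entropy against example labels.
import math

def bce(probs, labels, eps=1e-7):
    """Binary cross-entropy between inferred entailment probabilities
    and positive/negative example labels (1.0 / 0.0)."""
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(probs)

def select_hypothesis(hypotheses, examples, labels, infer_prob):
    """Return the candidate program with the lowest BCE.

    `infer_prob(h, e)` is a hypothetical placeholder for the
    neurosymbolic inference step: it should return P(h entails e)
    given probabilistic background knowledge."""
    best, best_loss = None, float("inf")
    for h in hypotheses:
        probs = [infer_prob(h, e) for e in examples]
        loss = bce(probs, labels)
        if loss < best_loss:
            best, best_loss = h, loss
    return best, best_loss
```

Unlike the binary entailment test used by standard Popper, a loss like this ranks hypotheses even when no program classifies all examples perfectly, which is what allows learning from flawed, probabilistic background knowledge.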

Information

Type
Original Article
Creative Commons
Creative Commons Licence - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Fig. 1. Our method Propper extends the ILP method Popper, which learns from failures (left), with neurosymbolic inference to test logic programs on probabilistic background knowledge, for example objects detected in images with a certain probability (right).


Fig. 2. Examples of images with the detected objects and their probabilities.


Fig. 3. Hard cases due to incorrect ground truths (right) or incorrect detections (others).


Table 1. The tested model variants and their properties


Fig. 4. Performance of the models on finding a relational pattern in satellite images, for increasing hardness of image sets. The best performer is Propper BCE, indicated in each graph by * for comparison. Our probabilistic ILP outperforms binary ILP and statistical ML.


Fig. 5. Performance of the models on finding a relational pattern in satellite images, for increasing training set sizes. The best performer is Propper BCE, indicated in each graph by * for comparison. Our probabilistic ILP outperforms binary ILP and statistical ML.


Fig. 6. Examples of the MS-COCO dataset with images of everyday scenes.


Table 2. Model variants and performance on MS-COCO


Table 3. Learned programs, their prevalence, and performance on MS-COCO