Recent efforts in interpreting convolutional neural networks (CNNs) focus on translating the activations of CNN filters into stratified Answer Set Programming (ASP) rule-sets. CNN filters are known to capture high-level image concepts; thus, the predicates in the rule-set are mapped to the concepts that their corresponding filters represent. Hence, the rule-set exemplifies the decision-making process of the CNN with respect to the concepts it learns for any image classification task. These rule-sets help in understanding the biases of CNNs, although correcting the biases remains a challenge. We introduce a neurosymbolic framework called NeSyBiCor for bias correction in a trained CNN. Given symbolic concepts, expressed as ASP constraints, that the CNN is biased toward, we convert the concepts to their corresponding vector representations. The CNN is then retrained using our novel semantic similarity loss, which pushes the filters toward learning the desired concepts and away from the undesired ones. The final ASP rule-set obtained after retraining satisfies the constraints to a high degree, reflecting the revision in the knowledge of the CNN. We demonstrate that our NeSyBiCor framework successfully corrects the biases of CNNs trained with subsets of classes from the Places dataset while sacrificing minimal accuracy and improving interpretability.
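The abstract does not spell out the loss itself; purely as an illustration, the sketch below shows one way a cosine-similarity-based "semantic similarity" term could be written. All names and the exact formula are hypothetical, not NeSyBiCor's actual implementation.

```python
import torch
import torch.nn.functional as F

def semantic_similarity_loss(filter_vecs, desired_vecs, undesired_vecs, margin=0.0):
    """Hypothetical sketch: reward filters that align with desired concept
    vectors and penalize alignment with undesired ones (cosine similarity).

    filter_vecs:    (F, D) vector representations of CNN filters
    desired_vecs:   (Kd, D) embeddings of concepts the CNN should learn
    undesired_vecs: (Ku, D) embeddings of concepts it should unlearn
    """
    f = F.normalize(filter_vecs, dim=1)
    d = F.normalize(desired_vecs, dim=1)
    u = F.normalize(undesired_vecs, dim=1)
    sim_desired = f @ d.T      # (F, Kd) cosine similarities to desired concepts
    sim_undesired = f @ u.T    # (F, Ku) cosine similarities to undesired concepts
    # Encourage high similarity to desired concepts, low to undesired ones.
    return torch.relu(margin + sim_undesired.mean() - sim_desired.mean())

# During retraining, such a term would be added to the usual classification loss.
```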
The development of large language models (LLMs), such as GPT, has enabled the construction of several socialbots, like ChatGPT, that are receiving a lot of attention for their ability to simulate a human conversation. However, the conversation is not guided by a goal and is hard to control. In addition, because LLMs rely more on pattern recognition than deductive reasoning, they can give confusing answers and have difficulty integrating multiple topics into a cohesive response. These limitations often lead the LLM to deviate from the main topic to keep the conversation interesting. We propose AutoCompanion, a socialbot that uses an LLM to translate natural language into predicates (and vice versa) and employs commonsense reasoning based on answer set programming (ASP) to hold a social conversation with a human. In particular, we rely on s(CASP), a goal-directed implementation of ASP, as the backend. This paper presents the framework design and how an LLM is used to parse user messages and generate a response from the s(CASP) engine output. To validate our proposal, we describe (real) conversations in which the chatbot’s goal is to keep the user entertained by talking about movies and books, and s(CASP) ensures (i) correctness of answers, (ii) coherence (and precision) during the conversation, which it dynamically regulates to achieve its specific purpose, and (iii) no deviation from the main topic.
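A very rough sketch of the described pipeline is given below. The `scasp` executable name, the query, and the `llm_parse`/`llm_render` helpers are assumptions made for illustration; they are not AutoCompanion's actual interfaces.

```python
import subprocess, tempfile

def scasp_answer(program: str, query: str) -> str:
    """Run an s(CASP) query over a Prolog-style program.
    Assumes an s(CASP) executable named 'scasp' on PATH (hypothetical setup)."""
    with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
        f.write(program + "\n?- " + query + ".\n")
        path = f.name
    out = subprocess.run(["scasp", path], capture_output=True, text=True)
    return out.stdout

def chat_turn(user_msg, llm_parse, llm_render, kb):
    """One AutoCompanion-style turn (all helper names are placeholders):
    1) llm_parse:  natural language -> predicates (e.g. 'likes(user, scifi)')
    2) s(CASP):    commonsense reasoning over the knowledge base plus new facts
    3) llm_render: s(CASP) model/justification -> natural-language reply
    """
    facts = llm_parse(user_msg)                       # e.g. ["likes(user, scifi)."]
    model = scasp_answer(kb + "\n" + "\n".join(facts), "next_topic(T)")
    return llm_render(model)
```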
When computing the probability of a query from a probabilistic answer set program, some parts of the program may not influence the probability of the query, yet they impact the size of the grounding. Identifying and removing them is crucial to speed up the computation. Algorithms for SLG resolution offer the possibility of returning the residual program, which can be used for computing answer sets of normal programs that do have a total well-founded model. The residual program does not contain the parts of the program that do not influence the probability. In this paper, we propose to exploit the residual program for performing inference. Empirical results on graph datasets show that the approach leads to significantly faster inference. The paper has been accepted at the ICLP 2024 conference and is under consideration in Theory and Practice of Logic Programming (TPLP).
The highly digitalised nature of contemporary society has made digital literacy important for newly arrived migrants. However, for teachers, the use of information and communication technologies can be challenging. The aim of the present study is to gain a deeper understanding of how teachers perceive digital resources as useful for teaching migrants language and subject skills. The research question is: In what way do teachers at the language introduction programme for newly arrived migrants in Sweden articulate the use of digital resources in relation to language teaching and to subject teaching? This qualitative study is based on observations of 28 lessons in different subjects in the language introduction programme, as well as interviews with the observed teachers. In analysing the material, we first used the TPACK in situ model (Pareto & Willermark, 2019) to organise the data on the use of digital resources, and thereafter discourse theory (Howarth, 2005) was used to analyse the data. The results show that the teachers limited their students’ use of digital resources during the lessons, which is apparent in two discourses: distrust and dichotomy. In the discourse of distrust, digital technology is seen as an obstacle to teaching, while the discourse of dichotomy concerns the opposition between the digital and the physical. Moreover, articulations were often expressed in terms of identity; the teachers talked about themselves in relation to digital resources rather than about how they use digital resources in their teaching.
Variable sharing is a fundamental property in the static analysis of logic programs, since it is instrumental for ensuring correctness and increasing precision while inferring many useful program properties, such as modes, determinacy, non-failure, and cost. This has motivated significant work on developing abstract domains to improve the precision and performance of sharing analyses. Much of this work has centered on the family of set-sharing domains, because of the high precision they offer. However, this comes at a price: their scalability to a wide set of realistic programs remains challenging, and this hinders their wider adoption. In this work, rather than defining new sharing abstract domains, we focus instead on developing techniques that can be incorporated into analyzers to address aspects that are known to affect the efficiency of these domains, such as the number of variables, without affecting precision. These techniques are inspired by others used in the context of compiler optimizations, such as expression reassociation and variable trimming. We present several such techniques and provide an extensive experimental evaluation of over 1100 program modules taken from both production code and classical benchmarks. These include the Spectector cache analyzer, the s(CASP) system, the libraries of the Ciao system, the LPdoc documenter, and the PLAI analyzer itself, among others. The experimental results are quite encouraging: we have obtained significant speedups, and, more importantly, the number of modules that require a timeout was cut in half. As a result, many more programs can be analyzed precisely in reasonable times.
Minimal models of a Boolean formula play a pivotal role in various reasoning tasks. While previous research has primarily focused on qualitative analysis of minimal models, our study concentrates on the quantitative aspect, specifically the counting of minimal models. Exact counting of minimal models is strictly harder than $\#\mathsf{P}$, prompting our investigation into establishing a lower bound for their quantity, which is often useful in related applications. In this paper, we introduce two novel techniques for counting minimal models, leveraging the expressive power of answer set programming: the first technique employs methods from knowledge compilation, while the second draws on recent advancements in hashing-based approximate model counting. Through empirical evaluations, we demonstrate that our methods significantly improve the lower bound estimates of the number of minimal models, surpassing the performance of existing minimal model reasoning systems in terms of runtime.
We are interested in automating reasoning with and about study regulations, catering to various stakeholders, ranging from administrators, through faculty, to students at different stages. Our work builds on an extensive analysis of various study programs at the University of Potsdam. The conceptualization of the underlying principles provides us with a formal account of study regulations. In particular, the formalization reveals the properties of admissible study plans. With these at hand, we propose an encoding of study regulations in Answer Set Programming that produces corresponding study plans. Finally, we show how this approach can be extended to a generic user interface for exploring study plans.
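To give a flavour of such an encoding, the toy sketch below uses the clingo Python API to assign modules to semesters under credit and prerequisite constraints. The predicates and constraints are invented for illustration and are not the paper's formalization of the Potsdam regulations.

```python
import clingo

# Toy study-plan generator: every module is scheduled exactly once, per-semester
# credits are capped, and prerequisites must come in earlier semesters.
PROGRAM = """
semester(1..4).
module(m1, 6). module(m2, 6). module(m3, 9). module(m4, 9).
prerequisite(m3, m1).                                          % m3 requires m1 earlier

1 { plan(M, S) : semester(S) } 1 :- module(M, _).              % schedule each module once
:- semester(S), #sum { C, M : plan(M, S), module(M, C) } > 18. % credit cap per semester
:- prerequisite(M, P), plan(M, S1), plan(P, S2), S2 >= S1.     % prerequisite ordering

#show plan/2.
"""

ctl = clingo.Control(["0"])          # "0": enumerate all admissible study plans
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Plan:", m))
```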
We propose a stable model semantics for higher-order logic programs. Our semantics is developed using Approximation Fixpoint Theory (AFT), a powerful formalism that has successfully been used to give meaning to diverse non-monotonic formalisms. The proposed semantics generalizes the classical two-valued stable model semantics of Gelfond and Lifschitz as well as the three-valued one of Przymusinski, retaining their desirable properties. Due to the use of AFT, we also get for free alternative semantics for higher-order logic programs, namely supported model, Kripke-Kleene, and well-founded. Additionally, we define a broad class of stratified higher-order logic programs and demonstrate that they have a unique two-valued higher-order stable model which coincides with the well-founded semantics of such programs. We provide a number of examples in different application domains, which demonstrate that higher-order logic programming under the stable model semantics is a powerful and versatile formalism, which can potentially form the basis of novel ASP systems.
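For reference, the classical two-valued definition being generalized is the Gelfond–Lifschitz construction: given a normal program $P$ and an interpretation $M$, the reduct is
$$P^{M} = \{\, \mathrm{head}(r) \leftarrow \mathrm{body}^{+}(r) \mid r \in P,\ \mathrm{body}^{-}(r) \cap M = \emptyset \,\},$$
and $M$ is a stable model of $P$ if and only if $M$ is the least model of the negation-free program $P^{M}$.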
Answer Set Programming with Quantifiers (ASP(Q)) has been introduced to provide a natural extension of ASP modeling to problems in the polynomial hierarchy (PH). However, ASP(Q) lacks a method for encoding, in an elegant and compact way, problems requiring a polynomial number of calls to an oracle in $\Sigma_n^p$ (that is, problems in $\Delta_{n+1}^p$). Such problems include, in particular, optimization problems. In this paper, we propose an extension of ASP(Q) in which component programs may contain weak constraints. Weak constraints can be used both for expressing local optimization within quantified component programs and for modeling global optimization criteria. We showcase the modeling capabilities of the new formalism through various application scenarios. Further, we study its computational properties, obtaining complexity results and unveiling non-obvious characteristics of ASP(Q) programs with weak constraints.
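For readers less familiar with weak constraints, the snippet below recalls their standard use for local optimization in plain clingo (ordinary ASP syntax, not the quantified ASP(Q) language proposed in the paper): it selects a vertex cover of minimum total cost.

```python
import clingo

# Standard weak-constraint example in plain clingo: choose a vertex cover
# and minimize its total cost.
PROGRAM = """
vertex(1..4).
edge(1,2). edge(2,3). edge(3,4).
cost(1,3). cost(2,1). cost(3,1). cost(4,2).

{ in(V) } :- vertex(V).
:- edge(X, Y), not in(X), not in(Y).       % every edge must be covered
:~ in(V), cost(V, C). [C@1, V]             % weak constraint: pay C for each chosen V

#show in/1.
"""

ctl = clingo.Control([])                    # default solving mode optimizes the cost
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Cover:", m, "cost:", m.cost))
```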
Environmental data science for spatial extremes has traditionally relied heavily on max-stable processes. Even though the popularity of these models has perhaps peaked among statisticians, they are still regarded as the “state of the art” in many applied fields. However, while the asymptotic theory supporting the use of max-stable processes is mathematically rigorous and comprehensive, we think that it has also been overused, if not misused, in environmental applications, to the detriment of more purposeful and meticulously validated models. In this article, we review the main limitations of max-stable process models and argue strongly against their systematic use in environmental studies. Alternative solutions based on more flexible frameworks, which model the exceedances of variables above appropriately chosen high thresholds, are discussed, and an outlook on future research is given. We consider the opportunities offered by hybridizing machine learning with extreme-value statistics, highlighting seven key recommendations moving forward.
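As a generic illustration of the threshold-exceedance approach the authors favour, the sketch below fits a generalized Pareto distribution to synthetic exceedances (a standard peaks-over-threshold recipe, not the article's analysis; the threshold and return period are placeholder choices).

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.gumbel(loc=20.0, scale=5.0, size=10_000)   # synthetic daily maxima

# Peaks-over-threshold: keep exceedances above a high quantile and fit a
# generalized Pareto distribution to the excesses.
u = np.quantile(data, 0.95)                 # threshold choice needs care in practice
excesses = data[data > u] - u
shape, _, scale = genpareto.fit(excesses, floc=0.0)

# Rough 100-observation return level, for illustration only.
p_exceed = (data > u).mean()
return_level = u + genpareto.ppf(1 - 1 / (100 * p_exceed), shape, loc=0.0, scale=scale)
print(f"xi={shape:.3f}, sigma={scale:.3f}, 100-obs return level={return_level:.2f}")
```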
Many inductive logic programming (ILP) methods are incapable of learning programs from probabilistic background knowledge, for example, knowledge coming from sensory data or from neural networks that output probabilities. We propose Propper, which handles flawed and probabilistic background knowledge by extending ILP with a combination of neurosymbolic inference, a continuous criterion for hypothesis selection (binary cross-entropy), and a relaxation of the hypothesis constrainer (NoisyCombo). For relational patterns in noisy images, Propper can learn programs from as few as 8 examples. It outperforms binary ILP and statistical models such as graph neural networks.
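A minimal sketch of how a continuous criterion such as binary cross-entropy can rank candidate hypotheses against probabilistic background knowledge is shown below. This is illustrative only and is not Propper's implementation or API.

```python
import math

def bce_score(predicted_probs, labels, eps=1e-9):
    """Binary cross-entropy of a hypothesis' predictions against example labels.
    predicted_probs: probability the hypothesis assigns to each example being
    positive (e.g. derived from a neural network's confidences);
    labels: 1 for positive examples, 0 for negative ones."""
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for p, y in zip(predicted_probs, labels)
    ) / len(labels)

# Lower is better: among candidate programs, keep the one with the smallest BCE.
candidates = {"h1": [0.9, 0.8, 0.2], "h2": [0.6, 0.5, 0.5]}
labels = [1, 1, 0]
best = min(candidates, key=lambda h: bce_score(candidates[h], labels))
print(best)  # h1
```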
Answer set programming is a well-understood and established problem-solving and knowledge representation paradigm. It has become more prominent amongst a wider audience due to its multiple applications in science and industry. The constant development of advanced programming and modeling techniques regularly extends the toolset for developers and users. This paper compiles and demonstrates different techniques for reusing logic program parts (multi-shot solving) by solving the arcade game Snake. This game is particularly interesting because a victory can be assured by solving the NP-hard problem of finding Hamiltonian cycles. We demonstrate five hands-on implementations in clingo and compare their performance in an empirical evaluation. In addition, our implementation utilizes clingraph to generate a simple yet informative image representation of the game’s progress.
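For context, here is a stand-alone version of the classic Hamiltonian-cycle encoding in clingo, run through its Python API. This is a textbook-style encoding on a tiny fixed graph, not one of the paper's five multi-shot implementations.

```python
import clingo

# Classic Hamiltonian-cycle encoding: pick exactly one outgoing and one incoming
# edge per node, and require every node to be reachable from node 1 along the cycle.
PROGRAM = """
node(1..4).
edge(1,2). edge(2,3). edge(3,4). edge(4,1). edge(2,4).

{ cycle(X, Y) : edge(X, Y) } = 1 :- node(X).   % exactly one outgoing edge
{ cycle(X, Y) : edge(X, Y) } = 1 :- node(Y).   % exactly one incoming edge
reached(Y) :- cycle(1, Y).
reached(Y) :- cycle(X, Y), reached(X).
:- node(Y), not reached(Y).                    % the cycle visits every node

#show cycle/2.
"""

ctl = clingo.Control(["0"])
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Hamiltonian cycle:", m))
```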
Dung’s abstract Argumentation Framework (AF) has emerged as a key formalism for argumentation in artificial intelligence. It has been extended in several directions, including the possibility to express supports, leading to the development of the Bipolar Argumentation Framework (BAF), and recursive attacks and supports, resulting in the Recursive BAF (Rec-BAF). Different interpretations of supports have been proposed, whereas for Rec-BAF (where the target of attacks and supports may also be attacks and supports) even different semantics for attacks have been defined. However, the semantics of these frameworks have either not been defined in the presence of support cycles or are often quite intricate in terms of the involved definitions. We overcome this limitation and present classical semantics for general BAF and Rec-BAF, showing that the semantics for specific BAF and Rec-BAF frameworks can be defined by very simple and intuitive modifications of those defined for the case of AF. This is achieved by providing a modular definition of the sets of defeated and acceptable elements for each AF-based framework. We also characterize, in an elegant and uniform way, the semantics of general BAF and Rec-BAF in terms of logic programming and partial stable model semantics.
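For the plain AF fragment, the well-known logic-programming reading that such characterizations build on can be sketched as follows: a textbook encoding whose answer sets are the stable extensions. The paper's treatment of BAF and Rec-BAF, and of partial stable models, goes well beyond this sketch.

```python
import clingo

# Textbook encoding of a plain Dung AF as a logic program (not the paper's
# BAF/Rec-BAF encoding): an argument is accepted unless some accepted argument
# attacks it; answer sets correspond to stable extensions.
PROGRAM = """
arg(a). arg(b). arg(c).
att(a, b). att(b, a). att(b, c).

defeated(X) :- att(Y, X), acc(Y).
acc(X) :- arg(X), not defeated(X).

#show acc/1.
"""

ctl = clingo.Control(["0"])
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Stable extension:", m))  # {a, c} and {b}
```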
Assurance cases offer a structured way to present arguments and evidence for certification of systems where safety and security are critical. However, creating and evaluating these assurance cases can be complex and challenging, even for systems of moderate complexity. Therefore, there is a growing need to develop new automation methods for these tasks. While most existing assurance case tools focus on automating structural aspects, they lack the ability to fully assess the semantic coherence and correctness of the assurance arguments.
In prior work, we introduced the Assurance 2.0 framework, which prioritizes the reasoning process, evidence utilization, and explicit delineation of counter-claims (defeaters) and counter-evidence. In this paper, we present our approach to enhancing Assurance 2.0 with semantic rule-based analysis capabilities using common-sense reasoning and answer set programming solvers, specifically s(CASP). By employing these analysis techniques, we examine the unique semantic aspects of assurance cases, such as logical consistency, adequacy, and indefeasibility. The application of these analyses provides both system developers and evaluators with increased confidence in the assurance case.
Text-to-image models are enabling efficient design space exploration, rapidly generating images from text prompts. However, many generative AI tools are imperfect for product design applications, as they are not built for the goals and requirements of product design. The unclear link between text input and image output further complicates their application. This work empirically investigates design space exploration strategies that can successfully yield product images that are feasible, novel and aesthetic – three common goals in product design. Specifically, users’ actions within the global and local editing modes are analyzed, including their time spent, prompt length, mono- versus multi-criteria prompts, and the goal orientation of prompts. Key findings reveal the pivotal role of mono- versus multi-criteria prompts and of prompt goal orientation in achieving specific design goals, more so than time spent or prompt length. The study recommends prioritizing the use of multi-criteria prompts for feasibility and novelty during global editing, while favoring mono-criteria prompts for aesthetics during local editing. Overall, this article underscores the nuanced relationship between AI-driven text-to-image models and their effectiveness in product design, urging designers to carefully structure prompts during different editing modes to better meet the unique demands of product design.
This article considers modular composition as an approach to engendering structural plasticity in musical works. Structural plasticity, in this case, is defined as the ability for the components of a musical work (e.g., events, ideas, sequences, textures, timbres) to vary in how and when they are presented. In this research, modular composition is the process for creating a collection of individual musical ideas (e.g., sequences, patterns, phrases) termed ‘modules’, and designing a dynamic system for their assembly into cohesive structures. This approach results in musical works that exist in a state of constant structural flux, allowing for real-time alteration while progressing beyond similar existing approaches observed in video game music and interactive music apps, from which this research takes inspiration. Approaches involving compositionally focused intelligent music systems are also observed, highlighting how modular composition bridges traditional compositional practices and the design of interactive music systems. Two of the authors’ own works are discussed with regard to how modular composition can be implemented in varying creative ways. The outcome of this work illuminates the creative possibilities of integrating traditional compositional practices with new digital approaches to arrive at a more structurally plastic and alterable form of music.
The logico-algebraic study of Lewis’s hierarchy of variably strict conditional logics has remained essentially unexplored, hindering our understanding of their mathematical foundations and of their connections with other logical systems. This work starts filling this gap by providing a logico-algebraic analysis of Lewis’s logics. We begin by introducing, on the syntactic side, novel finite axiomatizations for Lewis’s logics, while distinguishing, on the semantical side, between global and local consequence relations on Lewisian sphere models, in parallel to the case of modal logic. As our first main results, we prove the strong completeness of the calculi with respect to the corresponding semantical consequence on spheres, together with a deduction theorem. We then demonstrate that the global calculi are strongly algebraizable in terms of a variety of Boolean algebras with a binary operator representing the counterfactual implication; in contrast, we show that the local ones are generally not algebraizable, although they can be characterized as the degree-preserving logics over the same algebraic models. This yields the strong completeness of all the logics with respect to the algebraic models.
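For orientation, the variably strict (counterfactual) conditional $\varphi \mathbin{\Box\!\!\to} \psi$ is evaluated on a sphere model by the familiar Lewisian clause, recalled here as background to the binary algebraic operator mentioned above:
$$w \Vdash \varphi \mathbin{\Box\!\!\to} \psi \;\iff\; \bigcup \mathcal{S}(w) \cap [\varphi] = \emptyset \ \text{ or }\ \exists S \in \mathcal{S}(w)\ \big( S \cap [\varphi] \neq \emptyset \ \text{and}\ S \cap [\varphi] \subseteq [\psi] \big),$$
where $\mathcal{S}(w)$ is the system of spheres around $w$ and $[\chi]$ denotes the set of worlds satisfying $\chi$.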