Pointer analysis is a fundamental static program analysis for computing the set of objects that an expression can refer to. Decades of research have gone into developing pointer analysis methods of varying precision and efficiency for programs that use different language features, but determining precisely how efficient a particular method is has been a challenge in itself.
We consider methods for pointer analysis of such programs using Datalog and extensions of Datalog. When the rules are in Datalog, we show how to calculate precise time complexities from the rules, using a new algorithm for decomposing rules that obtains the best complexities. When extensions such as function symbols and universal quantification are used, we describe algorithms for efficiently implementing the extensions, together with the complexities of those algorithms.
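As a concrete illustration (a textbook formulation, not drawn from the paper), the core of a flow-insensitive, Andersen-style pointer analysis can be stated in a handful of Datalog rules; the relation names here are illustrative:

    % Input relations, assumed extracted from the analyzed program:
    %   new(V, H)      - variable V is assigned a fresh object H
    %   assign(V, W)   - assignment V := W
    %   load(V, W, F)  - field read   V := W.F
    %   store(W, F, V) - field write  W.F := V
    pointsTo(V, H)        :- new(V, H).
    pointsTo(V, H)        :- assign(V, W), pointsTo(W, H).
    pointsTo(V, H)        :- load(V, W, F), pointsTo(W, B), heapPointsTo(B, F, H).
    heapPointsTo(B, F, H) :- store(W, F, V), pointsTo(W, B), pointsTo(V, H).

Bottom-up evaluation of such rules runs in time polynomial in the size of the input relations; pinning down the exact exponent, and improving it by decomposing rules, is what the complexity calculations above are concerned with.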
The definition of stable models for propositional formulas with infinite conjunctions and disjunctions can be used to describe the semantics of answer set programming languages. In this note, we enhance that definition by introducing a distinction between intensional and extensional atoms. The symmetric splitting theorem for first-order formulas is then extended to infinitary formulas and used to reason about infinitary definitions.
We present ABETS, an assertion-based, dynamic analyzer that helps diagnose errors in Maude programs. ABETS uses slicing to automatically create reduced versions of both a run's execution trace and the executed program, in which any information that is not relevant to the bug being diagnosed is removed. In addition, ABETS employs runtime assertion checking to automate the identification of bugs, so that whenever an assertion is violated, the system automatically infers accurate slicing criteria from the failure. We summarize the main services provided by ABETS, which also include a novel assertion-based facility for program repair that generates suitable program fixes when a state invariant is violated. Finally, we provide an experimental evaluation that shows the performance and effectiveness of the system.
Standard answer set programming (ASP) is aimed at solving search problems from the first level of the polynomial time hierarchy (PH). Tackling search problems beyond NP using ASP is less straightforward. The class of disjunctive logic programs offers the most prominent way of reaching the second level of the PH, but encoding respective hard problems as disjunctive programs typically requires sophisticated techniques such as saturation or meta-interpretation. The application of such techniques easily leads to encodings that are inaccessible to non-experts. Furthermore, while disjunctive ASP solvers often rely on calls to a (co-)NP oracle, it may be difficult to detect from the input program where the oracle is being accessed. In other formalisms, such as Quantified Boolean Formulas (QBFs), the interface to the underlying oracle is more transparent, as it is explicitly recorded in the quantifier prefix of a formula. On the other hand, ASP has advantages over QBFs from the modeling perspective. Rich high-level languages such as ASP-Core-2 offer a wide variety of primitives that enable concise and natural encodings of search problems. In this paper, we present a novel logic programming–based modeling paradigm that combines the best features of ASP and QBFs. We develop so-called combined logic programs in which oracles are directly cast as (normal) logic programs themselves. Recursive incarnations of this construction enable logic programming on arbitrarily high levels of the PH. We develop a proof-of-concept implementation for our new paradigm.
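For illustration of why saturation is considered expert-level, here is the classical saturation encoding (a textbook example, not taken from the paper) of the co-NP check "the graph is not 3-colourable", written in clingo syntax:

    % Guess a colouring via head disjunction.
    col(X,r) ; col(X,g) ; col(X,b) :- node(X).
    % A monochromatic edge triggers saturation.
    saturate :- edge(X,Y), col(X,C), col(Y,C).
    % Saturation: derive every colour atom once saturate holds.
    col(X,r) :- node(X), saturate.
    col(X,g) :- node(X), saturate.
    col(X,b) :- node(X), saturate.
    :- not saturate.

The program has an answer set exactly when every candidate colouring is saturated, i.e., when no proper 3-colouring exists; the minimality reasoning behind this trick is precisely the kind of subtlety the combined-programs approach aims to hide from the modeler.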
For some applications, standard resource analyses do not provide the information required. Such analyses estimate the total resource usage of a program (without executing it) as functions of input data sizes. However, some applications require knowing how such total resource usage is distributed over selected parts of a program. We propose a novel, general, and flexible framework for setting up cost equations/relations that can be instantiated to perform a wide range of resource usage analyses, including both static profiling and the inference of the standard notion of cost. We extend and generalize standard resource analysis techniques, so that the relations generated include additional Boolean control variables for switching on or off different terms in the relations, as required by the desired resource usage profile. We also instantiate our framework to perform static profiling of accumulated cost (also parameterized by input data sizes). Such information is much more useful to the software developer than the standard notion of cost: it identifies the parts of the program that have the greatest impact on the total program cost, and which therefore should be optimized first. We also report on an implementation of our framework within the CiaoPP system, and its instantiation for accumulated cost, and provide some experimental results. In addition to generality, our new method brings important advantages over our previous approach based on a program transformation, including support for non-deterministic programs, better and easier integration in the compiler, and higher efficiency.
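As a hedged sketch of the idea (our notation, not the paper's): a standard analysis of naive list reversal infers recurrences such as $C_{\mathrm{rev}}(0) = c_0$ and $C_{\mathrm{rev}}(n) = c_1 + C_{\mathrm{app}}(n-1) + C_{\mathrm{rev}}(n-1)$. In the generalized relations each term is guarded by a Boolean control variable, e.g. $C_{\mathrm{rev}}(n) = b_1\,c_1 + b_2\,C_{\mathrm{app}}(n-1) + b_3\,C_{\mathrm{rev}}(n-1)$, so that fixing the $b_i$ switches on exactly those program parts whose accumulated cost is being profiled.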
We study autostability spectra relative to strong constructivizations (SC-autostability spectra). For a decidable structure $\mathcal{S}$, the SC-autostability spectrum of $\mathcal{S}$ is the set of all Turing degrees capable of computing isomorphisms among arbitrary decidable copies of $\mathcal{S}$. The degree of SC-autostability for $\mathcal{S}$ is the least degree in the spectrum (if such a degree exists).
We prove that for a computable successor ordinal $\alpha$, every Turing degree c.e. in and above $\mathbf{0}^{(\alpha)}$ is the degree of SC-autostability for some decidable structure. We show that for an infinite computable ordinal $\beta$, every Turing degree c.e. in and above $\mathbf{0}^{(2\beta+1)}$ is the degree of SC-autostability for some discrete linear order. We prove that the set of all PA-degrees is an SC-autostability spectrum. We also obtain similar results for autostability spectra relative to n-constructivizations.
The infinitary propositional logic of here-and-there is important for the theory of answer set programming in view of its relation to strongly equivalent transformations of logic programs. We know a formal system axiomatizing this logic exists, but a proof in that system may include infinitely many formulas. In this note we describe a relationship between the validity of infinitary formulas in the logic of here-and-there and the provability of formulas in some finite deductive systems. This relationship allows us to use finite proofs to justify the validity of infinitary formulas.
Systems for symbolic event recognition infer occurrences of events in time using a set of event definitions in the form of first-order rules. The Event Calculus is a temporal logic that has been used as a basis in event recognition applications, providing, among other things, direct connections to machine learning via Inductive Logic Programming (ILP). We present an ILP system for online learning of Event Calculus theories. To allow for a single-pass learning strategy, we use the Hoeffding bound for evaluating clauses on a subset of the input stream. We employ a decoupling scheme of the Event Calculus axioms during the learning process that allows each clause to be learned in isolation. Moreover, we use abductive-inductive logic programming techniques to handle unobserved target predicates. We evaluate our approach on an activity recognition application and compare it to a number of batch learning techniques. We obtain results of comparable predictive accuracy with significant speed-ups in training time. We also outperform hand-crafted rules and match the performance of a sound incremental learner that can only operate on noise-free datasets.
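For reference, the Hoeffding bound states that after $N$ independent observations of a random variable with range $R$, the true mean differs from the observed mean by less than $\epsilon = \sqrt{R^2 \ln(1/\delta) / (2N)}$ with probability at least $1-\delta$. A clause can therefore be committed to as soon as enough of the stream has been seen to separate its score from that of its competitors with the desired confidence, without revisiting past data.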
In this paper, we study an extension of the stable model semantics for disjunctive logic programs where each true atom in a model is associated with an algebraic expression (in terms of rule labels) that represents its justifications. As in our previous work for non-disjunctive programs, these justifications are obtained in a purely semantic way, by algebraic operations (product, addition and application) on a lattice of causal values. Our new definition extends the concept of causal stable model to disjunctive logic programs and ensures that each (standard) stable model corresponds to a disjoint class of causal stable models sharing the same truth assignments, but possibly differing in the explanations obtained. We provide a pair of illustrative examples showing the behaviour of the new semantics and discuss the need to introduce a new type of rule, which we call causal-choice. This type of rule intuitively captures the idea of “A may cause B” and, when causal information is disregarded, amounts to a usual choice rule under the standard stable model semantics.
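Disregarding causal information, a causal-choice rule capturing “a may cause b” thus collapses, as the abstract notes, to an ordinary choice rule, written in ASP syntax as:

    { b } :- a.

That is, whenever a holds, b may or may not be included in a stable model; the causal semantics additionally records a as the cause of b when b is chosen.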
The main track of the Thirty-Second International Conference on Logic Programming (ICLP) took place in New York City, USA, from the 18th to the 21st of October 2016. It seems fitting to hold a significant, power-of-two ICLP in New York, because the city has a long and distinguished association with logic programming: XSB was developed at Stony Brook, as was HiLog before it, and SB-Prolog before that. Moreover, Picat was developed at the City University of New York, as were B-Prolog and other logic programming-based systems, such as Ergo. New York has also been (and remains) the cradle of several start-ups based on logic programming.
Tabling is a powerful resolution mechanism for logic programs that captures their least fixed point semantics more faithfully than plain Prolog. In many tabling applications, we are not interested in the set of all answers to a goal, but only require an aggregation of those answers. Several works have studied efficient techniques, such as lattice-based answer subsumption and mode-directed tabling, to do so for various forms of aggregation.
While much attention has been paid to expressivity and efficient implementation of the different approaches, soundness has not been considered. This paper shows that the different implementations indeed fail to produce least fixed points for some programs. As a remedy, we provide a formal framework that generalises the existing approaches and we establish a soundness criterion that explains for which programs the approach is sound.
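As a minimal example of the kind of program these mechanisms target (using SWI-Prolog's mode-directed tabling syntax; other systems differ), shortest-path lengths can be requested declaratively:

    :- table sp(_, _, min).

    edge(a, b). edge(b, c). edge(a, c).

    sp(X, Y, 1) :- edge(X, Y).
    sp(X, Z, C) :- sp(X, Y, C0), edge(Y, Z), C is C0 + 1.

Querying ?- sp(a, c, C). yields only C = 1: the longer answer via b is subsumed by the minimum. The soundness question studied here is whether such aggregated tables always coincide with the least fixed point of the program.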
Symmetry in combinatorial problems is an extensively studied topic. We continue this research in the context of model expansion problems, with the aim of automating the workflow of detecting and breaking symmetry. We focus on local domain symmetry, which is induced by permutations of domain elements and which can be detected at the first-order level. As such, our work is a continuation of the symmetry exploitation techniques of model generation systems, while it differs from more recent symmetry breaking techniques in answer set programming, which detect symmetry on ground programs. Our main contributions are sufficient conditions for symmetry of model expansion problems, the identification of local domain interchangeability, which can often be broken completely, and efficient symmetry detection algorithms both for local domain interchangeability and for local domain symmetry in general. Our approach is implemented in the model expansion system IDP, and we present experimental results showcasing the strong and weak points of our approach compared to sbass, a symmetry breaking technique for answer set programming.
The runtime systems of dynamic languages such as Prolog or Lisp and their derivatives contain a symbol table, in Prolog often called the atom table. A simple dynamically resizing hash table used to be an adequate way to implement this table. As Prolog becomes fashionable for 24×7 server processes, we need to deal with atom garbage collection and concurrent access to the atom table. Classical lock-based implementations for ensuring consistency of the atom table scale poorly, and a stop-the-world approach to atom garbage collection quickly becomes a bottleneck, making Prolog unsuitable for soft real-time applications. In this article we describe a novel implementation of the atom table using lock-free techniques, where the atom table remains accessible even during atom garbage collection. Relying only on CAS (compare-and-swap) and not on external libraries, the implementation is straightforward and portable.
Word puzzles and the problem of their representation in logic languages have received considerable attention in the last decade (Ponnuru et al. 2004; Shapiro 2011; Baral and Dzifcak 2012; Schwitter 2013). Of special interest is the problem of generating such representations directly from natural language (NL) or controlled natural language (CNL). An interesting variation of this problem, and one that, to the best of our knowledge, is scarcely explored in this context, arises when the input information is inconsistent. In such situations, the existing encodings of word puzzles produce inconsistent representations and break down. In this paper, we bring a well-known family of paraconsistent logics, Annotated Predicate Calculus (APC) (Kifer and Lozinskii 1992), to bear on the problem. We introduce a new kind of non-monotonic semantics for APC, called consistency preferred stable models, and argue that it makes APC a suitable platform for dealing with inconsistency in word puzzles and, more generally, in NL sentences. We also devise a number of general principles to help the user choose among the different representations of NL sentences, which might seem equivalent but, in fact, behave differently when inconsistent information is taken into account. These principles can be incorporated into existing CNL translators, such as Attempto Controlled English (ACE) (Fuchs et al. 2008) and PENG Light (White and Schwitter 2009). Finally, we show that APC with the consistency preferred stable model semantics can be equivalently embedded in ASP with preferences over stable models, and we use this embedding to implement this version of APC in Clingo (Gebser et al. 2011) and its Asprin add-on (Brewka et al. 2015).
Programmers currently enjoy access to a very large number of code repositories and libraries of ever-increasing size. The ensuing potential for reuse is, however, hampered by the fact that searching within all this code becomes an increasingly difficult task. Most code search engines are based on syntactic techniques such as signature matching or keyword extraction. However, these techniques are inaccurate (because they basically rely on documentation) and at the same time do not offer very expressive code query languages. We propose a novel approach that focuses on querying for semantic characteristics of code obtained automatically from the code itself. Program units are pre-processed using static analysis techniques, based on abstract interpretation, obtaining safe semantic approximations. A novel, assertion-based code query language is used to express desired semantic characteristics of the code as partial specifications. Relevant code is found by comparing such partial specifications with the inferred semantics for program elements. Our approach is fully automatic and does not rely on user annotations or documentation. It is more powerful and flexible than signature matching because it is parametric on the abstract domain and properties, and does not require type definitions. Also, it reasons with relations between properties, such as implication and abstraction, rather than just equality. It is also more resilient to syntactic code differences. We describe the approach and report on a prototype implementation within the Ciao system.
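For flavour only (a hedged sketch, not the paper's concrete query syntax), a partial specification in the style of Ciao's assertion language might look as follows, where target_pred is a placeholder and sorted/1 stands for an assumed user-defined property:

    % Look for predicates that take a list and deliver a sorted list.
    :- pred target_pred(Xs, Ys) : list(Xs) => (list(Ys), sorted(Ys)).

A match would be any program element whose inferred semantics implies this pre/postcondition pair, even if its name, signature, or documentation bears no syntactic resemblance to the query.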
Answer set programming (ASP) is a well-established logic programming language that offers an intuitive, declarative syntax for problem solving. In its traditional application, a fixed ASP program for a given problem is designed and the actual instance of the problem is fed into the program as a set of facts. This approach typically results in programs with comparably short and simple rules. However, as is known from complexity analysis, such an approach limits the expressive power of ASP; in fact, an entire NP-check can be encoded into a single large rule body of bounded arity that performs both a guess and a check within the same rule. Here, we propose a novel paradigm for encoding hard problems in ASP by making explicit use of large rules which depend on the actual instance of the problem. We illustrate how this new encoding paradigm can be used, providing examples of problems from the first, second, and even third level of the polynomial hierarchy. As state-of-the-art solvers are tuned towards short rules, rule decomposition is a key technique in the practical realization of our approach. We also provide some preliminary benchmarks which indicate that giving up the convenient way of specifying a fixed program can lead to a significant speed-up.
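To make the idea concrete (a standard example of this encoding style, for a fixed instance graph with edges (a,b) and (b,c)), 3-colourability can be both guessed and checked inside one instance-dependent rule body:

    colour(r). colour(g). colour(b).
    % One body variable per vertex; one disequality per edge of the instance.
    ok :- colour(Ca), colour(Cb), colour(Cc), Ca != Cb, Cb != Cc.
    :- not ok.

The rule body grows with the instance rather than the program staying fixed, which is why rule decomposition by the grounder or a preprocessing tool becomes essential in practice.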
In this paper we exploit Answer Set Programming (ASP) for reasoning in a rational extension SROEL(⊓,×)RT of the low complexity description logic SROEL(⊓,×), which underlies the OWL EL ontology language. In the extended language, a typicality operator T is allowed to define concepts T(C) (typical C's) under a rational semantics. It has been proven that instance checking under rational entailment has polynomial complexity. To strengthen rational entailment, in this paper we consider a minimal model semantics. We show that, for arbitrary SROEL(⊓,×)RT knowledge bases, instance checking under minimal entailment is $\Pi^p_2$-complete. Relying on a Small Model result, where models correspond to answer sets of a suitable ASP encoding, we exploit Answer Set Preferences (and, in particular, the asprin framework) for reasoning under minimal entailment.
In recent work we defined resource-based answer set semantics, an extension of answer set semantics stemming from the study of its relationship with linear logic. In fact, the name of the new semantics comes from the fact that in the linear-logic formulation every literal (including negative ones) is treated as a resource. In this paper, we propose a query-answering procedure, reminiscent of Prolog, for answer set programs under this extended semantics, as an extension of XSB-resolution for logic programs with negation. We prove formal properties of the proposed procedure.
Recent work has provided delimited control for Prolog to dynamically manipulate the program control flow and to implement a wide range of control-flow and dataflow effects on top of it. Unfortunately, delimited control is a rather primitive language feature that is not easy to use.
As a remedy, this work introduces algebraic effect handlers for Prolog, as a high-level and structured way of defining new side-effects in a modular fashion. We illustrate the expressive power of the feature and provide an implementation by means of elaboration into the delimited control primitives.
The latter add a non-negligible performance overhead when used extensively. To address this issue, we present an optimised compilation approach that combines partial evaluation with dedicated rewrite rules. The rewrite rules are driven by a lightweight effect inference that analyses what effect operations may be called by a goal. We illustrate the effectiveness of this approach on a range of benchmarks.
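As a hedged sketch of the elaboration target (using SWI-Prolog's reset/3 and shift/1 delimited-control primitives; the predicate names are ours, not the paper's), a state effect and its handler can be written as:

    % Effect operations: suspend the computation and signal the handler.
    get_state(S) :- shift(get(S)).
    put_state(S) :- shift(put(S)).

    % Handler: run Goal, interpreting get/put signals against a threaded state.
    run_state(Goal, S0, S) :-
        reset(Goal, Signal, Cont),
        (   Cont == 0        -> S = S0               % Goal finished normally
        ;   Signal = get(S0) -> run_state(Cont, S0, S)
        ;   Signal = put(S1) -> run_state(Cont, S1, S)
        ).

For example, ?- run_state((get_state(X), Y is X + 1, put_state(Y)), 41, S). binds S = 42. It is exactly this interpretive layer of shift/reset round trips that the partial evaluation and rewrite rules aim to compile away.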