In a recent line of research, two familiar concepts from logic programming semantics (unfounded sets and splitting) were extrapolated to the case of epistemic logic programs. The property of epistemic splitting provides a natural and modular way to understand programs without epistemic cycles but, surprisingly, among the many proposals in the literature it was only fulfilled by Gelfond’s original semantics (G91). On the other hand, G91 may suffer from a kind of self-supported, unfounded derivation when epistemic cycles come into play. Recently, the absence of these derivations was also formalised as a property of epistemic semantics called foundedness, and a first semantics proved to satisfy foundedness was proposed: the so-called Founded Autoepistemic Equilibrium Logic (FAEEL). In this paper, we prove that FAEEL also satisfies the epistemic splitting property, something that, together with foundedness, had not been fulfilled by any other approach to date. To prove this result, we provide an alternative characterisation of FAEEL as a combination of G91 with a simpler logic we call Founded Epistemic Equilibrium Logic (FEEL), which can be seen as an extrapolation of the stable model semantics to the modal logic S5.
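The canonical example behind foundedness, under the usual presentation of G91, is the single-rule epistemic program

$$\Pi = \{\, p \leftarrow \mathbf{K}\, p \,\}.$$

G91 assigns it two world views: $\{\emptyset\}$, in which $p$ is unknown, and $\{\{p\}\}$, in which $p$ is known only through the self-supported cycle via $\mathbf{K}\,p$. Foundedness is precisely the property that rejects the latter.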
Abstract solvers are a method for formally analyzing solving algorithms; they have been profitably used for describing, comparing, and composing solving techniques in various fields such as Propositional Satisfiability (SAT), Quantified SAT, Satisfiability Modulo Theories, Answer Set Programming (ASP), and Constraint ASP.
In this paper, we design, implement, and test novel abstract solutions for cautious reasoning tasks in ASP. We show how to improve the current abstract solvers for cautious reasoning in ASP with new techniques borrowed from backbone computation in SAT, in order to design new solving algorithms. In doing so, we also formally show that the algorithms for solving cautious reasoning tasks in ASP are strongly related to those for computing backbones of Boolean formulas. We implement some of the new solutions in the ASP solver wasp and show that their performance is comparable to that of state-of-the-art solutions on benchmark problems from past ASP Competitions.
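To make the backbone connection concrete, the following is a minimal sketch of the classical iterative backbone algorithm for CNF formulas, written against the pysat library (the choice of pysat and all names here are this illustration's assumptions; the paper's algorithms are built into the ASP solver wasp instead):

```python
# Classical iterative backbone computation for a CNF formula.
from pysat.solvers import Glucose3

def backbone(clauses):
    """Literals true in every model: l is a backbone literal iff
    the formula is satisfiable and clauses + {-l} is unsatisfiable."""
    with Glucose3(bootstrap_with=clauses) as solver:
        if not solver.solve():
            return None                       # unsatisfiable formula
        candidates = set(solver.get_model())  # first model over-approximates
        result = set()
        while candidates:
            lit = candidates.pop()
            if solver.solve(assumptions=[-lit]):
                # some model falsifies lit: keep as candidates only the
                # literals that agree with this new model
                candidates &= set(solver.get_model())
            else:
                result.add(lit)               # lit holds in every model
                solver.add_clause([lit])      # fix it for later calls
        return result

# (x1) and (x1 or x2): x1 is forced, x2 is free.
print(backbone([[1], [1, 2]]))  # {1}
```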
When programs feature a complex control flow, existing techniques for resource analysis produce cost relation systems (CRS) whose cost functions retain the complex flow of the program and, consequently, might not be solvable into closed-form upper bounds. This paper presents a novel approach to resource analysis that is driven by the result of a termination analysis. The fundamental idea is that the termination proof encapsulates those flows of the program that are relevant for the cost computation, so that, by driving the generation of the CRS with the termination proof, we produce a linearly-bounded CRS (LB-CRS). An LB-CRS is composed of cost functions that are guaranteed to be locally bounded by linear ranking functions, which greatly simplifies the process of CRS solving. We have built a new resource analysis tool, named MaxCore, that is guided by the VeryMax termination analyzer and uses CoFloCo and PUBS as CRS solvers. Our experimental results on the set of benchmarks from the Complexity and Termination Competition 2019 for C Integer programs show that MaxCore outperforms all other resource analysis tools.
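As a schematic illustration (not an example from the paper) of what a linearly-bounded cost relation looks like, consider a loop that decrements a counter:

$$C(x) = 0 \ \ \text{if } x \le 0, \qquad\qquad C(x) = 1 + C(x-1) \ \ \text{if } x > 0.$$

The termination proof's linear ranking function $f(x) = x$ bounds the number of unfoldings of the recursive equation, so a CRS solver can immediately derive the closed-form upper bound $C(x) \le \max(0, x)$.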
Magic sets are a Datalog-to-Datalog rewriting technique for optimizing query answering. The rewritten program focuses on the portion of the stable model(s) of the input program that is sufficient to answer the given query. However, the rewriting may introduce new recursive definitions, possibly involving even negation and aggregations, which can slow down program evaluation. This paper enhances the magic set technique by preventing the creation of new recursive definitions in the rewritten program. It turns out that the new version of magic sets is closed for Datalog programs with stratified negation and aggregations, which is very convenient for obtaining efficient computation of the stable model of the rewritten program. Moreover, the rewritten program is further optimized by eliminating subsumed rules and by efficiently handling the cases where binding propagation is lost. The research was stimulated by a challenge on the exploitation of Datalog/dlv for efficient reasoning on large ontologies. All proposed techniques have hence been implemented in the dlv system and tested for ontological reasoning, confirming their effectiveness.
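The flavour of the technique can be seen on the textbook ancestor example. The sketch below shows the classical rewriting (not the enhanced one proposed in the paper) and uses the clingo Python module as a stand-in for dlv, which is an assumption of this illustration:

```python
# Classical magic-sets rewriting of the ancestor program, specialised
# for the query anc(a,Y).
import clingo

facts = "par(a,b). par(b,c). par(d,e)."

original = facts + """
anc(X,Y) :- par(X,Y).
anc(X,Y) :- par(X,Z), anc(Z,Y).
"""

# Magic version for binding pattern anc^bf: m_anc collects the first
# arguments relevant to the query, seeded by the query constant a.
rewritten = facts + """
m_anc(a).
m_anc(Z) :- m_anc(X), par(X,Z).
anc(X,Y) :- m_anc(X), par(X,Y).
anc(X,Y) :- m_anc(X), par(X,Z), anc(Z,Y).
"""

def query_answers(program):
    ctl = clingo.Control()
    ctl.add("base", [], program + " #show anc/2.")
    ctl.ground([("base", [])])
    atoms = set()
    ctl.solve(on_model=lambda m: atoms.update(map(str, m.symbols(shown=True))))
    return {a for a in atoms if a.startswith("anc(a,")}

# Both programs answer anc(a,Y) identically, but the rewritten one never
# derives query-irrelevant facts such as anc(d,e).
print(query_answers(original) == query_answers(rewritten))  # True
```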
In this paper we consider Epistemic Logic Programs, which extend Answer Set Programming (ASP) with “epistemic operators” and “epistemic negation”, and a recent approach to the semantics of such programs in terms of World Views. We make some observations on the existence and number of world views. We show how to exploit an extended ASP semantics in order to: (i) provide a characterization of world views different from existing ones; and (ii) query world views and query the whole set of world views.
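For readers unfamiliar with these constructs, the classic illustration of epistemic operators is Gelfond's interview rule, which schedules an interview whenever a candidate's eligibility is neither known to hold nor known to fail:

$$\mathit{interview}(X) \leftarrow \mathbf{not}\ \mathbf{K}\ \mathit{eligible}(X),\ \mathbf{not}\ \mathbf{K}\ \neg\mathit{eligible}(X).$$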
Abstract Dialectical Frameworks (ADFs) are argumentation frameworks in which each node is associated with an acceptance condition. This makes it possible to model different types of dependencies, such as supports and attacks. Previous studies provided a translation from Normal Logic Programs (NLPs) to ADFs and proved that the stable model semantics of a normal logic program is equivalent to that of the corresponding ADF. However, these studies failed to identify a semantics for ADFs equivalent to three-valued semantics for NLPs (such as partial stable models and well-founded models). In this work, we focus on a fragment of ADFs, called Attacking Dialectical Frameworks (ADF+s), and provide a translation from NLPs to ADF+s robust enough to guarantee the equivalence between the partial stable, well-founded, regular, and stable model semantics for NLPs and, respectively, the complete, grounded, preferred, and stable semantics for the corresponding ADF+s. In addition, we define a new semantics for ADF+s, called L-stable, and show that it is equivalent to the L-stable semantics for NLPs.
Answer Set Programming (ASP) is a well-known declarative formalism in logic programming. Efficient implementations have made it possible to apply ASP in many scenarios, ranging from deductive database applications to the solution of hard combinatorial problems. State-of-the-art ASP systems are based on the traditional ground&solve approach and are general-purpose implementations, i.e., they are essentially built once for any kind of input program. In this paper, we propose an extended architecture for ASP systems in which parts of the input program are compiled into an ad-hoc evaluation algorithm (i.e., we obtain a specific binary for a given program) and need not be subject to the grounding step. To this end, we identify a condition that allows the compilation of a sub-program, and we present the related partial compilation technique. Importantly, we have implemented the new approach on top of a well-known ASP solver and conducted an experimental analysis on publicly available benchmarks. Results show that our compilation-based approach improves on the state of the art in various scenarios, including cases in which the input program is stratified or the grounding blow-up makes the evaluation impractical with traditional ASP systems.
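As a purely schematic illustration of compiling a sub-program into an ad-hoc evaluation algorithm (the names and style are this sketch's, not the paper's compiler output), the rule reach(Y) :- reach(X), edge(X,Y) can be turned into a dedicated semi-naive loop over the facts, with no grounding at all:

```python
# Hand-"compiled" evaluation of: reach(Y) :- reach(X), edge(X,Y).
def eval_reach(edge_facts, reach_facts):
    succ = {}
    for x, y in edge_facts:                  # index edge/2 on first argument
        succ.setdefault(x, set()).add(y)
    reach = set(reach_facts)
    frontier = set(reach_facts)
    while frontier:                          # semi-naive: join only new atoms
        frontier = {y for x in frontier for y in succ.get(x, ())} - reach
        reach |= frontier
    return reach

print(sorted(eval_reach([("a", "b"), ("b", "c"), ("d", "e")], ["a"])))
# ['a', 'b', 'c']
```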
We present a system for online composite event recognition over streaming positions of commercial vehicles. Our system employs a data enrichment module, augmenting the mobility data with external information, such as weather data and proximity to points of interest. In addition, the composite event recognition module, based on a highly optimised logic programming implementation of the Event Calculus, consumes the enriched data and identifies activities that are beneficial in fleet management applications. We evaluate our system on large, real-world data from commercial vehicles, and illustrate its efficiency.
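The core of any Event Calculus dialect is the law of inertia: a fluent holds at a time point if it was initiated earlier and not terminated in between. A toy rendering of that idea follows (illustrative only; the system described above relies on a highly optimised logic programming implementation, not on this sketch):

```python
# Law of inertia: a fluent holds at t if initiated before t and not
# terminated between its last initiation and t.
def holds_at(fluent, t, initiated, terminated):
    """initiated/terminated: sets of (fluent, timestamp) pairs."""
    starts = [ti for f, ti in initiated if f == fluent and ti < t]
    if not starts:
        return False
    last_start = max(starts)
    return not any(f == fluent and last_start <= te < t
                   for f, te in terminated)

initiated = {("moving", 1)}                  # e.g. a vehicle starts moving at 1
terminated = {("moving", 7)}                 # and stops at 7
print(holds_at("moving", 5, initiated, terminated))  # True
print(holds_at("moving", 9, initiated, terminated))  # False
```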
Epistemic Logic Programs (ELPs) extend Answer Set Programming (ASP) with epistemic negation and have received renewed interest in recent years. This has led to new research and to efficient solving systems for ELPs. In practice, ELPs are often written in a modular way, where each module interacts with other modules by accepting sets of facts as input and passing on sets of facts as output. An interesting question then presents itself: under which conditions can such a module be replaced by another one without changing the outcome, for any set of input facts? This problem is known as uniform equivalence and has been studied extensively for ASP. For ELPs, however, such an investigation is as yet missing. In this paper, we therefore propose a characterization of uniform equivalence that can be directly applied to the language of state-of-the-art ELP solvers. We also investigate the computational complexity of deciding uniform equivalence for two ELPs, and show that it lies on the third level of the polynomial hierarchy.
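A classical ASP example conveys the notion being lifted here: the programs $P_1 = \{a \vee b\}$ and $P_2 = \{a \leftarrow \mathit{not}\ b,\ b \leftarrow \mathit{not}\ a\}$ are uniformly equivalent, since they have the same answer sets under any added set of facts, yet they are not strongly equivalent: adding the rules $a \leftarrow b$ and $b \leftarrow a$ gives $P_1$ the answer set $\{a, b\}$ while $P_2$ is left with none.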
Answer Set Programming (ASP) solvers are highly tuned and complex procedures that implicitly solve the consistency problem, i.e., deciding whether a logic program admits an answer set. Verifying that a claimed answer set is indeed an answer set of the program can be done in polynomial time for (normal) programs. However, it is far from immediate to verify that a program claimed to be inconsistent indeed admits no answer set. In this paper, we address this problem and develop the new proof format ASP-DRUPE for propositional disjunctive logic programs, including weight and choice rules. ASP-DRUPE is based on the Reverse Unit Propagation (RUP) format designed for Boolean satisfiability. We establish the correctness of ASP-DRUPE and discuss how to integrate it into modern ASP solvers. Finally, we provide an implementation of ASP-DRUPE in the wasp solver for normal logic programs.
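The underlying RUP check is simple to state: a clause C is a RUP consequence of a formula F if unit propagation on F together with the negation of C yields a conflict. A self-contained sketch for plain CNF follows (ASP-DRUPE itself extends this idea to disjunctive programs with weight and choice rules):

```python
# RUP check for CNF: clauses are lists of non-zero integers,
# an assignment is a set of literals taken to be true.
def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                      # clause already satisfied
            free = [l for l in clause if -l not in assignment]
            if not free:
                return None                   # all literals false: conflict
            if len(free) == 1:
                assignment.add(free[0])       # unit clause forces a literal
                changed = True
    return assignment

def is_rup(clauses, lemma):
    return unit_propagate(clauses, {-l for l in lemma}) is None

# (x1 or x2) and (not x1 or x2) entail x2 by unit propagation alone:
print(is_rup([[1, 2], [-1, 2]], [2]))  # True
```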
Answer Set Programming (ASP) is a well-established formalism for logic programming. Problem solving in ASP requires writing an ASP program whose answer sets correspond to solutions. Although the non-existence of answer sets for some ASP programs can be considered a modeling feature, it turns out to be a weakness in many other cases, especially for query answering. Paracoherent answer set semantics extend the classical semantics of ASP to draw meaningful conclusions also from incoherent programs, thereby increasing the range of applications of ASP. State-of-the-art implementations of paracoherent ASP adopt the semi-equilibrium semantics, but cannot be lifted straightforwardly to compute efficiently the (better) split semi-equilibrium semantics, which discards undesirable semi-equilibrium models. In this paper, an efficient evaluation technique for computing a split semi-equilibrium model is presented. Experiments on hard benchmarks show that better paracoherent answer sets can be computed while consuming fewer computational resources than existing methods.
The input language of the answer set solver clingo is based on the definition of a stable model proposed by Paolo Ferraris. The semantics of the ASP-Core language, developed by the ASP Standardization Working Group, uses the approach to stable models due to Wolfgang Faber, Nicola Leone, and Gerald Pfeifer. The two languages are based on different versions of the stable model semantics, and the ASP-Core document requires, “for the sake of an uncontroversial semantics,” that programs avoid the use of recursion through aggregates. In this paper we prove that the absence of recursion through aggregates does indeed guarantee the equivalence between the two versions of the stable model semantics, and show how that requirement can be relaxed without violating the equivalence property.
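“Recursion through aggregates” refers to the syntactic pattern in which a predicate is defined by an aggregate that itself ranges over that predicate, as in the schematic rule

$$p(a) \leftarrow \#\mathrm{count}\{X : p(X)\} \geq 1.$$

It is this pattern that the ASP-Core requirement rules out, and whose partial re-admission without divergence of the two semantics is what the paper establishes.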
In recent years, abstract argumentation has met with great success in AI, since it has served to capture several non-monotonic logics. Relations between argumentation framework (AF) semantics and logic programming semantics are being investigated more and more. In particular, great attention has been given to the well-known stable extensions of an AF, which are closely related to the answer sets of a logic program. However, if a framework contains even a small incoherent part, it may admit no stable extension. To overcome this shortcoming, two semantics generalizing stable extensions have been studied, namely semi-stable and stage. In this paper, we show that another perspective on incoherent AFs is possible, which we call paracoherent extensions, as they have a counterpart in paracoherent answer set semantics. We compare this perspective with the semi-stable and stage semantics, showing that computational costs remain unchanged and, moreover, that an interesting symmetric behaviour is maintained.
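A brute-force check makes the incoherence phenomenon concrete. This is a sketch under the standard definition only (a stable extension is a conflict-free set attacking every outside argument); the function names are illustrative:

```python
# Enumerate stable extensions of an argumentation framework by brute force.
from itertools import combinations

def stable_extensions(args, attacks):
    """Conflict-free sets of arguments that attack every outside argument."""
    result = []
    for r in range(len(args) + 1):
        for s in map(set, combinations(args, r)):
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            covers_rest = all(any((a, b) in attacks for a in s) for b in args - s)
            if conflict_free and covers_rest:
                result.append(s)
    return result

# Mutual attack between a and b: two stable extensions, {a} and {b}.
print(stable_extensions({"a", "b"}, {("a", "b"), ("b", "a")}))
# Adding a self-attacking argument c: no stable extension remains at all.
print(stable_extensions({"a", "b", "c"}, {("a", "b"), ("b", "a"), ("c", "c")}))
```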
Information extraction is the process of converting unstructured text into a structured database containing selected information from the text. It is an essential step in making the information content of the text usable for further processing. In this paper, we describe how information extraction has changed over the past 25 years, moving from hand-coded rules to neural networks, with a few stops on the way. We connect these changes to research advances in NLP and to the evaluations organized by the US Government.
Balance theory has advanced with interdisciplinary contributions from social science, physical science, engineering, and mathematics. The common focus of attention is social networks in which every individual has either a positive or negative, cognitive or emotional, appraisal of every other individual. The current frontier of work on balance theory is a hunt for a dynamical model that predicts the temporal evolution of any such appraisal network to a particular structure in the complete set of balanced networks allowed by the theory. Finding such a model has proved to be a difficult problem. In this article, we contribute a parsimonious solution of the problem that explicates the conditions under which a network will evolve either to a set of mutually antagonistic cliques or to an asymmetric structure that allows agreement, cooperation, and compromise among cliques.
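For background, the classical Cartwright–Harary balance condition behind the “complete set of balanced networks” can be checked directly: a complete signed network is balanced iff every triangle has a positive product of signs. A small sketch with illustrative names (the article's dynamical model itself is not reproduced here):

```python
# Check structural balance of a complete signed network.
from itertools import combinations

def is_balanced(nodes, sign):
    """sign maps each unordered pair frozenset({i, j}) to +1 or -1."""
    return all(
        sign[frozenset((i, j))] * sign[frozenset((j, k))] * sign[frozenset((i, k))] > 0
        for i, j, k in combinations(nodes, 3)
    )

# Two mutually antagonistic cliques, {1, 2} versus {3}: balanced.
sign = {frozenset((1, 2)): +1, frozenset((1, 3)): -1, frozenset((2, 3)): -1}
print(is_balanced([1, 2, 3], sign))  # True
```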
Automatic text summarisation is a topic that has been receiving attention from the research community since the early days of computational linguistics, but it really took off around 25 years ago. This article presents the main developments from the last 25 years. It starts by defining what a summary is and how its definition has changed over time as a result of the interest in processing new types of documents. The article continues with a brief history of the field and highlights the main challenges posed by the evaluation of summaries. The article finishes with some thoughts about the future of the field.
Network science gathers methods from various disciplines, methods which sometimes hardly cross the boundaries between those disciplines. Widely used in molecular biology in the study of protein interaction networks, the enumeration of all possible subgraphs of limited size (usually around four or five nodes) in a network, often called graphlets, can only be found in a few works dealing with social networks. In the present work, we apply this approach to an original corpus of about 10,000 non-overlapping Facebook ego networks gathered from voluntary participants through a survey application. To deal with so many similar networks, we adapt the relative graphlet frequency into a measure that we call graphlet representativity, which we show to be more effective for classifying random networks with slight structural differences. From our data, we produce two clusterings: one of graphlets (paths, star-like, holes, light triangles, and dense) and one of networks. The latter is presented with a visualization scheme using our representativity measure. We describe the distinct structural characteristics of the five clusters of Facebook ego networks so obtained and discuss the empirical differences between results obtained with 4-node and 5-node graphlets. We also provide suggestions of follow-ups of this work, both in sociology and in network science.
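For orientation, plain relative graphlet frequency, the measure the authors adapt, can be computed by exhaustive enumeration. A sketch using networkx (graphlet representativity itself is the paper's own measure and is not reproduced here):

```python
# Relative frequency of connected k-node induced subgraphs (graphlets).
from itertools import combinations
import networkx as nx

def relative_graphlet_frequency(G, k=4):
    counts = {}
    for nodes in combinations(G.nodes, k):
        sub = G.subgraph(nodes)
        if nx.is_connected(sub):
            # the sorted degree sequence separates the six connected
            # 4-node graphlets; k = 5 would need a real canonical form
            key = tuple(sorted(d for _, d in sub.degree))
            counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {key: c / total for key, c in counts.items()}

# Every 4-node induced subgraph of a 5-cycle is a path:
print(relative_graphlet_frequency(nx.cycle_graph(5)))  # {(1, 1, 2, 2): 1.0}
```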
Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts and added expressiveness. However, integrating unrestricted general recursion in calculi with unified syntax is challenging when some level of type-level computation is present, since properties such as decidable type-checking are easily lost. This paper presents a family of calculi called pure iso-type systems (PITSs), which employ unified syntax, support general recursion and preserve decidable type-checking. PITS is comparable in simplicity to pure type systems (PTSs), and is useful as a foundation for functional languages that stand in between traditional ML-like languages and full-blown dependently typed languages. In PITS, recursion and recursive types are completely unrestricted and type equality is simply based on alpha-equality, just as in traditional ML-style languages. However, like most dependently typed languages, PITS uses unified syntax, naturally supporting many advanced type system features. Instead of implicit type conversion, PITS provides a generalization of iso-recursive types called iso-types. Iso-types replace the conversion rule typically used in dependently typed calculi and make every type-level computation explicit via cast operators. Iso-types avoid the complexity of the explicit equality proofs employed in other approaches with casts. We study three variants of PITS that differ in the reduction strategy employed by the cast operators: call-by-name, call-by-value and parallel reduction. One key finding is that while using call-by-value or call-by-name reduction in casts loses some expressive power, it allows those variants of PITS to have simple and direct operational semantics and proofs. In contrast, the variant of PITS with parallel reduction retains the expressive power of PTS conversion, at the cost of a more complex metatheory.
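As background, the standard iso-recursive typing rules that iso-types generalize make folding and unfolding of a recursive type explicit term-level operations:

$$\frac{\Gamma \vdash e : [X \mapsto \mu X.\,T]\,T}{\Gamma \vdash \mathsf{fold}\,[\mu X.\,T]\;e \;:\; \mu X.\,T}
\qquad
\frac{\Gamma \vdash e : \mu X.\,T}{\Gamma \vdash \mathsf{unfold}\,[\mu X.\,T]\;e \;:\; [X \mapsto \mu X.\,T]\,T}$$

Iso-types replace these two fixed coercions with cast operators that make each type-level computation step explicit, which is what renders the usual conversion rule unnecessary.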
Researchers collecting survey data on inter-organizational networks typically choose a single informant with the most senior job title in each organization from whom to obtain a report of the organization’s inter-organizational ties. This approach to informant selection is based on the logic that greater seniority confers greater knowledge of inter-organizational relationships. The present study investigated the wisdom of this logic, using data in which multiple informants’ reports of inter-organizational network ties were collected for each organization. We calculated the degree of agreement in network reports between the informant with the most senior job title and a second informant in the organization. To determine if alternative criteria to seniority serve as better approaches to informant selection, we assessed other potential predictors of agreement in informants’ reports. Results indicated that (1) informants’ perceptions of the network differed significantly according to job title, suggesting little agreement between senior informants and their more junior colleagues; and (2) greater informant tenure in the network and industry were associated with greater agreement among informants. These results call a common data collection practice into question and suggest that tenure may trump title as a criterion for informant selection in inter-organizational network research.
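One simple way to quantify agreement between two informants' reports of the same organization's ties is Jaccard similarity over the sets of reported ties; this is offered purely as an illustration, since the abstract does not fix the study's actual measure:

```python
# Agreement between two tie reports as Jaccard similarity.
def tie_agreement(report_a, report_b):
    ties_a, ties_b = set(report_a), set(report_b)
    if not (ties_a | ties_b):
        return 1.0                     # both report no ties at all
    return len(ties_a & ties_b) / len(ties_a | ties_b)

senior = {("OrgA", "OrgB"), ("OrgA", "OrgC")}   # hypothetical reports
junior = {("OrgA", "OrgB"), ("OrgA", "OrgD")}
print(round(tie_agreement(senior, junior), 2))  # 1 shared of 3 -> 0.33
```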
This essay discusses rules and semantic clauses relating to Substitution—Leibniz’s law in the conjunctive-implicational form $s \dot{=} t \wedge A(s) \to A(t)$—as these are put forward in Priest’s books In Contradiction and An Introduction to Non-Classical Logic: From If to Is. The stated rules and clauses are shown to be too weak in some cases and too strong in others. New ones are presented and shown to be correct. Justification for the various rules is probed, and it is argued that Substitution ought to fail.
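In a classical proof assistant the contested principle is simply provable, which makes a useful contrast with the paraconsistent setting the essay examines. A minimal Lean 4 rendering:

```lean
-- Leibniz's law in conjunctive-implicational form: from s = t and A(s),
-- conclude A(t). Provable outright in Lean's logic; the essay argues it
-- ought to fail in the paraconsistent setting.
theorem substitution {α : Type} {s t : α} (A : α → Prop)
    (h : s = t) (hs : A s) : A t :=
  h ▸ hs
```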