Unsatisfiable core analysis can boost the computation of optimum stable models for logic programs with weak constraints. However, current solvers employing unsatisfiable core analysis either run to completion or provide no suboptimal stable models other than the one resulting from the preliminary disjoint cores analysis. This drawback is circumvented here by introducing a progression-based shrinking of the analyzed unsatisfiable cores. In fact, suboptimal stable models may be found while shrinking unsatisfiable cores, resulting in an anytime algorithm. Moreover, as confirmed empirically, unsatisfiable core analysis also benefits from the shrinking process in terms of solved instances.
The dlvhex system implements the hex-semantics, which integrates answer set programming (ASP) with arbitrary external sources. Since its first release ten years ago, significant advancements have been achieved. Most importantly, the exploitation of properties of external sources has led to efficiency improvements and flexibility enhancements of the language, while technical improvements on the system side have increased users' convenience. In this paper, we present the current status of the system and point out the most important recent enhancements over early versions. While existing literature focuses on theoretical aspects and specific components, a bird's eye view of the overall system is missing. In order to promote the system for real-world applications, we further present applications which have already been successfully realized on top of dlvhex.
Answer Set Programming (ASP) is a popular logic programming paradigm that has been applied for solving a variety of complex problems. Among the most challenging real-world applications of ASP are two industrial problems defined by Siemens: the Partner Units Problem (PUP) and the Combined Configuration Problem (CCP). The hardest instances of PUP and CCP are out of reach for state-of-the-art ASP solvers. Experiments show that the performance of ASP solvers could be significantly improved by embedding domain-specific heuristics, but a proper effective integration of such criteria in off-the-shelf ASP implementations is not obvious. In this paper the combination of ASP and domain-specific heuristics is studied with the goal of effectively solving real-world problem instances of PUP and CCP. As a byproduct of this activity, the ASP solver wasp was extended with an interface that eases embedding new external heuristics in the solver. The evaluation shows that our domain-heuristic-driven ASP solver finds solutions for all the real-world instances of PUP and CCP ever provided by Siemens.
Modular logic programs provide a way of viewing logic programs as consisting of many independent, meaningful modules. This paper introduces first-order modular logic programs, which can capture the meaning of many answer set programs. We also introduce conservative extensions of such programs. This concept helps to identify strong relationships between modular programs as well as between traditional programs. We show how the notion of a conservative extension can be used to justify the common projection rewriting.
We present an extension of Logic Programming (under the stable models semantics) that not only allows concluding whether a true atom is a cause of another atom, but also deriving new conclusions from these causal-effect relations. This is expressive enough to capture informal rules like “if some agent's actions have been necessary to cause an event E, then conclude the atom caused(agent, E),” something that, to the best of our knowledge, had not been formalised in the literature. To this aim, we start from a first attempt that proposed extending the syntax of logic programs with so-called causal literals. These causal literals are expressions that can be used in rule bodies and allow inspecting the derivation of some atom A in the program with respect to some query function ψ. Depending on how these query functions are defined, we can model different types of causal relations, such as sufficient, necessary or contributory causes. The initial approach was specifically focused on monotonic query functions. This was enough to cover sufficient cause-effect relations but, unfortunately, necessary and contributory causes are essentially non-monotonic. In this work, we define a semantics for non-monotonic causal literals and show that it not only extends the stable model semantics for normal logic programs, but also preserves many of its usual desirable properties for the extended syntax. Using this new semantics, we provide precise definitions of necessary and contributory causal relations and briefly explain their behaviour on a pair of typical examples from the Knowledge Representation literature.
Nowadays, clusters of multicores are becoming the norm and, although many or-parallel Prolog systems have been developed in the past, to the best of our knowledge, none of them was specially designed to exploit the combination of shared and distributed memory architectures. In recent work, we proposed a novel computational model specially designed for such a combination, which introduces a layered model with two scheduling levels: one for workers sharing memory resources, which we call a team of workers, and another for teams of workers (not sharing memory resources). In this work, we present a first implementation of this model, for which we revive and extend the YapOr system to exploit or-parallelism between teams of workers. We also propose a new set of built-in predicates that constitute the syntax for interacting with an or-parallel engine on our platform. Experimental results show that our implementation is able to increase speedups as we increase the number of workers per team, thus taking advantage of the maximum number of cores in a machine, and to increase speedups as we increase the number of teams, thus taking advantage of adding more computer nodes to a cluster.
In recent years, several frameworks and systems have been proposed that extend Inductive Logic Programming (ILP) to the Answer Set Programming (ASP) paradigm. In ILP, examples must all be explained by a hypothesis together with a given background knowledge. In existing systems, the background knowledge is the same for all examples; however, examples may be context-dependent. This means that some examples should be explained in the context of some information, whereas others should be explained in different contexts. In this paper, we capture this notion and present a context-dependent extension of the Learning from Ordered Answer Sets framework. In this extension, contexts can be used to further structure the background knowledge. We then propose a new iterative algorithm, ILASP2i, which exploits this feature to scale up the existing ILASP2 system to learning tasks with large numbers of examples. We demonstrate the gain in scalability by applying both algorithms to various learning tasks. Our results show that, compared to ILASP2, the newly proposed ILASP2i system can be two orders of magnitude faster and use two orders of magnitude less memory, whilst preserving the same average accuracy.
This paper presents CoreALMlib, an $\mathscr{ALM}$ library of commonsense knowledge about dynamic domains. The library was obtained by translating part of the Component Library (CLib) into the modular action language $\mathscr{ALM}$. CLib consists of general, reusable and composable commonsense concepts, selected based on a thorough study of ontological and lexical resources. Our translation targets CLib states (i.e., fluents) and actions. The resulting $\mathscr{ALM}$ library contains the descriptions of 123 action classes grouped into 43 reusable modules that are organized into a hierarchy. It is made available online and is of interest to researchers in the action language, answer-set programming, and natural language understanding communities. We believe that our translation has two main advantages over its CLib counterpart: (i) it specifies axioms about actions in a more elaboration-tolerant and readable way, and (ii) it can be seamlessly integrated with ASP reasoning algorithms (e.g., for planning and postdiction). In contrast, axioms are described in CLib using STRIPS-like operators, and CLib's inference engine can handle neither planning nor postdiction.
Inspiration is a widely recognized phenomenon in everyday life. However, researchers still know very little about what the process of inspiration entails. This paper investigates designers’ approaches when selecting inspirational stimuli during the initial phases of a design process. We conducted a think-aloud protocol study and interviews with 31 design Masters students as they generated ideas for a design problem. The results indicate that searching for and selecting stimuli require different levels of cognitive effort, depending on whether there is unlimited or limited access to stimuli. Furthermore, three important stages of the inspiration process were identified: keyword definition, stimuli search and stimuli selection. For each of these stages, we elaborate on how designers define keywords, which search approaches they use and what drives their selection of stimuli. This paper contributes to a more detailed understanding of how designers can be supported in their inspiration process.
This paper presents the pl-nauty library, a Prolog interface to the nauty graph-automorphism tool. Adding the capabilities of nauty to Prolog combines the strength of the “generate and prune” approach that is commonly used in logic programming and constraint solving with the ability to reduce symmetries while reasoning over graph objects. Moreover, it enables the integration of nauty into existing tool-chains, such as the SAT solvers or finite-domain constraint compilers which exist for Prolog. The implementation consists of two components: pl-nauty, an interface connecting nauty's C library with Prolog, and pl-gtools, a Prolog framework integrating the software component of nauty, called gtools, with Prolog. The complete tool is available as a SWI-Prolog module. We provide a series of usage examples, including two that apply the tool to generate Ramsey graphs.
Management of chronic diseases such as chronic heart failure (CHF) is a major problem in health care. A standard approach followed by the medical community is to have a committee of experts develop guidelines that all physicians should follow. These guidelines typically consist of a series of complex rules that make recommendations based on a patient's information. Due to their complexity, the guidelines are often ignored or not complied with at all. It is not even clear whether it is humanly possible to follow these guidelines, given their length and complexity; for instance, the CHF guidelines run nearly eighty pages. In this paper we describe a physician-advisory system for CHF management that codes the entire set of clinical practice guidelines for CHF using answer set programming (ASP). Our approach is based on developing reasoning templates, which we call knowledge patterns, and using them to systematically code the clinical guidelines for CHF as ASP rules. Use of the knowledge patterns greatly facilitates the development of our system. Given a patient's medical information, our system generates a recommendation for treatment just as a human physician would, using the guidelines. Our system works even in the presence of incomplete information.
This comprehensive text provides a modern and technically precise exposition of the fundamental theory and applications of temporal logics in computer science. Part I presents the basics of discrete transition systems, including constructions and behavioural equivalences. Part II examines the most important temporal logics for transition systems and Part III looks at their expressiveness and complexity. Finally, Part IV describes the main computational methods and decision procedures for model checking and model building - based on tableaux, automata and games - and discusses their relationships. The book contains a wealth of examples and exercises, as well as an extensive annotated bibliography. Thus, the book is not only a solid professional reference for researchers in the field but also a comprehensive graduate textbook that can be used for self-study as well as for teaching courses.
This chapter presents preliminaries on set-theoretical notions, binary relations, linear orderings, fixpoint theory and computational complexity classes. Mainly, we provide notations for standard notions rather than giving a thorough introduction to these notions. More definitions are provided in the book, and we invite the reader to consult textbooks on these subjects for further information. For instance, in Moschovakis (2006) any reader can find material about set-theoretical notions, ordinals or fixpoints far beyond what is sketched in this chapter. Still, we implicitly assume that the reader has basic set-theoretic background. As stated already in Chapter 1, we do not intend to teach this material here but rather to recall the most basic notions, terminology and notation. The current chapter is included for the convenience of the reader as a quick reference.
Structure of the chapter. The chapter is divided into two sections. The first section contains standard material on sets and relations. Section 2.1.1 presents standard set-theoretical notions that are used throughout the book. Binary relations are ubiquitous structures in this volume, and Section 2.1.2 is dedicated to standard definitions about them. In Section 2.1.3, we provide basic definitions about partial and linear orders.
The second section contains material that is more specialised and needed for the development of a theory of and algorithms for temporal logics. Section 2.2.1 presents the basics of fixpoint theory; Chapter 8, which deals with the modal μ-calculus with fixpoint operators, uses some of the results stated herein. In Section 2.2.2, we recall standard complexity classes defined via deterministic and nondeterministic time- and space-bounded Turing machines. Other classes, in particular involving alternating Turing machines, are discussed in Chapter 11. Section 2.2.3 provides an introduction to 2-player zero-sum games of perfect information that are useful, for instance, in defining the game-theoretic approach to temporal logics.
Sets and Relations
Operations on Sets
Throughout this book we use the standard notations for set-theoretical notions: membership (∈), inclusion (⊆), strict (or proper) inclusion (⊂), union of sets (∪), intersection of sets (∩), difference of sets (\) and product of sets (×). The empty set is denoted by ∅.
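As a quick illustration of this notation, the same operations are available on finite sets in most programming languages; the following Python sketch (ours, not part of the book) pairs each symbol with its Python counterpart.

```python
# Illustrative only: the standard set operations on two small finite sets.
A, B = {1, 2, 3}, {2, 3, 4}

print(2 in A)      # membership:          2 ∈ A      -> True
print(A <= A | B)  # inclusion:           A ⊆ A ∪ B  -> True
print(A & B)       # intersection:        A ∩ B      -> {2, 3}
print(A - B)       # difference:          A \ B      -> {1}
# product A × B as the set of all ordered pairs; |A × B| = |A| · |B|
print(len({(a, b) for a in A for b in B}) == len(A) * len(B))  # -> True
```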
This fourth and last part of the book provides algorithmic methods for the main decision problems that come with temporal logics: satisfiability, validity and model checking. Model checking is typically easier, particularly for branching-time logics, and therefore admits simpler solutions that have been presented in the chapters of Part II already. Since temporal logics are usually closed under complementation, satisfiability and validity are very closely related and methods dealing with one of them can easily be used to solve the other, so we will not consider them separately. Indeed, in order to check a formula φ for validity, one can check ¬φ for satisfiability and invert the result since φ is valid iff ¬φ is unsatisfiable. Satisfiability is reducible to validity likewise. Furthermore, a satisfiability-checking procedure would typically yield not only the answer but also, in the positive case, a model witnessing the satisfiability of the input formula. Such an interpreted transition system would refute validity of ¬φ, i.e. be a countermodel for its validity. Hence, the focus of this part is on satisfiability checking.
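The reduction between validity and satisfiability described above can be made concrete for propositional formulae. The following Python sketch (an illustration of ours, not code from the book) checks validity of φ by testing ¬φ for satisfiability over all truth assignments; formulae are represented as Python functions over assignments.

```python
from itertools import product

def satisfiable(formula, atoms):
    """Return a satisfying assignment of `formula`, or None if unsatisfiable."""
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if formula(assignment):
            return assignment
    return None

def valid(formula, atoms):
    """phi is valid iff not-phi is unsatisfiable."""
    negation = lambda a: not formula(a)
    return satisfiable(negation, atoms) is None

# p -> p is valid; p alone is satisfiable but not valid.
print(valid(lambda a: (not a["p"]) or a["p"], ["p"]))  # True
print(valid(lambda a: a["p"], ["p"]))                  # False
```

Note that, in the positive case for satisfiability, the procedure also returns a witnessing assignment, mirroring the remark above that a satisfiability checker for ¬φ yields a countermodel for the validity of φ.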
The methods presented here are closely linked to Chapter 11, which provided lower bounds on the computational complexity of these decision problems, i.e. it explained how difficult these problems are from a computational perspective. The following chapters provide the missing halves to an exact analysis of temporal logics’ computational complexities: by estimating the time and space consumption that these methods need in order to check for satisfiability, satisfaction, etc., we obtain upper bounds on these decision problems. Thus, while Chapter 11 showed how hard at least these problems are, the following chapters show how hard they are at most, by presenting concrete algorithmic solutions for these decision problems.
The methods presented in the following three chapters are in fact methodologies in the sense that each chapter introduces a particular framework for obtaining methods for certain temporal logics. Each of these frameworks – tableaux, automata and games – has its own characteristics, strengths and weaknesses and may or may not be particularly suited for particular temporal logics. Axiomatic systems, presented in the respective chapters of Part II, provide an alternative methodology that historically appeared first, but they can only be used to establish validity (resp. nonsatisfiability) when it is the case, and provide no answer otherwise, so they are not really decision methods.
This chapter is a brief introduction to the basic multimodal logic BML, interpreted as the simplest natural temporal logic for reasoning about transition systems. Indeed, transition systems are nothing but Kripke frames and interpreted transition systems are simply Kripke models, so a standard Kripke semantics is provided for a multimodal language with modal operators □a and ◊a, associated with each transition relation Ra. These bear natural meaning in interpreted transition systems, stating what must be true in all (respectively, what may be true at some) Ra-successors of the current state. In order to emphasise these readings of the modal operators, we will use a notation that is unusual for modal logic, but is more suitable in the context of temporal logics: AXa (read as for all paths starting from the current state, at the next state) and EXa (for some path starting from the current state, at the next state). Thus, BML is the minimal natural logical language to specify local properties of transition systems.
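The semantics of EXa and AXa can be sketched directly as operations on sets of states. The following Python fragment is an illustrative encoding of ours (the transition system, state names and atoms are invented), not code from the book.

```python
# A tiny interpreted transition system: a family of transition relations R_a
# indexed by action labels, and a labelling of states with atomic propositions.
transitions = {
    "a": {(0, 1), (0, 2), (1, 1)},   # R_a as a set of (state, successor) pairs
}
label = {0: set(), 1: {"p"}, 2: {"p"}}

def successors(rel, state):
    return {t for (s, t) in transitions[rel] if s == state}

def EX(rel, prop):
    """States with SOME R_a-successor satisfying prop."""
    return {s for s in label if any(t in prop for t in successors(rel, s))}

def AX(rel, prop):
    """States ALL of whose R_a-successors satisfy prop (vacuously true if none)."""
    return {s for s in label if all(t in prop for t in successors(rel, s))}

p_states = {s for s in label if "p" in label[s]}
print(EX("a", p_states))  # {0, 1}
print(AX("a", p_states))  # {0, 1, 2}  (state 2 has no successors)
```

The locality of BML mentioned above is visible here: each application of EX or AX looks exactly one transition step ahead.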
Since the chapter is written from the primary perspective of transition systems, rather than from a modal logic perspective, we have put an emphasis on certain topics such as expressiveness, bisimulation, model checking, the finite model property and deciding satisfiability, while other fundamental topics in modal logic – such as deductive systems and proof theory, model theory, correspondence theory, algebraic semantics and duality theory – are left almost untouched here. Of all deductive systems developed for modal logics we only mention the axiomatic system for BML here and present a version of the tableau-based method for it in Chapter 13; for the rest we only provide basic references in the bibliographic notes.
This chapter can also be viewed as a stepping stone towards the more expressive and interesting temporal logics that are presented further.
Structure of this chapter. Section 5.1 presents the syntax and semantics of BML. The relational translation from BML into first-order logic (FO) is also presented, emphasising the fact that BML can be viewed as a fragment of classical first-order predicate logic. Section 5.2 presents some techniques for renaming and transforming BML formulae into equisatisfiable ones in a certain normal form of modal depth two.
We have noted that the basic modal logic BML suffers from the deficiency of not being able to make assertions about connectivity, i.e. every BML formula can only ‘look’ up to a certain depth into a transition system. This is of course not enough for many purposes, and this is why richer formalisms like reachability logic TLR and the computation tree logic CTL have been introduced. As we demonstrated in Chapter 7, these logics possess temporal operators which directly translate such assertions into the syntax of a logic. As we showed in Section 7.1.5, all temporal operators in CTL added on top of BML have simple and elegant characterisations in terms of least or greatest fixpoint solutions to certain equations.
The modal μ-calculus Lμ uses this idea as a general principle in order to add expressive power to the basic modal logic BML. It features only two additional syntactic constructs: a least and a greatest fixpoint operator. Thus, it differs from the other logics studied here in that the fixpoint character of a formula is made explicit. This has pros and cons: it allows any least or greatest fixpoint solution of an equation expressed with basic modal logic to be defined; on the other hand, this results in a far less intuitive syntax when compared to the other temporal logics.
These two aspects determine the role that the modal μ-calculus plays in the world of temporal logics. The generic use of fixpoint quantifiers gives it relatively high expressive power; many other temporal logics can be embedded into the modal μ-calculus. The explicit use of fixpoint quantifiers comes with a generic instruction for doing model checking using fixpoint iteration. Such algorithms can then be specialised to the embedded temporal logics. Thus, the modal μ-calculus is often called the backbone of temporal logics.
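Model checking by fixpoint iteration, as mentioned above, can be sketched in a few lines. The example below is our illustration (the transition system is invented): it computes the least fixpoint of Z ↦ p ∨ EX Z, i.e. the CTL property EF p, by iterating from the empty set until stabilisation.

```python
# States from which p is reachable, computed as  mu Z. p or EX Z  (CTL's EF p).
states = {0, 1, 2, 3}
edges = {(0, 1), (1, 2), (3, 3)}   # a single transition relation
p_states = {2}                      # states where p holds

def EX(prop):
    """States with some successor in prop."""
    return {s for s in states if any((s, t) in edges and t in prop for t in states)}

def lfp(f):
    """Iterate f from the empty set until a fixpoint is reached."""
    z = set()
    while True:
        nxt = f(z)
        if nxt == z:
            return z
        z = nxt

ef_p = lfp(lambda z: p_states | EX(z))
print(sorted(ef_p))  # [0, 1, 2]: state 3 only loops on itself and never reaches p
```

Each iteration extends the set by the states that can reach p in one more step, so the iteration converges after at most |states| rounds on a finite system.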
Structure of the chapter. In Section 8.1 we start by introducing the concept of fixpoint quantification, which leads to the formal logic called the modal μ-calculus. We show early on that CTL can be embedded into it, since this translation is helpful in understanding the use of fixpoint quantifiers for the specification of temporal behaviour.
The transitions in the transition systems that we have studied so far are primitive and abstract objects. The nature of the possible transitions between states and the mechanisms that generate and determine them have not been essential in the context of the linear and branching time temporal logics we have studied in the previous chapters.
Here we introduce and study a more involved type of transition system, called concurrent game structures, for modelling scenarios that typically arise in open or multiagent systems. An open system is a system (computer, device, agent) interacting with an environment. The properties and behaviour of such open systems can be modelled by 2-player games. More generally, a multiagent system may involve several interacting (possibly, cooperating or competing) agents, each pursuing their own goals or acting randomly (like nature or the environment). Concurrent game structures are special types of multiagent transition systems, where the transitions are determined by tuples of simultaneous actions performed by a fixed set of agents. Various logical formalisms extending linear and branching-time temporal logics can be used for the specification, verification and reasoning about dynamic properties of open and multiagent systems.
In this chapter we present and study concurrent game structures as models for the family of so-called alternating-time temporal logics. These are the most popular and influential logics for strategic reasoning in multiagent systems and are multiagent versions of branching-time temporal logics, which correspond to closed or single-agent systems in a sense that we will discuss in the chapter.
Structure of the chapter. We introduce concurrent multiagent transition systems and models in Section 9.1 and then the logics ATL* and ATL in Section 9.2. We then discuss model checking and satisfiability testing for these logics in Section 9.3. As usual, the chapter ends with exercises and bibliographic notes.
Concurrent Multiagent Transition Systems
Concurrent Game Structures and Models
We begin with a motivating example. Figure 9.1 depicts two transition systems. The one on top involves two robots, Robot1 and Robot2, and a carriage. There are three different positions of the carriage, denoted by states s0, s1 and s2. Each robot has two possible actions at any of the states: push and wait. Robot1 can only push the carriage in clockwise direction, whereas Robot2 can only push it in anticlockwise direction.
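To make the example concrete, the transition function of this concurrent game structure can be sketched as follows. This is an illustrative encoding of ours: the arithmetic net-effect encoding and the assumption that a clockwise push moves s0 to s1 are ours, not taken from Figure 9.1.

```python
# Carriage example as a concurrent game structure: the next state is
# determined by the TUPLE of simultaneous actions of the two robots.
STATES = ["s0", "s1", "s2"]

def step(state, action1, action2):
    """Robot1 pushes clockwise (+1), Robot2 anticlockwise (-1); 'wait' does
    nothing. Simultaneous pushes in opposite directions cancel out."""
    effect = (1 if action1 == "push" else 0) - (1 if action2 == "push" else 0)
    return STATES[(STATES.index(state) + effect) % 3]

print(step("s0", "push", "wait"))   # s1: only Robot1 pushes, clockwise
print(step("s0", "push", "push"))   # s0: the two pushes cancel out
print(step("s0", "wait", "push"))   # s2: only Robot2 pushes, anticlockwise
```

The key feature of concurrent game structures is visible here: neither robot alone determines the successor state; only the joint action tuple does.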