We will leave it to the researchers of the next ten years and beyond to decide which of the results presented in this book are the most important. Here we will only indicate what are, in our opinion, the main open problems and research directions, among those that are closely related to the topics of this book.
Algorithmic applications
This topic, studied in Chapter 6, is the most difficult to discuss because of the large number of new articles published every year. Hence, the following comments have a good chance of becoming rapidly obsolete.
The algorithmic consequences of the Recognizability Theorem depend crucially on algorithms that construct graph decompositions of the relevant types or, equivalently, that construct terms over the signatures F_HR and F_VR which evaluate to the input graphs or relational structures. The efficiency of such algorithms, whether for all graphs or, perhaps more promisingly, for particular classes of graphs, is a bottleneck for applications.
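As a concrete illustration (in a standard clique-width notation whose details may differ from the book's signature F_VR), the triangle K_3 is the value of the following term, where ⊕ denotes disjoint union, η_{a,b} adds all edges between a-labelled and b-labelled vertices, ρ_{b→a} relabels b into a, and a, b create single labelled vertices:

    \eta_{a,b}\bigl(\rho_{b \to a}\bigl(\eta_{a,b}(a \oplus b)\bigr) \oplus b\bigr)

The algorithmic task is to recover such a term from an input graph given only by its vertices and edges.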
Another difficulty is due to the sizes of the automata that result from the “compilation” of monadic second-order formulas. Their “cosmological” sizes (cf. [FriGro04], [StoMey], [Wey]) make their construction intractable for general formulas. However, for certain formulas arising from concrete problems of hardware and software verification, the automata remain manageable, as reported by Klarlund et al., who developed the MONA software for that purpose (cf. [Hen+], [BasKla]). Soguet [Sog] tried using MONA for graphs of bounded tree-width and clique-width, but even for basic graph properties such as connectivity or 3-colorability, the automata become too large to be constructed as soon as one wishes to handle graphs of clique-width more than 3.
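To give an idea of the formulas concerned, 3-colorability is expressed by the following monadic second-order sentence (one standard formulation, with edg denoting the adjacency relation):

    \exists X_1 \exists X_2 \exists X_3 \, \Bigl( \forall x\,(x \in X_1 \lor x \in X_2 \lor x \in X_3) \;\land\; \bigwedge_{i=1}^{3} \forall x\, \forall y\,\bigl(\mathit{edg}(x,y) \Rightarrow \lnot(x \in X_i \land y \in X_i)\bigr) \Bigr)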
Monadic second-order transductions are transformations of relational structures specified by monadic second-order formulas. They can be used to represent transformations of graphs and related combinatorial structures via appropriate representations of these objects by relational structures, as shown in the examples discussed in Section 1.7.1.
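A simple example is edge complementation: the monadic second-order transduction that keeps the domain unchanged and redefines adjacency by the formula

    \mathit{edg}'(x,y) \;:\Longleftrightarrow\; x \neq y \,\land\, \lnot\,\mathit{edg}(x,y)

maps every loop-free undirected graph to its complement. (The notation is schematic; the full definition schemes have further components.)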
These transductions are important for several reasons. First, they are useful tools for constructing monadic second-order formulas, with the help of the Backwards Translation Theorem (Theorem 7.10). Second, by means of the Equationality Theorems (Theorems 7.36 and 7.51), they yield logical characterizations of the HR- and VR-equational sets of graphs that are independent of the signatures F_HR and F_VR. From these characterizations, we get short proofs that certain sets of graphs have bounded, or unbounded, tree-width or clique-width.
They also play the role of transducers in formal language theory: the image of a VR-equational set of graphs under a monadic second-order transduction is VR-equational, and a similar result holds for HR-equational sets and monadic second-order transductions that transform incidence graphs. Finally, the decidability of the monadic second-order satisfiability problem for a set of structures C implies the decidability of the same problem for its image τ(C) under a monadic second-order transduction τ, because the Backwards Translation Theorem reduces the satisfiability of a sentence in some structure of τ(C) to the satisfiability of an effectively computable sentence in some structure of C. Hence, monadic second-order transductions make it possible to relate decidability and undecidability results concerning monadic second-order satisfiability problems for graphs and relational structures.
Section 7.1 presents the definitions and the fundamental properties. Section 7.2 is devoted to the Equationality Theorem for the VR algebra, one of the main results of this book.
This book contributes to several fields of Fundamental Computer Science. It extends to finite graphs several central concepts and results of Formal Language Theory, and it establishes their relationship to results about Fixed-Parameter Tractability. These developments and results have applications in Structural Graph Theory. They make essential use of logic for expressing graph problems in a formal way and for specifying graph classes and graph transformations. We will start by giving the historical background to these contributions.
Formal Language Theory
This theory has been developed with different motivations. Linguistics and compilation were among the first, around 1960. In view of the applications to these fields, different types of grammars, automata and transducers were defined to specify formal languages, i.e., sets of words, and transformations of words called transductions, in finitary ways. The formalization of the semantics of sequential and parallel programming languages, which uses program schemes and traces respectively, the modeling of biological development and yet other applications have motivated the study of new objects, in particular of sets of terms. These objects and their specifying devices have since been investigated from a mathematical point of view, independently of immediate applications. However, all these investigations have been guided by three main types of questions: comparison of descriptive power, closure properties (with effective constructions in case of positive answers) and decidability problems.
A context-free grammar generates words, hence specifies a formal language. However, each generated word has a derivation tree that represents its structure relative to the considered grammar.
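For instance, the grammar with the rules

    S → a S b
    S → ε

generates the language { aⁿbⁿ | n ≥ 0 }; the derivation tree of the word aabb records that it is obtained by applying the first rule twice and then the second rule once.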
In the early stages of developing a programming language or paradigm, the focus is on programming-in-the-small. As the language matures, programming-in-the-large becomes important, and a second modules language is often imposed on the previously existing core language. This second language must support the partitioning of code and name spaces into manageable chunks, the enforcement of encapsulation and information hiding, and the interactions between separately defined blocks of code. The addition of such modularity features typically manifests itself syntactically in the form of new constructs and directives, such as local, use, import, and include, that affect parsing and compilation. Since the second language is born out of the necessity of building large programs, there may be little or no connection between the semantics of the added modular constructs and the semantics of the core language. The resulting hybrid language may consequently become complex and may also lack declarativeness, even when the core language is based on, say, logic.
In the logic programming setting, it is possible to support some of the abstractions needed for modular programming directly through logical mechanisms. For example, the composition of code can be realized naturally via the conjunction of program clauses, and suitably scoped existential quantifiers can be used to control the visibility of names across program regions. This chapter develops this observation into the design of a specific module language.
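As a foretaste, here is a minimal sketch in λProlog's concrete module syntax (the module and predicate names are only illustrative): the module exports reverse, while the local declaration hides its accumulator-based helper from clients, in the manner of a scoped existential quantifier over the name rev_aux. A signature file for the module would repeat the declaration of reverse alone.

    module revlist.
    type  reverse  list A -> list A -> o.            % exported relation
    local rev_aux  list A -> list A -> list A -> o.  % hidden helper

    reverse L K :- rev_aux L nil K.
    rev_aux nil K K.
    rev_aux (X :: L) A K :- rev_aux L (X :: A) K.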
This book is about the nature and benefits of logic programming in the setting of a higher-order logic. We provide in this Introduction a perspective on the different issues that are relevant to a discussion of these topics. Logic programming is but one way in which logic has been used in recent decades to understand, specify, and effect computations. In Section I.1, we categorize the different approaches that have been employed in connecting logic with computation, and we use this context to explain the particular focus we will adopt. The emphasis in this book will be on interpreting logic programming in an expressive way. A key to doing so is to allow for the use of an enriched set of logical primitives while preserving the essential characteristics of this style of specification and programming. In Section I.2, we discuss a notion of expressivity that supports our later claims that some of the logic programming languages that we present are more expressive than others. The adjective “higher order” has been applied to logic in the past in a few different ways, one of which might even raise concern about our plan to use such a logic to perform computations. In Section I.3, we sort these uses out and make clear the kind of higher-order logic that will interest us in subsequent chapters. Section I.4 explains the style of presentation that we follow in this book: Broadly, our goal is to show how higher-order logic can influence programming without letting the discussion devolve into a formal presentation of logic or a description of a particular programming language. The last two sections discuss the prerequisites expected of the reader and the organization of the book.
We have presented sample λProlog programs to illustrate various computations throughout this book. Being able to execute and experiment with those programs should help the reader understand the λProlog programming language and the logic underlying it. To that end, this appendix presents a short introduction to the Teyjus implementation of λProlog. This system can be freely downloaded over the web. The various programs presented in the earlier chapters are also available in electronic form from the website associated with this book.
An overview of the Teyjus system
The Teyjus implementation of λProlog is based on two components. One component is an emulator of an abstract or virtual machine whose instruction set and runtime system realize all the high-level computations implicit in a λProlog program. The second component is a compiler that translates λProlog programs into the instructions of the abstract machine.
Another important aspect of the Teyjus system is that it uses the modules language discussed in Chapter 6. A programmer, therefore, must organize the kind and type declarations and the clauses into modules and then attach signatures to such modules in order to mediate their external view. The compiler is responsible for taking a given module of λProlog code, certifying its internal consistency, ensuring that it satisfies its associated signature, and, finally, translating it into a byte-code form. This byte-code form consists of a “header” part containing constant and type names and other related data structures as well as a sequence of instructions that can be run on the virtual machine once it has processed the header information. A critical part of the emulator is a loader that can read in such byte-code files and put the emulator in a state where it is ready to respond to user queries. The other part of the emulator is, of course, a byte-code interpreter that steps through instructions in the manner called for by the user input.
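Concretely, with the tools in the Teyjus distribution, the workflow for a module named revlist is roughly the following (a sketch; the command names are those of Teyjus 2, and details vary between versions):

    tjcc revlist      # compile revlist.mod against revlist.sig into byte code
    tjlink revlist    # link the module with the modules it accumulates
    tjsim revlist     # load the linked byte code and enter the query loop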
Chapter 1 discussed the use of first-order terms to represent data. This chapter describes logic programming over such representations using a typed variant of first-order Horn clauses. We begin this presentation by developing a view of logic programming that will allow us to introduce extensions smoothly in later chapters, leading eventually to the full set of logical features that underlie the λProlog language. From this perspective, we will take this paradigm of programming to have two defining characteristics. First, languages within the paradigm provide a relational approach to programming. In particular, relations over data descriptions are defined or axiomatized through formulas that use logical connectives and quantifiers. Second, the paradigm views computation as a search process. In the approach underlying λProlog, this view is realized by according each logical symbol a fixed search-related interpretation. These interpretations lead, in turn, to specific programming capabilities.
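Both characteristics are visible in a small standard example: append, sketched below, is defined not as a function but as a relation between three lists, and a query such as append L K (1 :: 2 :: nil) is answered by searching for instantiations of L and K, yielding the three ways of splitting the list.

    type append  list A -> list A -> list A -> o.

    append nil K K.
    append (X :: L) K (X :: M) :- append L K M.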
The first two sections that follow provide a more detailed exposition of a general framework for logic programming along the lines just sketched. The rest of the chapter is devoted to presenting first-order Horn clauses as a specific elaboration of this framework.
First-order formulas
The first step toward allowing for the description of relations over objects represented by first-order terms is to ease a restriction on signatures: We permit the target types of constants to be ο. Constants that have this type are called relation or predicate symbols. Well-formed first-order expressions are otherwise constructed in the same fashion as that described in Section 1.3. Expressions that have the type ο in this setting are referred to as first-order atomic formulas.
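For example, under a (hypothetical) signature containing the declarations below, adj and path are predicate symbols, and the expression adj a b is a first-order atomic formula of type ο:

    kind  node  type.
    type  a, b  node.
    type  adj   node -> node -> o.
    type  path  node -> node -> o.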
Formal systems in computer science frequently involve specifications of computations over syntactic structures such as λ-terms, π-calculus expressions, first-order formulas, types, and proofs. This book is concerned, in part, with using higher-order logic to express such specifications. Properties are often associated with expressions by formal systems via syntax-based inference rules. Examples of such descriptions include presentations of typing and operational semantics. Logic programming, with its orientation around rule-based specifications, provides a natural framework for encoding and animating these kinds of descriptions. Variable binding is integral to most syntactic expressions, and its presence typically translates into side conditions accompanying inference rules. While many of the concepts related to binding, such as variable renaming, substitution, and scoping, are logically well understood, their treatment at a programming level is surprisingly difficult. We show here that a programming language based on a simply typed version of higher-order logic provides an elegant approach to performing computations over structures embodying binding.
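A canonical illustration is the typing relation of the simply typed λ-calculus: once terms are represented in λ-tree syntax, the two typing rules become the two clauses sketched below (the constructor names are illustrative). The freshness side condition of the rule for abstractions is handled entirely by the universal quantifier pi and the implication =>, with no explicit machinery for renaming.

    kind  tm   type.
    kind  ty   type.
    type  app  tm -> tm -> tm.          % application
    type  abs  ty -> (tm -> tm) -> tm.  % typed abstraction, in lambda-tree syntax
    type  arr  ty -> ty -> ty.          % arrow (function) types
    type  of   tm -> ty -> o.           % the typing relation

    of (app M N) B :- of M (arr A B), of N A.
    of (abs A R) (arr A B) :- pi x\ (of x A => of (R x) B).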
The agenda just described has a prerequisite: We must be able to make sense of a higher-order logic as a programming language. This is a nontrivial task, and it defines a second theme that permeates this book. Usual developments of logic programming are oriented around formulas in clausal form, with resolution as the sole inference rule. Sometimes a semantics-based presentation is also used, typically built around the idea of minimal (Herbrand) models.
This chapter considers the encoding of a process calculus within a higher-order logic programming language. Process calculi have been proposed in the literature as a means for modeling concurrent systems. The π-calculus in particular makes use of a sophisticated binding mechanism to encode communication between processes. Our goal here is to show that such binding mechanisms can be treated naturally using λ-tree syntax in λProlog. Since we do not discuss the π-calculus itself in any detail, a reader will probably need prior exposure to this calculus to best appreciate the nuances of our encodings. However, our primary focus is on showing how a presentation of a formal system can be transformed into a complete and logically precise description in λProlog and how such a description can be used computationally. Thus a reader who has understood the earlier chapters should also be able to follow our development and perhaps will learn something about the π-calculus from it.
The first two sections of this chapter describe an abstract syntax representation for processes in the π-calculus and the specification of the standard transition relation over such processes. A highlight of this specification is that the transition rules are encoded in a completely logical fashion through the use of λ-tree syntax: The usual side conditions involving names are captured completely using binders and their mobility. Sections 11.3 and 11.4 discuss how our encoding can be used in analyzing computational behavior. This discussion also illuminates shortcomings of the logic programming setting in specifying what is known as the must behavior of processes. The last section further illustrates our approach to abstract syntax by presenting a translation of the λ-calculus, under a call-by-name evaluation semantics, into the π-calculus.
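To indicate the flavor of this representation (the constructor names are illustrative and may differ from those used in the chapter), name binding in input prefixes and in restriction is delegated to the abstraction of the metalanguage:

    kind  name  type.
    kind  proc  type.
    type  null  proc.                             % the inactive process 0
    type  par   proc -> proc -> proc.             % parallel composition P | Q
    type  out   name -> name -> proc -> proc.     % output prefix: send a name on a channel, then continue
    type  in    name -> (name -> proc) -> proc.   % input prefix: the continuation abstracts the received name
    type  nu    (name -> proc) -> proc.           % restriction: the body abstracts the fresh name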