This book contributes to several fields of Fundamental Computer Science. It extends to finite graphs several central concepts and results of Formal Language Theory and establishes their relationship to results about Fixed-Parameter Tractability. These developments and results have applications in Structural Graph Theory. They make essential use of logic for expressing graph problems in a formal way and for specifying graph classes and graph transformations. We will start by giving the historical background to these contributions.
Formal Language Theory
This theory has been developed with different motivations. Linguistics and compilation were among the first, around 1960. In view of the applications to these fields, different types of grammars, automata, and transducers have been defined to specify, in finitary ways, formal languages, i.e., sets of words, and transformations of words called transductions. The formalization of the semantics of sequential and parallel programming languages, which uses program schemes and traces respectively, the modeling of biological development, and yet other applications have motivated the study of new objects, in particular of sets of terms. These objects and their specifying devices have since been investigated from a mathematical point of view, independently of immediate applications. However, all these investigations have been guided by three main types of questions: comparison of descriptive power, closure properties (with effective constructions in case of positive answers), and decidability problems.
A context-free grammar generates words, hence specifies a formal language. However, each generated word has a derivation tree that represents its structure relative to the considered grammar.
In the early stages of developing a programming language or paradigm, the focus is on programming-in-the-small. As the language matures, programming-in-the-large becomes important and a second modules language is often imposed on the previously existing core language. This second language must support the partitioning of code and name spaces into manageable chunks, the enforcement of encapsulation and information hiding, and the interactions between separately defined blocks of code. The addition of such modularity features typically is manifest syntactically in the form of new constructs and directives, such as local, use, import, and include, that affect parsing and compilation. Since the second language is born out of the necessity to build large programs, there may be little or no connection between the semantics of the added modular constructs and the semantics of the core language. The resulting hybrid language consequently may become complex and also may lack declarativeness, even when the core language is based on, say, logic.
In the logic programming setting, it is possible to support some of the abstractions needed for modular programming directly through logical mechanisms. For example, the composition of code can be realized naturally via the conjunction of program clauses, and suitably scoped existential quantifiers can be used to control the visibility of names across program regions. This chapter develops this observation into the design of a specific module language.
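To give a concrete flavor of such a module language, the following sketch shows a signature/module pair in λProlog's notation; the `stack` module and its contents are a hypothetical illustration rather than code from the text.

```
sig stack.

kind  stack  type.
type  empty  stack.
type  push   int -> stack -> stack.
type  pop    stack -> int -> stack -> o.
```

```
module stack.

pop (push X S) X S.
```

Only the names declared in the signature are visible to code that uses the module; any constant declared solely inside the module file remains hidden, which is how scoping mechanisms of the logic realize encapsulation.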
This book is about the nature and benefits of logic programming in the setting of a higher-order logic. We provide in this Introduction a perspective on the different issues that are relevant to a discussion of these topics. Logic programming is but one way in which logic has been used in recent decades to understand, specify, and effect computations. In Section I.1, we categorize the different approaches that have been employed in connecting logic with computation, and we use this context to explain the particular focus we will adopt. The emphasis in this book will be on interpreting logic programming in an expressive way. A key to doing so is to allow for the use of an enriched set of logical primitives while preserving the essential characteristics of this style of specification and programming. In Section I.2, we discuss a notion of expressivity that supports our later claims that some of the logic programming languages that we present are more expressive than others. The adjective “higher order” has been applied to logic in the past in a few different ways, one of which might even raise concern about our plan to use such a logic to perform computations. In Section I.3, we sort these uses out and make clear the kind of higher-order logic that will interest us in subsequent chapters. Section I.4 explains the style of presentation that we follow in this book: Broadly, our goal is to show how higher-order logic can influence programming without letting the discussion devolve into a formal presentation of logic or a description of a particular programming language. The last two sections discuss the prerequisites expected of the reader and the organization of the book.
We have presented sample λProlog programs to illustrate various computations throughout this book. Being able to execute and experiment with those programs should help the reader understand the λProlog programming language and the logic underlying it. To that end, this appendix presents a short introduction to the Teyjus implementation of λProlog. This system can be freely downloaded over the web. The various programs presented in the earlier chapters are also available in electronic form from the website associated with this book.
An overview of the Teyjus system
The Teyjus implementation of λProlog is based on two components. One component is the emulator of an abstract or virtual machine that has an instruction set and runtime system that realizes all the high-level computations implicit in a λProlog program. The second component is a compiler that translates λProlog programs into the instructions of the abstract machine.
Another important aspect of the Teyjus system is that it uses the modules language discussed in Chapter 6. A programmer, therefore, must organize the kind and type declarations and the clauses into modules and then attach signatures to such modules in order to mediate their external view. The compiler is responsible for taking a given module of λProlog code, certifying its internal consistency, ensuring that it satisfies its associated signature, and finally, translating it into a byte-code form. This byte-code form consists of a “header” part containing constant and type names and other related data structures as well as a sequence of instructions that can be run on the virtual machine once it has understood the header information. A critical part of the emulator is a loader that can read in such byte-code files and put the emulator in a state where it is ready to respond to user queries. The other part of the emulator is, of course, a byte-code interpreter that steps through instructions in the manner called for by the user input.
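Concretely, a compile-link-run cycle with Teyjus looks roughly as follows. The tool names come from the Teyjus distribution; the module name `stack` is a placeholder, and exact options and file extensions may differ between releases.

```shell
tjcc stack     # compile stack.mod against stack.sig, emitting byte code
tjlink stack   # link the compiled module(s) into a loadable image
tjsim stack    # load the byte code and enter the interactive query loop
```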
Chapter 1 discussed the use of first-order terms to represent data. This chapter describes logic programming over such representations using a typed variant of first-order Horn clauses. We begin this presentation by developing a view of logic programming that will allow us to introduce extensions smoothly in later chapters, leading eventually to the full set of logical features that underlie the λProlog language. From this perspective, we will take this paradigm of programming to have two defining characteristics. First, languages within the paradigm provide a relational approach to programming. In particular, relations over data descriptions are defined or axiomatized through formulas that use logical connectives and quantifiers. Second, the paradigm views computation as a search process. In the approach underlying λProlog, this view is realized by according to each logical symbol a fixed search-related interpretation. These interpretations lead, in turn, to specific programming capabilities.
The first two sections that follow provide a more detailed exposition of a general framework for logic programming along the lines just sketched. The rest of the chapter is devoted to presenting first-order Horn clauses as a specific elaboration of this framework.
First-order formulas
The first step toward allowing for the description of relations over objects represented by first-order terms is to ease a restriction on signatures: We permit the target types of constants to be ο. Constants that have this type are called relation or predicate symbols. Well-formed first-order expressions are otherwise constructed in the same fashion as that described in Section 1.3. Expressions that have the type ο in this setting are referred to as first-order atomic formulas.
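For instance, natural numbers and an addition relation over them can be declared and axiomatized as follows; this is a standard textbook-style example in the spirit of the book, not a program drawn from the text.

```
kind  nat   type.

type  z     nat.
type  s     nat -> nat.
type  plus  nat -> nat -> nat -> o.

plus z N N.
plus (s M) N (s P) :- plus M N P.
```

Here `plus` has target type ο and so is a predicate symbol, while `z` and `s` are ordinary first-order constructors; a query such as `plus (s z) (s z) X` is then answered by search over these clauses.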
Formal systems in computer science frequently involve specifications of computations over syntactic structures such as λ-terms, π-calculus expressions, first-order formulas, types, and proofs. This book is concerned, in part, with using higher-order logic to express such specifications. Properties are often associated with expressions by formal systems via syntax-based inference rules. Examples of such descriptions include presentations of typing and operational semantics. Logic programming, with its orientation around rule-based specifications, provides a natural framework for encoding and animating these kinds of descriptions. Variable binding is integral to most syntactic expressions, and its presence typically translates into side conditions accompanying inference rules. While many of the concepts related to binding, such as variable renaming, substitution, and scoping, are logically well understood, their treatment at a programming level is surprisingly difficult. We show here that a programming language based on a simply typed version of higher-order logic provides an elegant approach to performing computations over structures embodying binding.
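As a small illustration of this approach, the untyped λ-calculus can be represented by a signature in which object-level binding is mapped onto meta-level abstraction; the sketch below follows the general λ-tree syntax style the book describes.

```
kind  tm   type.

type  app  tm -> tm -> tm.      % object-level application
type  abs  (tm -> tm) -> tm.    % object-level abstraction: the body is a
                                % meta-level function over terms
```

With these declarations, the object-level term λx. x x is written `abs (x\ app x x)`, and variable renaming and capture-avoiding substitution are inherited from the meta language rather than programmed by hand.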
The agenda just described has a prerequisite: We must be able to make sense of a higher-order logic as a programming language. This is a nontrivial task that defines a second theme that permeates this book. Usual developments of logic programming are oriented around formulas in clausal form with resolution as the sole inference rule. Sometimes a semantics-based presentation is also used, expanding typically into the idea of minimal (Herbrand) models.
This chapter considers the encoding of a process calculus within a higher-order logic programming language. Process calculi have been proposed in the literature as a means for modeling concurrent systems. The π-calculus in particular makes use of a sophisticated binding mechanism to encode communication between processes. Our goal here is to show that such binding mechanisms can be treated naturally using λ-tree syntax in λProlog. Since we do not discuss the π-calculus itself in any detail, a reader probably would need a prior exposure to this calculus to best appreciate the nuances of our encodings. However, our primary focus is on showing how a presentation of a formal system can be transformed into a complete and logically precise description in λProlog and how such a description can be used computationally. Thus a reader who has understood the earlier chapters also should be able to follow our development and perhaps will learn something about the π-calculus from it.
The first two sections of this chapter describe an abstract syntax representation for processes in the π-calculus and the specification of the standard transition relation over such processes. A highlight of this specification is that the transition rules are encoded in a completely logical fashion through the use of λ-tree syntax: The usual side conditions involving names are captured completely using binders and their mobility. Sections 11.3 and 11.4 discuss how our encoding can be used in analyzing computational behavior. This discussion also illuminates shortcomings of the logic programming setting in specifying what is known as the must behavior of processes. The last section further illustrates our approach to abstract syntax by showing the translation of a mapping of the λ-calculus under a call-by-name evaluation semantics into the π-calculus.
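To give a flavor of the representation, a fragment of a π-calculus process syntax can be declared as follows; the constructor names are illustrative rather than an exact transcription of the chapter's signature.

```
kind  name  type.
kind  proc  type.

type  par  proc -> proc -> proc.            % P | Q, parallel composition
type  out  name -> name -> proc -> proc.    % x<y>.P, output y on channel x
type  in   name -> (name -> proc) -> proc.  % x(y).P, the bound y is a
                                            % meta-level abstraction
type  nu   (name -> proc) -> proc.          % (nu x)P, restriction as a binder
```

Because `in` and `nu` take meta-level abstractions as arguments, the side conditions about names that usually decorate the transition rules become conditions on binders and their mobility.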
The previous chapters have dealt with logic programming in the context of first-order logic. We are now interested in moving the discussion to the setting of a higher-order logic. The particular logic that we will use for this purpose is one based on the simply typed λ-calculus, generalized to allow for a form of polymorphic typing. This underlying calculus has several nontrivial computational characteristics that themselves merit discussion. We undertake this task in this chapter, delaying the presentation of the higher-order logic and the logic programming language based on it until Chapter 5.
The first two sections of this chapter describe the syntax of the simply typed λ-calculus and an equality relation called λ-conversion that endows the expressions of this calculus with a notion of functionality. The λ-conversion operation brings with it considerable computational power. We discuss this aspect in Section 4.3. In the logic programming setting, λ-conversion will not be deployed directly as a computational device but instead will be used indirectly in the course of solving unification problems between λ-terms. A discussion of this kind of unification, commonly called higher-order unification, is the focus of the second half of this chapter. Section 4.4 presents a general format for such problems, introduces terminology relating to them, and tries to develop intuitions about the solutions to these problems. Section 4.5 begins to develop the structure for a procedure that might be used to solve higher-order unification problems; this discussion is incomplete and meant only as a prelude to the more detailed treatment of higher-order unification that appears in Chapter 8.
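A standard example conveys why such problems differ from the first-order case. Suppose $a$ has type $i$, $g$ has type $i \to i \to i$, and $F$ is a variable of type $i \to i$; then the problem of unifying $F\,a$ with $g\,a\,a$ has four incomparable solutions:

```latex
F \mapsto \lambda x.\, g\, x\, x \qquad
F \mapsto \lambda x.\, g\, x\, a \qquad
F \mapsto \lambda x.\, g\, a\, x \qquad
F \mapsto \lambda x.\, g\, a\, a
```

Each of these substitutions makes $F\,a$ β-convert to $g\,a\,a$, so a higher-order unification procedure must in general enumerate or delay such alternatives rather than compute a single most general unifier.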
The treatment of programs as objects is a theme common to systems such as interpreters, compilers, and program transformers. These systems typically use an abstract representation of programs that they then manipulate in accordance with the syntax-directed operational semantics of the underlying programming language. The λProlog language can capture such representation and manipulation of programs in a succinct and declarative manner. We illustrate this strength of λProlog by considering various computations over programs in a simple but representative functional language. In the first section we describe this language through its λ-tree syntax; we assume that the reader is sufficiently familiar with functional programming notions to be able to visualize a corresponding concrete syntax. In Section 10.2 we present two different specifications of evaluation with respect to this language. In Section 10.3 we consider the encoding of some transformations on programs that are driven by an analysis of their syntactic structure.
The miniFP programming language
The functional programming language that we use in this illustration is called miniFP. While miniFP is a typed language, in its encoding we initially treat its programs as being untyped: We later introduce a language of types and consider a program to be proper only if a type can be associated with it.
The core of the language of program expressions, then, is the untyped λ-calculus. We use the type tm for these expressions, and we encode them in the manner described in Section 7.1.2 for this calculus, with the difference that we use the symbol @ instead of app to represent the application of two expressions, and we write @ as an infix operator.
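The declarations underlying this encoding can thus be sketched as follows; the precedence level given to @ is illustrative.

```
kind  tm   type.

type  abs  (tm -> tm) -> tm.   % object-level abstraction
type  @    tm -> tm -> tm.     % object-level application, written infix
infixl @ 4.
```

With these declarations the object-level term λx. x x is written `abs (x\ x @ x)`, which is somewhat closer to the concrete syntax of miniFP programs than the `app`-based spelling.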