Operational (or “procedural”) semantics, as I mentioned in the Introduction, are used to provide characterisations of programming languages which meet certain “computational” criteria: giving a detailed description of the language for implementation purposes, and giving a computational model to which programmers can refer.
For logic programming, operational semantics are particularly important because it is in them that the innovations of logic programming lie. The notions of resolution and unification are not immediately apparent; unification, though defined by Herbrand in his thesis [44], was virtually ignored until Prawitz's work [62], and resolution was not defined until 1965 [66]. These notions must be explained within the context of a full description of the computational model of the language.
If we want to do such things as soundness and completeness proofs, or indeed any formal comparison of the operational semantics to other characterisations of the language, the operational semantics must also be mathematically precise – for instance, in the form of a formal system. (Plotkin [58] has explored the idea of structured operational semantics in detail, and gives a taxonomy to which I will refer in this chapter.) SLD-resolution [49], SLDNF-resolution [50], and the operational semantics in this chapter are just a few examples of formal operational semantics for logic programming. Other examples include Voda's tree-rewriting system [76], Deransart and Ferrand's [29] and Börger's [13] standardisation efforts, and the abstract computation engines for Andorra Prolog [43] and the “Pure Logic Language”, PLL [10, 52].
Although we have proven some useful completeness theorems about the proof systems in the last two chapters, we have not been able to prove absolute completeness: that every valid sequent is derivable. Because of some formal incompleteness results, we will never be able to prove such a completeness theorem, for any finitary proof system; but there are several ways in which we can, at least partially, escape the effect of these incompleteness results. In this chapter, I present the incompleteness theorems and some of the partial solutions.
There are two main incompleteness results, as discussed in the first section below. The first says that we will never be able to derive all valid closed sequents which have signed formulae in negative contexts, and follows from the non-existence of a solution to the Halting Problem. (We can deal with many of the important cases of this result by adding extra rules which I will describe.) The second result says that we will never be able to derive all valid sequents with free variables, even if they have no signed formulae in negative contexts, and is a version of Gödel's Incompleteness Theorem.
The “mathematical” solution to these problems is to bring the proof theory closer to a kind of model theory, by allowing infinitary elements into the proof systems. Though these are not adequate solutions for practical theorem proving, they are useful in that they shed light on the extent to which the proof systems in question are complete.
In this chapter a number of existence proofs and theoretical discussions are presented. These are related to the earlier chapters, but were not presented there in order not to distract too much from the main line of those chapters. Sections 9.2 and 9.3 are related to Chapter 1. Sections 9.4 and 9.5 are related to Chapters 2 and 3, respectively. Finally, Sections 9.6 and 9.7 are related to Chapter 5.
Undefinedness revisited
In this section we explain precisely how the truth and falsity of COLD-K assertions with respect to the partial many-sorted algebras is established. In particular the issue of undefinedness deserves a careful treatment. In this section we focus on the terms and assertions as presented in Chapter 1 (see Tables 1.1 and 1.2).
Recall that a partial many-sorted Σ-algebra M is a system of carrier sets SM (one for each sort name S in Σ), partial functions fM (one for each function name f in Σ), and relations rM (one for each relation name r in Σ). The functions fM must be compatible with their typing in the following sense: if f : S1 × … × Sm → V1 × … × Vn is in Σ, we have that fM is a partial function from S1M × … × SmM to V1M × … × VnM. Similarly the predicates must be compatible with their typing, i.e. if r : S1 × … × Sm is in Σ, we have that rM is a relation on S1M × … × SmM.
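As a concrete illustration (ours, not part of the text), such an algebra can be modeled in a few lines of Python: carriers as sets, a partial function as a dictionary that is simply absent where the function is undefined, and a relation as a set of tuples. The sort Nat and the operations pred and even are assumed examples, not COLD-K constructs.

```python
# Illustrative sketch of a partial many-sorted algebra (assumed example):
# one sort Nat, one partial function pred, one unary relation even.
carriers = {"Nat": set(range(10))}               # carrier set Nat_M

# pred_M is a *partial* function: undefined at 0, modeled by a dict.
pred = {n: n - 1 for n in carriers["Nat"] if n > 0}

# even_M is a relation on Nat_M, modeled as a set of (unary) tuples.
even = {(n,) for n in carriers["Nat"] if n % 2 == 0}

def apply_partial(f, x):
    """Apply a partial function; return None where it is undefined."""
    return f.get(x)

print(apply_partial(pred, 3))    # 2
print(apply_partial(pred, 0))    # None: pred is undefined at 0
print((4,) in even, (5,) in even)
```

Undefinedness is thus represented explicitly: applying pred at 0 yields no value at all rather than some error value, which is the behaviour the semantics of COLD-K assertions must account for.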
In this appendix a concrete syntax for COLD-K is defined. It is concerned with the full language, including the constructs presented in Chapters 10 and 11. The notions of term, expression and statement are integrated into a single syntactical category called <expression>. We give an (extended) BNF grammar defining a set of strings of ASCII characters which are used as concrete representations of the COLD-K constructs.
For user convenience, the language allows redundant type information to be omitted. In an applied occurrence of a name the associated type information is generally omitted (otherwise the use of names would become very clumsy). Though in many situations the missing type information can be reconstructed from the context, there are situations where ambiguities may occur. We leave it to the parser to report such ambiguities; there is a special syntactic operator (the cast) to disambiguate the type of an expression.
In the concrete syntax defined here, prefix, infix and postfix notations are used for the built-in operators of the language. For user-defined operators (predicates, functions, procedures) only a prefix notation is provided. The main reason for not introducing infix, postfix or mixfix notations for the latter is simplicity. The possibility of defining special notations for user-defined operators is typical of user-oriented versions of COLD, which can be defined as a kind of macro-extension of COLD-K.
Concrete syntax
We define the concrete syntax of COLD-K by means of a context-free grammar together with priority and associativity rules for the built-in operators. Below we shall define the lexical elements (tokens).
Chapters 6 and 7 deal with expansion calculi that exploit the fact that a design specification is ground confluent. First, directed expansion restricts paramodulation to left-to-right applications of prefix extensions of equational axioms (cf. Chapter 4). Narrowing (cf. Sect. 7.2) goes a step further and confines the input of expansion rules to pure axioms. Reductive expansion provides an alternative to inductive expansion, which originates from the idea of proving inductive validity by proving consistency (cf. Sect. 3.4) and reducing consistency to ground confluence (cf. Sects. 7.4 and 7.5).
Variables of a Horn clause that occur in its premise but not in (the left-hand side of) its conclusion, called fresh variables, are usually banished as soon as one turns to a reduction calculus. This restriction cannot be maintained when arbitrary declarative programs are to be treated: if one follows the decomposition principle (cf. Sect. 2.6), fresh variables are created automatically. They are usually forbidden because they violate the standard condition that there be a Noetherian reduction ordering, i.e., that the reduction calculus admit only finite derivations. We shall see in Sect. 6.2 that other conditions on a reduction ordering can be weakened so as to preserve the Noetherian property in many cases even when fresh variables are permitted.
This chapter is about setting up flat algebraic specifications. This involves the introduction of more COLD-K constructs and the formulation of various methodological guidelines. At the end of the previous chapter we had to conclude that we only almost succeeded in specifying the natural numbers: the one remaining problem was that we lacked the expressive power to state the minimality of Nat. This expressive power will become available once we have introduced the inductive predicate definitions below. We shall complete the example of the natural numbers and investigate various technical aspects of inductive definitions, which unfortunately are quite non-trivial. In addition to inductive predicate definitions, we shall also have inductive function definitions. We address issues such as ‘proof obligations’ for inductive definitions, consistency and completeness. Finally we give a number of complete examples of flat algebraic specifications: queues, stacks, bags and symbolic expressions.
Inductive predicate definitions
An inductive predicate definition defines a predicate as the least predicate satisfying some assertion (provided that this predicate exists). Before turning our attention to the syntactic machinery available in COLD-K for expressing this, we ought to explain this notion of ‘least’. Therefore we shall formulate what we mean by one predicate being ‘less than or equal to’ another predicate.
Definition. A predicate r is less than or equal to a predicate q if for each argument x we have that r(x) implies q(x).
We illustrate this definition by means of two unary predicates p1 and p2. We assume the sort Nat with its operations zero and succ as specified before.
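The ordering on predicates can also be sketched in a few lines of Python (an illustration of ours, not COLD-K notation; we choose, purely for illustration, p1 to hold of multiples of four and p2 to hold of even numbers, so that p1 is less than or equal to p2 but not conversely).

```python
# Hypothetical instances of the ordering on predicates: every multiple of
# four is even, so p1(x) implies p2(x) for all x, i.e. p1 <= p2.
def p1(x): return x % 4 == 0
def p2(x): return x % 2 == 0

def leq(r, q, domain):
    """r <= q  iff  r(x) implies q(x) for every x in the (finite) domain."""
    return all((not r(x)) or q(x) for x in domain)

nats = range(100)           # finite stand-in for the sort Nat
print(leq(p1, p2, nats))    # True:  p1 <= p2
print(leq(p2, p1, nats))    # False: p2(2) holds but p1(2) does not
```

Note that the check only makes sense over a fixed domain; in the semantics this role is played by the carrier set of the sort, which need not be finite.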
The cut calculus for Horn clauses is simple, but rather inefficient as the basis of a theorem prover. To prove a goal γ via this calculus means to derive γ from axioms (those of the specification and congruence axioms for equality predicates) using CUT and SUB (cf. Sect. 1.2). In contrast, the inference rules resolution (cf. [Rob65]) and paramodulation (cf. [RW69]) allow us to start out from γ and apply axioms for transforming γ into the empty goal ∅. The actual purpose of resolution and paramodulation is to compute goal solutions (cf. Sect. 1.2): If γ can be transformed into ∅, then γ is solvable. The derivation process involves constructing a solution f, and ∅ indicates the validity of γ[f].
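To make the contrast concrete, the following is a minimal propositional sketch (ours; real resolution also performs unification and substitution) of how input resolution starts from a goal and applies program clauses until the empty goal ∅ is reached. The example program and its atom names are assumed for illustration.

```python
# Propositional sketch of input resolution: a goal is a set of atoms; a
# step replaces an atom by the premise (body) of a program clause whose
# conclusion (head) matches it.  Reaching the empty goal proves the goal.
program = {                      # assumed example program: head <= body
    "path_ac": {"edge_ab", "path_bc"},
    "path_bc": {"edge_bc"},
    "edge_ab": set(),            # facts have empty bodies
    "edge_bc": set(),
}

def expand(goal):
    """True iff the goal can be reduced to the empty goal via the program."""
    if not goal:
        return True              # empty goal: the original goal is proved
    atom = next(iter(goal))
    rest = goal - {atom}
    body = program.get(atom)
    if body is None:
        return False             # no clause has this atom as conclusion
    return expand(rest | body)   # replace the atom by the clause's premise

print(expand({"path_ac"}))       # True: reduces to the empty goal
print(expand({"unknown"}))       # False: no applicable clause
```

In the full calculus each step additionally records a substitution, and the composition of these substitutions along a successful derivation yields the computed solution f mentioned above.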
A single derivation step from γ to δ via resolution or paramodulation proves the clause γ[g]⇐δ for some g. Since γ[g] is the conclusion of a Horn clause which, if viewed as a logic program, is expanded (into γ), we call such derivations expansions. More precisely, the rules are input resolution and input paramodulation, where one of the two clauses to be transformed stems from an “input” set of axioms or, in the case of inductive expansion (cf. Chapter 5), from arbitrary lemmas or induction hypotheses.
While input resolution is always “solution complete”, input paramodulation has this property only if the input set includes all functionally-reflexive axioms, i.e., equations of the form Fx≡Fx (cf., e.g., [Höl89]).