In this chapter a number of existence proofs and theoretical discussions are presented. They are related to the earlier chapters, but were not presented there so as not to distract from the main line of those chapters. Sections 9.2 and 9.3 are related to Chapter 1, and Sections 9.4 and 9.5 are related to Chapters 2 and 3, respectively. Finally, Sections 9.6 and 9.7 are related to Chapter 5.
Undefinedness revisited
In this section we explain precisely how the truth and falsity of COLD-K assertions with respect to partial many-sorted algebras are established. In particular, the issue of undefinedness deserves careful treatment. We focus on the terms and assertions as presented in Chapter 1 (see Tables 1.1 and 1.2).
Recall that a partial many-sorted Σ-algebra M is a system of carrier sets SM (one for each sort name S in Σ), partial functions fM (one for each function name f in Σ), and relations rM (one for each relation name r in Σ). The functions fM must be compatible with their typing in the following sense: if f : S1 × … × Sm → V1 × … × Vn is in Σ, we have that fM is a partial function from S1M × … × SmM to V1M × … × VnM. Similarly the predicates must be compatible with their typing, i.e. if r : S1 × … × Sm is in Σ, we have that rM is a relation on S1M × … × SmM.
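To make this definition concrete, here is a small Python sketch (added for illustration, not part of the original text) of a partial many-sorted algebra with a single sort Nat, a constant zero, a partial function pred and a relation lss; carriers are modelled as Python sets, partial functions as dictionaries and relations as sets of tuples. All names are hypothetical.

```python
# Hedged sketch of a partial many-sorted algebra M for a tiny signature
# with sort Nat, functions zero and pred, and relation lss (all hypothetical).

# Carrier set NatM for the sort Nat (finite here, for illustration only).
NatM = {0, 1, 2, 3}

# Partial functions are modelled as dictionaries mapping argument tuples to
# results; a missing key means the function is undefined on that argument.
zeroM = {(): 0}                                   # zero :     -> Nat
predM = {(n,): n - 1 for n in NatM if n > 0}      # pred : Nat -> Nat, undefined on 0

# Relations are modelled as sets of argument tuples.
lssM = {(m, n) for m in NatM for n in NatM if m < n}   # lss : Nat x Nat

def apply(fM, *args):
    """Apply a partial function; return None where it is undefined."""
    return fM.get(tuple(args))

print(apply(predM, 3))   # 2
print(apply(predM, 0))   # None: pred is undefined on 0
print((1, 2) in lssM)    # True
```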
In this appendix a concrete syntax for COLD-K is defined. It covers the full language, including the constructs presented in Chapters 10 and 11. The notions of term, expression and statement are integrated into a single syntactic category called <expression>. We give an (extended) BNF grammar defining a set of strings of ASCII characters which are used as concrete representations of the COLD-K constructs.
For user convenience, redundant type information may be omitted. In applied occurrences of a name the associated type information is generally omitted (otherwise the use of names would become very clumsy). Although in many situations the missing type information can be reconstructed from the context, ambiguities may occur. We leave it to the parser to report such ambiguities; there is a special syntactic operator (the cast) to disambiguate the type of an expression.
In the concrete syntax defined here, prefix, infix and postfix notations are used for the built-in operators of the language. For the user-defined operators (predicates, functions, procedures) only a prefix notation is provided. The main reason for not introducing infix, postfix or mixfix notations for the latter is simplicity. The possibility of defining special notations for user-defined operators is typical of user-oriented versions of COLD, which can be defined as a kind of macro-extension of COLD-K.
Concrete syntax
We define the concrete syntax of COLD-K by means of a context-free grammar together with priority and associativity rules for the built-in operators. Below we define the lexical elements (tokens).
Chapters 6 and 7 deal with expansion calculi that exploit the fact that a design specification is ground confluent. First, directed expansion restricts paramodulation to left-to-right applications of prefix extensions of equational axioms (cf. Chapter 4). Narrowing (cf. Sect. 7.2) goes a step further and confines the input of expansion rules to pure axioms. Reductive expansion provides an alternative to inductive expansion; it originates from the idea of proving inductive validity by proving consistency (cf. Sect. 3.4) and reducing consistency to ground confluence (cf. Sects. 7.4 and 7.5).
Variables of a Horn clause that occur in its premise but not in (the left-hand side of) its conclusion are called fresh variables. They are usually banished as soon as one turns to a reduction calculus, because they violate the usual requirement of a Noetherian reduction ordering, i.e. the requirement that the reduction calculus admits only finite derivations. This restriction cannot be maintained when arbitrary declarative programs are to be treated: if one follows the decomposition principle (cf. Sect. 2.6), fresh variables are created automatically. We shall see in Sect. 6.2 that other conditions on a reduction ordering can be weakened so as to preserve the Noetherian property in many cases even if fresh variables are permitted.
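As a small, hypothetical illustration (not taken from the text), the following Python sketch represents a Horn clause as a conclusion atom plus a list of premise atoms and computes its fresh variables; in the transitivity clause le(x,z) ⇐ le(x,y) ∧ le(y,z) the premise variable y is fresh.

```python
# Hedged sketch: an atom is a (predicate name, argument list) pair; arguments
# that are single lower-case letters are treated as variables here.

def variables(atom):
    _, args = atom
    return {a for a in args if isinstance(a, str) and len(a) == 1 and a.islower()}

def fresh_variables(conclusion, premise):
    """Premise variables that do not occur in the conclusion of the clause."""
    prem_vars = set()
    for atom in premise:
        prem_vars |= variables(atom)
    return prem_vars - variables(conclusion)

# Transitivity clause  le(x, z) <= le(x, y) & le(y, z):  y is fresh.
conclusion = ("le", ["x", "z"])
premise = [("le", ["x", "y"]), ("le", ["y", "z"])]
print(fresh_variables(conclusion, premise))   # {'y'}
```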
This chapter is about setting up flat algebraic specifications. This involves the introduction of more COLD-K constructs and the formulation of various methodological guidelines. At the end of the previous chapter we had to conclude that we almost succeeded in specifying the natural numbers, the only problem being that we lacked the expressive power to express the minimality of Nat. This expressive power will be available once we have introduced the inductive predicate definitions below. We shall complete the example of the natural numbers and investigate various technical aspects of inductive definitions, which unfortunately are quite non-trivial. In addition to inductive predicate definitions, we shall also have inductive function definitions. We address issues such as 'proof obligations' for inductive definitions, consistency and completeness. Finally we give a number of complete examples of flat algebraic specifications: queues, stacks, bags and symbolic expressions.
Inductive predicate definitions
An inductive predicate definition defines a predicate as the least predicate satisfying some assertion (provided that this predicate exists). Before turning our attention to the syntactic machinery available in COLD-K for expressing this, we ought to explain this notion of ‘least’. Therefore we shall formulate what we mean by one predicate being ‘less than or equal to’ another predicate.
Definition. A predicate r is less than or equal to a predicate q if for each argument x we have that r(x) implies q(x).
We illustrate this definition by means of two unary predicates p1 and p2. We assume the sort Nat with its operations zero and succ as specified before.
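To make the comparison concrete, the following Python sketch (an added illustration, not from the text) checks the ordering for two unary predicates on a finite fragment of Nat and then computes, by fixpoint iteration, the least predicate satisfying a simple assertion; the predicates p1 and p2 and the assertion used here are hypothetical examples.

```python
# Hedged sketch on a finite fragment of Nat (for illustration only).
NAT = range(0, 20)

def less_or_equal(r, q, domain=NAT):
    """r is less than or equal to q iff r(x) implies q(x) for every x."""
    return all(q(x) for x in domain if r(x))

p1 = lambda n: n == 0            # holds only for zero
p2 = lambda n: n % 2 == 0        # holds for every even number
print(less_or_equal(p1, p2))     # True:  p1 is less than or equal to p2
print(less_or_equal(p2, p1))     # False: p2 is not less than or equal to p1

# The least predicate r satisfying  r(0) and (r(n) => r(n+2)),
# obtained by iterating from the empty predicate until nothing new is added.
def least_even(domain=NAT):
    r = set()
    while True:
        new = ({0} | {n + 2 for n in r if n + 2 in domain}) - r
        if not new:
            return r
        r |= new

print(sorted(least_even()))      # [0, 2, 4, ..., 18]
```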
The cut calculus for Horn clauses is simple, but rather inefficient as the basis of a theorem prover. To prove a goal γ via this calculus means to derive γ from axioms (those of the specification and congruence axioms for equality predicates) using CUT and SUB (cf. Sect. 1.2). In contrast, the inference rules resolution (cf. [Rob65]) and paramodulation (cf. [RW69]) allow us to start out from γ and apply axioms for transforming γ into the empty goal ∅. The actual purpose of resolution and paramodulation is to compute goal solutions (cf. Sect. 1.2): if γ can be transformed into ∅, then γ is solvable. The derivation process constructs a solution f, and reaching ∅ establishes the validity of γ[f].
A single derivation step from γ to δ via resolution or paramodulation proves the clause γ[g] ⇐ δ for some g. Since γ[g] is the conclusion of a Horn clause which, viewed as a logic program, is expanded (into γ), we call such derivations expansions. More precisely, the rules are input resolution and input paramodulation, where one of the two clauses to be transformed stems from an “input” set of axioms or, in the case of inductive expansion (cf. Chapter 5), arbitrary lemmas or induction hypotheses.
While input resolution is always “solution complete”, input paramodulation has this property only if the input set includes all functionally reflexive axioms, i.e., equations of the form Fx ≡ Fx (cf., e.g., [Höl89]).
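To convey the flavour of such derivations, here is a hedged Python sketch of input resolution restricted to propositional (variable-free) Horn clauses: a goal is transformed by repeatedly replacing an atom that matches the conclusion of an input axiom with that axiom's premise, until the empty goal is reached. Unification, substitutions and paramodulation are deliberately omitted, and the axiom set is a made-up example.

```python
# Hedged sketch of input resolution on propositional Horn clauses: a goal
# (set of atoms) is transformed by replacing an atom that matches the
# conclusion of an input axiom with that axiom's premise.

# Input axioms: (conclusion, premises) -- a made-up example program.
AXIOMS = [
    ("wet",  ["rain"]),
    ("rain", []),
    ("cold", ["winter"]),
    ("winter", []),
]

def expand(goal, axioms=AXIOMS):
    """Return a derivation from `goal` to the empty goal, or None if stuck."""
    goal = set(goal)
    derivation = [tuple(sorted(goal))]
    while goal:
        for conclusion, premises in axioms:
            if conclusion in goal:                     # one resolution step
                goal = (goal - {conclusion}) | set(premises)
                derivation.append(tuple(sorted(goal)))
                break
        else:
            return None                                # no axiom applies
    return derivation

print(expand({"wet", "cold"}))
# [('cold', 'wet'), ('cold', 'rain'), ('cold',), ('winter',), ()]
```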
EXPANDER is a proof support system for reasoning about data type specifications and declarative programs. EXPANDER applies the rules of inductive expansion (cf. Chapter 5) to correctness conditions that are given as single Gentzen clauses or sets of guarded Horn clauses (cf. Chapter 2). The system provides a kernel for special-purpose theorem provers, which are tailored to restricted application areas and implement specific proof plans, strategies or tactics. It is written in the functional language SML/NJ.
EXPANDER executes single inference steps; each proof is a sequence of goal sets. The user has full control over the proof process and may backtrack within this sequence, interactively modify the underlying specification, and add lemmas or induction orderings suggested by the subgoals obtained so far. When a proof has been finished, the system can generate the theorems actually proved and, if necessary, the remaining subconjectures.
We first describe the kind of specifications that can be processed, then present the commands currently provided and, finally, document the implementation. The latter serves to illustrate the suitability of functional languages for encoding deductive methods.
The specifications
Specifications to be processed by EXPANDER are generated by the following context-free grammar in extended Backus-Naur form, where [_], * and | denote the usual operators for building regular expressions and key words are enclosed in “…”.
Part I introduced a collection of notations and techniques for algebraic specifications. These notations and techniques are relatively close to usual mathematics. As already shown by the examples of Part I, algebraic specifications suffice to describe a wide range of data types (Booleans, numbers, sets, bags, sequences, tuples, maps, stacks, queues, etc.), and they can even be used to describe the syntax and semantics of languages, the rules and strategies of games, and many more non-trivial aspects of complex systems. Of course there are some differences between the notations and techniques of Part I and usual mathematics, such as the restriction to first-order predicate logic with inductive definitions, the special way of treating partial functions and undefinedness and, most of all, the modularisation constructs. The latter difference reveals that COLD-K has its roots in software engineering and systems engineering, rather than in general-purpose mathematics.
There is one more phenomenon which is characteristic of many branches of software engineering and systems engineering: special provisions for describing state-based systems.
How do we benefit from ground confluent specifications? Most of the advantages follow from Thm. 6.5: directed expansions yield all ground AX-solutions if and only if (SIG, AX) is ground confluent. Sects. 7.1 and 7.2 deal with refinements of directed expansion: strategic expansion and narrowing. Sect. 7.3 presents syntactic criteria for a set of terms to be a set of constructors (cf. Sect. 2.3). The results obtained in Sects. 7.2 and 7.3 provide the failure rule and the clash rule, which check goals for unsolvability and thus help to shorten every kind of expansion proof (see the final remarks of Sect. 5.4).
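As a hedged illustration (not taken from the book), the following Python sketch performs a brute-force check on a toy rewrite system: for the standard rules add(zero,y) → y and add(succ(x),y) → succ(add(x,y)) it verifies that every ground term up to a small size has exactly one normal form. Since this system terminates, uniqueness of ground normal forms amounts to ground confluence; the code is only a finite test, not a proof.

```python
# Hedged sketch: brute-force check that every ground term built from zero,
# succ and add (up to a small size) has exactly one normal form under the
# rules  add(zero,y) -> y  and  add(succ(x),y) -> succ(add(x,y)).
from itertools import product

def rewrite_steps(t):
    """All terms reachable from t by one rewrite step at any position."""
    results = []
    if t[0] == "add":                                  # rules at the root
        _, x, y = t
        if x == ("zero",):
            results.append(y)
        if x[0] == "succ":
            results.append(("succ", ("add", x[1], y)))
    for i, sub in enumerate(t[1:], start=1):           # rewrite in subterms
        for s in rewrite_steps(sub):
            results.append(t[:i] + (s,) + t[i + 1:])
    return results

def normal_forms(t):
    steps = rewrite_steps(t)
    if not steps:
        return {t}                                     # t is irreducible
    return set().union(*(normal_forms(s) for s in steps))

def ground_terms(depth):
    if depth == 0:
        return [("zero",)]
    smaller = ground_terms(depth - 1)
    return (smaller
            + [("succ", t) for t in smaller]
            + [("add", a, b) for a, b in product(smaller, smaller)])

assert all(len(normal_forms(t)) == 1 for t in ground_terms(2))
print("every tested ground term has a unique normal form")
```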
Sect. 7.4 deals with the proof of a set CS of inductive theorems by showing that (SIG, AX ∪ CS) is consistent w.r.t. (SIG, AX) (cf. Sect. 3.4). Using consequences of the basic equivalence between consistency and inductive validity (Lemma 7.9), we arrive at reductive expansion, which combines goal reduction and subreductive expansion (cf. Sect. 6.4) into a method for proving inductive theorems. While inductive expansion is always sound, the correctness of reductive expansion depends on ground confluence and strong termination of (SIG, AX). Under these conditions, an inductive expansion can always be turned into a reductive expansion (Thm. 7.18). Conversely, a reductive expansion can be transformed in such a way that most of its “boundary conditions” hold true automatically (Thm. 7.19).
The chapter closes with a deduction-oriented concept for specification refinements, or algebraic implementations.