In this chapter we generalize our notion of feature structure to allow for the possibility of having countably infinite collections of nodes. When we considered algebraic models in the last chapter, we implicitly allowed models with objects from which infinitely many distinct objects were accessible by iteratively applying feature value functions. In the case of feature structures as we have previously taken them, the set of substructures of a feature structure was always in one-to-one correspondence with the nodes of the feature structure and hence finite. In the case of finite feature structures, we were guaranteed joins or unifications for consistent finite sets of (finite) feature structures. We also saw examples of infinite sets of consistent finite feature structures which did not have a finite least upper bound. When we allow for the possibility of countably infinite feature structures, we have least upper bounds for arbitrary (possibly infinite) consistent sets of (possibly infinite) feature structures. In fact, the collection of feature structures with countable node sets turns out to form a predomain in the sense that when we factor out alphabetic variance, we are left with an algebraic countably based BCPO. Luckily, in the feature structure domain, the compact domain elements are just the finite feature structures, thus giving us a way to characterize arbitrary infinite feature structures as simple joins of finite feature structures. One benefit of such a move is that infinite or limit elements in our domains provide models for non-terminating inference procedures such as total type inference over an appropriateness specification with loops or extensionalization when non-maximal types are allowed to be extensional.
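As an illustration of such a limit (our own example, not drawn from the text), consider a chain of finite feature structures in which F_n merely requires a path of n occurrences of a single feature f to be defined:

```latex
% Illustrative chain of consistent finite feature structures
\[
  F_1 \sqsubseteq F_2 \sqsubseteq F_3 \sqsubseteq \cdots,
  \qquad
  F_n \models f^{\,n}\!\downarrow
\]
```

Every finite subset of the chain has a finite join, namely its largest element, but the join of the whole chain is the acyclic structure with one node for every finite path of f's: a countably infinite feature structure, and hence a limit element of the domain rather than a compact one.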
In this chapter we consider acyclic feature structures and their axiomatization. In standard first-order logic resolution theorem provers (see Wos et al. 1984 for an overview), it is necessary to make sure that when a variable X is unified with a term t, there is no occurrence of X in t. This is the so-called occurs check, which dates back to Robinson's (1965) original algorithm for unification. Without the occurs check, resolution produces unsound inferences. The problem with the occurs check is that it is often expensive to compute in practical unification algorithms. Unification algorithms have been developed with built-in occurs checks that are linear (Paterson and Wegman 1978) and quasi-linear (Martelli and Montanari 1982), but the data structures employed for computing the occurs check incur a heavy constant overhead (Jaffar 1984). Rather than carry out the occurs check, implementations of logic programming languages like Prolog simply omit it, leading to interpreters and compilers that are not sound with respect to the semantics of first-order logic and may furthermore hang during the processing of cyclic structures that are accidentally created. Rather than change the interpreters, the move made in Prolog II (Colmerauer 1984, 1987) was to change the semantics to allow for infinite rational trees, which correspond to the infinite unfoldings of the terms that result from unifications of a variable with a term that contains it. In pointer-based implementations of unification (Jaffar 1984, Moshier 1988), such infinite trees are more naturally construed as finite graphs containing cycles.
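The occurs check itself is easy to sketch. Below is a minimal, purely illustrative Python version of Robinson-style unification over first-order terms; the term representation and the helper names (walk, occurs, unify) are assumptions made for this sketch, not anything taken from the chapter.

```python
# A minimal sketch of Robinson-style unification with an occurs check.
# Terms are either variables (uppercase strings) or compound terms
# written as tuples (functor, arg1, ..., argn). Representation is illustrative.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(var, t, subst):
    """The occurs check: does var appear anywhere inside term t?"""
    t = walk(t, subst)
    if t == var:
        return True
    if isinstance(t, tuple):
        return any(occurs(var, arg, subst) for arg in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None on failure."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        # Without this occurs check, unifying X with f(X) would build a cycle.
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# unify('X', ('f', 'X'))                  -> None (occurs check fails)
# unify(('f', 'X', 'a'), ('f', 'b', 'Y')) -> {'X': 'b', 'Y': 'a'}
```

Omitting the call to occurs in unify is exactly the shortcut the passage attributes to most Prolog implementations: unification then succeeds on X and f(X), silently creating a cyclic structure.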
Up to this point, the semantics of the commands is determined by the relation between precondition and postcondition. This point of view is too restricted for the treatment of concurrent programs and reactive systems. The usual example is that of an operating system which is supposed to perform useful tasks without ever reaching a postcondition.
For this purpose, the semantics of commands must be extended by consideration of conditions at certain moments during execution. We do not want to be forced to consider all intermediate states or to formalize sequences of intermediate states. We have chosen the following level of abstraction. To every procedure name h, a predicate z.h is associated. The temporal semantic properties of a command q depend on the values of z.h.x for the procedure calls, say of procedure h in state x, induced by execution of command q. The main properties are ‘always’ and ‘eventually’, which are distinguished by whether z.h.x should hold for all induced calls or for at least one induced call. The concept of ‘always’ is related to stability and safety. The concept of ‘eventually’ is related to progress and liveness.
In this chapter, we regard nontermination of simple commands as malfunctioning and nontermination of procedures as potentially useful infinite behaviour. We therefore use wp for the interpretation of simple commands and wlp for procedure calls.
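A standard illustration of the contrast (not taken from the text): for a command L that never terminates, for example an endless loop, and an arbitrary postcondition P,

```latex
\[
  wp.L.P \;=\; \mathit{false},
  \qquad
  wlp.L.P \;=\; \mathit{true},
\]
```

since wp additionally demands termination in a state satisfying P, which L never provides, while wlp only requires P to hold if L terminates at all. This is why wlp is the natural reading when the infinite behaviour of procedures is regarded as potentially useful.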
Suppose that you have a database that contains, among other things, the following pieces of information (in some form of code):
α: All European swans are white.
β: The bird caught in the trap is a swan.
γ: The bird caught in the trap comes from Sweden.
δ: Sweden is part of Europe.
If your database is coupled with a program that can compute logical inferences in the given code, the following fact is derivable from α–δ:
ε: The bird caught in the trap is white.
Now suppose that, as a matter of fact, the bird caught in the trap turns out to be black. This means that you want to add the fact ¬ε, i.e., the negation of ε, to the database. But then the database becomes inconsistent. If you want to keep the database consistent, which is normally a sound methodology, you need to revise it. This means that some of the beliefs in the original database must be retracted. You don't want to give up all of the beliefs since this would be an unnecessary loss of valuable information. So you have to choose between retracting α, β, γ or δ.
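To make the example concrete, here is a small, purely illustrative sketch of how a database coupled with an inference program might derive ε and detect the inconsistency; the encoding and all names are assumptions made for this sketch.

```python
# A toy encoding of the swan example. Facts are ground atoms, rules are
# (premises, conclusion) pairs; forward chaining derives everything that
# follows, and an inconsistency is flagged when an atom and its negation
# are both present. All names here are illustrative.

rules = [
    # alpha: all European swans are white
    ({"swan(bird)", "european(bird)"}, "white(bird)"),
    # implicit bridging rule used by the example: coming from a place that
    # is part of Europe makes something European (from gamma and delta)
    ({"from(bird, sweden)", "part_of(sweden, europe)"}, "european(bird)"),
]

facts = {
    "swan(bird)",               # beta
    "from(bird, sweden)",       # gamma
    "part_of(sweden, europe)",  # delta
}

def close(facts, rules):
    """Forward-chain until no new conclusions are derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

beliefs = close(facts, rules)
assert "white(bird)" in beliefs          # epsilon is derivable

# Observing a black bird amounts to adding the negation of epsilon:
beliefs.add("not white(bird)")
inconsistent = "white(bird)" in beliefs and "not white(bird)" in beliefs
print(inconsistent)   # True -- something among alpha..delta must be retracted
```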
The problem of belief revision is that logical considerations alone do not tell you which beliefs to give up, but this has to be decided by some other means.
This book is about programs as mathematical objects. We focus on one of the aspects of programs, namely their functionality, their meaning or semantics. Following Dijkstra we express the semantics of a program by the weakest precondition of the program as a function of the postcondition. Of course, programs have other aspects, like syntactic structure, executability and (if they are executable) efficiency. In fact, perhaps surprisingly, for programming methodology it is useful to allow a large class of programs, many of which are not executable but serve as partially implemented specifications.
Weakest preconditions are used to define the meanings of programs in a clean and uniform way, without the need to introduce operational arguments. This formalism allows an effortless incorporation of unbounded nondeterminacy. Programming methodology then poses two questions. The first is how, given a specification, to design a general program that is proved to meet the specification but need not be executable or efficient; the second is how to transform such a program into a more suitable one that also meets the specification.
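For instance (a standard textbook illustration, not taken from this text), for an integer variable x the weakest precondition of the assignment x := x + 1 with respect to the postcondition x > 0 is obtained by substitution:

```latex
\[
  wp.(x := x + 1).(x > 0) \;\equiv\; (x + 1 > 0) \;\equiv\; (x \geq 0)
\]
```

Every initial state with x ≥ 0 is guaranteed to terminate in a state satisfying x > 0, and no weaker precondition has this property.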
We do not address the methodological question how to design, but we concentrate on the mathematical questions concerning semantic properties of programs, semantic equality of programs and the refinement relation between programs. We provide a single formal theory that supports a number of different extensions of the basic theory of computation. The correctness of a program with respect to a specification is for us only one of its semantic properties.
Studies of the dynamics of (rational) belief have reached a new degree of sophistication through the formal models developed in particular by Levi (1980) and by Alchourrón, Gärdenfors, and Makinson (the AGM model, Alchourrón et al. 1985 and Gärdenfors 1988). In these models, an individual's beliefs are represented by a set that is closed under logical consequence. The AGM model and its underlying assumptions have had a profound influence on the study of belief base updating in computer science. Although states of belief are, for obvious reasons, represented in computer applications by finite sets of sentences, it is common to demand that “results from an update must be independent of the syntax of the original K[nowledge] B[ase]” (Katsuno and Mendelzon 1989). In other words, operations on finite sets are treated more or less as shadows of operations on the (infinite) logical closures of these sets.
In this paper, a different representation of beliefs will be introduced. A formalized belief state will consist of an ordered pair <K,Cn>. K, or the belief base, is a set of expressions that is not necessarily closed under logical consequence. It represents the basic facts on which an individual grounds her beliefs. Cn is a consequence operator, such that Cn(K) represents her beliefs (namely both the elements of K and the conclusions she draws from them).
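As a purely illustrative sketch (the class and the toy consequence operator are assumptions for this example, not notation from the paper), such a belief state might be rendered as a finite base paired with a consequence operator that is applied on demand:

```python
# A toy rendering of a belief state <K, Cn>: K is a finite set of sentences,
# Cn a consequence operator, and the agent's beliefs are Cn(K). Here Cn is
# just modus ponens over pairs ("p", "q") read as "p -> q"; all names are
# illustrative.

class BeliefState:
    def __init__(self, base, implications):
        self.K = set(base)                 # the belief base, not closed
        self.implications = implications   # parameterizes Cn

    def Cn(self, sentences):
        """A very small consequence operator: modus ponens to a fixpoint."""
        closed = set(sentences)
        changed = True
        while changed:
            changed = False
            for p, q in self.implications:
                if p in closed and q not in closed:
                    closed.add(q)
                    changed = True
        return closed

    def beliefs(self):
        return self.Cn(self.K)

state = BeliefState(base={"a"}, implications=[("a", "b"), ("b", "c")])
print(state.beliefs())   # {'a', 'b', 'c'}: K plus what the agent draws from it
```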
In the last five years, there has been much concern in Artificial Intelligence about the problem of belief revision. A rather exciting feature of the on-going debate is that it has led to an interaction between fields which usually ignore each other, such as logic, probability theory, fuzzy set and possibility theory, and their epistemological underpinnings. In his book, Gärdenfors (1988) emphasizes in a convincing way that notions of epistemic state can be defined in the framework of logic as well as in that of probability theory, and that similar postulates can be put forward in the two settings for the purpose of devising rational updating operations that take place upon the arrival of new pieces of information. On the other hand, Spohn (1988), in trying to refine the logical notion of epistemic state, viewed as a consistent and deductively closed set of propositions, comes close to Shackle's (1961) notions of degrees of potential surprise, as well as Zadeh's (1978) possibility theory. However, he proposes revision rules that are in some way in accordance with a probability theory in which probabilities only take infinitesimal values (Spohn, 1990). Besides, one of the main outcomes of Gärdenfors' logical theory of belief change is that any revision process presupposes an ordering of the propositions that form the epistemic state; the rationality postulates force this ordering to satisfy specific properties, and the only numerical set-functions that can account for these properties are the so-called necessity measures that play a basic role in possibility theory (Dubois and Prade, 1988).
This chapter is devoted to the formal definition of the semantics of sequential composition, unbounded choice and recursion and to the proofs of the properties of commands that were introduced and used in the previous chapters. The semantics of the simple commands is taken for granted, but otherwise the reader should not rely on old knowledge but only use facts that have already been justified in the new setting. At the end of the chapter, the foundations of the previous chapters will be complete.
Some examples of the theory are given in the exercises at the end of the chapter. The text of the chapter has almost no examples. One reason is that Chapters 1, 2 and 3 may be regarded as examples of the theory. On the other hand, every nontrivial example tends to constitute additional theory.
In Section 4.1, we introduce complete lattices and investigate the lattice of the predicate transformers and some important subsets. Section 4.2 contains our version of the theorem of Knaster–Tarski. A syntactic formalism for commands with unbounded choice is introduced in Section 4.3.
Section 4.4 contains the main definition. Starting from their definitions on simple commands, the functions wp and wlp are extended to procedure names and command expressions. In Sections 4.5 and 4.6, the healthiness laws, which are postulated for simple commands, are extended to procedure names and command expressions.
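The fixpoint idea behind the theorem of Knaster–Tarski mentioned above can be illustrated with a small sketch (ours, not the book's construction): on a finite powerset lattice, the least fixpoint of a monotone function is reached by iterating the function from the bottom element.

```python
# Illustrative only: least fixpoint of a monotone function on a finite
# powerset lattice, computed by iterating from the bottom element (the
# empty set) until the value stabilizes.

def least_fixpoint(f, bottom=frozenset()):
    """Iterate f from bottom until the value no longer changes."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Example: a monotone function on subsets of {0, ..., 4} that keeps what is
# there, adds the seed element 0, and adds the successor of every element.
def step(s):
    return frozenset({0} | {n + 1 for n in s if n + 1 < 5} | s)

print(sorted(least_fixpoint(step)))   # [0, 1, 2, 3, 4]
```

The theorem itself guarantees least and greatest fixpoints of monotone functions on arbitrary complete lattices; the simple iteration shown here suffices only in the finite case.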
Knowledge update has been a matter of concern to two quite separate traditions: one in philosophical logic, and another in artificial intelligence. In this paper we draw on both traditions to develop a theory of update, based on conditional logic, for a kind of knowledge base that has proven to be of interest in artificial intelligence. After motivating and formulating the logic on which our theory is based, we will prove some basic results and show how our logic can be used to describe update in an environment in which knowledge bases can be treated as truth-value assignments in four-valued logic. In keeping with Nuel Belnap's terminology in Belnap (1977a) and Belnap (1977b), we will refer to such truth-value assignments as set-ups or as four-valued set-ups.
Paraconsistency, primeness, and atomistic update
For the moment we will not say exactly what a four-valued set-up is. Instead we will describe informally some conditions under which it would be natural to structure one's knowledge base as a four-valued set-up. One of these conditions has to do with the treatment of inconsistent input; a second has to do with the representation of disjunctive information; the third concerns what kinds of statements can be the content of an update.
Inconsistent input
A logical calculus is paraconsistent if it cannot be used to derive arbitrary conclusions from inconsistent premises. Belnap argues in general terms that paraconsistent reasoning is appropriate in any context where an automated reasoner must operate without any guarantee that its input is consistent, and where nondegenerate performance is desirable even if inconsistency is present. Knowledge bases used in AI applications are cases of this sort.
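As a small, purely illustrative sketch of this behaviour (the representation and names are assumptions, not the paper's definitions), atoms can be mapped to pairs recording what the knowledge base has been told, giving Belnap's four values; conflicting input about one atom is then recorded locally instead of trivializing the whole base:

```python
# An illustrative four-valued set-up in the spirit of Belnap's values: each
# atom is mapped to a pair (told_true, told_false), yielding the four values
# NONE, TRUE, FALSE, BOTH. Inconsistent input about one atom drives that atom
# to BOTH without contaminating unrelated atoms.

NONE, TRUE, FALSE, BOTH = (False, False), (True, False), (False, True), (True, True)

def tell(setup, atom, value):
    """Record that atom has been asserted (value=True) or denied (value=False)."""
    told_true, told_false = setup.get(atom, NONE)
    if value:
        told_true = True
    else:
        told_false = True
    new = dict(setup)
    new[atom] = (told_true, told_false)
    return new

kb = {}
kb = tell(kb, "p", True)    # one source asserts p
kb = tell(kb, "p", False)   # another source denies p
kb = tell(kb, "q", True)

print(kb["p"] == BOTH)   # True: the conflict is recorded locally
print(kb["q"] == TRUE)   # True: unrelated beliefs are unaffected
```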
This chapter describes a model of autonomous belief revision (ABR) which discriminates between possible alternative belief sets in the context of change. The model determines preferred revisions on the basis of the relative persistence of competing cognitive states. It has been implemented as ICM (increased coherence model), a belief revision mechanism encompassing a three-tiered ordering structure that represents a blend of coherence and foundational theories of belief revision.
The motivation for developing the model of ABR is as a component of a model of communication between agents. The concern is choice about changing belief. In communication, agents should be designed to choose whether as well as how to revise their beliefs. This is an important aspect of design for multi-agent contexts such as open environments (Hewitt, 1986), in which no one element can be in possession of complete information about all parts of the system at all times. Communicated information cannot therefore be assumed to be reliable and fully informed. The model of ABR and the system ICM represent the first phase in the development of a computational model of cooperative, yet autonomously determined communication. The theory of ABR and communication is explicated in section 2.
Section 3 follows with an outline of the problem of multiple alternative revisions, and a discussion of preference and strength of belief issues from an AI perspective. This section includes the relevant comparative and theoretical background for understanding the model of ABR described in section 4.
In this chapter we fulfil the remaining proof obligations of Chapter 12. Section 13.1 contains a strengthened version of Theorem 4(8), our version of the theorem of Knaster-Tarski. In Section 13.2, we provide the basic set-up, in which we need not yet distinguish between wp and wlp. Section 13.3 contains the construction of the strong preorder and the proofs of rule 12(4) and a variation of rule 12(5). In this way, the proof of the accumulation rule 12(5) is reduced to the verification of two technical conditions: sup-safety (for wp) and inf-safety (for wlp). These conditions comprise the base case of the induction and a continuity property.
In Section 13.4, the base case is reduced to a condition on function abort⊙. Section 13.5 contains the proof for inf-safety. Section 13.6 contains the definition of the set Lia and the proof for sup-safety. In Sections 13.7 and 13.8 we justify the rules for Lia stated in Section 11.2.
It may seem unsatisfactory that, in the presence of unbounded nondeterminacy, computational induction needs such a complicated theory. The examples in Sections 11.7 and 12.4, however, show that the accumulation rules 11(6) and 12(5) need their complicated conditions. Therefore, corresponding complications must occur in the construction or in the proofs.
Consider a knowledge base represented by a theory ψ of some logic, say propositional logic. We want to incorporate into ψ a new fact, represented by a sentence μ of the same language. What should the resulting theory be? A growing body of work (Dalal 1988, Katsuno and Mendelzon 1989, Nebel 1989, Rao and Foo 1989) takes as a departure point the rationality postulates proposed by Alchourrón, Gärdenfors and Makinson (1985). These are rules that every adequate revision operator should be expected to satisfy. For example: the new fact μ must be a consequence of the revised knowledge base.
In this paper, we argue that no such set of postulates will be adequate for every application. In particular, we make a fundamental distinction between two kinds of modifications to a knowledge base. The first one, update, consists of bringing the knowledge base up to date when the world described by it changes. For example, most database updates are of this variety, e.g. “increase Joe's salary by 5%”. Another example is the incorporation into the knowledge base of changes caused in the world by the actions of a robot (Ginsberg and Smith 1987, Winslett 1988, Winslett 1990). We show that the AGM postulates must be drastically modified to describe update.
The second type of modification, revision, is used when we are obtaining new information about a static world.
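A worked toy example may help to see the difference (this is our own illustration, using a simple distance-based instantiation of the two operations rather than anything proposed in the paper). The knowledge base says that exactly one of two atoms b and m holds; the new fact μ says that b holds. Revision, treating the world as static, concludes b and not m; update, treating μ as the result of a change in the world, concludes only b and leaves m open.

```python
# Illustrative computation contrasting revision and update on a toy knowledge
# base over two atoms b and m. Worlds are pairs (b, m) of truth values; the KB
# says exactly one of b, m holds; the new fact mu says b holds. Distance is
# the number of atoms on which two worlds differ (one common choice; an
# assumption made for this sketch).

from itertools import product

WORLDS = list(product([True, False], repeat=2))        # all (b, m) pairs
kb_models = [(b, m) for (b, m) in WORLDS if b != m]    # exactly one of b, m
mu_models = [(b, m) for (b, m) in WORLDS if b]         # the new fact: b

def dist(w1, w2):
    return sum(x != y for x, y in zip(w1, w2))

# Revision (static world): keep the mu-worlds globally closest to the KB.
closest = min(min(dist(w, v) for v in kb_models) for w in mu_models)
revised = [w for w in mu_models
           if min(dist(w, v) for v in kb_models) == closest]

# Update (changed world): for each KB-world, keep its own closest mu-worlds.
updated = set()
for v in kb_models:
    d = min(dist(w, v) for w in mu_models)
    updated |= {w for w in mu_models if dist(w, v) == d}

print(sorted(revised))   # [(True, False)]                -> b and not m
print(sorted(updated))   # [(True, False), (True, True)]  -> b, with m unknown
```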
The purpose of this book is to develop the semantics of imperative sequential programs. One prerequisite for reading is some familiarity with the use of predicates in programming, as expounded for instance in the books [Backhouse 1986], [Dijkstra 1976], or [Gries 1981]. Some mathematical maturity is another prerequisite: we freely use sets, functions, relations, orders, etc. We strive to provide complete proofs. This requires many backward references but, of course, the reader may sometimes prefer to ignore them. Actually, at every assertion the reader is invited to join the game and provide a proof himself.
In every chapter, the formulae are numbered consecutively. For reference to formulae of other chapters we use the convention that i(j) denotes formula (j) of Chapter i.
At the end of almost every chapter we give a number of exercises, grouped according to the latest relevant section. When referring to exercise i.j.k, we mean exercise k of Section i.j. Some exercises are simple tests of the reader's apprehension, while other exercises contain applications and extensions of the main text. For (parts of) exercises marked with ♡ we provide solutions in Chapter 16.
References to the literature are given in the form [X n], for author X and year n, possibly followed by a letter.
Semantics of imperative sequential programs
The word ‘semantics’ means ‘meaning’. In the title of this book, it announces two central themes. The meaning of a program is given by its specification.
There are many ways to change a theory. The tasks of adding a sentence to a theory and of retracting a sentence from a theory are non-trivial because they are usually constrained by at least three requirements. The result of a revision or contraction of a theory should again be a theory, i.e., closed under logical consequence; it should be consistent whenever possible; and it should not change the original theory beyond necessity. In the course of the Alchourrón-Gärdenfors-Makinson research programme, at least three different methods for constructing contractions of theories have been proposed. Among these, the “safe contraction functions” of Alchourrón and Makinson (1985, 1986) have played, as it were, the role of an outsider. Gärdenfors and Makinson (1988, p. 88), for instance, state that ‘another, quite different, way of doing this [contracting and revising theories] was described by Alchourrón and Makinson (1985).’ (Italics mine.) The aim of the present paper is to show that this is a miscasting.
In any case, it seems that the intuitions behind safe contractions are fundamentally different from those behind its rivals, the partial meet contractions of Alchourrón, Gärdenfors and Makinson (1985) and the epistemic entrenchment contractions of Gärdenfors and Makinson (1988). Whereas the latter notions are tailored especially to handling theories (as opposed to sets of sentences which are not closed under a given consequence operation), safe contraction by its very idea focusses on minimal sets of premises sufficient to derive a certain sentence.