The nondeterminacy considered thus far in this monograph was loose in the sense of [Park 1979]: any choice or sequence of choices allowed by the command is acceptable behaviour of the implementation, but the fact that a choice is allowed does not mean that it can ever occur.
While reasoning about concurrent computations, and in the design of communicating processes, we have to deal with execution that is unpredictable and yet not completely loose. We may want to assume that a computation delegated to another process eventually yields an answer, or that, if a stream of messages is sent, an acknowledgement eventually comes back.
Such assumptions are called fairness assumptions. Fairness is a subject in itself with a highly operational flavour. There are many different kinds of fairness, cf. [Francez 1986] and [Lehmann e.a. 1981], but it seems that most definitions cannot elegantly be expressed in terms of predicate-transformation semantics. Therefore, we restrict ourselves to predicative fairness, a kind of fairness proposed in [Morris 1990] and [Queille-Sifakis 1983].
In the literature, fairness is usually treated only for repetitions. In [Morris 1990], fairness of tail-recursive procedures without mutual recursion is treated. We give a definition applicable to arbitrary procedures. Our formalization is in agreement with the treatment of loc.cit. in the case of tail recursion. Mutual recursion and ‘calls before the tail’ seem to be adequately treated. Our formalization leads to overly optimistic specifications if a procedure body contains sequentially ordered recursive calls.
We come back to the informal description of wp and wlp given in Section 1.2. This description is used to justify two more postulates concerning wp and wlp, the so-called healthiness laws. These postulates are due to [Dijkstra 1976]. They are theorems of the standard relational semantics, but in predicate-transformation semantics they need not be imposed. In fact, recently, some investigators (cf. [Back-von Wright 1989b], [Morgan-Gardiner 1990]) have proposed specification constructs that lead to violations of the laws (so these constructs cannot be expressed in relational semantics). Command serve from the second example in 1.2 belongs to this category.
In the remainder of this book the healthiness laws are imposed since they form the natural boundary of the theory of Chapter 4. Another reason for imposing them is that they hold for all practical imperative languages and for the relational model of computation (see Chapter 6).
In this chapter, we introduce the laws with an informal justification and we treat the main formal implications.
Conjunctivity properties of predicate transformers
Since the healthiness laws prescribe certain properties of the predicate transformers wp.c and wlp.c for commands c, it is useful to introduce these properties for arbitrary predicate transformers.
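The conjunctivity properties in question can be illustrated concretely. The following sketch, under the assumption of a small finite state space with demonic relational semantics (all names here are illustrative, not from the text), models predicates as sets of states and checks that wp is universally conjunctive but, for a nondeterministic command, not disjunctive:

```python
# Model: a finite state space, predicates as frozensets of states, and a
# nondeterministic command as a relation (a set of (pre, post) pairs).
STATES = frozenset(range(4))

def wp(rel, post):
    """Weakest precondition of a relational command: the states from
    which every possible outcome lands in `post` (demonic choice)."""
    return frozenset(s for s in STATES
                     if all(t in post for (u, t) in rel if u == s))

# A nondeterministic command: from state 0 it may go to 1 or to 2.
c = {(0, 1), (0, 2), (1, 3), (2, 3), (3, 3)}

P = frozenset({1, 3})
Q = frozenset({2, 3})

# Conjunctivity: wp.c.(P and Q) equals wp.c.P and wp.c.Q.
assert wp(c, P & Q) == wp(c, P) & wp(c, Q)
# But wp is generally NOT disjunctive for nondeterministic c:
# state 0 satisfies wp.c.(P or Q) without satisfying wp.c.P or wp.c.Q.
assert wp(c, P | Q) != wp(c, P) | wp(c, Q)
```

The failing disjunctivity case is exactly the phenomenon the later chapters classify: from state 0 every outcome lands in P ∪ Q, yet neither P nor Q alone is guaranteed.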
Belief revision is the process of incorporating new information into a knowledge base while preserving consistency. Recently, belief revision has received a lot of attention in AI, which led to a number of different proposals for different applications (Ginsberg 1986; Ginsberg, Smith 1987; Dalal 1988; Gärdenfors, Makinson 1988; Winslett 1988; Myers, Smith 1988; Rao, Foo 1989; Nebel 1989; Winslett 1989; Katsuno, Mendelzon 1989; Katsuno, Mendelzon 1991; Doyle 1990). Most of this research has been considerably influenced by approaches in philosophical logic, in particular by Gärdenfors and his colleagues (Alchourrón, Gärdenfors, Makinson 1985; Gärdenfors 1988), who developed the logic of theory change, also called theory of epistemic change. This theory formalizes epistemic states as deductively closed theories and defines different change operations on such epistemic states.
Syntax-based approaches to belief revision, to be introduced in Section 3, have been very popular because of their conceptual simplicity. However, there has also been criticism, since the outcome of a revision operation relies on arbitrary syntactic distinctions (see, e.g., (Dalal 1988; Winslett 1988; Katsuno, Mendelzon 1989)), and for this reason such operations cannot be analyzed on the knowledge level. In (Nebel 1989) we showed that syntax-based approaches can be interpreted as assigning higher relevance to explicitly represented sentences. Based on that view, one particular kind of syntax-based revision, called base revision, was shown to fit into the theory of epistemic change. In Section 4 we generalize this result to prioritized bases.
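The flavour of a syntax-based revision operation can be conveyed with a toy sketch. This is not Nebel's exact construction: here sentences are restricted to literals (a letter 'p' or its negation '~p'), consistency is just the absence of a complementary pair, and the revision cautiously keeps only the sentences common to all maximal consistent subsets of the base:

```python
from itertools import combinations

def consistent(lits):
    """Toy consistency check: a set of literals is consistent iff it
    contains no complementary pair p, ~p."""
    return not any(('~' + l) in lits for l in lits if not l.startswith('~'))

def maximal_consistent_subsets(base, new):
    """All subsets of `base` consistent with `new` that are maximal
    with respect to set inclusion."""
    subs = [set(s)
            for k in range(len(base) + 1)
            for s in combinations(sorted(base), k)
            if consistent(set(s) | {new})]
    return [s for s in subs if not any(s < t for t in subs)]

def revise(base, new):
    """Skeptical base revision sketch: keep the sentences present in
    every maximal consistent subset, then add the new sentence."""
    maximal = maximal_consistent_subsets(base, new)
    kept = set.intersection(*maximal) if maximal else set()
    return kept | {new}

# The conflicting '~r' is retracted; the unrelated 'p', 'q' survive.
assert revise({'p', 'q', '~r'}, 'r') == {'p', 'q', 'r'}
```

The syntax-dependence criticized above is visible even here: splitting one base sentence into two logically equivalent ones can change which material survives the revision.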
In this chapter, we develop syntactic criteria on commands, which imply disjunctivity properties for their weakest preconditions. We suppose that the disjunctivity properties of the simple commands are known and try to generalize these properties to procedures and composite commands. From this chapter onward, the theory of Chapter 4 is indispensable.
In Section 8.1 we introduce, for a given set R of predicate transformers, a set of commands called the syntactic reflection Sy.R of R. The main property is that wp.q ∈ R for all q ∈ Sy.R. In Section 8.2 we provide methods to prove that a command belongs to the syntactic reflection.
In Section 8.3 the theory is specialized to the case that R is characterized by a disjunctivity property. Section 8.4 contains the next specialization, namely to the classes of total commands, of disjunctive commands, and of finitely nondeterminate commands. For our purposes the first two classes merely serve as examples or test cases. Our real aim is the class of the finitely nondeterminate commands. It is this class, or rather its syntactic reflection, that plays a key role in Chapters 11 and 13.
Syntactic reflection of semantic properties
Throughout this section we let R be a sup-closed subset of MT. We are interested in syntactic criteria on commands c ∈ A⊙ that imply wp.c ∈ R. Our solution consists of an algebraic definition of a subset Sy.R of A⊙ with wp.q ∈ R for all q ∈ Sy.R.
In this chapter, we reconcile the definition of the semantics of recursive procedures, cf. Chapter 4, with the relational semantics of Chapter 6. The idea is that the two semantical paradigms meet halfway. Therefore, the chapter consists of two parts.
The first part is based on predicate-transformation semantics, cf. Chapter 4. In Section 9.1, we describe the stack implementation of recursive procedures. This implementation can be regarded as an interpreter: the whole recursive declaration is interpreted by means of a tail-recursive procedure with a stack of continuations as a value parameter. The correctness of the interpreter is proved in Section 9.2.
In the second part of the chapter we treat the relational semantics of recursive procedures. This is done in two steps. In Section 9.3, we define the relational semantics of a tail-recursive declaration by means of a transitive closure in a graph of configurations. By Chapter 6, these relational semantics induce predicate transformers. We then show that the predicate transformers correspond to wp and wlp as defined for such a declaration in Chapter 4. In Section 9.4, the ideas and results of the preceding sections are combined. The stack implementation of 9.1 is combined with the relational semantics of tail recursion (cf. Section 9.3) to define the relational semantics of an arbitrary recursive declaration. The results of 9.2 and 9.3 imply that these relational semantics correspond to the predicate-transformation semantics of Chapter 4.
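The stack implementation described in Section 9.1 can be sketched as a tiny interpreter. The command syntax below is hypothetical ('skip', assignment, sequential composition, conditional, procedure call); the point is that recursion in the declaration is executed by a single tail loop over an explicit stack of continuations:

```python
# Commands: 'skip', ('asg', f), ('seq', a, b), ('if', guard, a, b),
# ('call', p). `decl` maps procedure names to bodies.

def run(decl, cmd, state):
    stack = [cmd]                      # commands still to be executed
    while stack:
        c = stack.pop()
        if c == 'skip':
            continue
        tag = c[0]
        if tag == 'asg':               # apply a state transformation
            state = c[1](state)
        elif tag == 'seq':             # push the tail first, head on top
            stack.append(c[2])
            stack.append(c[1])
        elif tag == 'if':              # choose a branch by the guard
            stack.append(c[2] if c[1](state) else c[3])
        elif tag == 'call':            # replace the call by its body
            stack.append(decl[c[1]])
    return state

# Hypothetical recursive declaration: fact accumulates n! into acc.
decl = {'fact': ('if', lambda s: s['n'] > 0,
                 ('seq',
                  ('asg', lambda s: {'n': s['n'] - 1, 'acc': s['acc'] * s['n']}),
                  ('call', 'fact')),
                 'skip')}

assert run(decl, ('call', 'fact'), {'n': 4, 'acc': 1}) == {'n': 0, 'acc': 24}
```

The interpreter itself is tail-recursive (a plain loop), with the stack of pending commands playing the role of the value parameter of Section 9.1.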
This chapter is devoted to the introduction of annotations, procedures, recursion and repetitions, all concepts highly relevant to programming practice and programming methodology. In 2.1 we introduce Hoare triples as a specification method. Hoare triples are used in 2.2 for correctness proofs by annotation. In 2.3 and 2.4 we treat procedures in a programming language like Pascal. The specification and invocation rules are discussed in Section 2.3. The correctness of recursive procedures is treated in Section 2.4. The methods presented here are not new but deserve to be promoted.
In Section 2.5 we present and prove an abstract version of the rule for total correctness of recursive procedures. In 2.6 we introduce homomorphisms, functions from commands to predicate transformers that satisfy the standard laws of wp and wlp. Homomorphisms are used in 2.7 to give Hoare's Induction Rule for conditional correctness of recursive procedures, and a related rule for the necessity of preconditions. Finally, in Section 2.8, the results on recursive procedures are specialized to the repetition.
With respect to recursive procedures, this chapter is not ‘well-founded’. We only postulate some properties and proof rules, but the definition of the semantics of recursion (i.e., of the functions wp and wlp) and the proof of the postulates are postponed to Chapter 4.
Specification with Hoare triples
Weakest preconditions provide the easiest way to present predicate-transformation semantics. The formalism of Hoare triples, however, is completely equivalent and more convenient for program derivations and proofs of program correctness.
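The equivalence is that {P} c {Q} holds precisely when P implies wp.c.Q. Over a finite state space this implication can be tested pointwise, as in the following sketch (the command, predicates, and state space are all illustrative):

```python
# Check a total-correctness Hoare triple {P} c {Q} over a finite state
# space by testing the implication P => wp.c.Q pointwise.
STATES = range(8)

def wp(rel, post):
    """Demonic wp over a relation: some outcome must exist (termination)
    and every outcome must satisfy `post`."""
    return {s for s in STATES
            if any(u == s for (u, t) in rel)
            and all(t in post for (u, t) in rel if u == s)}

# Command: x := (x + 1) mod 8, as a relation on states.
inc = {(s, (s + 1) % 8) for s in STATES}

P = {s for s in STATES if s < 7}       # precondition:  x < 7
Q = {s for s in STATES if s > 0}       # postcondition: x > 0

# {x < 7} x := (x+1) mod 8 {x > 0} holds: P is a subset of wp.inc.Q.
assert P <= wp(inc, Q)
```

Weakening the precondition to true would make the triple fail, since from x = 7 the command establishes x = 0.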
Recent years have seen considerable work on two approaches to belief revision: the so-called foundations and coherence approaches. The foundations approach supposes that a rational agent derives its beliefs from justifications or reasons for these beliefs: in particular, that the agent holds some belief if and only if it possesses a satisfactory reason for that belief. According to the foundations approach, beliefs change as the agent adopts or abandons reasons. The coherence approach, in contrast, maintains that pedigrees do not matter for rational beliefs, but that the agent instead holds some belief just as long as it logically coheres with the agent's other beliefs. More specifically, the coherence approach supposes that revisions conform to minimal change principles and conserve as many beliefs as possible as specific beliefs are added or removed. The artificial intelligence notion of reason maintenance system (Doyle, 1979) (also called “truth maintenance system”) has been viewed as exemplifying the foundations approach, as it explicitly computes sets of beliefs from sets of recorded reasons. The so-called AGM theory of Alchourrón, Gärdenfors and Makinson (1985; 1988) exemplifies the coherence approach with its formal postulates characterizing conservative belief revision.
Although philosophical work on the coherence approach influenced at least some of the work on the foundations approach (e.g., (Doyle, 1979) draws inspiration from (Quine, 1953; Quine and Ullian, 1978)), Harman (1986) and Gärdenfors (1990) view the two approaches as antithetical. Gärdenfors has presented perhaps the most direct argument for preferring the coherence approach to the foundations approach.
Since the beginning of artificial intelligence research on action, researchers have been concerned with reasoning about actions with preconditions and postconditions. Through the work of Moore (1980), Pratt's (1980) dynamic semantics soon established itself in artificial intelligence as the appropriate semantics for action. Mysteriously, however, actions with preconditions and postconditions were not given a proper treatment within the modal framework of dynamic logic. This paper offers such an analysis. Things are complicated by the need to deal at the same time with the notion of competence, or an actor's ability. Below, a logic of actions with preconditions and postconditions is given a sound and complete syntactic characterization, in a logical formalism in which it is possible to express actor competence, and the utility of this formalism is demonstrated in the generation and evaluation of plans.
The notion of actions with pre- and postconditions arose in artificial intelligence in the field of planning. In formulating a plan to reach some particular goal, there are a number of things which a planning agent must take into account. First, he will have to decide which actions can and may be undertaken in order to reach the goal. The physical, legal, financial and other constraints under which an actor must act will be lumped together below, since we will be interested in what is common to them all, namely that they restrict available options.
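The planning origin of pre- and postconditions can be made concrete with a STRIPS-flavoured sketch. All names and the toy domain below are illustrative: an action is a precondition set, an add list, and a delete list; an action is available in a state (a set of atoms) only when its precondition holds there, and a plan is found by breadth-first search:

```python
from collections import deque

def apply_action(action, state):
    """Apply an action if its precondition holds; else return None."""
    pre, add, delete = action
    if not pre <= state:
        return None                    # precondition fails: not available
    return (state - delete) | add      # postcondition: delete, then add

# Toy actions (precondition, add list, delete list).
pick = (frozenset({'hand_empty', 'on_table'}),
        frozenset({'holding'}),
        frozenset({'hand_empty', 'on_table'}))
drop = (frozenset({'holding'}),
        frozenset({'hand_empty', 'on_shelf'}),
        frozenset({'holding'}))

def plan(actions, start, goal):
    """Breadth-first search for a sequence of actions reaching `goal`."""
    frontier, seen = deque([(frozenset(start), [])]), set()
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        if state in seen:
            continue
        seen.add(state)
        for name, act in actions.items():
            nxt = apply_action(act, state)
            if nxt is not None:
                frontier.append((frozenset(nxt), steps + [name]))
    return None                        # goal unreachable

acts = {'pick': pick, 'drop': drop}
assert plan(acts, {'hand_empty', 'on_table'}, {'on_shelf'}) == ['pick', 'drop']
```

The restriction of available options mentioned above appears here only as the precondition test; the dynamic-logic treatment developed in this paper refines exactly this picture.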
In this last chapter we shall survey a number of instances where infinite electrical networks are useful models of physical phenomena, or serve as analogs in some other mathematical disciplines, or are realizations of certain abstract entities. We shall simply describe those applications without presenting a detailed exposition. To do the latter would carry us too far afield into quite a variety of subjects. However, we do provide references to the literature wherein the described applications can be examined more closely.
Several examples are presented in Sections 8.1 and 8.2 that demonstrate how the theory of infinite electrical networks is helpful for finding numerical solutions of some partial differential equations when the phenomenon being studied extends over an infinite region. The basic analytical tool is an operator version of Norton's representation, which is appropriate for an infinite grid that is being observed along a boundary. In effect, the infinite grid is replaced by a set of terminating resistors and possibly equivalent sources connected to the boundary nodes. In this way, the infinite domain of the original problem can be reduced to a finite one – at least so far as one of the spatial dimensions is concerned. This can reduce computing time and memory-storage requirements.
In Section 8.3 we describe two classical problems in the theory of random walks on infinite graphs and state how infinite-electrical-network theory solves those problems. Indeed, resistive networks are analogs for random walks.
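The analogy is that node voltages in a resistive network are discrete harmonic functions, and so are hitting probabilities of the corresponding random walk. A minimal finite illustration (the path graph and unit resistances are our own choice, not an example from the text): ground one end of a path of unit resistors at 0 volts, hold the other at 1 volt, and relax each interior node to the average of its neighbours; the resulting voltage at node k equals the probability that a simple random walk from k reaches the high end before the grounded end:

```python
# Path 0-1-...-N of unit resistors: node 0 grounded at 0 V, node N held
# at 1 V. Interior voltages are discrete harmonic (the average of the
# two neighbours), computed here by crude Gauss-Seidel relaxation.
N = 10
v = [0.0] * (N + 1)
v[N] = 1.0
for _ in range(20000):
    for k in range(1, N):
        v[k] = 0.5 * (v[k - 1] + v[k + 1])

# Known closed form for the path (gambler's ruin): v[k] = k / N, i.e. the
# probability that the walk from k hits N before 0.
assert all(abs(v[k] - k / N) < 1e-6 for k in range(N + 1))
```

On infinite graphs the same correspondence turns questions such as recurrence versus transience of the walk into questions about the input resistance of the infinite network, which is how the theory of this book applies.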
In 1971 Harley Flanders [51] opened a door by showing how a unique, finite-power, voltage-current regime could be shown to exist in an infinite resistive network whose graph need not have a regular pattern. To be sure, infinite networks had been examined at least intermittently from the earliest days of circuit theory, but those prior works were restricted to simple networks of various sorts, such as ladders and grids. For example, infinite uniform ladder networks were analyzed in [31], [73], and [139], works that appeared 70 to 80 years ago. Early examinations of uniform grids and the discrete harmonic operators they generate can be found in [35], [44], [45], [47], [52], [54], [65], [84], [126], [135], [143].
Flanders' theorem, an exposition of which starts this chapter, is restricted to locally finite networks with a finite number of sources. Another tacit assumption in his theory is that only open-circuits appear at the infinite extremities of the network. The removal of these restrictions, other extensions, and a variety of ramifications [158], [159], [163], [177] comprise the rest of this chapter. However, the assumptions that the network consists only of linear resistors and independent sources and is in a finite-power regime are maintained throughout this chapter.
Actually, finite-power theories for nonlinear networks are now available, and they apply just as well to linear networks as special cases. One is due to Dolezal [40], [41], and the other to DeMichelle and Soardi [37].