Fibrations have been widely used to model polymorphic λ-calculi. In this paper we describe the additional structure on a fibration that suffices to make it a model of a polymorphic λ-calculus with subtypes and bounded quantification; the basic idea is to single out a class of maps, the inclusions, in each fibre. Bounded quantification is made possible by imposing a condition that resembles local smallness. Since the notion of inclusion is not stable under isomorphism, some care must be taken to make everything strict.
We then show that PER models for λ-calculi with subtypes fit into this framework. In fact, not only the PERs, but any full reflective subcategory of the category of modest sets (‘PERs’ in a realizability topos), provides a model; hence all the small complete categories of ‘synthetic domains’ found in various realizability toposes can be used to model subtypes.
Introduction
What this paper is about
At the core of object-oriented programming, and related approaches to programming, are the notions of subtyping and inheritance. These have proved to be very powerful tools for structuring programs, and they appear—in one form or another—in a wide variety of modern programming languages.
One way of studying these notions formally is to use the framework of typed λ-calculus. That is, we start with a formal system (say, a version of the polymorphic λ-calculus) and extend it by adding a notion of type inclusion, together with suitable rules; we obtain a larger system. We can then use the methods of mathematical logic to study the properties of the system.
We explore some foundational issues in the development of a theory of intensional semantics, in which program denotations may convey information about computation strategy in addition to the usual extensional information. Beginning with an “extensional” category C, whose morphisms we can think of as functions of some kind, we model a notion of computation using a comonad with certain extra structure and we regard the Kleisli category of the comonad as an intensional category. An intensional morphism, or algorithm, can be thought of as a function from computations to values, or as a function from values to values equipped with a computation strategy. Under certain rather general assumptions the underlying category C can be recovered from the Kleisli category by taking a quotient, derived from a congruence relation that we call extensional equivalence. We then focus on the case where the underlying category is cartesian closed. Under further assumptions the Kleisli category satisfies a weak form of cartesian closure: application morphisms exist, currying and uncurrying of morphisms make sense, and the diagram for exponentiation commutes up to extensional equivalence. When the underlying category is an ordered category we identify conditions under which the exponentiation diagram commutes up to an inequality. We illustrate these ideas and results by introducing some notions of computation on domains and by discussing the properties of the corresponding categories of algorithms on domains.
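The shape of this construction can be sketched concretely in Haskell. The comonad, the Traced type, the operator (=>=) and the function underlying below are illustrative assumptions of ours, not the paper's definitions; the sketch only shows co-Kleisli ("intensional") morphisms, their composition, and the extensional quotient.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A comonad interface; 'extract' forgets the intensional information.
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- An illustrative comonad: a value together with a trace of the steps
-- taken to produce it, standing in for a "computation strategy".
data Traced a = Traced [String] a deriving (Functor, Show)

instance Comonad Traced where
  extract (Traced _ x)     = x
  duplicate t@(Traced s _) = Traced s t

-- Morphisms of the co-Kleisli ("intensional") category: algorithms,
-- i.e. functions from computations to values.
type Algorithm a b = Traced a -> b

-- Composition of algorithms in the co-Kleisli category.
(=>=) :: Algorithm a b -> Algorithm b c -> Algorithm a c
f =>= g = g . fmap f . duplicate

-- The underlying function of an algorithm; two algorithms are
-- extensionally equivalent when their underlying functions agree, and
-- quotienting by this relation recovers the extensional category.
underlying :: Algorithm a b -> (a -> b)
underlying f x = f (Traced [] x)
```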
Introduction
Most existing denotational semantic treatments of programming languages are extensional, in that they abstract away from computational details and ascribe essentially extensional meanings to programs.
There has been considerable recent interest in the use of algebraic methodologies to define and elucidate constructions in fixed point semantics [B], [FMRS], [Mu2]. In this paper we present recent results utilizing categorical methods, particularly strong monads, algebras and dinatural transformations to build general fixed point operators. The approach throughout is to evolve from the specific to the general case by eventually discarding the particulars of domains and continuous functions so often used in this setting. Instead we rely upon the structure of strong monads and algebras to provide a general algebraic framework for this discussion. This framework should provide a springboard for further investigations into other issues in semantics.
By way of background, the issues raised in this paper find their origins in several different sources. In [Mu2] the formal role of iteration in a cartesian closed category (ccc) with fixed points was investigated. This was motivated by the observation in [H-P] that the presence of a natural number object (nno) is inconsistent with ccc's and fixed points. This author introduced the notion of onno (ordered nno), which in semantic categories plays the role of an iterator and is precisely the initial T-algebra for T the strong lift monad. Using the onno, a factorization of fix was produced, and it was further shown that fix is in fact a dinatural transformation. This was accomplished by avoiding the traditional projection/embedding approach to semantics. Similar results were extended to order-enriched and effective settings as well.
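As a loose illustration of the objects involved (our own sketch, not the development of [Mu2]): the lift construction can be written as a monad in Haskell, alongside the usual fixed-point combinator whose semantic factorization through an iterator is at issue. The names Lift, fix and factorial are illustrative.

```haskell
-- Lifting adds explicit "delay" steps; semantically its initial algebra
-- is a vertical natural numbers object, the onno mentioned in the text.
data Lift a = Now a | Later (Lift a)

instance Functor Lift where
  fmap f (Now a)   = Now (f a)
  fmap f (Later l) = Later (fmap f l)

instance Applicative Lift where
  pure = Now
  Now f   <*> x = fmap f x
  Later f <*> x = Later (f <*> x)

instance Monad Lift where
  Now a   >>= k = k a
  Later l >>= k = Later (l >>= k)

-- The fixed-point operator; semantically it is the least upper bound of
-- the iterates f^n(bottom), and its factorization through an iterator is
-- what the dinaturality result concerns.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Example: factorial as a fixed point.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```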
Turning to monads, their role in computation is not new. In particular, it was emphasized early in the development of topos theory that the partial map classifier was a strong monad.
Domain theoretic understanding of databases as elements of powerdomains is modified to allow multisets of records instead of sets. This is related to geometric theories and classifying toposes, and it is shown that algebraic base domains lead to algebraic categories of models in two cases analogous to the lower (Hoare) powerdomain and Gunter's mixed powerdomain.
Terminology
Throughout this paper, “domain” means algebraic poset – not necessarily with bottom, nor second countable. The information system theoretic account of algebraic posets fits very neatly with powerdomain constructions. Following Vickers [90], it may be that essentially the same methods work for continuous posets; but we defer treating those until we have a better understanding of the necessary generalizations to topos theory.
More concretely, a domain is a preorder (information system) (D, ⊆) of tokens, and associated with it are an algebraic poset pt D of points (ideals of D; one would normally think of pt D as the domain), and a frame ΩD of opens (upper closed subsets of D; ΩD is isomorphic to the Scott topology on pt D).
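A small finite sketch may help fix this terminology. The Haskell types Token and Preorder and the predicates isPoint and isOpen below are our illustration, restricted to finite carriers so that the checks are computable.

```haskell
type Token = String

-- A finite preorder of tokens (an "information system"): the carrier
-- together with the order relation.
data Preorder = Preorder
  { tokens :: [Token]
  , leq    :: Token -> Token -> Bool
  }

-- A point is an ideal: a nonempty, downward-closed, directed set of tokens.
isPoint :: Preorder -> [Token] -> Bool
isPoint d xs = not (null xs) && downwardClosed && directed
  where
    downwardClosed = and [ a `elem` xs | b <- xs, a <- tokens d, leq d a b ]
    directed       = and [ or [ leq d a c && leq d b c | c <- xs ]
                         | a <- xs, b <- xs ]

-- An open is an upper-closed set of tokens.
isOpen :: Preorder -> [Token] -> Bool
isOpen d u = and [ b `elem` u | a <- u, b <- tokens d, leq d a b ]
```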
“Topos” always means “Grothendieck topos”, and not “elementary topos”; morphisms between toposes are understood to be geometric morphisms.
S, italicized, denotes the category of sets.
We shall follow, usually without comment, the notation of Vickers [89], which can be taken as our standard reference for the topological and localic notions used here.
In [6] one finds a general method to describe various (typed) λ-calculi categorically. Here we give an elementary formulation in terms of indexed categories of the outcome of applying this method to the simply typed λ-calculus. It yields a categorical structure in which one can describe exponent types without assuming cartesian product types. Specializing to the “monoid” case where one has only one type yields a categorical description of the untyped λ-calculus.
In the literature there are two categorical notions for the untyped λ-calculus: one by Obtulowicz and one by Scott & Koymans. The notion we arrive at subsumes both of these; it can be seen as a mild generalization of the first one.
Introduction
The straightforward way to describe the simply typed λ-calculus (denoted here by λ1) categorically is in terms of cartesian closed categories (CCC's), see [10]. On the type theoretic side this caused some discomfort, because one commonly uses only exponent types without assuming cartesian product types — let alone unit (i.e. terminal) types. The typical reply from category theory is that one needs cartesian products in order to define exponents. Below we give a categorical description of exponent types without assuming cartesian product types. We do use cartesian products of contexts; these always exist by concatenation. Thus both sides can be satisfied by carefully distinguishing between cartesian products of types and cartesian products of contexts. We introduce an appropriate categorical structure for doing so.
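On the syntactic side the distinction can be sketched as follows; the Haskell types Ty, Ctx and Tm are our illustration, not the indexed-category structure introduced in the paper. Types have exponents but no products or unit, while contexts are lists of types whose "product" is simply concatenation.

```haskell
-- Types: base types and exponents only; no product or unit types.
data Ty = Base String | Ty :=> Ty
  deriving (Eq, Show)

infixr 5 :=>

-- A context is a list of types; "products" of contexts always exist,
-- simply by concatenation.
type Ctx = [Ty]

ctxProduct :: Ctx -> Ctx -> Ctx
ctxProduct = (++)

-- Terms of the simply typed lambda-calculus, with de Bruijn indices.
data Tm = Var Int | Lam Ty Tm | App Tm Tm
  deriving (Show)

-- Type checking a term in a context.
typeOf :: Ctx -> Tm -> Maybe Ty
typeOf ctx (Var i)
  | i >= 0 && i < length ctx = Just (ctx !! i)
  | otherwise                = Nothing
typeOf ctx (Lam a t) = (a :=>) <$> typeOf (a : ctx) t
typeOf ctx (App t u) = do
  a :=> b <- typeOf ctx t
  a'      <- typeOf ctx u
  if a == a' then Just b else Nothing
```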
In [6] one can find a general method to describe typed λ-calculi categorically. One of the main organizing principles used there is formulated below. It deserves the status of a slogan.
There are many situations in logic, theoretical computer science, and category theory where two binary operations—one thought of as a (tensor) “product”, the other a “sum”—play a key role, such as in distributive categories and in *-autonomous categories. (One can regard these as essentially the AND/OR of traditional logic and the TIMES/PAR of (multiplicative) linear logic, respectively.) In the latter example, however, the distributivity one often finds is conspicuously absent: in this paper we study a “linearisation” of distributivity that is present in this context. We show that this weak distributivity is precisely what is needed to model Gentzen's cut rule (in the absence of other structural rules), and show how it can be strengthened in two natural ways, one to generate full distributivity, and the other to generate *-autonomous categories.
Introduction
There are many situations in logic, theoretical computer science, and category theory where two binary operations, “tensor products” (though one may be a “sum”), play a key role. The multiplicative fragment of linear logic is a particularly interesting example as it is a Gentzen style sequent calculus in which the structural rules of contraction, thinning, and (sometimes) exchange are dropped. The fact that these rules are omitted considerably simplifies the derivation of the cut elimination theorem. Furthermore, the proof theory of this fragment is interesting and known [Se89] to correspond to *-autonomous categories as introduced by Barr in [Ba79].
In the study of categories with two tensor products one usually assumes a distributivity condition, particularly in the case when one of these is either the product or sum.
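As a rough illustration only: reading the tensor as the Haskell pair type and the second operation as Either, the weak distributivity of the abstract is the first map below, in contrast with full distributivity, the second. In Haskell both exist because values may be copied and discarded; the point of the paper is the linear setting in which only the weaker map is assumed.

```haskell
-- Weak (linear) distributivity: A (x) (B (+) C)  ->  (A (x) B) (+) C.
weakDist :: (a, Either b c) -> Either (a, b) c
weakDist (a, Left b)  = Left (a, b)
weakDist (_, Right c) = Right c

-- Full distributivity: A (x) (B (+) C)  ->  (A (x) B) (+) (A (x) C).
-- Note that the second clause copies nothing but the first uses 'a' on
-- both sides of the choice, which a linear discipline need not allow.
fullDist :: (a, Either b c) -> Either (a, b) (a, c)
fullDist (a, Left b)  = Left (a, b)
fullDist (a, Right c) = Right (a, c)
```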
Up to this point, the semantics of the commands is determined by the relation between precondition and postcondition. This point of view is too restricted for the treatment of concurrent programs and reactive systems. The usual example is that of an operating system which is supposed to perform useful tasks without ever reaching a postcondition.
For this purpose, the semantics of commands must be extended by consideration of conditions at certain moments during execution. We do not want to be forced to consider all intermediate states or to formalize sequences of intermediate states. We have chosen the following level of abstraction. To every procedure name h, a predicate z.h is associated. The temporal semantic properties of a command q depend on the values of z.h.x for the procedure calls, say of procedure h in state x, induced by execution of command q. The main properties are ‘always’ and ‘eventually’, which are distinguished by whether z.h.x should hold for all induced calls or for at least one induced call. The concept of ‘always’ is related to stability and safety. The concept of ‘eventually’ is related to progress and liveness.
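Under the simplifying assumption that the calls induced by executing a command could be observed as a list, the two properties amount to the following quantifications; all names in this sketch (Z, Call, always, eventually) are illustrative, not part of the formal development.

```haskell
type ProcName = String
type State    = Int

-- The predicate z: for each procedure name h, a predicate z.h on states.
type Z = ProcName -> State -> Bool

-- An induced call: which procedure was called, and in which state.
type Call = (ProcName, State)

-- 'always': z.h.x holds for every induced call (stability, safety).
always :: Z -> [Call] -> Bool
always z calls = and [ z h x | (h, x) <- calls ]

-- 'eventually': z.h.x holds for at least one induced call (progress, liveness).
eventually :: Z -> [Call] -> Bool
eventually z calls = or [ z h x | (h, x) <- calls ]
```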
In this chapter, we regard nontermination of simple commands as malfunctioning and nontermination of procedures as potentially useful infinite behaviour. We therefore use wp for the interpretation of simple commands and wlp for procedure calls.
Suppose that you have a database that contains, among other things, the following pieces of information (in some form of code):
α: All European swans are white.
β: The bird caught in the trap is a swan.
γ: The bird caught in the trap comes from Sweden.
δ: Sweden is part of Europe.
If your database is coupled with a program that can compute logical inferences in the given code, the following fact is derivable from α – δ:
ε: The bird caught in the trap is white.
Now suppose that, as a matter of fact, the bird caught in the trap turns out to be black. This means that you want to add the fact ¬ε, i.e., the negation of ε, to the database. But then the database becomes inconsistent. If you want to keep the database consistent, which is normally a sound methodology, you need to revise it. This means that some of the beliefs in the original database must be retracted. You don't want to give up all of the beliefs, since this would be an unnecessary loss of valuable information. So you have to choose between retracting α, β, γ, or δ.
The problem of belief revision is that logical considerations alone do not tell you which beliefs to give up, but this has to be decided by some other means.
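The derivation of ε can be made concrete in a toy encoding. The atoms, rules, and the closure function below are our own illustration (with δ recast as an atomic fact about the bird, and α specialized to a rule about it), not part of the original example.

```haskell
import Data.List (nub)

type Atom = String

-- A Horn-style rule: if all premises hold, the conclusion holds.
data Rule = [Atom] :-> Atom

-- beta, gamma and delta as facts about the bird (delta recast as a fact).
facts :: [Atom]
facts = ["swan", "fromSweden", "swedenPartOfEurope"]

-- alpha as a rule, together with the step combining gamma and delta.
rules :: [Rule]
rules = [ ["fromSweden", "swedenPartOfEurope"] :-> "european"
        , ["swan", "european"]                 :-> "white" ]

-- Forward chaining to a fixed point: the logical closure of the base.
closure :: [Atom] -> [Rule] -> [Atom]
closure fs rs
  | null new  = fs
  | otherwise = closure (fs ++ new) rs
  where
    new = nub [ c | ps :-> c <- rs, all (`elem` fs) ps, c `notElem` fs ]

-- epsilon ("white") is derivable; adding its negation therefore makes the
-- base inconsistent, and one of the original items must be retracted.
derivesEpsilon :: Bool
derivesEpsilon = "white" `elem` closure facts rules
```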
This book is about programs as mathematical objects. We focus on one of the aspects of programs, namely their functionality, their meaning or semantics. Following Dijkstra we express the semantics of a program by the weakest precondition of the program as a function of the postcondition. Of course, programs have other aspects, like syntactic structure, executability and (if they are executable) efficiency. In fact, perhaps surprisingly, for programming methodology it is useful to allow a large class of programs, many of which are not executable but serve as partially implemented specifications.
Weakest preconditions are used to define the meanings of programs in a clean and uniform way, without the need to introduce operational arguments. This formalism allows an effortless incorporation of unbounded nondeterminacy. Now programming methodology poses two questions. The first question is, given a specification, to design a general program that is proved to meet the specification but need not be executable or efficient, and the second question is to transform such a program into a more suitable one that also meets the specification.
We do not address the methodological question how to design, but we concentrate on the mathematical questions concerning semantic properties of programs, semantic equality of programs and the refinement relation between programs. We provide a single formal theory that supports a number of different extensions of the basic theory of computation. The correctness of a program with respect to a specification is for us only one of its semantic properties.
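A minimal sketch of this point of view, under our own simplifying assumptions about the state space: a command is identified with its weakest-precondition transformer, and unbounded (demonic) choice is the pointwise conjunction over an arbitrary family of alternatives. The names State, assign, seqc and choice are illustrative.

```haskell
-- A command is identified with its weakest-precondition transformer,
-- a function from postconditions to preconditions.
type State     = [(String, Integer)]      -- an illustrative state space
type Predicate = State -> Bool
type Command   = Predicate -> Predicate

skip :: Command
skip post = post

-- An assignment-like update of one variable by a state function:
-- wp (x := e) post holds in s iff post holds after the update.
assign :: String -> (State -> Integer) -> Command
assign x e post s = post ((x, e s) : filter ((/= x) . fst) s)

-- Sequential composition: wp (p ; q) = wp p . wp q.
seqc :: Command -> Command -> Command
seqc p q = p . q

-- Demonic choice over a (possibly unbounded) family of alternatives:
-- the postcondition must be established whichever alternative is taken.
choice :: [Command] -> Command
choice cs post s = all (\c -> c post s) cs
```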
Studies of the dynamics of (rational) belief have reached a new degree of sophistication through the formal models developed in particular by Levi (1980) and by Alchourrón, Gärdenfors, and Makinson (the AGM model, Alchourrón et al. 1985 and Gärdenfors 1988). In these models, an individual's beliefs are represented by a set that is closed under logical consequence. The AGM model and its underlying assumptions have had a profound influence on the study of belief base updating in computer science. Although states of belief are, for obvious reasons, represented in computer applications by finite sets of sentences, it is common to demand that “results from an update must be independent of the syntax of the original K[nowledge] B[ase]” (Katsuno and Mendelzon 1989). In other words, operations on finite sets are treated more or less as shadows of operations on the (infinite) logical closures of these sets.
In this paper, a different representation of beliefs will be introduced. A formalized belief state will consist of an ordered pair <K,Cn>. K, or the belief base, is a set of expressions that is not necessarily closed under logical consequence. It represents the basic facts on which an individual grounds her beliefs. Cn is a consequence operator, such that Cn(K) represents her beliefs (namely both the elements of K and the conclusions she draws from them).
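The representation itself is easy to transcribe; the Haskell names below are our own illustration of the pair <K, Cn>.

```haskell
type Sentence = String

-- A belief state is an ordered pair <K, Cn>.
data BeliefState = BeliefState
  { base :: [Sentence]                  -- K: not necessarily closed
  , cn   :: [Sentence] -> [Sentence]    -- Cn: a consequence operator
  }

-- The agent's beliefs are Cn(K): the base together with its consequences.
believes :: BeliefState -> Sentence -> Bool
believes st s = s `elem` cn st (base st)
```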
In the last five years, there has been much concern in Artificial Intelligence about the problem of belief revision. A rather exciting feature of the on-going debate is that it has led to an interaction between fields which usually ignore each other, such as logic, probability theory, fuzzy set and possibility theory, and their epistemological underpinnings. In his book, Gärdenfors (1988) emphasizes in a convincing way that notions of epistemic state can be defined in the framework of logic as well as in that of probability theory, and that similar postulates can be put forward in the two settings, for the purpose of devising rational updating operations that take place upon the arrival of new pieces of information. On the other hand Spohn (1988), in trying to refine the logical notion of epistemic state, viewed as a consistent and deductively closed set of propositions, comes close to Shackle's (1961) notions of degrees of potential surprise, as well as Zadeh's (1978) possibility theory. However, he proposes revision rules that are in some way in accordance with probability theory where probabilities only take infinitesimal values (Spohn, 1990). Besides, one of the main outcomes of Gärdenfors' logical theory of belief change is that underlying any revision process is an ordering of the propositions that form the epistemic state; the rationality postulates force this ordering to satisfy specific properties, and the only numerical set-functions that can account for these properties are the so-called necessity measures that play a basic role in possibility theory (Dubois and Prade, 1988).
This chapter is devoted to the formal definition of the semantics of sequential composition, unbounded choice and recursion and to the proofs of the properties of commands that were introduced and used in the previous chapters. The semantics of the simple commands is taken for granted, but otherwise the reader should not rely on old knowledge but only use facts that have already been justified in the new setting. At the end of the chapter, the foundations of the previous chapters will be complete.
Some examples of the theory are given in the exercises at the end of the chapter. The text of the chapter has almost no examples. One reason is that Chapters 1, 2 and 3 may be regarded as examples of the theory. On the other hand, every nontrivial example tends to constitute additional theory.
In Section 4.1, we introduce complete lattices and investigate the lattice of the predicate transformers and some important subsets. Section 4.2 contains our version of the theorem of Knaster–Tarski. A syntactic formalism for commands with unbounded choice is introduced in Section 4.3.
Section 4.4 contains the main definition. From their definitions on simple commands, the functions wp and wlp are extended to procedure names and command expressions. In Sections 4.5 and 4.6, the healthiness laws, which are postulated for simple commands, are extended to procedure names and command expressions.
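To indicate the role of the Knaster-Tarski theorem in such definitions, here is a hedged sketch of ours over a finite state space (so that the iteration terminates): predicates ordered by implication form a complete lattice, and the least fixed point of a monotone functional on predicates is reached by iterating from the bottom predicate. The names lfp and reach5 are illustrative.

```haskell
type State     = Int
type Predicate = State -> Bool

-- A finite, illustrative state space.
states :: [State]
states = [0 .. 5]

bottom :: Predicate
bottom _ = False

-- The pointwise order on predicates: implication over the state space.
implies :: Predicate -> Predicate -> Bool
implies p q = all (\s -> not (p s) || q s) states

-- Least fixed point of a monotone functional on predicates, obtained by
-- iterating from bottom; Knaster-Tarski guarantees that it exists.
lfp :: (Predicate -> Predicate) -> Predicate
lfp f = go bottom
  where
    go p | implies (f p) p && implies p (f p) = p
         | otherwise                          = go (f p)

-- Example functional, in the shape of a wp-style recursive definition:
-- the states from which 5 is reachable by repeatedly adding 1.
reach5 :: Predicate
reach5 = lfp (\p s -> s == 5 || (s < 5 && p (s + 1)))
```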
Knowledge update has been a matter of concern to two quite separate traditions: one in philosophical logic, and another in artificial intelligence. In this paper we draw on both traditions to develop a theory of update, based on conditional logic, for a kind of knowledge base that has proven to be of interest in artificial intelligence. After motivating and formulating the logic on which our theory is based, we will prove some basic results and show how our logic can be used to describe update in an environment in which knowledge bases can be treated as truth-value assignments in four-valued logic. In keeping with Nuel Belnap's terminology in Belnap (1977a) and Belnap (1977b), we will refer to such truth-value assignments as set-ups or as four-valued set-ups.
Paraconsistency, primeness, and atomistic update
For the moment we will not say exactly what a four-valued set-up is. Instead we will describe informally some conditions under which it would be natural to structure one's knowledge base as a four-valued set-up. One of these conditions has to do with the treatment of inconsistent input; a second has to do with the representation of disjunctive information; the third concerns what kinds of statements can be the content of an update.
Inconsistent input
A logical calculus is paraconsistent if it cannot be used to derive arbitrary conclusions from inconsistent premises. Belnap argues in general terms that paraconsistent reasoning is appropriate in any context where an automated reasoner must operate without any guarantee that its input is consistent, and where nondegenerate performance is desirable even if inconsistency is present. Knowledge bases used in AI applications are cases of this sort.
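A hedged sketch of the intended behaviour, with an illustrative representation of set-ups (not the paper's formal definition): each atom is assigned one of Belnap's four values, and an atomistic update combines a new told-true or told-false report with what is already recorded, so inconsistent input localizes to the value Both for that atom rather than licensing arbitrary conclusions elsewhere.

```haskell
import qualified Data.Map as Map

-- Belnap's four values: told nothing, told true, told false, told both.
data Four = None | T | F | Both
  deriving (Eq, Show)

-- A set-up assigns one of the four values to each atomic sentence.
type Atom  = String
type SetUp = Map.Map Atom Four

value :: Atom -> SetUp -> Four
value = Map.findWithDefault None

-- Combine what the set-up already records with a new report.
combine :: Four -> Four -> Four
combine None v    = v
combine v    None = v
combine T    T    = T
combine F    F    = F
combine _    _    = Both

-- Atomistic update: report an atom as told-true or told-false.
-- Inconsistent input yields Both for that atom only.
update :: Atom -> Four -> SetUp -> SetUp
update = Map.insertWith combine
```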