In a recent paper, Steven Vickers introduced a ‘generalized powerdomain’ construction, which he called the (lower) bagdomain, for algebraic posets, and argued that it provides a more realistic model than the powerdomain for the theory of databases (cf. Gunter). The basic idea is that our ‘partial information’ about a possible database should be specified not by a set of partial records of individuals, but by an indexed family (or, in Vickers' terminology, a bag) of such records: we do not want to be forced to identify two individuals in our database merely because the information that we have about them so far happens to be identical (even though we may, at some later stage, obtain the information that they are in fact the same individual).
There is an obvious problem with this notion. Even if the domain from which we start has only one point, the points of its bagdomain should correspond to arbitrary sets, and the ‘refinement ordering’ on them to arbitrary functions between sets, so that the bagdomain clearly cannot be a space (or even a locale) as usually understood. However, topos-theorists have long known how to handle ‘the space of all sets’ as a topos (the object classifier, cf. Johnstone and Wraith, pp. 175–6), and this is what Vickers constructs: that is, given an algebraic poset D, he constructs a topos BL(D) whose points are bags of points of D (and in the case when D has just one point, BL(D) is indeed the object classifier).
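To make the set/bag distinction concrete, here is a minimal sketch in Haskell (the record representation and all names are our own illustrative assumptions; this is of course not the topos-theoretic construction itself):

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- A partial record: the information gathered so far about one individual.
type Record = Map.Map String String

-- Two distinct individuals about whom we happen to know exactly the same thing:
r1, r2 :: Record
r1 = Map.fromList [("surname", "Smith")]
r2 = Map.fromList [("surname", "Smith")]

-- A *set* of records identifies individuals whose partial information
-- coincides; an indexed family (a "bag") keeps them apart.
asSet :: Set.Set Record
asSet = Set.fromList [r1, r2]             -- size 1: the two are collapsed

asBag :: Map.Map Int Record
asBag = Map.fromList [(0, r1), (1, r2)]   -- size 2: the two stay distinct
```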
The London Mathematical Society Symposium on Applications of Categories in Computer Science took place in the Department of Mathematical Sciences at the University of Durham from 20 to 30 July 1991. Although the interaction between the mathematical theory of categories and theoretical computer science is by no means a recent phenomenon, the last few years have seen a marked upsurge in activity in this area. Consequently, this was a very well-attended and lively Symposium. There were 100 participants, 73 receiving partial financial support from the Science and Engineering Research Council. The scientific aspects of the meeting were organized by Michael Fourman (Edinburgh), Peter Johnstone (Cambridge) and Andrew Pitts (Cambridge). A programme committee consisting of the three organizers together with Samson Abramsky (Imperial College), Pierre-Louis Curien (ENS, Paris) and Glynn Winskel (Aarhus) decided the final details of the scientific programme. There were 62 talks, eight of which were by the four key speakers: Pierre-Louis Curien, Peter Freyd (Pennsylvania), John Reynolds (Carnegie-Mellon) and Glynn Winskel.
The papers in this volume represent final versions of a selection of the material presented at the Symposium, or in one case (the paper which stands last in the volume) of a development arising out of discussions which took place at the Symposium. We hope that they collectively present a balanced overview of the current state of research on the intersection between categories and computer science. All the papers have been refereed; we regret that pressure of space obliged us to exclude one or two papers that received favourable referees' reports.
This paper collects observations about the two issues of sequentiality and full abstraction for programming languages. The format of the paper is that of an extended lecture. Some old and new results are hinted at, and references are given, without any claim to be exhaustive. We assume that the reader knows something about λ-calculus and about domain theory.
Introduction
Sequentiality and full abstraction have often been considered as related topics. More precisely, the quest for full abstraction led to the idea that sequentiality is a key issue in the semantics of programming languages.
In vague terms, full abstraction is the property that a mathematical semantics captures exactly the operational semantics of a specific language under study. Following the tradition of the first studies on full abstraction [Milner, Plotkin], the languages considered here are PCF, an extension of λ-calculus with arithmetic operators and recursion, and variants thereof. The focus on λ-calculus is amply justified by its rôle, either as a kernel (functional) programming language, or as a suitable metalanguage for the encoding of denotational semantics of a great variety of (sequential) programming languages.
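For readers who have not met PCF, the following Haskell sketch of its abstract syntax, roughly following Plotkin's classical presentation, may help fix ideas (the constructor names are our own):

```haskell
-- PCF types: natural numbers and function types.
data Ty = Nat | Arrow Ty Ty
  deriving (Eq, Show)

-- PCF terms: simply typed lambda-calculus with arithmetic primitives
-- and a fixed-point combinator providing general recursion.
data Term
  = Var String
  | Lam String Ty Term     -- \x : t . e
  | App Term Term
  | Zero
  | Succ Term
  | Pred Term
  | IfZ Term Term Term     -- if e = 0 then e1 else e2
  | Fix Term               -- fix f  unfolds to  f (fix f)
  deriving Show
```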
It was Gérard Berry's belief that only after a detailed study of the syntax could one conceive the semantic definitions appropriate for reaching full abstraction. I have always considered this an illuminating idea, and it will be the starting point of this paper.
In section 2, we shall state Berry's Sequentiality Theorem: this will require us first to recall Wadsworth-Welch-Lévy's Continuity Theorem, and then to introduce a general notion of sequential function, due to Kahn-Plotkin.
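The standard example separating continuity from sequentiality is "parallel-or". Here is a minimal sketch, modelling the flat booleans with an explicit bottom element (the encoding is our own convenience, not the Kahn-Plotkin formalism):

```haskell
-- Flat booleans with an explicit bottom: Nothing plays the role of
-- "undefined / not yet computed".
type FlatBool = Maybe Bool

-- Parallel-or as a monotone (indeed continuous) function table.  It is
-- not sequential in the Kahn-Plotkin sense: at the wholly undefined
-- input (Nothing, Nothing) there is no single argument position that
-- must be evaluated first, since either argument alone can force the
-- answer Just True.
por :: FlatBool -> FlatBool -> FlatBool
por (Just True)  _            = Just True
por _            (Just True)  = Just True
por (Just False) (Just False) = Just False
por _            _            = Nothing
```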
In section 3, we shall abandon sequentiality for a while to define full abstraction and quote some results.
From the outside, our feature structures look much like the ψ-terms of Aït-Kaci (1984, 1986) or the feature structures of Pollard and Sag (1987), Moshier (1988) or Pollard and Moshier (1990). In particular, a feature structure is modeled by a possibly cyclic directed graph with labels on all of the nodes and arcs. Each node is labeled with a symbol representing its type, and the arcs are labeled with symbols representing features. We think of our types as organizing feature structures into natural classes. In this role, our types are doing the same duty as concepts in a terminological knowledge representation system (Brachman and Schmolze 1985, Brachman, Fikes, and Levesque 1983, Mac Gregor 1988) or abstract data types in object-oriented programming languages (Cardelli and Wegner 1985). Thus it is natural to think of the types as being organized in an inheritance hierarchy based on their generality. Feature structure unification is then modified so that two feature structures can only be unified if their types are compatible according to the primitive hierarchy of types.
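As an informal rendering of this graph picture, one might represent a feature structure along the following lines in Haskell (a sketch only; the field and type names are our own assumptions):

```haskell
import qualified Data.Map as Map

type Type    = String   -- node labels, e.g. "person", "sign"
type Feature = String   -- arc labels, e.g. "AGR", "SUBJ"
type Node    = Int

-- A feature structure: a rooted, possibly cyclic directed graph in
-- which every node carries a type and every arc is labelled with a
-- feature.  Since features are functional, the outgoing arcs of a
-- node form a map keyed on (node, feature).
data FS = FS
  { root  :: Node
  , types :: Map.Map Node Type
  , arcs  :: Map.Map (Node, Feature) Node
  } deriving Show
```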
In this chapter, we discuss how type inheritance hierarchies can be specified and the restrictions that we impose on them that allow us to define an adequate notion of type inference, which is necessary during unification. These restrictions were first noted by Aït-Kaci (1984) in his unification-based reasoning system. The polymorphism allowed in our type system is based on inheritance in which a subtype inherits information from all of its supertypes. The possibility of more than one supertype for a given type allows for multiple inheritance.
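A toy sketch of such a hierarchy and of type unification as a least upper bound may make the restriction concrete (the example hierarchy and names are invented for illustration):

```haskell
import Data.List (nub)

type Ty = String

-- An illustrative finite hierarchy, given as its full subsumption
-- relation: (a, b) means a is at least as general as b (so b inherits
-- from a).  "bot" is the most general type.
subsumes :: [(Ty, Ty)]
subsumes = [ (t, t) | t <- tys ] ++
           [ ("bot", t) | t <- tys ] ++
           [ ("list", "ne-list"), ("list", "e-list") ]
  where tys = ["bot", "list", "ne-list", "e-list"]

leq :: Ty -> Ty -> Bool
leq a b = (a, b) `elem` subsumes

-- Type unification: the least upper bound of two types in the
-- informativeness ordering, when it exists.  The restriction, after
-- Aït-Kaci, that any two consistent types have a unique least upper
-- bound is what makes this a well-defined partial function.
unifyTy :: Ty -> Ty -> Maybe Ty
unifyTy a b = case [ u | u <- ubs, all (leq u) ubs ] of
                (u:_) -> Just u
                []    -> Nothing
  where ubs = nub [ t | t <- map snd subsumes, leq a t, leq b t ]
```

For instance, unifyTy "list" "ne-list" yields Just "ne-list", while unifyTy "ne-list" "e-list" yields Nothing, since the two species of list are inconsistent.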
In this chapter we consider the addition of variables ranging over feature structures to our description language. It turns out that the addition of variables does not increase the representational power of the description language in terms of the feature structures which it can distinguish. Of course, this should not be surprising given the description theorem, which tells us that every feature structure can be picked out as the most general satisfier of some description. On the other hand, we can dispense with path equations and inequations in favor of equations and inequations between variables if desired. We prove a theorem to this effect in the latter part of this chapter. The reason that we consider variables now is that they have shown up in various guises in the feature structure literature, and are actually useful when considering applications such as definite clause programming languages based on feature structures. Our treatment of variables most closely follows that of Smolka (1988, 1989), who treats variables as part of the language for describing feature structures. Aït-Kaci (1984, 1986) also used variable-like objects, which he called tags. Because he did not have a description language, Aït-Kaci had to consider variables to be part of the feature structures themselves, and then factor the class of feature structures with respect to alphabetic variance to recover the desired informational structure. We have informally introduced tags in our attribute value matrix diagrams, but did not consider them to be part of the feature structures themselves.
We assume that we have a countably infinite collection Var of variables.
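The shape of the resulting description language might be sketched as follows (a rough approximation in Haskell; the constructor names are ours, and the precise grammar in the text differs in detail):

```haskell
type Var     = String
type Type    = String
type Feature = String
type Path    = [Feature]

-- Descriptions with variables added.  Variables describe the same
-- class of feature structures as path equations do; either primitive
-- can be traded for the other.
data Desc
  = IsA Type            -- the node has (at least) this type
  | At Feature Desc     -- the value of a feature satisfies a description
  | PathEq Path Path    -- two paths lead to one and the same node
  | PathIneq Path Path  -- two paths lead to distinct nodes
  | V Var               -- a variable, naming a node for re-use
  | Conj Desc Desc
  | Disj Desc Desc
  deriving Show
```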
In our development up to this point, we have treated feature structures logically as models of descriptions expressed in a simple attribute-value language. In the last chapter, we extended the notion of description to include variables; in this chapter, we generalize the notion of model to partial algebraic structures. An algebraic model consists of an arbitrary collection of domain objects and associates each feature with a unary partial function over this domain. In the research of Smolka (1988, 1989) and Johnson (1986, 1987, 1990), more general algebraic models of attribute-value descriptions are the focus of attention. We pull the rabbit out of the hat when we show that our old notion of satisfaction as a relation between feature structures and descriptions is really just a special case of a more general algebraic definition of satisfaction. The feature structures constitute an algebraic model in which the domain of the model is the collection of feature structures, and the features pick out their natural (partial) value mappings. What makes the feature structure model so appealing from a logical perspective is that it is canonical in the sense that descriptions are logically equivalent if and only if they are logically equivalent for the feature structure model. In this respect, the feature structure model plays the same logical role as term models play in universal algebra. This connection is strengthened in light of the most general satisfier and description theorems, which allow us to go back and forth between feature structures and their normal form descriptions.
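Reusing the Desc, Path, Feature and Type declarations from the sketches above, the algebraic notion of model and satisfaction might be approximated as follows (again a sketch under our own simplifications, notably type identity in place of subsumption and no variable assignment):

```haskell
-- An algebraic model: an arbitrary carrier of domain objects, each
-- feature interpreted as a unary partial function on the carrier, and
-- an assignment of types to objects.  The feature structure model is
-- the special case whose carrier is the feature structures themselves.
data Model a = Model
  { carrier :: [a]
  , feat    :: Feature -> a -> Maybe a
  , typeOf  :: a -> Type
  }

-- Satisfaction of a description by an object of an algebraic model.
satisfies :: Eq a => Model a -> a -> Desc -> Bool
satisfies m x d = case d of
    IsA t        -> typeOf m x == t   -- simplified: identity, not subsumption
    At f d'      -> maybe False (\y -> satisfies m y d') (feat m f x)
    PathEq p q   -> case (walk p, walk q) of
                      (Just y, Just z) -> y == z
                      _                -> False
    PathIneq p q -> case (walk p, walk q) of
                      (Just y, Just z) -> y /= z
                      _                -> False
    V _          -> True              -- variables need an assignment; elided here
    Conj a b     -> satisfies m x a && satisfies m x b
    Disj a b     -> satisfies m x a || satisfies m x b
  where
    walk = foldl (\my f -> my >>= feat m f) (Just x)
```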
In recent joint work with A. Edalat [ES91] we developed a general approach to the solution of domain equations, based on information system ideas. The basis of the work was an axiomatization of the notion of a category of information systems, yielding what we may call an “information category”, or I-category for short. We begin this paper with an exposition of the I-category work. In the remainder of the paper we consider duality in I-categories, as the setting for studying initial algebra/final algebra coincidence. We then look at induction and coinduction principles in the light of these ideas.
To amplify the preceding a little, we note that the existing treatments of information systems, following [Sco82] and [LW84], make use of a global ordering of the objects of the category (the information systems) in order to “solve” domain equations by the ordinary cpo fixed point theorem. In the I-category approach, an initial algebra characterization of the solutions is obtained, by making use of a global ordering ⊆ of morphisms in addition to the ordering ⊴ of objects. (In the usual cases, where morphisms are “approximable relations” between tokens, the global ordering is essentially set inclusion; more precisely, we have that (f : A → B) ⊆ (f′ : A′ → B′) if A ⊴ A′, B ⊴ B′ and f ⊆ f′.) Moreover the axiomatic formulation enables us to handle many examples besides the usual categories of domains, in a unified manner: for example, “domain equations” over Stone spaces, via Boolean algebras as information systems. In the present exposition, we attempt to clarify the relation between the I-category approach and an established method of domain equation solution, namely the O-category method, using the Basic Lemma of [SP82] as a key.
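The shape of the fixed-point argument being invoked is the familiar one. As a schematic Haskell sketch (for genuine information systems the ascending chain is infinite and the solution is its union; here we assume a decidable order that stabilises after finitely many steps):

```haskell
-- Kleene-style iteration: to "solve" X = F(X), iterate F from a least
-- element and stop at the first fixed point reached.
lfp :: Eq a => (a -> a) -> a -> a
lfp f bottom = go bottom
  where
    go x | f x == x  = x
         | otherwise = go (f x)
```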
In Algebraically Complete Categories (in the proceedings of the Category Theory conference in Como '90) an ALGEBRAICALLY COMPLETE CATEGORY was defined as one for which every covariant endofunctor has an initial algebra. This should be understood to be in a 2-category setting, that is, in a setting in which the phrase “every covariant endofunctor” refers to an understood class of endofunctors.
Given an endofunctor T, the category of T-INVARIANT objects is best defined as the category whose objects are triples <A, f, g> where f : TA → A, g : A → TA, and fg and gf are both identity maps. T-Inv appears as a full subcategory of both T-Alg and T-Coalg, in each case via a forgetful functor. The Lambek lemma and its dual say that the initial object in T-Alg and the final object in T-Coalg may be viewed as objects in T-Inv, wherein they easily remain initial and final. Of course there is a canonical map from the initial to the final. I will say that T is ALGEBRAICALLY BOUNDED if this canonical map is an isomorphism, equivalently if T-Inv is a punctuated category, that is, one with a biterminator, an object that is both initial and final.
An algebraically bicomplete category is ALGEBRAICALLY COMPACT if each endofunctor is algebraically bounded. (As with algebraic completeness this should be understood to be in a 2-category setting.) In this context I will use the term FREE T-ALGEBRA rather than either initial algebra or final coalgebra.
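Recursive types in a lazy language give a convenient, if informal, picture of this situation: in Haskell (folklore, under the usual caveats about partiality) the initial algebra and the final coalgebra of a functor are carried by one and the same object, so the canonical map between them is an isomorphism.

```haskell
-- In and out are mutually inverse, as in the Lambek lemma situation.
newtype Fix f = In { out :: f (Fix f) }

-- fold: the unique algebra map out of the initial algebra.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- unfold: the unique coalgebra map into the final coalgebra.
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = In . fmap (ana coalg) . coalg
```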
With our definition of inheritance, we have a notion of consistency and unification for our smallest conceptual unit, the type. We now turn to the task of developing structured representations that can be built out of the basic concepts, which we call feature structures. The reason for the qualifier is that even though feature structures are defined using our type symbols, they are not typed, in the sense that there is no restriction on the co-occurrence of features or restrictions on their values. We introduce methods for specifying appropriateness conditions on features and a notion of well-typing only after studying the ordered notion of feature structures. We also hold off on introducing inequations and extensionality conditions. Before introducing these other topics, we concentrate on fully developing the notion of untyped feature structure and the logical notions we employ. Most of the results that hold for feature structures can be immediately generalized to well-typed feature structures by application of the type inference mechanism.
Our feature structures are structurally similar to the more traditional form of feature structures such as those used in the PATR-II system and those defined by Rounds and Kasper. The next major development after these initial systems was introduced by Moshier (1988). The innovation of Moshier's system was to allow atomic symbols to label arbitrary nodes in a feature structure. He also treated the identity conditions for these atoms fully intensionally. Both PATR-II and the Rounds and Kasper systems treated feature structures intensionally, but enforced extensional identity conditions on atoms.
In this chapter, we consider a phrase structure grammar formalism, or more precisely, a parameterized family of such formalisms, in which non-terminal (category) symbols are replaced by feature structures in both rewriting rules and lexical entries. Consequently, the application of a rewriting rule must be mediated by unification rather than by simple symbol matching. This explains why grammar formalisms such as the one we present here have come to be known as unification-based. Although our presentation of unification-based phrase structure grammars is self-contained, for those unfamiliar with unification-based grammars and their applications, we recommend reading Shieber's excellent introduction (Shieber 1986). Shieber lays out the fundamental principles of unification-based phrase structure formalisms along with some of their more familiar incarnations, as well as providing a wide variety of linguistic examples and motivations. Another good introductory source is the text by Gazdar and Mellish (1989).
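As an informal illustration of rule application mediated by unification, consider the following toy sketch (not any particular formalism: categories are simplified to flat attribute-value maps, and all names are our own):

```haskell
import qualified Data.Map as Map

type AVM = Map.Map String String

-- Unification of flat AVMs: union, provided no feature receives
-- conflicting values.
unify :: AVM -> AVM -> Maybe AVM
unify a b
  | and [ v == w | (f, v) <- Map.toList a, Just w <- [Map.lookup f b] ]
      = Just (Map.union a b)
  | otherwise = Nothing

-- Daughters of a toy rule "S -> NP VP" that must agree in number.
npSg, vpSg :: AVM
npSg = Map.fromList [("cat", "np"), ("num", "sg")]
vpSg = Map.fromList [("cat", "vp"), ("num", "sg")]

-- Applying a rule means unifying each daughter pattern with the
-- corresponding candidate constituent, rather than testing symbol
-- equality; failure of any unification blocks the rule.
apply :: [AVM] -> [AVM] -> Maybe [AVM]
apply daughters constituents
  | length daughters == length constituents
      = sequence (zipWith unify daughters constituents)
  | otherwise = Nothing
```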
The early development of unification-based grammars was intimately connected with the development of logic programming itself, the most obvious link stemming from Colmerauer's research into Q-systems (1970) and Metamorphosis Grammars (1978). In fact, Colmerauer's development of Prolog was motivated by the desire to provide a powerful yet efficient implementation environment for natural language grammars. The subsequent popularity of Prolog led to the development of a number of so-called logic grammar systems. These grammar formalisms are typically variations of first-order term unification phrase structure grammars, such as the Definite Clause Grammars (DCGs) of Pereira and Warren (1980), the Extraposition Grammars of Pereira (1981), the Slot Grammars of McCord (1981), and the Gapping Grammars of Dahl and Abramson (1984; Popowich 1985).