This book describes the mathematical aspects of the semantics of programming languages. The main goals are to provide formal tools to assess the meaning of programming constructs in both a language-independent and a machine-independent way, and to prove properties about programs, such as whether they terminate, or whether their result is a solution of the problem they are supposed to solve. To achieve this, the authors first present, in an elementary and unified way, the theory of certain topological spaces that have proved of use in the modelling of various families of typed lambda calculi considered as core programming languages and as meta-languages for denotational semantics. This theory is known as Domain Theory, and was founded as a subject by Scott and Plotkin. One of the main concerns is to establish links between mathematical structures and more syntactic approaches to semantics, often referred to as operational semantics, which is also described. This dual approach has the double advantage of motivating computer scientists to do some mathematics and of interesting mathematicians in unfamiliar application areas from computer science.
Information is a central topic in computer science, cognitive science and philosophy. In spite of its importance in the 'information age', there is no consensus on what information is, what makes it possible, and what it means for one medium to carry information about another. Drawing on ideas from mathematics, computer science and philosophy, this book addresses the definition and place of information in society. The authors, observing that information flow is possible only within a connected distribution system, provide a mathematically rigorous, philosophically sound foundation for a science of information. They illustrate their theory by applying it to a wide range of phenomena, from file transfer to DNA, from quantum mechanics to speech act theory.
The lecture courses in this work are derived from the SERC 'Logic for IT' Summer School and Conference on Proof Theory held at Leeds University. The contributions come from acknowledged experts and comprise expository and research articles; put together in this book they form an invaluable introduction to proof theory that is aimed at both mathematicians and computer scientists.
Coinduction is a method for specifying and reasoning about infinite data types and automata with infinite behaviour. In recent years, it has come to play an ever more important role in the theory of computing. It is studied in many disciplines, including process theory and concurrency, modal logic and automata theory. Typically, coinductive proofs demonstrate the equivalence of two objects by constructing a suitable bisimulation relation between them. This collection of surveys is aimed at both researchers and Master's students in computer science and mathematics and deals with various aspects of bisimulation and coinduction, with an emphasis on process theory. Seven chapters cover the following topics: history, algebra and coalgebra, algorithmics, logic, higher-order languages, enhancements of the bisimulation proof method, and probabilities. Exercises are also included to help the reader master new material.
Grammars of natural languages can be expressed as mathematical objects, similar to computer programs. Such a formal presentation of grammars facilitates mathematical reasoning with grammars (and the languages they denote), as well as computational implementation of grammar processors. This book presents one of the most commonly used grammatical formalisms, Unification Grammars, which underlies contemporary linguistic theories such as Lexical-Functional Grammar (LFG) and Head-driven Phrase Structure Grammar (HPSG). The book provides a robust and rigorous exposition of the formalism that is both mathematically well-founded and linguistically motivated. While the material is presented formally, and much of the text is mathematically oriented, a core chapter of the book addresses linguistic applications and the implementation of several linguistic insights in unification grammars. Dozens of examples and numerous exercises (many with solutions) illustrate key points. Graduate students and researchers in both computer science and linguistics will find this book a valuable resource.
The history of bisimulation is well documented in earlier chapters of this book. In this chapter we will look at a major non-trivial extension of the theory of labelled transition systems: probabilistic transition systems. There are many possible extensions of theoretical and practical interest: real-time, quantitative, independence, spatial and many others. Probability is the best theory we have for handling uncertainty in all of science, not just computer science. It is not an idle extension made for the purpose of exploring what is theoretically possible. Non-determinism is, of course, important, and arises in computer science because sometimes we just cannot do any better or because we lack quantitative data from which to make quantitative predictions. However, one does not find any use of non-determinism in a quantitative science like physics, though it appears in sciences like biology where we have not yet reached a fundamental understanding of the nature of systems.
When we do have data or quantitative models, it is far preferable to analyse uncertainty probabilistically. A fundamental reason for using probabilistic reasoning is that if we merely reported what is possible and then insisted that no bad things were possible, we would trust very few system designs in real life. For example, we would never trust a communication network, a car, an aeroplane, or an investment bank, nor would we ever take any medication! In short, only very few idealised systems ever meet purely logical specifications. We need to know the ‘odds’ before we trust any system.
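To make the extension concrete, here is a minimal Haskell sketch of what a probabilistic labelled transition system might look like as a data structure. The representation (finite-support distributions as association lists) and the lossy-channel example with its probabilities are our own illustrative assumptions, not material from the chapter.

```haskell
module ProbLTS where

import qualified Data.Map as Map

-- Probabilities as exact rationals.
type Prob = Rational

-- A finite-support distribution over states: pairs whose
-- probabilities are intended to sum to 1.
type Dist s = [(s, Prob)]

-- A probabilistic LTS: each enabled (state, action) pair maps
-- to a distribution over successor states, rather than to a
-- plain set of successors as in an ordinary LTS.
newtype PLTS s a = PLTS { step :: Map.Map (s, a) (Dist s) }

-- Hypothetical example: a lossy channel that delivers a message
-- with probability 9/10 and loses it with probability 1/10.
data St  = Ready | Sent | Lost deriving (Eq, Ord, Show)
data Act = Send deriving (Eq, Ord, Show)

channel :: PLTS St Act
channel =
  PLTS (Map.fromList [((Ready, Send), [(Sent, 9/10), (Lost, 1/10)])])
```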
We introduce bisimulation and coinduction roughly following the way that led to their discovery in Computer Science. Thus the general topic is the semantics of concurrent languages (or systems), in which several activities, the processes, may run concurrently. Central questions are: what is, mathematically, a process? And what does it mean for two processes to be ‘equal’? We seek notions of process and process equality that are both mathematically and practically interesting. For instance, the notions should be amenable to effective techniques for proving equalities, and the equalities themselves should be justifiable, according to the way processes are used.
We hope that the reader will find this way of proceeding helpful for understanding the meaning of bisimulation and coinduction. The emphasis on processes is also justified by the fact that concurrency remains today the main application area for bisimulation and coinduction.
We compare processes and functions in Section 1.1. We will see that processes do not fit the input/output schema of functions. A process has an interactive behaviour, and it is essential to take this into account. We formalise the idea of behaviour in Section 1.2 via labelled transition systems (LTSs), together with notations and terminology for them. We discuss the issue of equality between behaviours in Section 1.3. We first try to re-use notions of equality from Graph Theory and Automata Theory. The failure of these attempts leads us to propose bisimilarity, in Section 1.4. We introduce the reader to the bisimulation proof method through a number of examples.
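As a concrete illustration of these ideas, the following Haskell sketch represents a finite LTS and computes bisimilarity as the greatest fixed point of a naive refinement step, deleting pairs that fail the bisimulation clauses. The representation and the function names are assumptions made for the example, not the book's notation, and the algorithm is the simplest correct one rather than an efficient one.

```haskell
module Bisim where

import Data.List (nub)

type State = Int
type Label = Char

-- A finite LTS as a list of labelled transitions.
type LTS = [(State, Label, State)]

states :: LTS -> [State]
states ts = nub [ s | (p, _, q) <- ts, s <- [p, q] ]

labels :: LTS -> [Label]
labels ts = nub [ a | (_, a, _) <- ts ]

-- The a-successors of p.
post :: LTS -> State -> Label -> [State]
post ts p a = [ q | (p', a', q) <- ts, p' == p, a' == a ]

-- One refinement step: keep (p, q) only if each process can match
-- the other's transitions inside the current relation.  Starting
-- from the full (hence symmetric) relation, symmetry is preserved.
refine :: LTS -> [(State, State)] -> [(State, State)]
refine ts rel = [ (p, q) | (p, q) <- rel, matches p q, matches q p ]
  where
    matches p q =
      and [ any (\q' -> (p', q') `elem` rel || (q', p') `elem` rel)
                (post ts q a)
          | a <- labels ts, p' <- post ts p a ]

-- Bisimilarity on a finite LTS: iterate refine from the full
-- relation until it stabilises at the greatest fixed point.
bisimilar :: LTS -> State -> State -> Bool
bisimilar ts p q = (p, q) `elem` loop full
  where
    full   = [ (x, y) | x <- states ts, y <- states ts ]
    loop r = let r' = refine ts r in if r' == r then r else loop r'
```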
This book is about bisimulation and coinduction. It is the companion book of the volume An Introduction to Bisimulation and Coinduction, by Davide Sangiorgi (Cambridge University Press, 2011), which deals with the basics of bisimulation and coinduction, with an emphasis on labelled transition systems, processes, and other notions from the theory of concurrency.
In the present volume, we have collected a number of chapters, by different authors, on several advanced topics in bisimulation and coinduction. These chapters either treat specific aspects of bisimulation and coinduction in great detail, including their history, algorithmics, enhanced proof methods and logic, or generalise the basic notions of bisimulation and coinduction to different or more general settings, such as coalgebra, higher-order languages and probabilistic systems. Below we briefly summarise the chapters in this volume.
The origins of bisimulation and coinduction, by Davide Sangiorgi
In this chapter, the origins of the notions of bisimulation and coinduction are traced back to different fields, notably computer science, modal logic, and set theory.
An introduction to (co)algebra and (co)induction, by Bart Jacobs and Jan Rutten
Here the notions of bisimulation and coinduction are explained in terms of coalgebras. These mathematical structures generalise all kinds of infinite data structures and automata, including streams (infinite lists), deterministic and probabilistic automata, and labelled transition systems. Coalgebras are formally dual to algebras, and it is this duality that is used to put both induction and coinduction into a common perspective.
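For instance, streams can be presented coalgebraically in a few lines of Haskell. The sketch below, with assumed names such as `Coalg` and `unfold`, shows a coalgebra as an observation step for the functor F(X) = A × X, and the stream type as its final coalgebra, reached by unfolding.

```haskell
module StreamCoalg where

-- The coinductive type of streams; Haskell's laziness lets
-- values of this type be genuinely infinite.
data Stream a = Cons a (Stream a)

-- A coalgebra for F(X) = A x X on state space s: one observation
-- step, yielding the current output and the next state.
type Coalg s a = s -> (a, s)

-- The unique coalgebra morphism from any coalgebra into the final
-- coalgebra of streams: the anamorphism, or unfold.
unfold :: Coalg s a -> s -> Stream a
unfold c s = let (a, s') = c s in Cons a (unfold c s')

-- Example: the stream n, n+1, n+2, ... as the unfold of
-- the coalgebra n |-> (n, n + 1).
nats :: Integer -> Stream Integer
nats = unfold (\n -> (n, n + 1))

-- Observe a finite prefix of an infinite stream.
takeS :: Int -> Stream a -> [a]
takeS n (Cons a s)
  | n <= 0    = []
  | otherwise = a : takeS (n - 1) s
```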
This book is an introduction to bisimulation and coinduction and a precursor to the companion book on more advanced topics. Between them, the books analyse the most fundamental aspects of bisimulation and coinduction, exploring concepts and techniques that can be transported to many areas. Bisimulation is a special case of coinduction, and by far the most studied coinductive concept. Bisimulation was discovered in Concurrency Theory, and processes remain the main application area. This explains the special emphasis on bisimulation and processes that one finds throughout the two volumes.
This volume treats basic topics. It explains coinduction, and its duality with induction, from various angles, starting from some simple results of fixed-point theory. It then goes on to bisimulation, as a tool for defining behavioural equality among processes (bisimilarity), and for proving such equalities. It compares bisimulation with other notions of behavioural equivalence. It also presents a simple process calculus, both to show algebraic techniques for bisimulation and to illustrate the combination of inductive and coinductive reasoning.
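The fixed-point starting point can be made concrete with a small sketch: for a monotone operator on the subsets of a finite universe, iterating from below reaches the least fixed point (the inductive reading), while iterating from above reaches the greatest fixed point (the coinductive reading). The Haskell below is an illustrative sketch under these finiteness and monotonicity assumptions, not material from the book.

```haskell
module Fix where

import Data.List (nub, sort)

-- Subsets of a finite universe, kept sorted and duplicate-free
-- so that equality of sets is equality of lists.
type SetI = [Int]

normal :: SetI -> SetI
normal = sort . nub

-- Kleene iteration: apply f repeatedly until the set stabilises.
-- Terminates for monotone f on a finite universe.
iterFix :: (SetI -> SetI) -> SetI -> SetI
iterFix f x =
  let x' = normal (f x) in if x' == x then x else iterFix f x'

-- Least fixed point: iterate from the bottom (empty set).
lfp :: (SetI -> SetI) -> SetI
lfp f = iterFix f []

-- Greatest fixed point: iterate from the top (whole universe).
gfp :: SetI -> (SetI -> SetI) -> SetI
gfp universe f = iterFix f (normal universe)

-- Example: f X = {1} u {x + 1 | x in X, x < 5} over {1..5}.
-- Both lfp and gfp here turn out to be {1,2,3,4,5}.
example :: (SetI, SetI)
example = (lfp f, gfp [1 .. 5] f)
  where f x = 1 : [ n + 1 | n <- x, n < 5 ]
```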
The companion volume, Advanced Topics in Bisimulation and Coinduction, edited by Davide Sangiorgi and Jan Rutten, deals with more specialised topics. A chapter recalls the history of the discovery of bisimulation and coinduction. Another chapter unravels the duality between induction and coinduction, both as defining principles and as proof techniques, in terms of the duality between the mathematical notions of algebra and coalgebra and properties such as initiality and finality.
The simulation equivalence of Exercise 1.4.17 drops the symmetry of the bisimulation game: the challenge transitions may only be launched by one of the processes in the pairs. We have seen that simulation equivalence is strictly coarser than bisimilarity. Unfortunately, it does not respect deadlock. In this section we discuss a few refinements of simulation equivalence without this drawback. They are coinductively defined, much like bisimilarity, while being coarser than bisimilarity. Thus they can allow us to use coinduction in situations where bisimilarity may be over-discriminating. Another possible advantage of a simulation-like relation is that it naturally yields a preorder (with all the advantages mentioned in Section 5.5). With respect to simulation-based relations, however, bisimilarity remains mathematically more robust and natural. The most interesting refinements we examine are ready similarity and coupled similarity.
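To make the one-sided game concrete, here is a Haskell sketch of the simulation preorder on a finite LTS, obtained by the same greatest-fixed-point iteration as for bisimilarity but with challenges launched only from the left. The representation and names are assumptions made for the example.

```haskell
module Simulation where

import Data.List (nub)

type State = Int
type Label = Char
type LTS = [(State, Label, State)]

states :: LTS -> [State]
states ts = nub [ s | (p, _, q) <- ts, s <- [p, q] ]

labels :: LTS -> [Label]
labels ts = nub [ a | (_, a, _) <- ts ]

post :: LTS -> State -> Label -> [State]
post ts p a = [ q | (p', a', q) <- ts, p' == p, a' == a ]

-- Keep (p, q) only if q can answer every challenge of p within
-- the current relation; q's own transitions are never challenges.
refine :: LTS -> [(State, State)] -> [(State, State)]
refine ts rel =
  [ (p, q) | (p, q) <- rel
           , and [ any (\q' -> (p', q') `elem` rel) (post ts q a)
                 | a <- labels ts, p' <- post ts p a ] ]

-- p is simulated by q iff (p, q) survives refinement from the
-- full relation; the result is a preorder, not an equivalence.
simulatedBy :: LTS -> State -> State -> Bool
simulatedBy ts p q = (p, q) `elem` loop full
  where
    full   = [ (x, y) | x <- states ts, y <- states ts ]
    loop r = let r' = refine ts r in if r' == r then r else loop r'

-- Simulation equivalence: mutual simulation.  In general coarser
-- than bisimilarity, since the two directions may be witnessed by
-- two different relations.
simEquiv :: LTS -> State -> State -> Bool
simEquiv ts p q = simulatedBy ts p q && simulatedBy ts q p
```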
We begin in Section 6.1 with complete simulation, and continue in Section 6.2 with ready simulation. They are to simulation what complete trace equivalence and failure (or ready) equivalence are to trace equivalence. In Section 6.3 we discuss two-nested simulation equivalence. In Section 6.4 we consider the weak versions of the relations in the previous sections. In Section 6.5 we present coupled similarity, which aims at relaxing the bisimulation requirements on the internal actions of processes. Finally, in Section 6.6, we summarise the various equivalences discussed in this and previous chapters.
Complete simulation
The arguments about the deadlock-insensitivity of trace equivalence in Section 1.3.2, such as the equality between the processes in Figure 1.4, apply to simulation equivalence too.
In the first part of this chapter we present (yet) another characterisation of bisimilarity, namely bisimilarity as a testing equivalence. In a testing scenario two processes are equivalent if no experiment can distinguish them. An experiment on a process is set up by defining a test, that is, intuitively, a pattern of demands on the process (e.g., the ability to perform a certain sequence of actions, or the inability to perform certain actions). Depending on how the process behaves when such a test is conducted, the observer emits a verdict about the success or failure of the experiment. The experiments are a means of understanding how processes react to stimuli from the environment. A testing scenario has two important parameters.
How are observations about the behaviour of a process gathered? In other words, what can the observer conducting the experiment do on a process? For instance, is he/she allowed to observe the inability of a process to perform certain actions? Is he/she allowed to observe whether a process can immediately perform two distinct actions? Is he/she allowed to make a copy of a process? These decisions are embodied in the language for the tests.
How is the success or the failure of an experiment determined?
The choice of these parameters has an impact on the distinctions that can be made between processes. We will set up a testing scenario whose induced equivalence is precisely bisimilarity.
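As an illustration, the sketch below gives a small test language in the spirit of Hennessy-Milner logic, with success, conjunction, an ability modality and negation, together with a verdict function over a finite LTS. The constructor names are our own, and the test language actually used in the chapter may differ in detail; what matters is that such tests, on finitely branching systems, are known to be discriminating enough to characterise bisimilarity.

```haskell
module Tests where

type State = Int
type Label = Char
type LTS = [(State, Label, State)]

post :: LTS -> State -> Label -> [State]
post ts p a = [ q | (p', a', q) <- ts, p' == p, a' == a ]

-- Tests: trivial success, conjunction of demands, "can do a and
-- then pass t", and negation (observing the failure of a test,
-- e.g. the inability to perform certain actions).
data Test
  = Ok                -- always succeeds
  | And Test Test     -- both demands must be met
  | Can Label Test    -- some a-successor passes the test
  | Not Test          -- the process fails the test

-- The verdict of running a test on a state of the LTS.
passes :: LTS -> State -> Test -> Bool
passes _  _ Ok        = True
passes ts p (And t u) = passes ts p t && passes ts p u
passes ts p (Can a t) = any (\q -> passes ts q t) (post ts p a)
passes ts p (Not t)   = not (passes ts p t)

-- A test distinguishes two states if it yields different verdicts.
distinguishes :: LTS -> Test -> State -> State -> Bool
distinguishes ts t p q = passes ts p t /= passes ts q t
```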