Information is a central topic in computer science, cognitive science and philosophy. In spite of its importance in the 'information age', there is no consensus on what information is, what makes it possible, and what it means for one medium to carry information about another. Drawing on ideas from mathematics, computer science and philosophy, this book addresses the definition and place of information in society. The authors, observing that information flow is possible only within a connected distribution system, provide a mathematically rigorous, philosophically sound foundation for a science of information. They illustrate their theory by applying it to a wide range of phenomena, from file transfer to DNA, from quantum mechanics to speech act theory.
Coinduction is a method for specifying and reasoning about infinite data types and automata with infinite behaviour. In recent years, it has come to play an ever more important role in the theory of computing. It is studied in many disciplines, including process theory and concurrency, modal logic and automata theory. Typically, coinductive proofs demonstrate the equivalence of two objects by constructing a suitable bisimulation relation between them. This collection of surveys is aimed at both researchers and Master's students in computer science and mathematics and deals with various aspects of bisimulation and coinduction, with an emphasis on process theory. Seven chapters cover the following topics: history, algebra and coalgebra, algorithmics, logic, higher-order languages, enhancements of the bisimulation proof method, and probabilities. Exercises are also included to help the reader master new material.
Two deterministic finite automata are almost equivalent if they disagree in acceptance only for finitely many inputs. An automaton A is hyper-minimized if no automaton with fewer states is almost equivalent to A. A regular language L is canonical if the minimal automaton accepting L is hyper-minimized. The asymptotic state complexity s∗(L) of a regular language L is the number of states of a hyper-minimized automaton for a language finitely different from L. In this paper we show that: (1) the class of canonical regular languages is not closed under intersection, union, concatenation, Kleene closure, difference, symmetric difference, reversal, homomorphism, and inverse homomorphism; (2) for any regular languages L1 and L2 the asymptotic state complexity of their sum L1 ∪ L2, intersection L1 ∩ L2, difference L1 − L2, and symmetric difference L1 ⊕ L2 can be bounded by s∗(L1)·s∗(L2). This bound is tight in the binary case and in the unary case can be met in infinitely many cases. (3) For any regular language L the asymptotic state complexity of its reversal LR can be bounded by 2^s∗(L). This bound is tight in the binary case. (4) The asymptotic state complexity of Kleene closure and concatenation cannot be bounded. Namely, for every k ≥ 3, there exist languages K, L, and M such that s∗(K) = s∗(L) = s∗(M) = 1 and s∗(K∗) = s∗(L·M) = k. These are answers to open problems formulated by Badr et al. [RAIRO-Theor. Inf. Appl. 43 (2009) 69–94].
This paper presents a new lower bound for the recursive algorithm for solving parity games which is induced by the constructive proof of memoryless determinacy by Zielonka. We outline a family of games of linear size on which the algorithm requires exponential time.
An algorithm is corrected here that was presented as Theorem 2 in [Š. Holub, RAIRO-Theor. Inf. Appl. 40 (2006) 583–591]. It is designed to calculate the maximum length of a nontrivial word with a given set of periods.
Given non-negative weights wS on the k-subsets S of a km-element set V, we consider the sum of the products wS1 ⋅⋅⋅ wSm over all partitions V = S1 ∪ ⋅⋅⋅ ∪Sm into pairwise disjoint k-subsets Si. When the weights wS are positive and within a constant factor of each other, fixed in advance, we present a simple polynomial-time algorithm to approximate the sum within a polynomial in m factor. In the process, we obtain higher-dimensional versions of the van der Waerden and Bregman–Minc bounds for permanents. We also discuss applications to counting of perfect and nearly perfect matchings in hypergraphs.
The history of bisimulation is well documented in earlier chapters of this book. In this chapter we will look at a major non-trivial extension of the theory of labelled transition systems: probabilistic transition systems. There are many possible extensions of theoretical and practical interest: real-time, quantitative, independence, spatial and many others. Probability is the best theory we have for handling uncertainty in all of science, not just computer science. It is not an idle extension made for the purpose of exploring what is theoretically possible. Non-determinism is, of course, important, and arises in computer science because sometimes we just cannot do any better or because we lack quantitative data from which to make quantitative predictions. However, one does not find any use of non-determinism in a quantitative science like physics, though it appears in sciences like biology where we have not yet reached a fundamental understanding of the nature of systems.
When we do have data or quantitative models, it is far preferable to analyse uncertainty probabilistically. A fundamental reason that we want to use probabilistic reasoning is that if we merely reported what is possible and then insisted that no bad things were possible, we would trust very few system designs in real life. For example, we would never trust a communication network, a car, an aeroplane, an investment bank nor would we ever take any medication! In short, only very few idealised systems ever meet purely logical specifications. We need to know the ‘odds’ before we trust any system.
This book is about bisimulation and coinduction. It is the companion book of the volume An Introduction to Bisimulation and Coinduction, by Davide Sangiorgi (Cambridge University Press, 2011), which deals with the basics of bisimulation and coinduction, with an emphasis on labelled transition systems, processes, and other notions from the theory of concurrency.
In the present volume, we have collected a number of chapters, by different authors, on several advanced topics in bisimulation and coinduction. These chapters either treat specific aspects of bisimulation and coinduction in great detail (their history, algorithmics, enhanced proof methods, and logic), or they generalise the basic notions of bisimulation and coinduction to different or more general settings, such as coalgebra, higher-order languages, and probabilistic systems. Below we briefly summarise the chapters in this volume.
The origins of bisimulation and coinduction, by Davide Sangiorgi
In this chapter, the origins of the notions of bisimulation and coinduction are traced back to different fields, notably computer science, modal logic, and set theory.
An introduction to (co)algebra and (co)induction, by Bart Jacobs and Jan Rutten
Here the notions of bisimulation and coinduction are explained in terms of coalgebras. These mathematical structures generalise all kinds of infinite data structures and automata, including streams (infinite lists), deterministic and probabilistic automata, and labelled transition systems. Coalgebras are formally dual to algebras and it is this duality that is used to put both induction and coinduction into a common perspective.
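The coalgebraic view of a stream can be made concrete in a few lines of code. The sketch below is illustrative and not taken from the chapter: it represents a stream as a state exposing two observations, a head (the current element) and a tail (the next state), and unfolds it step by step. All names are our own.

```python
# A stream (infinite list) as a coalgebra: each state exposes two
# observations, head (the current element) and tail (the next state).
# Representation and names are illustrative, not from the chapter.

def ones():
    # The stream 1, 1, 1, ...: its head is 1 and its tail is itself.
    return (1, ones)

def alt(bit=0):
    # The alternating stream 0, 1, 0, 1, ...
    return (bit, lambda: alt(1 - bit))

def take(stream, n):
    """Unfold the coalgebra n steps to observe a finite prefix."""
    out = []
    for _ in range(n):
        head, tail = stream()
        out.append(head)
        stream = tail
    return out

print(take(ones, 4))  # [1, 1, 1, 1]
print(take(alt, 4))   # [0, 1, 0, 1]
```

Note that the streams themselves are never built in full; only finitely many observations are ever made, which is exactly the coalgebraic point of view.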
Algebra is a well-established part of mathematics, dealing with sets with operations satisfying certain properties, like groups, rings, vector spaces, etc. Its results are essential throughout mathematics and other sciences. Universal algebra is a part of algebra in which algebraic structures are studied at a high level of abstraction and in which general notions like homomorphism, subalgebra, congruence are studied in themselves, see e.g. [Coh81, MT92, Wec92]. A further step up the abstraction ladder is taken when one studies algebra with the notions and tools from category theory. This approach leads to a particularly concise notion of what is an algebra (for a functor or for a monad), see for example [Man74]. The conceptual world that we are about to enter owes much to this categorical view, but it also takes inspiration from universal algebra, see e.g. [Rut00].
In general terms, a program in some programming language manipulates data. During the development of computer science over the past few decades it became clear that an abstract description of these data is desirable, for example to ensure that one's program does not depend on the particular representation of the data on which it operates. Also, such abstractness facilitates correctness proofs. This desire led to the use of algebraic methods in computer science, in a branch called algebraic specification or abstract data type theory. The objects of study are data types in themselves, using notions and techniques which are familiar from algebra.
One of the main reasons for the success of bisimilarity is the strength of the associated proof method. We discuss here the method on processes, more precisely, on Labelled Transition Systems (LTSs). However the reader should bear in mind that the bisimulation concept has applications in many areas beyond concurrency [San12]. According to the proof method, to establish that two processes are bisimilar it suffices to find a relation on processes that contains the given pair and that is a bisimulation. Being a bisimulation means that related processes can match each other's transitions so that the derivatives are again related.
In general, when two processes are bisimilar there may be many relations containing the pair, including the bisimilarity relation, defined as the union of all bisimulations. However, the amount of work needed to prove that a relation is a bisimulation depends on its size, since there are transition diagrams to check for each pair. It is therefore important to use relations as small as possible.
In this chapter we show that the bisimulation proof method can be enhanced, by employing relations called ‘bisimulations up to’. These relations need not be bisimulations; they are just contained in a bisimulation. The proof that a relation is a ‘bisimulation up to’ follows diagram-chasing arguments similar to those in bisimulation proofs. The reason why ‘bisimulations up to’ are interesting is that they can be substantially smaller than any enclosing bisimulation; hence they may entail much less work in proofs.
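The bisimulation condition described above can be checked mechanically on a finite LTS. The following is a minimal sketch under our own encoding assumptions (an LTS as a dictionary mapping each state to its outgoing transitions); the chapter's setting, and in particular the up-to techniques, are more general.

```python
# Checking the bisimulation condition on a finite LTS.
# The LTS maps each state to {action: {successor states}}.
# Encoding and names are illustrative, not from the chapter.

def is_bisimulation(lts, relation):
    """True iff every related pair (p, q) can match each other's
    transitions so that the derivatives are again related."""
    rel = set(relation)
    for p, q in rel:
        # Every transition of p must be matched by q ...
        for action, succs in lts.get(p, {}).items():
            for p2 in succs:
                q_succs = lts.get(q, {}).get(action, set())
                if not any((p2, q2) in rel for q2 in q_succs):
                    return False
        # ... and every transition of q must be matched by p.
        for action, succs in lts.get(q, {}).items():
            for q2 in succs:
                p_succs = lts.get(p, {}).get(action, set())
                if not any((p2, q2) in rel for p2 in p_succs):
                    return False
    return True

# p loops on 'a'; q and r alternate on 'a'. All three are bisimilar.
lts = {
    'p': {'a': {'p'}},
    'q': {'a': {'r'}},
    'r': {'a': {'q'}},
}
print(is_bisimulation(lts, {('p', 'q'), ('p', 'r')}))  # True
```

The example also shows why the size of the relation matters in practice: the work is one diagram check per pair per transition, so a smaller witness relation, such as those produced by up-to techniques, means proportionally less checking.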
In this chapter, we look at the origins of bisimulation. We show that bisimulation has been discovered not only in computer science, but also, roughly at the same time, in other fields: philosophical logic (more precisely, modal logic), and set theory. In each field, we discuss the main steps that led to the discovery, and introduce the people who made these steps possible.
In computer science, philosophical logic, and set theory, bisimulation has been derived through refinements of notions of morphism between algebraic structures. Roughly, morphisms are maps (i.e. functions) that are ‘structure-preserving’. The notion is therefore fundamental in all mathematical theories in which the objects of study have some kind of structure, or algebra. The most basic forms of morphism are the homomorphisms. These essentially give us a way of embedding a structure (the source) into another one (the target), so that all the relations in the source are present in the target. The converse, however, need not be true; for this, stronger notions of morphism are needed. One such notion is isomorphism, which is, however, extremely strong – isomorphic structures must be essentially the same, i.e. ‘algebraically identical’. It is a quest for notions in between homomorphism and isomorphism that led to the discovery of bisimulation.
The kind of structures studied in computer science, philosophical logic, and set theory were forms of rooted directed graphs.
A model for reactive computation, for example that of labelled transition systems [Kel76], or a process algebra (such as ACP [BW90], CCS [Mil89], CSP [Hoa85]) can be used to describe both implementations of processes and specifications of their expected behaviours. Process algebras and labelled transition systems therefore naturally support the so-called single-language approach to process theory, that is, the approach in which a single language is used to describe both actual processes and their specifications. An important ingredient of the theory of these languages and their associated semantic models is therefore a notion of behavioural equivalence or behavioural approximation between processes. One process description, say SYS, may describe an implementation, and another, say SPEC, may describe a specification of the expected behaviour. To say that SYS and SPEC are equivalent is taken to indicate that these two processes describe essentially the same behaviour, albeit possibly at different levels of abstraction or refinement. To say that, in some formal sense, SYS is an approximation of SPEC means roughly that every aspect of the behaviour of this process is allowed by the specification SPEC, and thus that nothing unexpected can happen in the behaviour of SYS. This approach to program verification is also sometimes called implementation verification or equivalence checking.
Designers using implementation verification to validate their (models of) reactive systems need only learn one language to describe both their systems and their specifications, and can benefit from the intrinsic compositionality of their descriptions, at least when they are using a process algebra for denoting the labelled transition systems in their models and an equivalence (or preorder) that is preserved by the operations in the algebra.
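The approximation idea can be illustrated with one of the simplest behavioural preorders, trace inclusion: SYS approximates SPEC if every sequence of actions SYS can perform is allowed by SPEC. The sketch below is a crude, bounded illustration under our own encoding assumptions; the equivalences and preorders used in practice (bisimilarity among them) are finer than trace inclusion.

```python
# Bounded check that an implementation's traces are allowed by a
# specification: an illustration of behavioural approximation.
# The LTS encoding, names, and depth bound are our own assumptions.

def traces(lts, state, depth):
    """All action sequences of length <= depth enabled from `state`."""
    if depth == 0:
        return {()}
    result = {()}
    for action, succs in lts.get(state, {}).items():
        for nxt in succs:
            for t in traces(lts, nxt, depth - 1):
                result.add((action,) + t)
    return result

def approximates(impl, impl_start, spec, spec_start, depth):
    """True iff every trace of the implementation (up to `depth`)
    is also a trace of the specification."""
    return traces(impl, impl_start, depth) <= traces(spec, spec_start, depth)

# SPEC allows 'coin' then 'coffee' or 'tea'; SYS only ever serves coffee.
SPEC = {'s0': {'coin': {'s1'}}, 's1': {'coffee': {'s0'}, 'tea': {'s0'}}}
SYS = {'i0': {'coin': {'i1'}}, 'i1': {'coffee': {'i0'}}}
print(approximates(SYS, 'i0', SPEC, 's0', 4))  # True
print(approximates(SPEC, 's0', SYS, 'i0', 4))  # False
```

Here SYS does nothing the specification forbids, so it approximates SPEC, while the converse fails because SPEC also allows tea.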
The Turán number of a graph H, ex(n, H), is the maximum number of edges in any graph on n vertices which does not contain H as a subgraph. Let Pl denote a path on l vertices, and let k ⋅ Pl denote k vertex-disjoint copies of Pl. We determine ex(n, k ⋅ P3) for n appropriately large, answering in the positive a conjecture of Gorgol. Further, we determine ex(n, k ⋅ Pl) for arbitrary l, and n appropriately large relative to k and l. We provide some background on the famous Erdős–Sós conjecture, and conditional on its truth we determine ex(n, H) when H is an equibipartite forest, for appropriately large n.