The aim of this book is to provide a body of tools for establishing concentration of measure that is accessible to researchers working in the design and analysis of randomized algorithms.
Concentration of measure refers to the phenomenon that a function of a large number of random variables tends to concentrate its values in a relatively narrow range, under suitable smoothness conditions on the function and suitable conditions on the dependence among the random variables. Such a result is of obvious importance to the analysis of randomized algorithms: for instance, the running time of such an algorithm can then be guaranteed to be concentrated around a pre-computed value. More generally, various other parameters measuring the performance of randomized algorithms can be given tight guarantees via such an analysis.
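A classical quantitative instance, stated here in its standard textbook form rather than in the book's own notation, is Hoeffding's inequality: if $X_1, \ldots, X_n$ are independent random variables with $X_i \in [a_i, b_i]$ for each $i$, and $S_n = X_1 + \cdots + X_n$, then for every $t > 0$,

$$\Pr\bigl[\,|S_n - \mathbf{E}[S_n]| \ge t\,\bigr] \;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right).$$

Here the smoothness condition is boundedness of each summand and the dependence condition is full independence; much of the subject consists of relaxing one or the other.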
In a sense, the subject of concentration of measure lies at the core of modern probability theory as embodied in the laws of large numbers, the central limit theorem and, in particular, the theory of large deviations [26]. However, these results are asymptotic: they refer, for example, to the limit as the number of variables n goes to infinity. In the analysis of algorithms, we typically require quantitative estimates that are valid for finite (though large) values of n. The earliest such results can be traced back to the work of Chernoff, Hoeffding and Azuma in the 1950s and 1960s. Subsequently, there have been steady advances, particularly in the classical setting of martingales. In the last couple of decades, these methods have taken on renewed interest, driven by applications in algorithms and optimisation, and several new techniques have been developed.
Having matured over the years, formal design verification methods, such as theorem proving, property and model checking, and equivalence checking, have found increasing application in industry. Canonical graph-based representations, such as binary decision diagrams (BDDs) [1], binary moment diagrams (BMDs) [2], and their variants, play an important role in the development of software tools for verification. While these techniques are quite mature at the structural level, high-level verification models are only now being developed. The main difficulty is that such verification must span several levels of design abstraction. Verification of arithmetic designs is particularly difficult because of the disparity in the representations at the different design levels and the complexity of the logic involved.
This chapter addresses verification based on canonical data structures. It presents several canonical, graph-based representations that are used in formal verification, and, in particular, in equivalence checking of combinational designs specified at different levels of abstraction. These representations are commonly known as decision diagrams, even though not all of them are actually decision-based forms. They are graph-based structures whose nodes represent the variables and whose directed edges represent the result of decomposing the function with respect to the individual variables. Particular attention is given to arithmetic and word-level representations.
An important common feature of all these representations is canonicity, which is essential in combinational equivalence checking. A form is canonical if the representation of a function in that form is unique. Canonical graph-based representations therefore make it possible to check whether two combinational functions are equivalent simply by checking whether their representations are isomorphic.
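To make this concrete, here is a minimal sketch in VDM-SL (the type and function names are illustrative, not drawn from any particular tool or from this chapter): a reduced, ordered decision diagram can be modelled as a recursive record type whose invariant enforces the reduction and ordering rules, after which equivalence checking collapses to a test for equality of representations.

    types
      -- a diagram is either a constant leaf or an internal decision node
      BDD = bool | Node;

      Node :: var : nat
              lo  : BDD
              hi  : BDD
      inv mk_Node(v, l, h) ==
        -- reduction rule: no node may test a variable needlessly
        l <> h and
        -- ordering rule: variable indices increase along every path
        (is_Node(l) => l.var > v) and
        (is_Node(h) => h.var > v);

    functions
      -- canonical constructor: collapses a redundant test into its child
      mkNode : nat * BDD * BDD -> BDD
      mkNode(v, l, h) == if l = h then l else mk_Node(v, l, h);

      -- with a canonical form, equivalence checking is just equality
      equivalent : BDD * BDD -> bool
      equivalent(f, g) == f = g;

In an implementation the constructor would additionally hash-cons nodes in a unique table, so that the isomorphism test becomes a constant-time pointer comparison; in a specification with value semantics, structural equality plays the same role.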
The aim of this chapter is to show how systems with a “persistent state” can be modelled. On completion of the chapter, the reader should be able to develop models of systems that contain persistent state components. The difference between this style and the functional modelling style used so far in this book will be highlighted by revisiting the explosives controller and the trusted gateway examples.
Introduction
Using formal modelling techniques, computing systems may be described at many different levels of abstraction. The models presented so far in the book have been set at a relatively high level of abstraction. This is reflected both in the data types used and in the way functionality is described through mathematical functions that take some data representing the system as input parameters and return a result which describes the system after the computation has been performed. In some cases, the function can be described without explicitly constructing its result (e.g. see Subsection 6.4.3).
This functional modelling style has its limitations. Few computing systems are actually implemented via pure functions. More often, they have variables that hold data which are modified by operations invoked by some outside user. These variables are persistent in the sense that they continue to hold data between operation invocations. If the purpose of a model is to document design decisions about the split between ordinary parameters to functions and persistent variables, it is necessary to use operations in VDM-SL. These operations take inputs and return results, but they also have some effect (often called a side-effect) on the persistent variables.
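As a small illustration of the style, the following hypothetical VDM-SL fragment (not one of the book's models) declares a single persistent state component and two explicit operations that modify and read it between invocations:

    state Till of
      balance : nat
    init t == t = mk_Till(0)
    end

    operations
      -- side-effect: updates the persistent variable
      Deposit : nat ==> ()
      Deposit(amount) == balance := balance + amount;

      -- reads the persistent state; no input parameters are needed
      Read : () ==> nat
      Read() == return balance;

Unlike a pure function, Deposit returns no result at all: its entire purpose is its effect on balance, which persists until the next operation is invoked.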
The aim of this chapter is to show how relationships between data can be modelled as mappings. The mapping type constructor and operators in VDM-SL are introduced through an example from the nuclear industry. On completing this chapter, the reader should be confident in modelling and analysing systems involving mappings.
Introduction
Computing systems frequently centre on relationships between sets of values. For example, a database might link a set of customer identifiers to detailed information. Such relationships can often be modelled as mappings from elements of one set, known as the domain, to elements of the other set, known as the range. Mappings can be thought of as tables in which one can look up the domain element and read across to see the range element to which it is related. We will say that each domain element maps to the corresponding range element. Each line of the table, being a small part of the mapping, is called a maplet. Each domain element can have only one maplet in a mapping, so there is no ambiguity about which range element it points to. For example, the following table represents a mapping from names (strings of characters) to bank balances (integers).
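In VDM-SL such a table is written as a mapping enumeration, with one maplet per line of the table. The names and balances below are illustrative placeholders only, not values from the original example:

    types
      String = seq of char;
      Balances = map String to int;

    values
      -- hypothetical entries: each maplet is one line of the table
      accounts : Balances = { "Brown" |-> 1500,
                              "Jones" |-> -275,
                              "Smith" |-> 0 };

Looking up a balance is then written accounts("Jones"); because each domain element carries exactly one maplet, the result of the look-up is unambiguous.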