The aim of this book is to provide a body of tools for establishing concentration of measure that is accessible to researchers working in the design and analysis of randomized algorithms.
Concentration of measure refers to the phenomenon that a function of a large number of random variables tends to concentrate its values in a relatively narrow range (under certain smoothness conditions on the function and certain conditions on the dependence amongst the random variables). Such a result is of obvious importance to the analysis of randomized algorithms: for instance, the running time of such an algorithm can then be guaranteed to be concentrated around a pre-computed value. More generally, such an analysis provides tight guarantees for various other parameters measuring the performance of randomized algorithms.
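As a rough illustration of the phenomenon (our own sketch, not an example from the book; the function name empirical_spread and all parameter choices are ours), the short Python simulation below records the spread of the fraction of heads in n fair coin flips over many trials. The observed range of the sample mean narrows as n grows, which is exactly the behaviour the concentration bounds make precise.

import random

# Fraction of heads in n fair coin flips, repeated over many trials:
# the observed range of the sample mean shrinks as n increases.
def empirical_spread(n, trials=1000, seed=0):
    rng = random.Random(seed)
    means = [sum(rng.random() < 0.5 for _ in range(n)) / n for _ in range(trials)]
    return min(means), max(means)

for n in (100, 1000, 10000):
    lo, hi = empirical_spread(n)
    print(f"n={n:6d}: sample mean of coin flips ranged over [{lo:.3f}, {hi:.3f}]")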
In a sense, the subject of concentration of measure lies at the core of modern probability theory, as embodied in the laws of large numbers, the central limit theorem and, in particular, the theory of large deviations [26]. However, these results are asymptotic: they refer, for example, to the limit as the number of variables n goes to infinity. In the analysis of algorithms, we typically require quantitative estimates that are valid for finite (though large) values of n. The earliest such results can be traced back to the work of Chernoff, Hoeffding and Azuma in the 1950s and 1960s. Subsequently, there have been steady advances, particularly in the classical setting of martingales. In the last couple of decades, these methods have attracted renewed interest, driven by applications in algorithms and optimisation, and several new techniques have been developed.
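A representative finite-n estimate of this kind (stated here only for orientation, not quoted from the text) is the Hoeffding bound: if X_1, ..., X_n are independent random variables taking values in [0,1] and X = X_1 + ... + X_n, then

\Pr\bigl[\,|X - \mathbb{E}[X]| \ge t\,\bigr] \;\le\; 2\exp\!\left(-\frac{2t^2}{n}\right),

a bound that holds for every fixed n rather than only in the limit.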
Having matured over the years, formal design verification methods, such as theorem proving, property and model checking, and equivalence checking, have found increasing application in industry. Canonical graph-based representations, such as binary decision diagrams (BDDs) [1], binary moment diagrams (BMDs) [2], and their variants, play an important role in the development of software tools for verification. While these techniques are quite mature at the structural level, high-level verification models are only now being developed. The main difficulty is that such verification must span several levels of design abstraction. Verification of arithmetic designs is particularly difficult because of the disparity between the representations at the different design levels and the complexity of the logic involved.
This chapter addresses verification based on canonical data structures. It presents several canonical, graph-based representations that are used in formal verification, in particular in equivalence checking of combinational designs specified at different levels of abstraction. These representations are commonly known as decision diagrams, even though not all of them are actually decision-based forms. They are graph-based structures whose nodes represent the variables and whose directed edges represent the results of decomposing the function with respect to the individual variables. Particular attention is given to arithmetic and word-level representations.
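To make the node/edge description concrete, the following minimal Python sketch (our illustration; the class and function names are not taken from any particular tool or from this chapter) builds a decision structure for a Boolean function by recursively decomposing it with respect to each variable, so that every node carries a variable and its two outgoing edges carry the two cofactor restrictions.

from dataclasses import dataclass
from typing import Callable, Tuple, Union

@dataclass(frozen=True)
class Node:
    var: int                       # index of the variable tested at this node
    low: Union["Node", bool]       # edge for var = 0 (negative cofactor)
    high: Union["Node", bool]      # edge for var = 1 (positive cofactor)

def decompose(f: Callable[[Tuple[bool, ...]], bool], n: int,
              fixed: Tuple[bool, ...] = ()) -> Union[Node, bool]:
    # Shannon decomposition: f = (not x_i and f|x_i=0) or (x_i and f|x_i=1)
    if len(fixed) == n:
        return bool(f(fixed))      # all variables fixed: a constant leaf
    i = len(fixed)
    return Node(i,
                decompose(f, n, fixed + (False,)),   # cofactor with x_i = 0
                decompose(f, n, fixed + (True,)))    # cofactor with x_i = 1

# Example: the two-variable function x0 XOR x1
diagram = decompose(lambda bits: bits[0] != bits[1], 2)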
An important common feature of all these representations is canonicity, which is essential in combinational equivalence checking. A form is canonical if the representation of a function in that form is unique. Canonical graph-based representations therefore make it possible to check whether two combinational functions are equivalent by checking whether their representations are isomorphic.
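The practical payoff of canonicity can be sketched in a few lines of Python (again an illustration under our own naming, not the interface of any real BDD package): if reduced, ordered diagrams are built through a "unique table" so that every distinct (variable, low-edge, high-edge) triple exists exactly once, then two combinational functions are equivalent precisely when their constructions return the same node, and the isomorphism check collapses to a single comparison.

class BDD:
    # Reduced, ordered BDD manager with a unique table (hash-consing). The
    # terminal ids 0 and 1 are the constants; every internal node gets one id.
    def __init__(self, num_vars):
        self.n = num_vars
        self.unique = {}                       # (var, low, high) -> node id

    def mk(self, var, low, high):
        if low == high:                        # reduction rule: drop a redundant test
            return low
        key = (var, low, high)
        if key not in self.unique:             # canonicity: one node per distinct triple
            self.unique[key] = len(self.unique) + 2
        return self.unique[key]

    def from_function(self, f, fixed=()):
        # Build bottom-up by exhaustive cofactoring (exponential, but fine for a
        # demo; real packages build diagrams with apply/compose operations instead).
        if len(fixed) == self.n:
            return 1 if f(fixed) else 0
        low = self.from_function(f, fixed + (0,))
        high = self.from_function(f, fixed + (1,))
        return self.mk(len(fixed), low, high)

mgr = BDD(2)
f = mgr.from_function(lambda bits: (bits[0] or bits[1]) and not (bits[0] and bits[1]))
g = mgr.from_function(lambda bits: bits[0] != bits[1])
assert f == g    # identical canonical node: the two descriptions of XOR are equivalent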