We explore network reliability primarily from the viewpoint of how the combinatorial structure of operating or failed states enables us to compute or bound the reliability. Many other viewpoints are possible and are outlined in [7, 45, 147]. Combinatorial structure is most often reflected in the simplicial complex (hereditary set system) that represents all operating states of a network; most of the previous research has been developed in this vernacular. However, the language and theory of Boolean functions has played an essential role in this development; indeed, these two languages are complementary in their utility for understanding combinatorial structure. As we survey exact computation and enumerative bounds for network reliability in what follows, we examine the interplay of complexes and Boolean functions.
In classical reliability analysis, failure mechanisms and the causes of failure are relatively well understood. Some failure mechanisms associated with network reliability applications share these characteristics, but many do not. Typically, component failure rates are estimated based on historical data. Hence, time-independent, discrete probability models are often employed in network reliability analysis. In the most common model, network components (nodes and edges, for example) can take on one of two states: operative or failed. The state of a component is a random event that is independent of the states of other components. Similarly, the network itself is in one of two states, operative or failed.
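To make this two-state model concrete, here is a minimal Python sketch (the function name, the bridge network, and the probabilities are hypothetical illustrations) that computes two-terminal reliability by enumerating all 2^m edge states; the enumeration is exponential in m and is meant only to illustrate the model, not as a practical method.

```python
# Each edge of a small undirected network fails independently; the network
# "operates" when the terminals s and t remain connected.
from itertools import product

def two_terminal_reliability(edges, probs, s, t):
    """edges: list of (u, v); probs[i]: probability that edge i operates."""
    m = len(edges)
    reliability = 0.0
    for state in product([0, 1], repeat=m):
        # Probability of this particular operating/failed pattern.
        p = 1.0
        for i, up in enumerate(state):
            p *= probs[i] if up else 1.0 - probs[i]
        # Check s-t connectivity over the operating edges only (union-find).
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for i, up in enumerate(state):
            if up:
                u, v = edges[i]
                parent[find(u)] = find(v)
        if find(s) == find(t):
            reliability += p
    return reliability

# A 4-edge "bridge" network between s and t (hypothetical example data).
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
print(two_terminal_reliability(edges, [0.9] * 4, "s", "t"))  # -> 0.9639
```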
Two-level logic minimization has been a success story both in terms of theoretical understanding (see, e.g., [15]) and availability of practical tools (such as espresso) [2, 14, 31, 38]. However, two-level logic is not suitable to implement large Boolean functions, whereas multilevel implementations allow a better tradeoff between area and delay. The objective of multilevel logic synthesis is to explore multilevel implementations, guided by some function of the following metrics:
(i) The area occupied by the logic gates and interconnect (e.g., approximated by literals, which correspond to transistors in technology-independent optimization);
(ii) The delay of the longest path through the logic;
(iii) The testability of the circuit, measured in terms of the percentage of faults covered by a specified set of test vectors, for an appropriate fault model (e.g., single stuck faults, multiple stuck faults);
(iv) The power consumed by the logic gates and wires.
Good implementations must often simultaneously satisfy upper or lower constraints placed on these parameters while striking good compromises among the cost functions.
It is common to classify optimization as technology-independent versus technology-dependent: the former represents a circuit as a network of abstract nodes, whereas the latter represents it as a network of the actual gates available in a given library or programmable architecture. A common paradigm is to first perform technology-independent optimization and then map the optimized circuit into the final library (technology mapping).
The basic step in the functional decomposition of a Boolean function f : {0, 1}^n → {0, 1} with input variables N = {x1, x2, …, xn} is essentially the partitioning of the set N into two disjoint sets A = {x1, x2, …, xp} (the “modular set”) and B = {xp+1, …, xn} (the “free set”), such that f = F(g(x_A), x_B). The function g is called a component (subfunction) of f, and F is called a composition (quotient) function of f. The idea here is that F computes f based on the intermediate result computed by g and the variables in the free set. More complex (Ashenhurst) decompositions of a function f can be obtained by recursive application of the basic decomposition step to a component function or to a quotient function of f.
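As a small worked illustration (the particular function is our own hypothetical example, not taken from the text), the following Python sketch verifies the decomposition f(x1, x2, x3) = (x1 XOR x2) AND x3 = F(g(x1, x2), x3), with component g = x1 XOR x2 on the modular set and quotient F(g, x3) = g AND x3.

```python
# Verify f = F(g(x_A), x_B) exhaustively for a toy decomposition with
# modular set A = {x1, x2} and free set B = {x3}.
from itertools import product

def f(x1, x2, x3):
    return (x1 ^ x2) & x3

def g(x1, x2):          # component (subfunction) on the modular set
    return x1 ^ x2

def F(gv, x3):          # quotient (composition) function
    return gv & x3

assert all(f(a, b, c) == F(g(a, b), c)
           for a, b, c in product([0, 1], repeat=3))
print("decomposition verified on all 8 inputs")
```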
Functional decomposition for general Boolean functions was introduced into switching theory in the late 1950s and early 1960s by Ashenhurst, Curtis, and Karp [1, 2, 20, 23, 24]. More or less independently of these developments, the decomposition of positive functions was initiated by Shapley [36], Billera [4, 5], and Birnbaum and Esary [12] in several contexts, such as voting theory (simple games), clutters, and reliability theory. However, the results in these areas are mainly formulated in terms of set systems and set operations.
The literature contains a wide variety of proof systems for propositional logic. In this chapter, we outline the more important of these proof systems, beginning with an equational calculus, then describing a traditional axiomatic proof system in the style of Frege and Hilbert. We also describe the systems of sequent calculus and resolution that have played an important part in proof theory and automated theorem proving. The chapter concludes with a discussion of the problem of the complexity of propositional proofs, an important area in recent logical investigations. In the last section, we give a proof that any consensus proof of the pigeonhole formulas has exponential length.
An Equational Calculus
The earliest proof systems for propositional logic belong to the tradition of algebraic logic and represent proofs as sequences of equations between Boolean expressions. The proof systems of Boole, Venn, and Schröder are all of this type. In this section, we present such a system and prove its completeness by showing that all valid equations between Boolean expressions can be deduced formally.
We start from the concept of Boolean expression defined in Chapter 1 of the monograph Crama and Hammer [9]. If ϕ and ψ are Boolean expressions, then we write ϕ[ψ/xi] for the expression resulting from ϕ by substituting ψ for all occurrences of the variable xi in ϕ. With this notational convention, we can state the formal rules for deduction in our equational calculus.
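The following Python sketch illustrates the substitution ϕ[ψ/xi] on a toy representation of Boolean expressions as nested tuples; the representation and all names are our own assumptions, chosen only to make the operation concrete.

```python
# phi[psi/x_i]: replace every occurrence of a variable by an expression.
# Expressions are nested tuples ('and'/'or'/'not', ...); variables are strings.
def substitute(phi, var, psi):
    """Return the expression obtained from phi by substituting psi for var."""
    if isinstance(phi, str):                 # a variable leaf
        return psi if phi == var else phi
    op, *args = phi
    return (op, *(substitute(a, var, psi) for a in args))

# (x1 OR x2)[x1 AND x3 / x2]  ==  x1 OR (x1 AND x3)
phi = ('or', 'x1', 'x2')
print(substitute(phi, 'x2', ('and', 'x1', 'x3')))
# -> ('or', 'x1', ('and', 'x1', 'x3'))
```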
Synthesis and verification are two basic steps in designing a digital electronic system, which may involve both hardware and software components. Synthesis aims to produce an implementation that satisfies the specification while minimizing some cost objectives, such as circuit area, code size, timing, and power consumption. Verification deals with certifying that the synthesized component is correct.
In system design, hardware synthesis and verification are more developed than their software counterparts and will be our focus. The reason for this asymmetric development is threefold. First, hardware design automation is better driven by industrial needs; after all, hardware costs are more tangible. Second, the correctness and time-to-market criteria of hardware design are in general more stringent. As a result, hardware design requires rigorous design methodology and high automation. Third, hardware synthesis and verification admit simpler formulation and are better studied.
There are various types of hardware verification, classified according to design stage, methodology, and objective. By design stage, verification can be deployed in high-level design from a specification, called design verification; during synthesis transformation, called implementation verification; or after circuit manufacturing, called manufacture verification.
Manufacture verification is also known as testing, and a whole research and engineering community is devoted to it. In hardware testing, we would like to know whether defects appear in a manufactured circuit by testing the conformance between it and its intended design.
The theory of efficient algorithms and complexity theory is software oriented. Its hardware-oriented counterpart is the theory of combinational circuits or, simply, circuits. The main difference is that circuits are a nonuniform model. A circuit is designed for one Boolean function f ∈ B_{n,m}, that is, f : {0, 1}^n → {0, 1}^m. However, most circuit designs lead to sequences of circuits realizing a sequence of functions. Typical adders, for example, are really sequences of adders, one for each input length. If there is an efficient algorithm that computes, for each n, the circuit for input length n, the circuit family is called uniform. Nevertheless, for basic functions like arithmetic functions or storage access, the circuit model is more adequate than software models. Moreover, circuits are a very simple and natural computation model reflecting all aspects of efficiency.
A circuit model needs a basis of elementary functions that can be realized by simple gates. In the basic circuit model, a basis is a finite set. A circuit for input size n is then a finite sequence of instructions or gates and, therefore, a straight-line program: the ith instruction consists of a function g from the chosen basis and, if g ∈ B_j := B_{j,1}, a sorted list I_{i,1}, …, I_{i,j} of inputs. The constants 0 and 1, the variables x1, …, xn, and the results r1, …, r_{i−1} of the first i − 1 instructions are possible inputs.
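The straight-line-program view can be made concrete with a small evaluator; the basis, the instruction encoding, and all names below are illustrative assumptions rather than anything fixed by the text.

```python
# A circuit as a straight-line program: each instruction names a basis
# function and its inputs, which are constants, variables, or earlier results.
from typing import Callable, Dict, List, Tuple

BASIS: Dict[str, Callable] = {    # a small example basis
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
    "not": lambda a: 1 - a,
}

# An instruction is (gate_name, [inputs]); an input is ("const", 0/1),
# ("var", k) for variable x_{k+1}, or ("res", i) for instruction i's result.
Instr = Tuple[str, List[Tuple[str, int]]]

def evaluate(program: List[Instr], x: List[int]) -> List[int]:
    results: List[int] = []
    for gate, inputs in program:
        args = []
        for kind, k in inputs:
            if kind == "const":
                args.append(k)
            elif kind == "var":
                args.append(x[k])
            else:                      # result of an earlier instruction
                args.append(results[k])
        results.append(BASIS[gate](*args))
    return results

# x1 XOR x2 built from the basis {and, or, not}:
xor = [("or",  [("var", 0), ("var", 1)]),     # r0 = x1 OR x2
       ("and", [("var", 0), ("var", 1)]),     # r1 = x1 AND x2
       ("not", [("res", 1)]),                 # r2 = NOT r1
       ("and", [("res", 0), ("res", 2)])]     # r3 = r0 AND (NOT r1)
print([evaluate(xor, [a, b])[-1] for a, b in [(0,0),(0,1),(1,0),(1,1)]])
# -> [0, 1, 1, 0]
```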
Boolean models and methods play a fundamental role in the analysis of a broad diversity of situations encountered in various branches of science.
The objective of this collection of papers is to highlight the role of Boolean theory in a number of such areas, ranging from algebra and propositional logic to learning theory, cryptography, computational complexity, electrical engineering, and reliability theory.
The chapters are written by some of the most prominent experts in their fields and are intended for advanced undergraduate or graduate students, as well as for researchers or engineers. Each chapter provides an introduction to the main questions investigated in a particular field of science, as well as an in-depth discussion of selected issues and a survey of numerous important or representative results. As such, the collection can be used in a variety of ways: some readers may simply skim some of the chapters in order to get the flavor of unfamiliar areas, whereas others may rely on them as authoritative references or as extensive surveys of fundamental results.
Beyond the diversity of the questions raised and investigated in different chapters, a remarkable feature of the collection is the presence of an “Ariadne's thread” created by the common language, concepts, models, and tools of Boolean theory. Many readers will certainly be surprised to discover countless links between seemingly remote topics discussed in various chapters of the book. It is hoped that they will be able to draw on such connections to further their understanding of their own scientific disciplines and to explore new avenues for research.
The objective of machine learning is to acquire knowledge from available information in an automated manner. As an example, one can think of obtaining rules for medical diagnosis, based on a database of patients with their test results and diagnoses. If we make the simplifying assumption that data are represented as binary vectors of a fixed length and the rule to be learned classifies these vectors into two classes, then the task is that of learning, or identifying, a Boolean function. This simplifying assumption is realistic in some cases, and in others it provides good intuition for more general learning problems.
The notion of learning is somewhat elusive, and there are a large number of approaches to defining precisely what is meant by learning. A probabilistic notion, PAC (probably approximately correct) learning, based on random sampling of examples of the function to be learned, is discussed by Anthony in Chapter 6 of this volume. Here we discuss a different approach called learning by queries, introduced by Angluin in the 1980s [5]. In this model, it is known in advance that the function to be learned belongs to some given class of functions, and the learner's objective is to identify this function exactly by asking questions, or queries, about it. The prespecified class is called the target class, or concept class, and the function to be learned is called the target function, the target concept, or simply the target.
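As a minimal sketch of the query model (with a hypothetical target class of our own choosing, not one discussed in the text), the following code exactly identifies a monotone conjunction of variables using n + 1 membership queries: it queries the all-ones vector and then each vector with a single bit cleared; a variable belongs to the target exactly when clearing its bit flips the answer.

```python
# Exact learning with membership queries, for the illustrative target class
# of monotone conjunctions of variables over x1..xn.
def learn_monotone_conjunction(oracle, n):
    """oracle(x) returns the target's value on the 0/1 vector x."""
    all_ones = [1] * n
    if oracle(all_ones) == 0:        # then the target is not a monotone
        return None                  # conjunction of variables at all
    relevant = []
    for i in range(n):
        x = all_ones[:]
        x[i] = 0                     # membership query with bit i cleared
        if oracle(x) == 0:           # answer flips, so xi is in the target
            relevant.append(i)
    return relevant                  # indices of variables in the conjunction

# Target: x1 AND x3 over n = 4 variables (example data).
target = lambda x: x[0] & x[2]
print(learn_monotone_conjunction(target, 4))   # -> [0, 2]
```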
There exists a large gap between the empirical evidence of the computational capabilities of neural networks and our ability to systematically analyze and design those networks. Although it is well known that classical Fourier analysis is a very effective mathematical tool for the design and analysis of linear systems, such a tool was not available for artificial neural networks, which are inherently nonlinear. In the late 1980s, spectral analysis was introduced into the domain of discrete neural networks. The application of the spectral technique led to a number of new insights and results, including lower and upper bounds on the complexity of computing with neural networks, as well as methods for constructing optimal (in terms of performance) feedforward networks for computing various arithmetic functions.
The focus of the presentation in this chapter is on an elementary description of the basic techniques of Fourier analysis and its applications in threshold circuit complexity. Our hope is that this chapter will serve as background material for those who are interested in learning more about the progress and results in this area. We also provide extensive bibliographic notes that can serve as pointers to a number of research results related to spectral techniques and threshold circuit complexity.
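As an elementary illustration of the spectral technique (a brute-force sketch under the ±1 convention; the names and example function are our own), the following code computes the Fourier coefficients of a Boolean function by direct summation, which takes time exponential in n and is meant only to show the definition at work.

```python
# Fourier/Walsh-Hadamard spectrum: expand f over the parity functions
# chi_S(x) = prod_{i in S} x_i and compute hat_f(S) = E[f(x) * chi_S(x)].
from itertools import product

def fourier_coefficients(f, n):
    """f maps a tuple in {-1, +1}^n to {-1, +1}; returns {S: hat_f(S)}."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for s_mask in range(2 ** n):
        S = [i for i in range(n) if s_mask >> i & 1]
        total = 0.0
        for x in points:
            chi = 1
            for i in S:
                chi *= x[i]          # the parity (character) chi_S(x)
            total += f(x) * chi
        coeffs[tuple(S)] = total / len(points)
    return coeffs

# Majority of three inputs in the +/-1 convention (example function).
maj3 = lambda x: 1 if sum(x) > 0 else -1
for S, c in fourier_coefficients(maj3, 3).items():
    if abs(c) > 1e-9:
        print(S, c)
# Nonzero coefficients sit on the three singletons and on {0, 1, 2}.
```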
String algorithms are a traditional area of study in computer science. In recent years their importance has grown dramatically with the huge increase of electronically stored text and of molecular sequence data (DNA or protein sequences) produced by various genome projects. This book is a general text on computer algorithms for string processing. In addition to pure computer science, the book contains extensive discussions on biological problems that are cast as string problems, and on methods developed to solve them. It emphasises the fundamental ideas and techniques central to today's applications. New approaches to this complex material simplify methods that up to now have been for the specialist alone. With over 400 exercises to reinforce the material and develop additional topics, the book is suitable as a text for graduate or advanced undergraduate students in computer science, computational biology, or bio-informatics. Its discussion of current algorithms and techniques also makes it a reference for professionals.
The repetition threshold is a measure of the extent to which there need to be consecutive (partial) repetitions of finite words within infinite words over alphabets of various sizes. Dejean's Conjecture, which has recently been proven, provides this threshold for all alphabet sizes. Motivated by a question of Krieger, we deal here with the analogous threshold when the infinite word is restricted to be a D0L word. Our main result is that, asymptotically, this threshold does not exceed the unrestricted threshold by more than a little.
We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm to generate the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and to generate each one of the subsequent maximal cliques this algorithm requires O(log p) communication rounds with O(m/p) local computation. The maximal cliques generation algorithm is based on generating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation for each maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
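For readers unfamiliar with the subproblem, the following sequential Python sketch enumerates all maximal paths of a DAG (paths from a source to a sink, which therefore cannot be extended); it is a plain recursive illustration of the notion only, and in no way the BSP/CGM parallel algorithm of the paper.

```python
# Enumerate all maximal (source-to-sink) paths of a DAG.
def maximal_paths(adj):
    """adj: dict node -> list of successors in a DAG."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    has_pred = {v for vs in adj.values() for v in vs}
    sources = [v for v in nodes if v not in has_pred]

    def extend(path):
        succs = adj.get(path[-1], [])
        if not succs:                 # reached a sink: path is maximal
            yield list(path)
            return
        for v in succs:
            path.append(v)
            yield from extend(path)
            path.pop()

    for s in sources:
        yield from extend([s])

# Small example DAG (hypothetical data).
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(list(maximal_paths(dag)))      # -> [['a','b','d'], ['a','c','d']]
```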