The first part of the book, “Algebraic Structures,” deals with compositions and decompositions of Boolean functions.
A set F of Boolean functions is called complete if every Boolean function is a composition of functions from F; it is a clone if it is composition-closed and contains all projections. In 1921, E. L. Post found a completeness criterion, that is, a necessary and sufficient condition for a set F of Boolean functions to be complete. Twenty years later, he gave a full description of the lattice of Boolean clones. Chapter 1, by Reinhard Pöschel and Ivo Rosenberg, provides an accessible and self-contained discussion of “Compositions and Clones of Boolean Functions” and of the classical results of Post.
Functional decomposition of Boolean functions was introduced in switching theory in the late 1950s. In Chapter 2, “Decomposition of Boolean Functions,” Jan C. Bioch proposes a unified treatment of this topic. The chapter contains both a presentation of the main structural properties of modular decompositions and a discussion of the algorithmic aspects of decomposition.
Part II of the collection covers topics in logic, where Boolean models find their historical roots.
In Chapter 3, “Proof Theory,” Alasdair Urquhart briefly describes the more important proof systems for propositional logic, including a discussion of equational calculus, of axiomatic proof systems, and of sequent calculus and resolution proofs. The author compares the relative computational efficiency of these different systems and concludes with a presentation of Haken's classical result that resolution proofs have exponential length for certain families of formulas.
This chapter explores the learnability of Boolean functions. Broadly speaking, the problem of interest is how to infer information about an unknown Boolean function given only information about its values on some points, together with the information that it belongs to a particular class of Boolean functions. This broad description can encompass many more precise formulations, but here we focus on probabilistic models of learning, in which the information about the function value on points is provided through its values on some randomly drawn sample, and in which the criteria for successful “learning” are defined using probability theory. Other approaches, such as “exact query learning” (see [1, 18, 20] and Chapter 7 in this volume, for instance) and “specification,” “testing,” or “learning with a helpful teacher” (see [12, 4, 16, 21, 26]) are possible, and particularly interesting in the context of Boolean functions. Here, however, we focus on probabilistic models and aim to give a fairly thorough account of what can be said in two such models.
In the probabilistic models discussed, there are two separate, but linked, issues of concern. First, there is the question of how much information is needed about the values of a function on points before a good approximation to the function can be found. Second, there is the question of how, algorithmically, we might find a good approximation to the function. These two issues are usually termed the sample complexity and computational complexity of learning.
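To make the sample-complexity question concrete, here is a bound that is not stated in the text above but is a classical fact for finite hypothesis classes: if the target is known to belong to a finite class H, then a random sample of size

    m ≥ (1/ε) (ln |H| + ln (1/δ))

guarantees that, with probability at least 1 − δ over the choice of the sample, every function in H that agrees with the target on the whole sample has error at most ε. Bounds of this type separate the information-theoretic question (how large must m be?) from the algorithmic question (how do we find a consistent hypothesis efficiently?).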
Let f : D^n → R be a finite function; that is, D and R are finite sets. Such a function can be represented by the table of all pairs (a, f(a)), a ∈ D^n, which always has exponential size |D|^n. Therefore, we are interested in representations that, for many important functions, are much more compact. The best-known representations are circuits and decision diagrams. Circuits are a hardware model reflecting the sequential and parallel time needed to compute f(a) from a (see Chapter 11). Decision diagrams (DDs), also called branching programs (BPs), are nonuniform programs for computing f(a) from a, based on only two types of instructions represented by nodes in a graph (see also Figure 10.1):
Decision nodes: depending on the value of some input variable x_i, the next node is chosen.
Output nodes (also called sinks): a value from R is presented as output.
A decision diagram is a directed acyclic graph consisting of decision nodes and output nodes. Each node v represents a function f_v defined in the following way. Let a = (a_1, …, a_n) ∈ D^n. At decision nodes, choose the next node as described before. The value of f_v(a) is defined as the value of the output node that is finally reached when starting at v. Hence, for each node, each input a ∈ D^n activates a unique computation path that we follow during the computation of f_v(a). An edge e = (v, w) of the diagram is called activated by a if the computation path starting at v runs via e.
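As a minimal illustrative sketch (not taken from the chapter), the following Python fragment evaluates f_v(a) for a small decision diagram over D = {0, 1} by following the activated computation path; the node encoding and the example diagram are our own.

    # Sketch: evaluating a decision diagram (branching program) over D = {0, 1}.
    # Decision nodes: ("var", i, low_node, high_node) -- branch on input variable x_i.
    # Output nodes:   ("out", value)                  -- sink labelled with a value from R.
    # The diagram below, computing f(x1, x2, x3) = x1 AND (x2 OR x3), is a made-up example.

    diagram = {
        "v1": ("var", 1, "zero", "v2"),   # test x1
        "v2": ("var", 2, "v3", "one"),    # test x2
        "v3": ("var", 3, "zero", "one"),  # test x3
        "zero": ("out", 0),
        "one": ("out", 1),
    }

    def evaluate(diagram, start, a):
        """Follow the computation path activated by input a = (a1, ..., an), starting at `start`."""
        node = diagram[start]
        while node[0] == "var":
            _, i, low, high = node
            node = diagram[high] if a[i - 1] else diagram[low]
        return node[1]

    print(evaluate(diagram, "v1", (1, 0, 1)))  # -> 1, since x1 AND (x2 OR x3) = 1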
We explore network reliability primarily from the viewpoint of how the combinatorial structure of operating or failed states enables us to compute or bound the reliability. Many other viewpoints are possible and are outlined in [7, 45, 147]. Combinatorial structure is most often reflected in the simplicial complex (hereditary set system) that represents all operating states of a network; most of the previous research has been developed in this vernacular. However, the language and theory of Boolean functions have played an essential role in this development; indeed, the two languages are complementary in their utility for understanding the combinatorial structure. As we survey exact computation and enumerative bounds for network reliability in what follows, we examine the interplay of complexes and Boolean functions.
In classical reliability analysis, failure mechanisms and the causes of failure are relatively well understood. Some failure mechanisms associated with network reliability applications share these characteristics, but many do not. Typically, component failure rates are estimated based on historical data. Hence, time-independent, discrete probability models are often employed in network reliability analysis. In the most common model, network components (nodes and edges, for example) can take on one of two states: operative or failed. The state of a component is a random event that is independent of the states of other components. Similarly, the network itself is in one of two states, operative or failed.
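As a small illustrative sketch (not from the chapter), the following Python fragment computes the exact two-terminal reliability of a tiny hypothetical network under this model: each edge operates independently with its own probability, every component state is weighted by its probability, and the weights of the states in which source and target remain connected are summed.

    import itertools

    # Hypothetical 4-node network: edges with independent operation probabilities.
    edges = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "t"): 0.9, ("b", "t"): 0.8, ("a", "b"): 0.7}

    def connected(up_edges, source="s", target="t"):
        """Check whether source and target are joined by operating (undirected) edges."""
        reached, frontier = {source}, [source]
        while frontier:
            v = frontier.pop()
            for (x, y) in up_edges:
                for u, w in ((x, y), (y, x)):
                    if u == v and w not in reached:
                        reached.add(w)
                        frontier.append(w)
        return target in reached

    # Exact two-terminal reliability by enumerating all 2^|E| component states.
    reliability = 0.0
    for states in itertools.product([0, 1], repeat=len(edges)):
        prob, up = 1.0, []
        for (edge, p), state in zip(edges.items(), states):
            prob *= p if state else (1 - p)
            if state:
                up.append(edge)
        if connected(up):
            reliability += prob
    print(round(reliability, 4))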
Two-level logic minimization has been a success story, both in terms of theoretical understanding (see, e.g., [15]) and of the availability of practical tools (such as espresso) [2, 14, 31, 38]. However, two-level logic is not suitable for implementing large Boolean functions, whereas multilevel implementations allow a better tradeoff between area and delay. Multilevel logic synthesis aims to explore multilevel implementations guided by some function of the following metrics:
(i) The area occupied by the logic gates and interconnect (e.g., approximated by literals, which correspond to transistors in technology-independent optimization);
(ii) The delay of the longest path through the logic;
(iii) The testability of the circuit, measured in terms of the percentage of faults covered by a specified set of test vectors, for an appropriate fault model (e.g., single stuck faults, multiple stuck faults);
(iv) The power consumed by the logic gates and wires.
Good implementations often must simultaneously satisfy upper or lower bounds placed on these parameters while seeking good compromises among the cost functions.
It is common to classify optimization as technology-independent versus technology-dependent, where the former represents a circuit by a network of abstract nodes, whereas the latter represents a circuit by a network of the actual gates available in a given library or programmable architecture. A common paradigm is to first try technology-independent optimization and then map the optimized circuit into the final library (technology mapping).
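As a small illustration (not taken from the chapter) of the literal-count metric used in technology-independent optimization: algebraic factoring of

    f = ab + ac + ad   into   f = a(b + c + d)

reduces the literal count from six to four, and hence the estimated transistor count, at the price of an additional level of logic and thus, potentially, a longer path delay.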
The basic step in the functional decomposition of a Boolean function f : {0, 1}^n → {0, 1} with input variables N = {x_1, x_2, …, x_n} is essentially the partitioning of the set N into two disjoint sets A = {x_1, x_2, …, x_p} (the “modular set”) and B = {x_{p+1}, …, x_n} (the “free set”), such that f = F(g(x_A), x_B). The function g is called a component (subfunction) of f, and F is called a composition (quotient) function of f. The idea here is that F computes f based on the intermediate result computed by g and on the variables in the free set. More complex (Ashenhurst) decompositions of a function f can be obtained by recursive application of the basic decomposition step to a component function or to a quotient function of f.
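As a hypothetical worked example (not from the chapter), take f(x_1, x_2, x_3, x_4) = (x_1 ⊕ x_2)(x_3 ∨ x_4) with A = {x_1, x_2} and B = {x_3, x_4}; then g = x_1 ⊕ x_2 is a component and F(y, x_3, x_4) = y(x_3 ∨ x_4) is a quotient function. The following Python sketch verifies the identity f = F(g(x_A), x_B) on all inputs:

    from itertools import product

    # Hypothetical example of the basic decomposition step f = F(g(x_A), x_B)
    # with modular set A = {x1, x2} and free set B = {x3, x4}.
    def f(x1, x2, x3, x4):
        return (x1 ^ x2) & (x3 | x4)

    def g(x1, x2):          # component (subfunction) defined on the modular set A
        return x1 ^ x2

    def F(y, x3, x4):       # composition (quotient) function combining g's result with the free set B
        return y & (x3 | x4)

    # Verify f = F(g(x_A), x_B) on all 2^4 input vectors.
    assert all(f(*x) == F(g(x[0], x[1]), x[2], x[3]) for x in product([0, 1], repeat=4))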
Functional decomposition for general Boolean functions was introduced in switching theory in the late 1950s and early 1960s by Ashenhurst, Curtis, and Karp [1, 2, 20, 23, 24]. More or less independently of these developments, the decomposition of positive functions was initiated by Shapley [36], Billera [5, 4], and Birnbaum and Esary [12] in several contexts, such as voting theory (simple games), clutters, and reliability theory. However, the results in these areas are mainly formulated in terms of set systems and set operations.
The literature contains a wide variety of proof systems for propositional logic. In this chapter, we outline the more important of these proof systems, beginning with an equational calculus, then describing a traditional axiomatic proof system in the style of Frege and Hilbert. We also describe the systems of sequent calculus and resolution that have played an important part in proof theory and automated theorem proving. The chapter concludes with a discussion of the complexity of propositional proofs, an important area in recent logical investigations. In the last section, we give a proof that any consensus proof of the pigeonhole formulas has exponential length.
An Equational Calculus
The earliest proof systems for propositional logic belong to the tradition of algebraic logic and represent proofs as sequences of equations between Boolean expressions. The proof systems of Boole, Venn, and Schröder are all of this type. In this section, we present such a system and prove its completeness by showing that all valid equations between Boolean expressions can be formally deduced.
We start from the concept of a Boolean expression defined in Chapter 1 of the monograph by Crama and Hammer [9]. If ϕ and ψ are Boolean expressions, then we write ϕ[ψ/x_i] for the expression resulting from ϕ by substituting ψ for all occurrences of the variable x_i in ϕ. With this notational convention, we can state the formal rules for deduction in our equational calculus.
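As an illustrative sketch (the expression encoding is ours, not the chapter's), the substitution ϕ[ψ/x_i] can be implemented directly on a tree representation of Boolean expressions:

    # Minimal sketch of the substitution phi[psi/x_i]: replace every occurrence of
    # variable x_i in phi by the expression psi.  Expressions are nested tuples:
    #   ("var", i), ("not", e), ("and", e1, e2), ("or", e1, e2), ("const", 0 or 1).
    def substitute(phi, i, psi):
        if phi[0] == "var":
            return psi if phi[1] == i else phi
        if phi[0] == "const":
            return phi
        return (phi[0],) + tuple(substitute(arg, i, psi) for arg in phi[1:])

    # phi = x1 or (not x2);  substitute x2 := (x1 and x3)
    phi = ("or", ("var", 1), ("not", ("var", 2)))
    psi = ("and", ("var", 1), ("var", 3))
    print(substitute(phi, 2, psi))
    # -> ('or', ('var', 1), ('not', ('and', ('var', 1), ('var', 3))))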
Synthesis and verification are two basic steps in designing a digital electronic system, which may involve both hardware and software components. Synthesis aims to produce an implementation that satisfies the specification while minimizing some cost objectives, such as circuit area, code size, timing, and power consumption. Verification deals with the certification that the synthesized component is correct.
In system design, hardware synthesis and verification are more developed than their software counterparts and will be our focus. The reason for this asymmetric development is threefold. First, hardware design automation is better driven by industrial needs; after all, hardware costs are more tangible. Second, the correctness and time-to-market criteria of hardware design are in general more stringent. As a result, hardware design requires rigorous design methodology and a high degree of automation. Third, hardware synthesis and verification admit simpler formulations and are better studied.
There are various types of hardware verification, classified according to design stage, methodology, and objective. By design stage, verification can be deployed in high-level design from a specification, called design verification; during synthesis transformations, called implementation verification; or after circuit manufacturing, called manufacture verification.
Manufacture verification is also known as testing, and a whole research and engineering community is devoted to it. In hardware testing, we would like to know whether defects have appeared in a manufactured circuit by testing its conformance with the intended design.
The theory of efficient algorithms and complexity theory are software oriented. Their hardware-oriented counterpart is the theory of combinational circuits or, simply, circuits. The main difference is that circuits are a nonuniform model. A circuit is designed for one Boolean function f ∈ B_{n,m}, that is, f : {0, 1}^n → {0, 1}^m. However, most circuit designs lead to sequences of circuits realizing a sequence of functions; typical adder designs, for example, are sequences of adders, one for each input length. If there is an efficient algorithm computing, for each n, the circuit for input length n, the circuit family is called uniform. However, for basic functions such as arithmetic functions or storage access, the circuit model is more adequate than software models. Moreover, circuits are a very simple and natural computation model reflecting all aspects of efficiency.
A circuit model needs a basis of elementary functions that can be realized by simple gates. In the basic circuit model, a basis is a finite set. A circuit for input size n is then a finite sequence of instructions or gates and, therefore, a straight-line program: the ith instruction consists of a function g from the chosen basis and, if g ∈ B_j := B_{j,1}, a sorted list I_{i,1}, …, I_{i,j} of inputs. The constants 0 and 1, the variables x_1, …, x_n, and the results r_1, …, r_{i−1} of the first i − 1 instructions are possible inputs.
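As a minimal sketch (assuming the basis {AND, OR, NOT}, which is not fixed by the text above), the following Python fragment represents a circuit as a straight-line program and evaluates it instruction by instruction; the example program, computing x_1 ⊕ x_2, is made up for illustration.

    # Sketch of a circuit as a straight-line program over the (assumed) basis {AND, OR, NOT}.
    # Instruction i is (gate, inputs), where each input names a constant, a variable x_j,
    # or an earlier result r_j.
    GATES = {"and": lambda a, b: a & b, "or": lambda a, b: a | b, "not": lambda a: 1 - a}

    program = [
        ("not", ["x1"]),          # r1 = NOT x1
        ("not", ["x2"]),          # r2 = NOT x2
        ("and", ["x1", "r2"]),    # r3 = x1 AND (NOT x2)
        ("and", ["r1", "x2"]),    # r4 = (NOT x1) AND x2
        ("or",  ["r3", "r4"]),    # r5 = r3 OR r4 = x1 XOR x2
    ]

    def evaluate(program, x):
        """Evaluate the straight-line program on the input vector x = (x1, ..., xn)."""
        env = {"0": 0, "1": 1}
        env.update({f"x{j + 1}": v for j, v in enumerate(x)})
        for i, (gate, inputs) in enumerate(program, start=1):
            env[f"r{i}"] = GATES[gate](*(env[name] for name in inputs))
        return env[f"r{len(program)}"]

    print([evaluate(program, (a, b)) for a in (0, 1) for b in (0, 1)])  # -> [0, 1, 1, 0]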
Boolean models and methods play a fundamental role in the analysis of a broad diversity of situations encountered in various branches of science.
The objective of this collection of papers is to highlight the role of Boolean theory in a number of such areas, ranging from algebra and propositional logic to learning theory, cryptography, computational complexity, electrical engineering, and reliability theory.
The chapters are written by some of the most prominent experts in their fields and are intended for advanced undergraduate or graduate students, as well as for researchers or engineers. Each chapter provides an introduction to the main questions investigated in a particular field of science, as well as an in-depth discussion of selected issues and a survey of numerous important or representative results. As such, the collection can be used in a variety of ways: some readers may simply skim some of the chapters in order to get the flavor of unfamiliar areas, whereas others may rely on them as authoritative references or as extensive surveys of fundamental results.
Beyond the diversity of the questions raised and investigated in the different chapters, a remarkable feature of the collection is the presence of an “Ariadne's thread” created by the common language, concepts, models, and tools of Boolean theory. Many readers will certainly be surprised to discover countless links between seemingly remote topics discussed in various chapters of the book. It is hoped that they will be able to draw on such connections to further their understanding of their own scientific disciplines and to explore new avenues for research.
The objective of machine learning is to acquire knowledge from available information in an automated manner. As an example, one can think of obtaining rules for medical diagnosis, based on a database of patients with their test results and diagnoses. If we make the simplifying assumption that data are represented as binary vectors of a fixed length and the rule to be learned classifies these vectors into two classes, then the task is that of learning, or identifying, a Boolean function. This simplifying assumption is realistic in some cases, and it provides a good intuition for more general learning problems in others.
The notion of learning is somewhat elusive, and there are a large number of approaches to defining precisely what is meant by learning. A probabilistic notion, PAC (probably approximately correct) learning, based on random sampling of examples of the function to be learned, is discussed by Anthony in Chapter 6 of this volume. Here we discuss a different approach called learning by queries, introduced by Angluin in the 1980s [5]. In this model, it is known in advance that the function to be learned belongs to some given class of functions, and the learner's objective is to identify this function exactly by asking questions, or queries, about it. The prespecified class is called the target class, or concept class, and the function to be learned is called the target function, the target concept, or simply the target.
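As an illustrative sketch (this specific procedure is not described in the excerpt above), exact identification with queries can be made concrete for the target class of monotone conjunctions: querying, for each variable, the assignment that sets that variable to 0 and all others to 1 reveals exactly which variables occur in the target, using n membership queries. A minimal Python version, with a hypothetical hidden target:

    # Sketch: exact identification of a monotone conjunction (AND of a subset of variables)
    # with n membership queries.  The target below is hypothetical; in the query model the
    # learner only sees answers to membership queries, never the target itself.
    n = 5
    target_vars = {1, 3}                                         # hidden target: x1 AND x3

    def membership_query(a):
        """Oracle: value of the target conjunction on the assignment a = (a1, ..., an)."""
        return int(all(a[i - 1] for i in target_vars))

    learned = set()
    for i in range(1, n + 1):
        a = tuple(0 if j == i else 1 for j in range(1, n + 1))   # x_i = 0, all others = 1
        if membership_query(a) == 0:                             # setting x_i to 0 falsifies the AND
            learned.add(i)                                       # so x_i must occur in the target

    print(learned == target_vars)  # -> True: the target has been identified exactly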