Monotone networks have been the most widely studied class of restricted Boolean networks. It is now possible to prove superlinear (in fact exponential) lower bounds on the size of optimal monotone networks computing some naturally arising functions. There remains, however, the problem of obtaining similar results on the size of combinational (i.e. unrestricted) Boolean networks. One approach to solving this problem would be to look for circumstances in which large lower bounds on the complexity of monotone networks would provide corresponding bounds on the size of combinational networks.
In this paper we briefly review the current state of results on Boolean function complexity and examine the progress that has been made in relating monotone and combinational network complexity.
Introduction
One of the major problems in computational complexity theory is to develop techniques by which non-trivial lower bounds on the amount of time needed to solve ‘explicitly defined’ decision problems can be proved. By ‘non-trivial’ we mean bounds which are superlinear in the length of the input; and, since we may concentrate on functions with a binary input alphabet, the term ‘explicitly defined’ may be taken to mean functions whose values on all inputs of length n can be enumerated in time 2^{cn} for some constant c.
Classical computational complexity theory measures ‘time’ as the number of moves made by a (multi-tape) deterministic Turing machine. Thus a decision problem f has time complexity T(n) if there is a Turing machine program that computes f and makes at most T(n) moves on any input of length n.
A general theory is developed for constructing the asymptotically shallowest networks and the asymptotically smallest networks (with respect to formula size) for the carry save addition of n numbers using any given basic carry save adder as a building block.
Using these optimal carry save addition networks the shallowest known multiplication circuits and the shortest formulae for the majority function (and many other symmetric Boolean functions) are obtained.
In this paper, simple basic carry save adders are described, using which multiplication circuits of depth 3.71 log n (the result of which is given as the sum of two numbers) and majority formulae of size O(n^{3.21}) are constructed. Using more complicated basic carry save adders, not described here, these results could be further improved. Our best bounds are currently 3.57 log n for depth and O(n^{3.13}) for formula size.
Introduction
The question ‘How fast can we multiply?’ is one of the fundamental questions in theoretical computer science. Ofman-Karatsuba and Schönhage-Strassen tried to answer it by minimising the number of bit operations required, or equivalently the circuit size. A different approach was pursued by Avizienis, Dadda, Ofman, Wallace and others. They investigated the depth, rather than the size of multiplication circuits.
The main result proved by the above authors in the early 1960s was that, using a process called Carry Save Addition, n numbers (of linear length) could be added in depth O(log n). As a consequence, depth O(log n) circuits for multiplication and polynomial-size formulae for all the symmetric Boolean functions are obtained.
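As a concrete illustration of the carry save addition process (our own sketch, not code from the paper; the function names are illustrative), the following Python fragment implements the standard 3-to-2 basic carry save adder and applies it, level by level, to reduce n numbers to two, which are then added conventionally.

```python
def carry_save_add(a, b, c):
    """Basic 3-to-2 carry save adder: compress three numbers into a 'sum'
    word and a 'carry' word whose ordinary sum equals a + b + c.  Each
    output bit depends on only three input bits, so one such level has
    constant depth."""
    s = a ^ b ^ c                                  # bitwise sum, carries ignored
    carry = ((a & b) | (a & c) | (b & c)) << 1     # bitwise majority, shifted left
    return s, carry

def add_many(numbers):
    """Add n numbers by levels of carry save adders applied to disjoint
    triples; each level shrinks the list by roughly a factor of 2/3, so
    O(log n) levels suffice, and one conventional addition finishes the job."""
    nums = list(numbers)
    while len(nums) > 2:
        cut = len(nums) - len(nums) % 3
        nxt = []
        for i in range(0, cut, 3):                 # one parallel level of adders
            nxt.extend(carry_save_add(nums[i], nums[i + 1], nums[i + 2]))
        nxt.extend(nums[cut:])                     # leftover numbers pass through
        nums = nxt
    return sum(nums)                               # final ordinary addition

# Quick check against built-in addition.
assert add_many([13, 7, 22, 5, 9, 31]) == 13 + 7 + 22 + 5 + 9 + 31
```

Multiplication fits this setting because the n partial products can be summed in exactly this way, which is how logarithmic-depth multiplication circuits arise.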
Topical but classical results concerning the incidence relationship between prime clauses and implicants of a monotone Boolean function are derived by applying a general theory of computational equivalence and replaceability to distributive lattices. A non-standard combinatorial model for the free distributive lattice FDL(n) is described, and a correspondence between monotone Boolean functions and partitions of a standard Cayley diagram for the symmetric group is derived.
Preliminary research on classifying and characterising the simple paths and circuits that are the blocks of this partition is summarised. It is shown in particular that each path and circuit corresponds to a characteristic configuration of implicants and clauses. The motivation for the research and expected future directions are briefly outlined.
Introduction
Models of Boolean formulae expressed in terms of the incidence relationship between the prime implicants and clauses of a function were first discovered several years ago, but they have recently been independently rediscovered by several authors, and have attracted renewed interest. They have been used in proving lower bounds by Karchmer and Wigderson and subsequently by Razborov. More general investigations aimed at relating the complexity of functions to the model have also been carried out by Newman [20].
This paper demonstrates the close connection between these classical models for monotone Boolean formulae and circuits and a general theory of computational equivalence as it applies to FDL(n): the (finite) distributive lattice freely generated by n elements. It also describes how the incidence relationships between prime implicants and clauses associated with monotone Boolean functions can be viewed as built up from a characteristic class of incidence patterns between relatively small subsets of implicants and clauses.
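To make the objects concrete (this is our own brute-force illustration, not the lattice-theoretic machinery of the paper; the names maj3, prime_implicants and prime_clauses are ours), the following Python sketch computes the prime implicants and prime clauses of a small monotone function and checks the basic incidence property that every prime implicant meets every prime clause.

```python
from itertools import combinations

def prime_implicants(f, n):
    """Minimal sets of variables whose being set to 1 forces the monotone
    function f to 1 (all remaining variables are set to 0, which suffices
    by monotonicity).  Enumerating by size keeps only minimal sets."""
    found = []
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if any(set(p) <= set(s) for p in found):
                continue                    # contains a smaller implicant
            if f([1 if i in s else 0 for i in range(n)]):
                found.append(s)
    return found

def prime_clauses(f, n):
    """Minimal sets of variables whose being set to 0 forces f to 0."""
    found = []
    for k in range(n + 1):
        for t in combinations(range(n), k):
            if any(set(c) <= set(t) for c in found):
                continue                    # contains a smaller clause
            if not f([0 if i in t else 1 for i in range(n)]):
                found.append(t)
    return found

# Example: the majority (2-out-of-3 threshold) function.
maj3 = lambda x: sum(x) >= 2
imps = prime_implicants(maj3, 3)            # [(0, 1), (0, 2), (1, 2)]
cls = prime_clauses(maj3, 3)                # [(0, 1), (0, 2), (1, 2)]

# Incidence: every prime implicant intersects every prime clause.
assert all(set(p) & set(c) for p in imps for c in cls)
```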
We give a general complexity classification scheme for monotone computation, including monotone space-bounded and Turing machine models not previously considered. We propose monotone complexity classes including mAC^i, mNC^i, mLOGCFL, mBWBP, mL, mNL, mP, mBPP and mNP. We define a simple notion of monotone reducibility and exhibit complete problems. This provides a framework for stating existing results and asking new questions.
We show that mNL (monotone nondeterministic log-space) is not closed under complementation, in contrast to Immerman's and Szelepcsényi's nonmonotone result [Imm88, Sze87] that NL = co-NL; this is a simple extension of the monotone circuit depth lower bound of Karchmer and Wigderson [KW90] for st-connectivity.
We also consider mBWBP (monotone bounded width branching programs) and study the question of whether mBWBP is properly contained in mNC^1, motivated by Barrington's result [Bar89] that BWBP = NC^1. Although we cannot answer this question, we show two preliminary results: every monotone branching program for majority has size Ω(n^2) with no width restriction, and no monotone analogue of Barrington's gadget exists.
Introduction
A computation is monotone if it does not use the negation operation. Monotone circuits and formulas have been studied as restricted models of computation with the goal of developing techniques for the general problem of proving lower bounds.
In this paper we seek to unify the theory of monotone complexity along the lines of Babai, Frankl, and Simon who gave a framework for communication complexity theory. We propose a collection of monotone complexity models paralleling the familiar nonmonotone models. This provides a rich classification system for monotone functions including most monotone circuit classes previously considered, as well as monotone space-bounded complexity classes which have previously received little attention.
We survey some recent results on read-once Boolean functions. Among them are a characterization theorem, a generalization and a discussion of the randomized Boolean decision tree complexity of read-once functions. A previously unpublished result of Lovász and Newman is also presented.
Introduction
A Boolean formula is a rooted binary tree whose internal nodes are labeled by the Boolean operators ∨ or ∧ and in which each leaf is labeled by a Boolean variable or its negation. A Boolean formula computes a Boolean function in a natural way.
A Boolean formula is read-once if every variable appears exactly once. A function is read-once if it has a read-once formula.
Read-once functions have been studied by many authors, since they have the lowest possible formula size (for functions that depend on all their variables). In addition, every NC^1 function on n variables is a projection of a read-once function with a polynomial (in n) number of variables.
We present here some recent results in the area. All but one of those results have been published; hence full proofs will generally be omitted and will be given only for the unpublished result (Theorem 3.4). The results we will discuss cover a characterization theorem, some generalizations and results on the randomized decision tree complexity of read-once functions. There is also a recent result on learning read-once functions [AHK89], which will not be described here.
Definitions and Notations
If g : {0, 1}^n → {0, 1} has a formula in which no negated variable appears, we say that g is monotone. The size of a Boolean formula is the number of its leaves.
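To make these definitions concrete, here is a small Python sketch (our own illustration, not code from the survey; the representation and helper names are ours) that stores a Boolean formula as a binary tree with possibly negated variables at the leaves, evaluates it, computes its size as the number of leaves, and tests the read-once and syntactically monotone properties.

```python
from collections import Counter

# A leaf is ('var', name, negated); an internal node is (op, left, right)
# with op in {'and', 'or'}.  This representation is ours, for illustration.
def var(name, negated=False):
    return ('var', name, negated)

def leaves(formula):
    """Yield all leaves of the formula tree."""
    if formula[0] == 'var':
        yield formula
    else:
        _, left, right = formula
        yield from leaves(left)
        yield from leaves(right)

def size(formula):
    """The size of a formula is the number of its leaves."""
    return sum(1 for _ in leaves(formula))

def is_read_once(formula):
    """Read-once: every variable appears exactly once."""
    counts = Counter(name for _, name, _ in leaves(formula))
    return all(c == 1 for c in counts.values())

def is_syntactically_monotone(formula):
    """No negated variable appears in the formula."""
    return not any(neg for _, _, neg in leaves(formula))

def evaluate(formula, assignment):
    """Evaluate the formula under a dict mapping variable names to 0/1."""
    if formula[0] == 'var':
        _, name, neg = formula
        return 1 - assignment[name] if neg else assignment[name]
    op, left, right = formula
    lval, rval = evaluate(left, assignment), evaluate(right, assignment)
    return lval & rval if op == 'and' else lval | rval

# (x1 ∧ x2) ∨ (x3 ∧ ¬x4): read-once, of size 4, but not (syntactically)
# monotone, since x4 occurs negated.
f = ('or', ('and', var('x1'), var('x2')), ('and', var('x3'), var('x4', True)))
assert size(f) == 4 and is_read_once(f) and not is_syntactically_monotone(f)
assert evaluate(f, {'x1': 0, 'x2': 1, 'x3': 1, 'x4': 0}) == 1
```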
In the last decade substantial progress has been made in our understanding of restricted classes of Boolean circuits, in particular those restricted to have constant depth (Furst, Sipser, Saxe, Ajtai, Yao, Håstad, Razborov, Smolensky) or to be monotone (Razborov, Andreev, Alon and Boppana, Tardos, Karchmer and Wigderson). The question arises, perhaps more urgently than before, as to what approaches could be pursued that might contribute to progress on the unrestricted model.
In this note we first argue that if P ≠ NP then any circuit-theoretic proof of this would have to be preceded by analogous results for the more constrained arithmetic model. This is because, as we shall observe, there are proven implications showing that if, for example, the Hamiltonian cycle problem (HC) requires exponential circuit size, then so does the analogous problem on arithmetic circuits. Since the set of valid algebraic identities in the latter model forms a proper subset of those in the former, a lower bound proof for it should be strictly easier.
In spite of the above relationship the algebraic model is often regarded as an alternative to, rather than a restriction of, the Boolean model. One reason for this is that specific computations are usually understandable in one of these models, and not in both. In particular, the main power of the algebraic model derives from the possibility of cancellations, and it is usually difficult to express explicitly how these help in computing combinatorial problems.
In recent years several methods have been developed for obtaining superpolynomial lower bounds on the monotone formula and circuit size of explicitly given Boolean functions. Among these are the method of approximations, the combinatorial analysis of a communication problem related to monotone depth, and the use of matrices with very particular rank properties. It now seems almost certain that each of these methods would need considerable strengthening to yield nontrivial lower bounds for the size of circuits or formulae over a complete basis. So it seems interesting to try to understand, from a formal point of view, what kind of machinery we lack.
The first step in that direction was undertaken by the author in an earlier paper. In that paper two possible formalizations of the method of approximations were considered. The restrictive version forbids the method to use extra variables; this version was proven to be practically useless for circuits over a complete basis. If extra variables are allowed (the second formalization) then the method becomes universal, i.e. for any Boolean function f there exists an approximating model giving a lower bound for the circuit size of f which is tight up to a polynomial. The burden of proving lower bounds for the circuit size then shifts to estimating from below the minimal number of covering sets in a particular instance of “MINIMUM COVER”. One application of an analogous model appears in the work where the first nonlinear lower bound was proven for the complexity of MAJORITY with respect to switching-and-rectifier networks.
Let f be an arbitrary Boolean function depending on n variables and let A be a network computing it, i.e., A has n inputs and one output and, for an arbitrary Boolean vector a of length n, outputs f(a). Assume we have to compute simultaneously the values f(a_1), …, f(a_r) of f on r arbitrary Boolean vectors a_1, …, a_r. Then we can do this with r copies of A. But in most cases it can be done more efficiently (with a smaller complexity) by one network with nr inputs and r outputs (as already shown in Uhlig (1974)). In this paper we present a new and simple proof of this fact based on a new construction method. Furthermore, we show that the depth of our network is “almost” minimal.
Introduction
Let us consider (combinatorial) networks. Precise definitions are given in [Lu58, Lu65, Sa76, We87]. We assume that a complete set G of gates is given, i.e., every Boolean function can be computed (realized) by a network consisting of gates of G. For example, the set consisting of 2-input AND, 2-input OR and the NOT function is complete. A cost C(G_i) (a positive number) is associated with each of the gates G_i ∈ G. The complexity C(A) of a network A is the sum of the costs of its gates. The complexity C(f) of a Boolean function f is defined by C(f) = min C(A), where A ranges over all networks computing f.
By B_n we denote the set of Boolean functions {0, 1}^n → {0, 1}.
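As a small, self-contained illustration of these definitions (our own sketch; the unit gate costs and the example network are illustrative, and C(f) would be obtained by minimising C(A) over all networks A computing f), the following Python fragment represents a network over the complete basis {AND, OR, NOT}, evaluates it, and computes C(A) as the sum of its gate costs.

```python
# Illustrative unit costs for the complete basis {AND, OR, NOT}; any
# positive numbers would do.
COST = {'and': 1, 'or': 1, 'not': 1}

# A network is a topologically ordered list of gates.  Each gate is a pair
# (gate_type, input_indices): indices 0..n-1 refer to the network inputs,
# larger indices to earlier gates.  The last gate is the output.
def evaluate(network, inputs):
    values = list(inputs)
    for gate, args in network:
        if gate == 'not':
            values.append(1 - values[args[0]])
        elif gate == 'and':
            values.append(values[args[0]] & values[args[1]])
        else:  # 'or'
            values.append(values[args[0]] | values[args[1]])
    return values[-1]

def complexity(network):
    """C(A): the sum of the costs of the gates of the network A."""
    return sum(COST[gate] for gate, _ in network)

# A network computing XOR of two inputs as (x0 OR x1) AND NOT (x0 AND x1).
xor_net = [
    ('or',  (0, 1)),   # value 2
    ('and', (0, 1)),   # value 3
    ('not', (3,)),     # value 4
    ('and', (2, 4)),   # value 5, the output
]
assert all(evaluate(xor_net, (a, b)) == a ^ b for a in (0, 1) for b in (0, 1))
assert complexity(xor_net) == 4    # C(A) for this particular network
```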