In our discussion of networks thus far, we have generally viewed the relationships contained in these networks as having positive connotations – links have typically indicated such things as friendship, collaboration, sharing of information, or membership in a group. The terminology of online social networks reflects a largely similar view, through its emphasis on the connections one forms with friends, fans, followers, and so forth. But in most network settings, there are also negative effects at work. Some relations are friendly, but others are antagonistic or hostile; interactions between people or groups are regularly beset by controversy, disagreement, and sometimes outright conflict. How should we reason about the mix of positive and negative relationships that take place within a network?
Here we describe a rich part of social network theory that involves taking a network and annotating its links (i.e., its edges) with positive and negative signs. Positive links represent friendship while negative links represent antagonism, and an important problem in the study of social networks is to understand the tension between these two forces. The notion of structural balance that we discuss in this chapter is one of the basic frameworks for doing this.
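The balance condition can be checked one triangle at a time: a triangle is balanced when the product of its three edge signs is positive (all three relations friendly, or exactly one friendly). A minimal sketch, using a hypothetical four-person signed network whose names and signs are invented for illustration:

```python
from itertools import combinations

# Signed network: +1 = friendly link, -1 = antagonistic link.
# The four people and their edge signs are purely illustrative.
signs = {
    ("A", "B"): +1, ("A", "C"): +1, ("B", "C"): -1,
    ("A", "D"): -1, ("B", "D"): -1, ("C", "D"): +1,
}

def sign(u, v):
    # Edges are undirected, so look the pair up in either order.
    return signs.get((u, v), signs.get((v, u)))

def is_balanced(u, v, w):
    # Balanced iff the product of the three edge signs is +1.
    return sign(u, v) * sign(v, w) * sign(u, w) == +1

for tri in combinations(["A", "B", "C", "D"], 3):
    print(tri, "balanced" if is_balanced(*tri) else "unbalanced")
```

Scanning all triangles this way is exactly the kind of local check whose global consequences structural balance theory describes.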
In addition to introducing some of the basics of structural balance, our discussion in this chapter serves a second, methodological purpose: it illustrates a nice connection between local and global network properties. A recurring issue in the analysis of networked systems is the way in which local effects – phenomena involving only a few nodes at a time – can have global consequences that are observable at the level of the network as a whole.
We say that α ∈ [0, 1) is a jump for an integer r ≥ 2 if there exists c(α) > 0 such that for all ϵ > 0 and all t ≥ 1, any r-graph with n ≥ n0(α, ϵ, t) vertices and density at least α + ϵ contains a subgraph on t vertices of density at least α + c(α).
The Erdős–Stone–Simonovits theorem [4, 5] implies that for r = 2, every α ∈ [0, 1) is a jump. Erdős [3] showed that for all r ≥ 3, every α ∈ [0, r!/r^r) is a jump. Moreover he made his famous ‘jumping constant conjecture’, that for all r ≥ 3, every α ∈ [0, 1) is a jump. Frankl and Rödl [7] disproved this conjecture by giving a sequence of non-jump values for all r ≥ 3.
We use Razborov's flag algebra method [9] to show that jumps exist for r = 3 in the interval [2/9, 1). These are the first examples of jumps for any r ≥ 3 in the interval [r!/r^r, 1). To be precise, we show that for r = 3 every α ∈ [0.2299, 0.2316) is a jump.
We also give an improved upper bound for the Turán density of K4^− = {123, 124, 134}: π(K4^−) ≤ 0.2871. This in turn implies that for r = 3 every α ∈ [0.2871, 8/27) is a jump.
In this paper we consider the influences of variables on Boolean functions in general product spaces. Unlike the case of functions on the discrete cube, where there is a clear definition of influence, in the general case several definitions have been presented in different papers. We propose a family of definitions for the influence that contains all the known definitions, as well as other natural definitions, as special cases. We show that the proofs of the BKKKL theorem and of other results can be adapted to our new definition. The adaptation leads to generalizations of these theorems, which are tight in terms of the definition of influence used in the assertion.
The interplay between algebra and geometry is a beautiful (and fun!) area of mathematical investigation. Advances in computing and algorithms make it possible to tackle many classical problems in a down-to-earth and concrete fashion. This opens wonderful new vistas and allows us to pose, study and solve problems that were previously out of reach. Suitable for graduate students, this 2003 book aims to bring advanced algebra to life with lots of examples. The first chapters provide an introduction to commutative algebra and connections to geometry. The rest of the book focuses on three active areas of contemporary algebra: Homological Algebra (the snake lemma, long exact sequence in homology, functors and derived functors (Tor and Ext), and double complexes); Algebraic Combinatorics and Algebraic Topology (simplicial complexes and simplicial homology, Stanley–Reisner rings, the upper bound theorem and polytopes); and Algebraic Geometry (points and curves in projective space, Riemann–Roch, Čech cohomology, regularity).
This book treats bounded arithmetic and propositional proof complexity from the point of view of computational complexity. The first seven chapters include the necessary logical background for the material and are suitable for a graduate course. Associated with each of many complexity classes are both a two-sorted predicate calculus theory, with induction restricted to concepts in the class, and a propositional proof system. The complexity classes range from AC0 for the weakest theory up to the polynomial hierarchy. Each bounded theorem in a theory translates into a family of (quantified) propositional tautologies with polynomial size proofs in the corresponding proof system. The theory proves the soundness of the associated proof system. The result is a uniform treatment of many systems in the literature, including Buss's theories for the polynomial hierarchy and many disparate systems for complexity classes such as AC0, AC0(m), TC0, NC1, L, NL, NC, and P.
The asymptotics of 2-colour Ramsey numbers of loose and tight cycles in 3-uniform hypergraphs were recently determined [16, 17]. We address the same problem for Berge cycles and for 3 colours. Our main result is that the 3-colour Ramsey number of a 3-uniform Berge cycle of length n is asymptotic to 5n/4. The result is proved with the Regularity Lemma via the existence of a monochromatic connected matching covering asymptotically 4n/5 vertices in the multicoloured 2-shadow graph induced by the colouring of K_n^(3).
Let I be an independent set drawn from the discrete d-dimensional hypercube Qd = {0, 1}^d according to the hard-core distribution with parameter λ > 0 (that is, the distribution in which each independent set I is chosen with probability proportional to λ^|I|). We show a sharp transition around λ = 1 in the appearance of I: for λ > 1, min{|I ∩ Ɛ|, |I ∩ 𝒪|} = 0 asymptotically almost surely, where Ɛ and 𝒪 are the bipartition classes of Qd, whereas for λ < 1, min{|I ∩ Ɛ|, |I ∩ 𝒪|} is asymptotically almost surely exponential in d. The transition occurs in an interval whose length is of order 1/d.
A key step in the proof is an estimation of Zλ(Qd), the sum over independent sets in Qd with each set I given weight λ^|I| (a.k.a. the hard-core partition function). We obtain the asymptotics of Zλ(Qd) for one range of λ, and nearly matching upper and lower bounds for the remaining values, extending work of Korshunov and Sapozhenko. These bounds allow us to read off some very specific information about the structure of an independent set drawn according to the hard-core distribution.
We also derive a long-range influence result. For all fixed λ > 0, if I is chosen from the independent sets of Qd according to the hard-core distribution with parameter λ, conditioned on a particular v ∈ Ɛ being in I, then the probability that another vertex w is in I is o(1) for w ∈ 𝒪 but Ω(1) for w ∈ Ɛ.
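For very small d the hard-core partition function in this abstract can be computed directly from its definition, which makes the object concrete. A brute-force sketch (purely illustrative, unrelated to the paper's asymptotic techniques):

```python
from itertools import product

def hypercube_edges(d):
    # Vertices of Q_d are 0/1-tuples of length d; edges join tuples
    # differing in exactly one coordinate.
    verts = list(product((0, 1), repeat=d))
    edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return verts, edges

def hard_core_Z(d, lam):
    # Z_lambda(Q_d): sum of lam**|I| over all independent sets I.
    # Enumerates all vertex subsets, so only feasible for small d.
    verts, edges = hypercube_edges(d)
    Z = 0.0
    for bits in product((0, 1), repeat=len(verts)):
        I = {v for v, b in zip(verts, bits) if b}
        if all(not (u in I and v in I) for u, v in edges):
            Z += lam ** len(I)
    return Z

# Q_2 is a 4-cycle with 7 independent sets, so Z_1(Q_2) = 7.
print(hard_core_Z(2, 1.0))  # → 7.0
```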
In a previous paper, we described the construction of an automaton from a rational expression which has the property that the automaton built from an expression which is itself computed from a co-deterministic automaton by the state elimination method is co-deterministic. It turned out that the definition on which the construction is based was inappropriate, and thus the proof of the property was flawed. We give here the correct definition of the broken derived terms of an expression, which allows us to define the automaton, and the detailed full proof of the property.
A word u defined over an alphabet $\mathcal A$ is c-balanced (c ∈ $\mathbb N$) if for all pairs of factors v, w of u of the same length and for all letters a ∈ $\mathcal A$, the difference between the number of letters a in v and w is at most c. In this paper we consider a ternary alphabet $\mathcal A$ = {L, S, M} and a class of substitutions $\varphi_p$ defined by $\varphi_p(L) = L^pS$, $\varphi_p(S) = M$, $\varphi_p(M) = L^{p-1}S$, where p > 1. We prove that the fixed point of $\varphi_p$, formally written as $\varphi_p^\infty(L)$, is 3-balanced and that its Abelian complexity is bounded above by the value 7, regardless of the value of p. We also show that both these bounds are optimal, i.e. they cannot be improved.
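The fixed point and its Abelian complexity can be explored experimentally on a finite prefix. A sketch assuming the substitution rules stated above (on a finite prefix only a subset of the factors is visible, so the observed complexity can only fall below the paper's bound of 7):

```python
def phi(word, p):
    # The substitution phi_p over {L, S, M}: L -> L^p S, S -> M, M -> L^(p-1) S.
    rules = {"L": "L" * p + "S", "S": "M", "M": "L" * (p - 1) + "S"}
    return "".join(rules[c] for c in word)

def abelian_complexity(word, n):
    # Number of distinct Parikh vectors (letter-count triples) among
    # the length-n factors occurring in this finite word.
    vecs = {tuple(word[i:i + n].count(c) for c in "LSM")
            for i in range(len(word) - n + 1)}
    return len(vecs)

p = 2
prefix = "L"
for _ in range(10):
    # Since phi_p(L) begins with L, each iterate is a prefix of the next,
    # so this approximates the fixed point phi_p^infinity(L).
    prefix = phi(prefix, p)

print(max(abelian_complexity(prefix, n) for n in range(1, 30)))
```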
Analyzing genomic data for finding those gene variations which are responsible for hereditary diseases is one of the great challenges in modern bioinformatics. In many living beings (including humans), every gene is present in two copies, inherited from the two parents, the so-called haplotypes. In this paper, we propose a simple combinatorial model for classifying the set of haplotypes in a population according to their responsibility for a certain genetic disease. This model is based on the minimum-ones 2SAT problem with uniform clauses. The minimum-ones 2SAT problem asks for a satisfying assignment to a satisfiable formula in 2CNF which sets a minimum number of variables to true. This problem is well known to be $\mathcal{NP}$-hard, even in the case where all clauses are uniform, i.e., do not contain both a positive and a negative literal. We analyze the approximability and present the first non-trivial exact algorithm for the uniform minimum-ones 2SAT problem, with a running time of $\mathcal{O}(1.21061^n)$ on a 2SAT formula with n variables. We also show that the problem is fixed-parameter tractable by showing that our algorithm can be adapted to verify in $\mathcal{O}^*(2^k)$ time whether an assignment with at most k true variables exists.
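To make the problem statement concrete, here is a naive $\mathcal{O}(2^n)$ brute-force baseline (the exponential search the paper's algorithm improves on); the tiny formula at the end is an invented example, not from the paper:

```python
from itertools import product

def min_ones_2sat(num_vars, clauses):
    # Each clause is a pair of literals: +i means x_i, -i means NOT x_i
    # (variables numbered from 1). Returns the minimum number of variables
    # set to true over all satisfying assignments, or None if unsatisfiable.
    best = None
    for bits in product((False, True), repeat=num_vars):
        def val(lit):
            return bits[abs(lit) - 1] if lit > 0 else not bits[abs(lit) - 1]
        if all(val(a) or val(b) for a, b in clauses):
            ones = sum(bits)
            best = ones if best is None else min(best, ones)
    return best

# Uniform clauses only: no clause mixes a positive and a negative literal.
clauses = [(1, 2), (2, 3), (-1, -3)]
print(min_ones_2sat(3, clauses))  # → 1 (set only x_2 to true)
```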
A modified version of the classical µ-operator, as well as the first value operator and the operator of inverting unary functions, applied in combination with the composition of functions and starting from the primitive recursive functions, generate all arithmetically representable functions. Moreover, the nesting levels of these operators are closely related to the stratification of the arithmetical hierarchy. The same is shown for some further function operators known from computability and complexity theory. The close relationships between nesting levels of operators and the stratification of the hierarchy also hold for suitable restrictions of the operators with respect to the polynomial hierarchy if one starts with the polynomial-time computable functions. It follows that questions around P vs. NP and NP vs. coNP can equivalently be expressed by closure properties of function classes under these operators. The polytime version of the first value operator can be used to establish hierarchies between certain consecutive levels within the polynomial hierarchy of functions, which are related to generalizations of the Boolean hierarchies over the classes $\Sigma^p_{k}$.
We introduce two graph polynomials and discuss their properties. One is a polynomial of two variables whose investigation is motivated by the performance analysis of the Bethe approximation of the Ising partition function. The other is a polynomial of one variable that is obtained by the specialization of the first one. It is shown that these polynomials satisfy deletion–contraction relations and are new examples of the V-function, which was introduced by Tutte (Proc. Cambridge Philos. Soc. 43 (1947), p. 26). For these polynomials, we discuss the interpretations of special values and then obtain a bound on the number of sub-coregraphs, i.e., spanning subgraphs with no vertices of degree one. It is proved that the polynomial of one variable is equal to the monomer–dimer partition function with weights parametrized by that variable. The properties of the coefficients and the possible region of zeros are also discussed for this polynomial.
There has recently been much interest in “artificial neural networks,” machines (or models of computation) based loosely on the ways in which the brain is believed to work. Neurobiologists are interested in using these machines as a means of modeling biological brains, but much of the impetus comes from their applications. For example, engineers wish to create machines that can perform “cognitive” tasks, such as speech recognition, and economists are interested in financial time series prediction using such machines.
In this chapter we focus on individual “artificial neurons” and feed-forward artificial neural networks. We are particularly interested in cases where the neurons are linear threshold neurons, sigmoid neurons, polynomial threshold neurons, and spiking neurons. We investigate the relationships between types of artificial neural network and classes of Boolean function. In particular, we ask questions about the type of Boolean functions a given type of network can compute, and about how extensive or expressive the set of functions so computable is.
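A minimal sketch of the simplest case discussed above, the linear threshold neuron, with invented weight settings: one such neuron computes AND, OR, and majority, but it is a classical fact that no single threshold neuron computes XOR, since XOR is not linearly separable.

```python
def threshold_neuron(weights, theta):
    # A linear threshold neuron: outputs 1 iff the weighted sum of its
    # Boolean inputs reaches the threshold theta.
    def fire(inputs):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)
    return fire

# Hypothetical weight/threshold choices realizing some Boolean functions.
AND = threshold_neuron([1, 1], 2)
OR = threshold_neuron([1, 1], 1)
MAJ3 = threshold_neuron([1, 1, 1], 2)

print([AND((a, b)) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
```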
Artificial Neural Networks
Introduction
It appears that one reason why the human brain is so powerful is the sheer complexity of connections between neurons. In computer science parlance, the brain exhibits huge parallelism, with each neuron connected to many other neurons. This has been reflected in the design of artificial neural networks. One advantage of such parallelism is that the resulting network is robust: in a serial computer, a single fault can make computation impossible, whereas in a system with a high degree of parallelism and many computation paths, a small number of faults may be tolerated with little or no upset to the computation.
A fundamental objective of cryptography is to enable two persons to communicate over an insecure channel (a public channel such as the internet) in such a way that any other person is unable to recover their message (called the plaintext) from what is sent in its place over the channel (the ciphertext). The transformation of the plaintext into the ciphertext is called encryption, or enciphering. Encryption-decryption is the most ancient cryptographic activity (ciphers already existed four centuries B.C.), but its nature has deeply changed with the invention of computers, because cryptanalysis (the activity of the third person, the eavesdropper, who aims at recovering the message) can now exploit their computational power.
The encryption algorithm takes as input the plaintext and an encryption key KE, and it outputs the ciphertext. If the encryption key is secret, then we speak of conventional cryptography, of private key cryptography, or of symmetric cryptography. In practice, the principle of conventional cryptography relies on the sharing of a private key between the sender of a message (often called Alice in cryptography) and its receiver (often called Bob). If, on the contrary, the encryption key is public, then we speak of public key cryptography. Public key cryptography appeared in the literature in the late 1970s.
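The symmetric setting described above can be sketched with a toy one-time-pad-style cipher: the shared secret key plays the role of KE, and the same operation encrypts and decrypts. This is an illustration of the key-sharing principle only, not a secure scheme (in particular, a key must never be reused this way).

```python
import os

def xor_bytes(data, key):
    # XOR each plaintext byte with the corresponding key byte; applying
    # the same operation again with the same key recovers the input.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at noon"
key = os.urandom(len(plaintext))   # the secret Alice and Bob share

ciphertext = xor_bytes(plaintext, key)   # Alice encrypts
recovered = xor_bytes(ciphertext, key)   # Bob decrypts with the same key
assert recovered == plaintext
```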