The branching process model was introduced by Sir Francis Galton in 1873 to represent the genealogical descent of individuals. More generally, it provides a versatile model for the growth of a population of reproducing individuals in the absence of external limiting factors. It is a natural starting point for the study of epidemics since, as we shall see in Chapter 2, it accurately describes the early stages of an epidemic outbreak. In addition, our treatment of so-called dual branching processes paves the way for the analysis of the supercritical phase in Chapter 2. Finally, the present chapter gives an opportunity to introduce large deviations inequalities (notably the celebrated Chernoff bound), which are instrumental throughout the book.
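As a taste of what the Chernoff bound delivers, the sketch below (an illustration with arbitrarily chosen parameters, not taken from the text) optimises the exponential-moment bound for a binomial upper tail, which gives the relative-entropy form exp(-n KL(a/n || p)), and compares it with the exact tail probability:

```python
import math

def chernoff_upper_tail(n, p, a):
    """Chernoff bound on P(X >= a) for X ~ Binomial(n, p), valid for a/n > p.
    Optimising exp(-theta*a) * E[exp(theta*X)] over theta > 0 gives the
    closed form exp(-n * KL(a/n || p)), KL being the relative entropy."""
    q = a / n
    kl = q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))
    return math.exp(-n * kl)

def exact_upper_tail(n, p, a):
    """Exact binomial tail P(X >= a), for comparison."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(a, n + 1))

bound = chernoff_upper_tail(100, 0.5, 70)  # decays exponentially in n
exact = exact_upper_tail(100, 0.5, 70)     # always at most the bound
```

The exact tail never exceeds the bound, and both decay exponentially as the deviation a/n - p is held fixed while n grows.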
A Galton–Watson branching process can be represented by a tree in which each node represents an individual, and is linked to its parent as well as its children. The “root” of the tree corresponds to the “ancestor” of the whole population. An example of such a tree is depicted in Figure 1.1.
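A Galton–Watson process is straightforward to simulate. The sketch below (an illustration, with the offspring distribution supplied by the caller as an arbitrary zero-argument callable) records the successive generation sizes:

```python
def galton_watson(offspring, generations):
    """Simulate generation sizes of a Galton-Watson branching process.

    offspring: zero-argument callable returning one individual's
    (possibly random) number of children.
    """
    sizes = [1]  # generation 0: the single ancestor
    for _ in range(generations):
        # Each individual of the current generation reproduces independently.
        next_size = sum(offspring() for _ in range(sizes[-1]))
        sizes.append(next_size)
        if next_size == 0:  # extinction: all later generations are empty
            break
    return sizes

import random
random.seed(0)
# Offspring = number of heads in 3 coin flips (mean 1.5, supercritical).
sizes = galton_watson(lambda: sum(random.random() < 0.5 for _ in range(3)), 10)
```

With a deterministic offspring law the generation sizes are exact: two children per individual yields 1, 2, 4, 8, ...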
In the following we consider three distinct ways of exploring the so-called Galton–Watson tree, each well suited to establishing specific properties.
In the depth-first view, we start by exploring one child of the first generation, then recursively explore, by the same method, the subtree of its descendants, before moving on to the next child of the first generation.
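This depth-first exploration can be sketched in code. The stack-based version below (an illustration, with node labels and the offspring law chosen for convenience) draws each node's children lazily, at the moment the node is first visited, so the tree is generated and explored in a single pass:

```python
import random

def dfs_explore(offspring, max_nodes=1000):
    """Depth-first exploration of a Galton-Watson tree.

    Nodes are labelled by their path from the root: (0, 1) is the second
    child of the root's first child. A node's children are drawn exactly
    when the node is visited.
    """
    stack = [()]   # the root, labelled by the empty path
    visited = []
    while stack and len(visited) < max_nodes:
        node = stack.pop()
        visited.append(node)
        k = offspring()  # number of children of this node
        # Push children in reverse so the first child (and its whole
        # subtree) is explored before its siblings.
        for i in reversed(range(k)):
            stack.append(node + (i,))
    return visited

random.seed(1)
order = dfs_explore(lambda: random.randrange(3))  # offspring uniform on {0,1,2}
```

With a constant offspring of two, the exploration runs down the leftmost branch first, as the depth-first rule prescribes.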
In 1967, the sociologist Stanley Milgram published the results of a letter-relaying experiment of his own design. The now-famous experiment required a source individual to forward a letter to a destination individual, about whom information such as address, name and profession was disclosed. However, each source individual was forbidden to post the letter directly to the target person. Instead, she was required to forward the letter to someone she knew on a first-name basis, who in turn was allowed to forward it only to such familiar contacts.
The outcome was that a significant fraction of the letters reached their destinations. Moreover, they did so in at most six hops, justifying the term “six degrees of separation”. This observation is also often referred to as the “small-world phenomenon”. The problem, though formulated in the social sciences, is highly relevant in many other settings, such as routing with limited information in communication networks and browsing behaviour on the World Wide Web.
Viewing the social world as a graph with edges between acquainted persons, we see that if any individual can relay information to any other in a small number of hops, as in Milgram's experiment, the corresponding graph must have a small diameter. As we saw in Chapter 4, the E-R graph does have a small diameter (logarithmic in the number of nodes).
In Chapter 2 we saw that, when the average degree np of an Erdős–Rényi graph is of constant order λ > 1, the graph contains a giant component of size of order n with high probability. However, in that regime, this component's size is strictly less than n, so that the graph is disconnected.
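This phenomenon is easy to observe numerically. The sketch below (an illustration, using union-find in place of a proper component search, with arbitrary parameter choices) samples G(n, λ/n) and compares the largest component's share of vertices with the solution ρ of ρ = 1 − e^(−λρ), the standard prediction for the giant component's relative size:

```python
import math
import random

def largest_component_fraction(n, lam, seed=0):
    """Sample an Erdos-Renyi graph G(n, p) with p = lam/n and return the
    fraction of vertices lying in its largest connected component."""
    rng = random.Random(seed)
    p = lam / n
    parent = list(range(n))  # union-find over the n vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                parent[find(u)] = find(v)  # merge the two components
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Fixed point rho = 1 - exp(-lam * rho) predicts the giant fraction.
lam = 2.0
rho = 0.5
for _ in range(200):
    rho = 1 - math.exp(-lam * rho)
frac = largest_component_fraction(1000, lam)  # should lie near rho
```

For λ below 1 the same function returns a vanishing fraction, reflecting the absence of a giant component in the subcritical regime.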
In the present chapter we shall establish that connectivity appears when the product np is of order log n. We shall more precisely evaluate the probability of connectivity when np is asymptotic to log n + c for some constant c. In the framework of the Reed–Frost epidemic this corresponds to a regime known as atomic infection wherein all nodes are ultimately infected. In this regime we can analyse the time, in terms of the number of rounds, it takes the epidemic or the rumour to reach the whole population. This will be illustrated in Chapter 4 for the Reed–Frost epidemic and revisited in Chapter 6 when we introduce the small-world phenomenon.
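As a quick numerical companion to this regime (a sketch, not part of the chapter's analysis), one can estimate by Monte Carlo the probability that G(n, p) is connected at p = (log n + c)/n; the classical limit of this probability as n grows is exp(−exp(−c)):

```python
import math
import random

def is_connected(n, p, rng):
    """Check connectivity of one sample of G(n, p) by graph search."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def connectivity_probability(n, c, trials=200, seed=0):
    """Monte Carlo estimate of P(G(n, p) connected) at p = (log n + c)/n."""
    rng = random.Random(seed)
    p = (math.log(n) + c) / n
    return sum(is_connected(n, p, rng) for _ in range(trials)) / trials

estimate = connectivity_probability(100, 1.0)
target = math.exp(-math.exp(-1.0))  # the classical limit for c = 1
```

For moderate n the estimate should already lie in the vicinity of the limit, and moving c up or down sweeps the probability towards 1 or 0.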
The main mathematical tool required to establish connectivity in this regime is a Poisson approximation technique known as the Stein–Chen method, which provides bounds on how accurately a sum of {0, 1}-valued (Bernoulli) random variables can be approximated by a Poisson distribution.
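For independent Bernoulli(p_i) summands, one classical bound in this family (the Barbour–Hall form; the chapter's own statement may differ) is d_TV(W, Poisson(λ)) ≤ min(1, 1/λ) Σ p_i² with λ = Σ p_i. The sketch below checks it numerically on an arbitrary example by computing the exact law of W by convolution:

```python
import math

def tv_to_poisson(ps):
    """Total variation distance between the law of W = sum of independent
    Bernoulli(p_i) variables and the Poisson law with the same mean."""
    # Exact distribution of W, convolving one Bernoulli at a time.
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        dist = new
    lam = sum(ps)
    poisson = [math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(len(dist))]
    # d_TV = (1/2) sum_k |P(W = k) - Poisson(lam)({k})|; beyond len(dist)
    # only the Poisson mass contributes, since P(W = k) = 0 there.
    tv = 0.5 * sum(abs(a - b) for a, b in zip(dist, poisson))
    tv += 0.5 * (1 - sum(poisson))
    return tv

ps = [0.05] * 20  # 20 independent Bernoulli(0.05) variables, mean 1
stein_chen_bound = min(1, 1 / sum(ps)) * sum(p * p for p in ps)
distance = tv_to_poisson(ps)  # never exceeds the bound
```

Shrinking the individual p_i while keeping λ fixed drives the bound, and hence the distance, to zero, which is the essence of Poisson approximation.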
This book gives applications of the theory of process algebra, or Algebra of Communicating Processes (ACP): the study of concurrent and communicating processes in an algebraic framework. The approach is axiomatic; the authors consider structures satisfying sets of (mostly equational) axioms and equipped with several operators, so the term 'algebra' is used in the model-theoretic sense. The axiomatic approach enables one to organize the field of process theories. The theory is applied systematically to a number of situations, including systolic algorithms, the semantics of an object-oriented language, and protocols. It will be welcomed by computer scientists working in parallel programming.
The idea of mimicking the propagation of biological epidemics to achieve diffusion of useful information was first proposed in the late 1980s, the decade that also saw the appearance of computer viruses. Back then, these viruses propagated by copies on floppy disks and caused much less harm than their contemporary versions. But it was already noticed that they evolved and survived much as biological viruses do, a fact that prompted the idea of putting these features to good (rather than evil) use. The first application to be considered was synchronisation of distributed databases.
Interest in this paradigm received new impetus with the advent of peer-to-peer systems, online social systems and wireless mobile ad hoc networks in the early 2000s. All these scenarios feature a complex network with potentially evolving connections. In such large-scale dynamic environments, epidemic diffusion of information is especially appealing: it is decentralised, and it relies on randomised decisions which can prove as efficient as carefully made decisions. Detailed accounts of epidemic algorithms can be found in papers by Birman et al. and Eugster et al. Their applications are manifold. They can be used to perform distributed computation of global statistics in a spatially extended environment (e.g. mean temperature seen by a collection of sensors), to perform real-time delivery of video data streams (e.g. to users receiving live TV via peer-to-peer systems over the internet) and to propagate updates of dynamic content (e.g. to mobile phone users whose phone operating system requires patching against vulnerabilities).
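The first of these applications, distributed computation of a global statistic, admits a very compact sketch. In the illustration below (values, node count and round budget are arbitrary choices), two randomly chosen nodes repeatedly replace both of their readings by the pair's average, so every node converges to the global mean with no central coordinator:

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Pairwise gossip averaging: at each step two random nodes replace
    both of their values by the pair's average. The sum (hence the mean)
    is invariant, while the spread of values contracts to zero."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        m = (vals[i] + vals[j]) / 2
        vals[i] = vals[j] = m
    return vals

# Illustrative sensor temperatures; after gossiping, all nodes agree on
# a value close to the global mean.
readings = [18.0, 21.5, 19.2, 23.1, 20.7]
agreed = gossip_average(readings, rounds=5000)
```

The design relies only on randomised local exchanges, which is precisely what makes the epidemic paradigm attractive in decentralised settings.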
On page 551 the first author name should be J. A. De Loera (NOT J. A Loera)
The correct citation for this paper is
J. A. De Loera, J. Lee, S. Margulies and S. Onn (2009) Expressing Combinatorial Problems by Systems of Polynomial Equations and Hilbert's Nullstellensatz. Combinatorics, Probability and Computing 18 (4) July, 551–582. doi:10.1017/S0963548309009894. Published online by Cambridge University Press, 28 April 2009.
This book presents an up-to-date, unified treatment of research in bounded arithmetic and complexity of propositional logic, with emphasis on independence proofs and lower bound proofs. The author discusses the deep connections between logic and complexity theory and lists a number of intriguing open problems. An introduction to the basics of logic and complexity theory is followed by discussion of important results in propositional proof systems and systems of bounded arithmetic. More advanced topics are then treated, including polynomial simulations and conservativity results, various witnessing theorems, the translation of bounded formulas (and their proofs) into propositional ones, the method of random partial restrictions and its applications, direct independence proofs, complete systems of partial relations, lower bounds to the size of constant-depth propositional proofs, the method of Boolean valuations, the issue of hard tautologies and optimal proof systems, combinatorics and complexity theory within bounded arithmetic, and relations to complexity issues of predicate calculus. Students and researchers in mathematical logic and complexity theory will find this comprehensive treatment an excellent guide to this expanding interdisciplinary area.
Type theory is one of the most important tools in the design of higher-level programming languages, such as ML. This book introduces and teaches its techniques by focusing on one particularly neat system and studying it in detail. In this way, all the key ideas are covered without getting involved in the complications of more advanced systems, concentrating instead on the principles that make the theory work in practice. The book takes a type-assignment approach to type theory, and the system considered is the simplest polymorphic one. The author covers all the basic ideas, including the system's relation to propositional logic, and gives a careful treatment of the type-checking algorithm that lies at the heart of every such system. Also featured are two other interesting algorithms that have been buried in inaccessible technical literature. The mathematical presentation is rigorous but clear, making the book suitable as an introduction to type theory for computer scientists.
Chaitin, the inventor of algorithmic information theory, presents in this book the strongest possible version of Gödel's incompleteness theorem, using an information theoretic approach based on the size of computer programs. One half of the book is concerned with studying the halting probability of a universal computer if its program is chosen by tossing a coin. The other half is concerned with encoding the halting probability as an algebraic equation in integers, a so-called exponential diophantine equation.
We introduce the concept of a relative Tutte polynomial of coloured graphs. We show that this relative Tutte polynomial can be computed in a way similar to the classical spanning tree expansion used by Tutte in his original paper on this subject. We then apply the relative Tutte polynomial to virtual knot theory. More specifically, we show that the Kauffman bracket polynomial (and hence the Jones polynomial) of a virtual knot can be computed from the relative Tutte polynomial of its face (Tait) graph with some suitable variable substitutions. Our method offers an alternative to the ribbon graph approach, using the face graph obtained from the virtual link diagram directly.
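For orientation, the classical deletion-contraction recursion for the ordinary (uncoloured) Tutte polynomial, which the relative polynomial of the paper generalises, can be sketched as follows (a plain illustration, not the paper's algorithm):

```python
def contract(vertices, edges, u, v):
    """Merge vertex v into u; edges parallel to (u, v) become loops."""
    verts = [w for w in vertices if w != v]
    new_edges = [(u if a == v else a, u if b == v else b) for a, b in edges]
    return verts, new_edges

def is_bridge(vertices, edges, e):
    """Is edge e a bridge, i.e. does removing it disconnect its endpoints?"""
    rest = list(edges)
    rest.remove(e)
    adj = {w: [] for w in vertices}
    for a, b in rest:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {e[0]}, [e[0]]
    while stack:
        a = stack.pop()
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return e[1] not in seen

def tutte(vertices, edges):
    """Tutte polynomial by deletion-contraction, returned as a dict
    {(i, j): coeff} standing for the sum of coeff * x^i * y^j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:  # loop: T(G) = y * T(G - e)
        return {(i, j + 1): c for (i, j), c in tutte(vertices, rest).items()}
    if is_bridge(vertices, edges, (u, v)):  # bridge: T(G) = x * T(G / e)
        sub = tutte(*contract(vertices, rest, u, v))
        return {(i + 1, j): c for (i, j), c in sub.items()}
    # ordinary edge: T(G) = T(G - e) + T(G / e)
    result = dict(tutte(vertices, rest))
    for key, c in tutte(*contract(vertices, rest, u, v)).items():
        result[key] = result.get(key, 0) + c
    return result

triangle = tutte([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
# x^2 + x + y; evaluating at x = y = 1 counts the 3 spanning trees
```

The spanning tree expansion mentioned in the abstract is an equivalent formulation of this same polynomial, summing over spanning trees graded by internal and external activities.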
Let H be some fixed graph. We call a graph G vicarious for H if G is maximal H-free and, for every edge e of G, there is an edge f not in G such that G − e + f is also H-free. We demonstrate various properties of vicarious graphs and give several examples. It is conjectured that a graph of order n which is vicarious for K3 has size at most (1/4 + o(1))(n choose 2).
We show that the number of independent sets in an N-vertex, d-regular graph is at most (2^(d+1) − 1)^(N/(2d)), where the bound is sharp for a disjoint union of complete d-regular bipartite graphs. This settles a conjecture of Alon in 1991 and Kahn in 2001. Kahn proved the bound when the graph is assumed to be bipartite. We give a short proof that reduces the general case to the bipartite case. Our method also works for a weighted generalization, i.e., an upper bound for the independence polynomial of a regular graph.
In this paper we study the use of spectral techniques for graph partitioning. Let G = (V, E) be a graph whose vertex set has a ‘latent’ partition V1,. . ., Vk. Moreover, consider a ‘density matrix’ Ɛ = (Ɛvw)v,w∈V such that, for v ∈ Vi and w ∈ Vj, the entry Ɛvw is the fraction of all possible Vi−Vj-edges that are actually present in G. We show that on input (G, k) the partition V1,. . ., Vk can (very nearly) be recovered in polynomial time via spectral methods, provided that the following holds: Ɛ approximates the adjacency matrix of G in the operator norm, for vertices v ∈ Vi, w ∈ Vj ≠ Vi the corresponding column vectors Ɛv, Ɛw are separated, and G is sufficiently ‘regular’ with respect to the matrix Ɛ. This result in particular applies to sparse graphs with bounded average degree as n = #V → ∞, and it has various consequences on partitioning random graphs.
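A toy version of this kind of recovery can be sketched as follows (the planted-partition generator, the parameter values and the use of the adjacency matrix's second eigenvector are illustrative simplifications, not the paper's algorithm):

```python
import numpy as np

def planted_partition(n, p_in, p_out, rng):
    """Adjacency matrix of a random graph with two planted classes,
    the first and second halves of {0, ..., n-1}: edge probability
    p_in within a class, p_out across classes."""
    labels = np.array([0] * (n // 2) + [1] * (n - n // 2))
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(float), labels

def spectral_bipartition(adj):
    """Split the vertices by the signs of the eigenvector associated
    with the second largest eigenvalue of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)  # eigenvalues in ascending order
    v2 = vecs[:, -2]                  # eigenvector of the 2nd largest
    return (v2 > 0).astype(int)

rng = np.random.default_rng(0)
adj, labels = planted_partition(200, 0.5, 0.05, rng)
guess = spectral_bipartition(adj)
# Class labels are recovered up to a global swap of the two names.
agreement = max(np.mean(guess == labels), np.mean(guess != labels))
```

With a strong density gap as above, the sign pattern of the second eigenvector aligns almost perfectly with the planted classes; the paper's contribution is to push such guarantees into the sparse, bounded-average-degree regime where this naive version breaks down.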
Epistemic logic has grown from its philosophical beginnings to find diverse applications in computer science as a means of reasoning about the knowledge and belief of agents. This book, based on courses taught at universities and summer schools, provides a broad introduction to the subject; many exercises are included together with their solutions. The authors begin by presenting the necessary apparatus from mathematics and logic, including Kripke semantics and the well-known modal logics K, T, S4 and S5. Then they turn to applications in the contexts of distributed systems and artificial intelligence: topics that are addressed include the notions of common knowledge, distributed knowledge, explicit and implicit belief, the interplays between knowledge and time, and knowledge and action, as well as a graded (or numerical) variant of the epistemic operators. The problem of logical omniscience is also discussed extensively. Halpern and Moses' theory of honest formulae is covered, and a digression is made into the realm of non-monotonic reasoning and preferential entailment. Moore's autoepistemic logic is discussed, together with Levesque's related logic of 'all I know'. Furthermore, it is shown how one can base default and counterfactual reasoning on epistemic logic.