Let T be a set of terms over an arbitrary (but finite) number of Boolean variables. Let U(T) be the set of truth assignments that satisfy exactly one term in T. Motivated by questions in computational complexity, Rudich conjectured that there exist ε, δ > 0 such that, if T is any set of terms for which U(T) contains at least a (1 − ε)-fraction of all truth assignments, then there exists a term t ∈ T such that at least a δ-fraction of assignments satisfy some term of T sharing a variable with t [8].
We prove a stronger version: for any independent assignment of the variables (not necessarily the uniform one), if the measure of U(T) is at least 1 − ε, there exists a t ∈ T such that the measure of the set of assignments satisfying either t or some term incompatible with t (i.e., having no satisfying assignments in common with t) is at least δ. (A key part of the proof is a correlation-like inequality on events in a finite product probability space that is in some sense dual to Reimer's inequality [11], a.k.a. the BKR inequality [5], or the van den Berg–Kesten conjecture [3].)
Here we present a non-exhaustive list of software packages that (in most cases) the authors have tried, together with some other useful pointers. Of course, we cannot accept any responsibility for bugs/errors/omissions in any of the software or documentation mentioned here – caveat emptor!
Websites change. If any of the websites mentioned here disappear in the future, you may be able to find the new site using a search engine with appropriate keywords.
Software tools
CLN
CLN (Class Library for Numbers, http://www.ginac.de/CLN/) is a library for efficient computations with all kinds of numbers in arbitrary precision. It was written by Bruno Haible and is currently maintained by Richard Kreckel. It is written in C++ and distributed under the GNU General Public License (GPL). CLN provides some elementary and special functions, and fast arithmetic on large numbers; in particular, it implements Schönhage–Strassen multiplication and the binary splitting algorithm. CLN can be configured to use GMP's low-level mpn routines, which improves its performance.
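As a minimal illustration (our own sketch, not taken from the CLN documentation; it assumes CLN is installed and the program is linked with -lcln), the following C++ program squares a large integer:

```cpp
// Minimal CLN sketch (an illustration, not part of CLN's documentation).
// Build assumption: g++ cln_demo.cpp -lcln
#include <cln/integer.h>      // cl_I: arbitrary-precision integers
#include <cln/integer_io.h>   // stream output for cl_I
#include <iostream>

int main() {
    // cl_I can be initialized from a decimal string of any length.
    cln::cl_I a = "123456789012345678901234567890";
    cln::cl_I b = a * a;  // large multiplications use fast algorithms internally
    std::cout << "a^2 = " << b << std::endl;
    return 0;
}
```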
GNU MP (GMP)
The GNU MP library is the main reference for arbitrary-precision arithmetic. It has been developed since 1991 by Torbjörn Granlund and several other contributors. GNU MP (GMP for short) implements several of the algorithms described in this book. In particular, we recommend reading the “Algorithms” chapter of the GMP reference manual.
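As a small, hedged example (ours, not from the GMP manual; it assumes GMP's C++ interface gmpxx is installed and the program is linked with -lgmpxx -lgmp), the following computes a modular exponentiation with mpz_powm:

```cpp
// Minimal GMP sketch using the C++ class interface (gmpxx).
// Build assumption: g++ gmp_demo.cpp -lgmpxx -lgmp
#include <gmpxx.h>
#include <iostream>

int main() {
    mpz_class base("123456789012345678901234567890");
    mpz_class exponent("65537");
    mpz_class modulus("340282366920938463463374607431768211507"); // arbitrary example modulus

    // mpz_powm performs modular exponentiation; mpz_class objects
    // interoperate with the C API via get_mpz_t().
    mpz_class result;
    mpz_powm(result.get_mpz_t(), base.get_mpz_t(),
             exponent.get_mpz_t(), modulus.get_mpz_t());
    std::cout << "base^exponent mod modulus = " << result << std::endl;
    return 0;
}
```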
This is a book about algorithms for performing arithmetic, and their implementation on modern computers. We are concerned with software more than hardware – we do not cover computer architecture or the design of computer hardware since good books are already available on these topics. Instead, we focus on algorithms for efficiently performing arithmetic operations such as addition, multiplication, and division, and their connections to topics such as modular arithmetic, greatest common divisors, the fast Fourier transform (FFT), and the computation of special functions.
The algorithms that we present are mainly intended for arbitrary-precision arithmetic. That is, they are not limited by the computer wordsize of 32 or 64 bits, only by the memory and time available for the computation. We consider both integer and real (floating-point) computations.
The book is divided into four main chapters, plus one short chapter (essentially an appendix). Chapter 1 covers integer arithmetic. This has, of course, been considered in many other books and papers. However, there has been much recent progress, inspired in part by the application to public key cryptography, so most of the published books are now partly out of date or incomplete. Our aim is to present the latest developments in a concise manner. At the same time, we provide a self-contained introduction for the reader who is not an expert in the field.
Chapter 2 is concerned with modular arithmetic and the FFT, and their applications to computer arithmetic.
In this chapter our main topic is modular arithmetic, i.e. how to compute efficiently modulo a given integer N. In most applications, the modulus N is fixed, and special-purpose algorithms benefit from some precomputations, depending only on N, to speed up arithmetic modulo N.
There is an overlap between Chapter 1 and this chapter. For example, integer division and modular multiplication are closely related. In Chapter 1 we present algorithms where no (or only a few) precomputations with respect to the modulus N are performed. In this chapter, we consider algorithms which benefit from such precomputations.
Unless explicitly stated otherwise, we assume that the modulus N occupies n words in the word base β, i.e. β^(n−1) ≤ N < β^n.
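To make the "precompute once, reduce many times" idea concrete, here is a word-sized C++ sketch of Montgomery multiplication, one precomputation-based technique in this vein. It is our own illustration under simplifying assumptions (N odd, N < 2^63 so that intermediates fit in 128 bits, single-word operands, and the GCC/Clang __uint128_t extension); the algorithms of this chapter handle the general n-word case.

```cpp
// Word-sized Montgomery multiplication sketch (our illustration; the text's
// algorithms handle multi-word N). Assumptions: N odd, N < 2^63, and the
// GCC/Clang __uint128_t extension. Here R = 2^64.
#include <cstdint>
#include <iostream>

struct Montgomery {
    uint64_t N;     // the (fixed) modulus
    uint64_t ninv;  // precomputed: -N^{-1} mod 2^64
    uint64_t r2;    // precomputed: R^2 mod N, for conversion into Montgomery form

    explicit Montgomery(uint64_t n) : N(n) {
        // Newton/Hensel iteration for N^{-1} mod 2^64: each step doubles
        // the number of correct low-order bits (N must be odd).
        uint64_t inv = 1;
        for (int i = 0; i < 6; ++i) inv *= 2 - N * inv;
        ninv = ~inv + 1;                          // -N^{-1} mod 2^64
        r2 = uint64_t((__uint128_t(0) - N) % N);  // (2^128 - N) mod N = R^2 mod N
    }

    // REDC: given T < N*R, return T * R^{-1} mod N without dividing by N.
    uint64_t redc(__uint128_t T) const {
        uint64_t m = uint64_t(T) * ninv;                       // T * (-N^{-1}) mod R
        uint64_t t = uint64_t((T + __uint128_t(m) * N) >> 64); // exact division by R
        return t >= N ? t - N : t;                             // at most one correction
    }

    uint64_t to_mont(uint64_t a) const { return redc(__uint128_t(a) * r2); }
    uint64_t from_mont(uint64_t a) const { return redc(a); }
    // Multiply two residues already in Montgomery form.
    uint64_t mul(uint64_t a, uint64_t b) const { return redc(__uint128_t(a) * b); }
};

int main() {
    Montgomery M(1000000007ULL);  // all precomputations happen once, here
    uint64_t a = M.to_mont(123456789ULL), b = M.to_mont(987654321ULL);
    std::cout << M.from_mont(M.mul(a, b)) << std::endl;  // 123456789*987654321 mod N
}
```

Note how every multiplication modulo N after the constructor avoids a hardware division entirely; this is the payoff of precomputing quantities that depend only on N.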
Representation
We consider in this section the different possible representations of residues modulo N. As in Chapter 1, we consider mainly dense representations.
Classical representation
The classical representation stores a residue (class) a as an integer 0 ≤ a < N. Residues are thus always fully reduced, i.e. in canonical form.
Another non-redundant form is a symmetric representation, say −N/2 ≤ a < N/2. This form may save some reductions in additions and subtractions (see §2.2). Negative numbers may be stored either with a separate sign (sign–magnitude representation) or in two's-complement representation.
Since N takes n words in base β, an alternative redundant representation chooses 0 ≤ a < β^n to represent a residue class.
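As a tiny word-sized sketch of these conventions (our illustration; residues in this chapter are generally multi-word), converting a classical residue to the symmetric representation takes a single comparison:

```cpp
#include <cstdint>
#include <iostream>

// Convert a classical residue 0 <= a < N to the symmetric representation
// -N/2 <= a < N/2 (sketch; assumes N < 2^63 so the result fits in int64_t).
int64_t to_symmetric(uint64_t a, uint64_t N) {
    return a >= (N + 1) / 2 ? int64_t(a) - int64_t(N) : int64_t(a);
}

int main() {
    const uint64_t N = 7;
    for (uint64_t a = 0; a < N; ++a)
        std::cout << a << " -> " << to_symmetric(a, N) << "\n";
    // prints 0..3 unchanged and 4, 5, 6 as -3, -2, -1
}
```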
Here we consider various applications of Newton's method, which can be used to compute reciprocals, square roots, and more generally algebraic and functional inverse functions. We then consider unrestricted algorithms for computing elementary and special functions. The algorithms of this chapter are presented at a higher level than in Chapter 3. A full and detailed analysis of one special function might be the subject of an entire chapter!
Introduction
This chapter is concerned with algorithms for computing elementary and special functions, although the methods apply more generally. First we consider Newton's method, which is useful for computing inverse functions. For example, if we have an algorithm for computing y = ln x, then Newton's method can be used to compute x = exp y (see §4.2.5). However, Newton's method has many other applications. In fact, we already mentioned Newton's method in Chapters 1–3, but here we consider it in more detail.
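To make this concrete, here is a small double-precision sketch (ours; the chapter's algorithms work in arbitrary precision). To invert x ↦ ln x, apply Newton's method to f(x) = ln x − y; since f′(x) = 1/x, the iteration is x_{k+1} = x_k(1 + y − ln x_k):

```cpp
#include <cmath>
#include <cstdio>

// Compute exp(y) using only a logarithm routine, via Newton's method on
// f(x) = ln(x) - y. Since f'(x) = 1/x, the Newton step is
//   x_{k+1} = x_k - (ln(x_k) - y) * x_k = x_k * (1 + y - ln(x_k)).
// Double-precision sketch; the starting value is an ad hoc assumption
// adequate for moderate y.
double exp_via_newton(double y) {
    double x = 1.0 + y / 2.0;  // crude initial guess
    for (int k = 0; k < 64; ++k) {
        double dx = x * (y - std::log(x));  // Newton correction
        x += dx;
        if (std::fabs(dx) <= 1e-15 * std::fabs(x)) break;  // converged
    }
    return x;
}

int main() {
    std::printf("exp_via_newton(1.0) = %.15f\n", exp_via_newton(1.0));
    std::printf("std::exp(1.0)       = %.15f\n", std::exp(1.0));
}
```

Newton's method converges quadratically, so the number of correct digits roughly doubles at each step; in arbitrary precision this is exploited by doubling the working precision from one iteration to the next.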
After considering Newton's method, we go on to consider various methods for computing elementary and special functions. These methods include power series (§4.4), asymptotic expansions (§4.5), continued fractions (§4.6), recurrence relations (§4.7), the arithmetic-geometric mean (§4.8), binary splitting (§4.9), and contour integration (§4.10). The methods that we consider are unrestricted in the sense that there is no restriction on the attainable precision – in particular, it is not limited to the precision of IEEE standard 32-bit or 64-bit floating-point arithmetic. Of course, this depends on the availability of a suitable software package for performing floating-point arithmetic on operands of arbitrary precision, as discussed in Chapter 3.
Consider the barycentric subdivision, which cuts a given triangle along its medians to produce six new triangles. Uniformly choosing one of them and iterating this procedure gives rise to a Markov chain. We show that, almost surely, the triangles forming this chain become flatter and flatter, in the sense that their isoperimetric values tend to infinity with time. Nevertheless, if the triangles are renormalized by a similarity so that their longest edge is [0, 1] ⊂ ℂ (with 0 also adjacent to the shortest edge), their shapes do not converge, and we identify the limit set of the opposite vertex as the segment [0, 1/2]. In addition, we prove that the largest angle converges to π in probability. Our approach is probabilistic, and these results are deduced from the investigation of a limiting iterated-random-function Markov chain living on the segment [0, 1/2]. The stationary distribution of this limit chain plays a particularly important role in our study.
The study of permutation patterns is a thriving area of combinatorics that relates to many other areas of mathematics, including graph theory, enumerative combinatorics, model theory, the theory of automata and languages, and bioinformatics. Arising from the Fifth International Conference on Permutation Patterns, held in St Andrews in June 2007, this volume contains a mixture of survey and research articles by leading experts, including contributions from the two invited speakers, Martin Klazar and Mike Atkinson. Together, the collected articles cover all the significant strands of current research: structural methods and simple patterns, generalisations of patterns, various enumerative aspects, machines and networks, packing, and more. Specialists in this area and other researchers in combinatorics and related fields will find much of interest in this book. In addition, the volume provides plenty of material accessible to advanced undergraduates and is a suitable reference for projects and dissertations.
We study the diameter of C₁, the largest component of the Erdős–Rényi random graph G(n, p), in the emerging supercritical phase, i.e., for p = (1 + ε)/n where ε³n → ∞ and ε = o(1). This parameter was extensively studied for fixed ε > 0, yet results for ε = o(1) outside the critical window were obtained only very recently. Prior to this work, Riordan and Wormald gave precise estimates on the diameter; however, these did not cover the entire supercritical regime (namely, when ε³n → ∞ arbitrarily slowly). Łuczak and Seierstad estimated its order throughout this regime, yet their upper and lower bounds differed by a constant factor.
We show that throughout the emerging supercritical phase, i.e., for any ε = o(1) with ε³n → ∞, the diameter of C₁ is with high probability asymptotic to D(ε, n) = (3/ε)log(ε³n). This constitutes the first proof of the asymptotics of the diameter valid throughout this phase. The proof relies on a recent structure result for the supercritical giant component, which reduces the problem of estimating distances between its vertices to the study of passage times in first-passage percolation. The main advantage of our method is its flexibility. It also implies that in the emerging supercritical phase the diameter of the 2-core of C₁ is w.h.p. asymptotic to (2/3)D(ε, n), and the maximal distance in C₁ between any pair of kernel vertices is w.h.p. asymptotic to (1/3)D(ε, n).
This special issue is devoted to papers from the meeting on Combinatorics and Probability, held at the Mathematisches Forschungsinstitut Oberwolfach from 26 April to 2 May. The meeting focused on the common themes of combinatorics, discrete probability and theoretical computer science, and the lectures, many of which were given by young participants, stimulated fruitful discussions. An open problems session held during the meeting, together with the range of related topics represented among the participants, encouraged further discussion and collaboration.
In this paper we study the diameter of the random graph G(n, p), i.e., the largest finite distance between two vertices, for a wide range of functions p = p(n). For p = λ/n with λ > 1 constant we give a simple proof of an essentially best possible result, with an Oₚ(1) additive correction term. Using similar techniques, we establish two-point concentration in the case that np → ∞. For p = (1 + ε)/n with ε → 0, we obtain a corresponding result that applies all the way down to the scaling window of the phase transition, with an Oₚ(1/ε) additive correction term whose (appropriately scaled) limiting distribution we describe. Combined with earlier results, our new results complete the determination of the diameter of the random graph G(n, p) to an accuracy of the order of its standard deviation (or better), for all functions p = p(n). Throughout we use branching process methods, rather than the more common approach of separate analysis of the 2-core and the trees attached to it.
Ordering constraints are formally analogous to instances of the satisfiability problem in conjunctive normal form, but instead of a Boolean assignment we consider a linear ordering of the variables in question. A clause becomes true under a linear ordering if and only if the relative ordering of its variables obeys the constraint considered.
The satisfiability problems that arise naturally here are NP-complete for many types of constraints. We look at random ordering constraints. Previous work of the author shows that there is a sharp unsatisfiability threshold for certain types of constraints; the value of the threshold, however, is essentially undetermined. We pursue the problem of approximating the precise value of the threshold. We show that random instances of the betweenness constraint are satisfiable with high probability if the number of randomly picked clauses is at most 0.92n, where n is the number of variables considered. This improves the previous bound of 0.82n random clauses. The proof is based on a binary relaxation of the betweenness constraint and involves some ideas not previously used in the area of random ordering constraints.