This book has its origins in my interest in semantics and logics for locality in programming languages. By locality, I mean the various mechanisms that exist for making local declarations, restricting a resource to a specific scope, or hiding information from the environment. Although mathematics and logic are involved in understanding these things, this is a distinctively computer science topic. I was introduced to it by Matthew Hennessy and Alley Stoughton when we all arrived at the University of Sussex in the second half of the 1980s. At the time I was interested in applying category theory and logic to computer science and they were interested in the properties of the mixture of local mutable state and higher-order functions that occurs in the ML family of languages (Milner et al., 1997).
Around that time Moggi introduced the use of category-theoretic monads to structure different notions of computational effect (Moggi, 1991). That is now an important technique in denotational semantics; and thanks to the work of Wadler (1992) and others, monads are the accepted way of ‘tackling the awkward squad’ (Peyton Jones, 2001) of side-effects within functional programming. One of Moggi's monads models the computational effect of dynamically allocating fresh names. It is less well known than some of the other monads he uses, because it needs categories of functors and is only mentioned in (Moggi, 1989), rather than (Moggi, 1991).
Every graphon defines a random graph on any given number n of vertices. It was known that the graphon is random-free if and only if the entropy of this random graph is subquadratic. We prove that for random-free graphons, this entropy can grow as fast as any subquadratic function. However, if the graphon belongs to the closure of a random-free hereditary graph property, then the entropy is O(n log n). We also give a simple construction of a non-step-function random-free graphon for which this entropy is linear, refuting a conjecture of Janson.
We show that the expected number of maximal empty axis-parallel boxes amidst n random points in the unit hypercube $[0,1]^d$ in $\mathbb{R}^d$ is $(1 \pm o(1))\,\frac{(2d-2)!}{(d-1)!}\, n \ln^{d-1} n$, if d is fixed. This estimate is relevant to the analysis of the performance of exact algorithms for computing the largest empty axis-parallel box amidst n given points in an axis-parallel box R, especially the algorithms that proceed by examining all maximal empty boxes. Our method for bounding the expected number of maximal empty boxes also shows that the expected number of maximal empty orthants determined by n random points in $\mathbb{R}^d$ is $(1 \pm o(1))\, \ln^{d-1} n$, if d is fixed. This estimate is related to the expected number of maximal (or minimal) points amidst random points, and has applications to algorithms for coloured orthogonal range counting.
Let m, n and t be positive integers. Consider $[m]^n$ as the set of sequences of length n on an m-letter alphabet. We say that two subsets $A \subset [m]^n$ and $B \subset [m]^n$ cross t-intersect if any two sequences $a \in A$ and $b \in B$ match in at least t positions. In this case it is shown that if $m > (1-\frac{1}{\sqrt[t]{2}})^{-1}$ then $|A||B| \le (m^{n-t})^2$. We derive this result from a weighted version of the Erdős–Ko–Rado theorem concerning cross t-intersecting families of subsets, and we also include the corresponding stability statement. One of our main tools is the eigenvalue method for intersection matrices due to Friedgut [10].
Computer algebra systems are now ubiquitous in all areas of science and engineering. This highly successful textbook, widely regarded as the 'bible of computer algebra', gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems. Designed to accompany one- or two-semester courses for advanced undergraduate or graduate students in computer science or mathematics, its comprehensiveness and reliability have also made it an essential reference for professionals in the area. Special features include: detailed study of algorithms including time analysis; implementation reports on several topics; complete proofs of the mathematical underpinnings; and a wide variety of applications (among others, in chemistry, coding theory, cryptography, computational logic, and the design of calendars and musical scales). A great deal of historical information and illustration enlivens the text. In this third edition, errors have been corrected and much of the Fast Euclidean Algorithm chapter has been renovated.
In this chapter, we introduce fast methods for multiplying integers and polynomials. We start with a simple method due to Karatsuba which reduces the cost from the classical $O(n^2)$ for polynomials of degree n to $O(n^{1.59})$. The Discrete Fourier Transform and its efficient implementation, the Fast Fourier Transform, are the backbone of the fastest algorithms. These work only when appropriate roots of unity are present, but Schönhage & Strassen (1971) showed how to create “virtual” roots that lead to a multiplication cost of only $O(n \log n \log\log n)$. In Chapter 9, Newton iteration will help us extend this to fast division with remainder.
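To make the divide-and-conquer idea concrete, here is a minimal Python sketch of Karatsuba's three-product trick for integers (the binary splitting and the base case are our illustration, not the book's presentation, which treats polynomials):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y with Karatsuba's trick."""
    if x < 10 or y < 10:                     # base case: a tiny operand
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)  # x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)  # y = hi_y * 2^m + lo_y
    a = karatsuba(hi_x, hi_y)                # high * high
    b = karatsuba(lo_x, lo_y)                # low * low
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)  # (high+low) * (high+low)
    # c - a - b equals hi_x*lo_y + lo_x*hi_y, so three recursive products
    # suffice instead of the classical four:
    return (a << (2 * m)) + ((c - a - b) << m) + b

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```

Each level of the recursion replaces four half-size products by three, which yields the $O(n^{\log_2 3}) = O(n^{1.59})$ bound quoted above.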
General-purpose computer algebra systems typically only implement the classical method, and sometimes Karatsuba's. This is quite sufficient as long as one deals with fairly small numbers or polynomials, but for many high-performance tasks fast arithmetic is indispensable. Examples include factoring large polynomials (Section 15.7), finding primes and twin primes (Notes to Chapter 18), and computing billions of digits of π (Section 4.6) or billions of roots of Riemann's zeta function (Notes 18.4).
Asymptotically fast methods are standard tools in many areas of computer science: $O(n \log n)$ sorting algorithms such as quicksort or mergesort are widely used, and experiments show that they outperform the “classical” $O(n^2)$ sorting algorithms such as bubble sort or insertion sort even for values of n below 100.
Pierre Fermat (c. 1601–1665) has been called the greatest amateur mathematician. After growing up in Beaumont-de-Lomagne in Gascony (where his home now houses an interesting museum), he studied in Orléans and Toulouse, became “commissioner of requests” in 1631, and conseiller du roi in the local parlement, through which any petitions to the king had to pass. He died in Castres, where he served on the commission implementing the Édit de Nantes, which gave some protection to the persecuted Protestant Huguenots. Fermat never left the area, never published a paper, and still became the second-best mathematician of his century (after Newton). Fermat communicated his mathematical discoveries in numerous letters, usually without proof and often in the form of challenges, to his contemporaries. (Among them was René Descartes, who could only be reached through his friend Marin Mersenne in Paris, because for many years he lived in Holland without a fixed address—a Flying Dutchman of mathematics, like the late Pál Erdős in modern times.)
Fermat was a pioneer in several areas. His method for drawing a tangent to certain plane curves was a step toward the invention of calculus, which Newton and Leibniz later brought to fruition.
David Hilbert (1862–1943) grew up in Königsberg, then the capital of East Prussia and now Kaliningrad in Russia, in an upper middle-class family; his father was a judge. The town had been home to the philosopher Immanuel Kant, to Leonhard Euler, whose solution to the riddle of how to cross its seven bridges across the river Pregel without crossing any of them twice became a starting point for graph theory and topology, and to C. G. J. Jacobi.
After an unimpressive school career, he studied at the university there, graduating in 1885 with a doctoral thesis on invariant theory. He worked in this area until 1893, proving among other things the Hilbert basis theorem, which says that any ideal in a polynomial ring (in finitely many variables over a field) is finitely generated (Theorem 21.23), and introducing the Hilbert function of algebraic varieties.
Two further results from his “multivariate polynomial phase” are relevant to the subject matter of this text: firstly Hilbert's Nullstellensatz (1890), which says that if a polynomial g vanishes on the set of common roots of some multivariate polynomials $f_1,\dots,f_s$ over ℂ, then some power $g^e$ is in the ideal $\langle f_1,\dots,f_s \rangle$ (see Section 21.7). Secondly, Hilbert's irreducibility theorem (1892), stating that for an irreducible polynomial $f \in \mathbb{Q}[x,y]$, the univariate polynomial $f(x,a) \in \mathbb{Q}[x]$ is irreducible for “most” $a \in \mathbb{Z}$. This sounds useful for reducing bivariate to univariate factorization. Unfortunately, no efficient versions of “most” are known, but, fortunately, such versions are known for reducing from many to two variables (Section 16.6).
The basic task in this chapter is, given an “expression” f, say f ∈ F(x), where F is a field, to compute the indefinite integral ∫ f = ∫ f(x) dx, that is, another “expression” (possibly in a larger domain) g with g′ = f, where ′ denotes differentiation with respect to the variable x. “Expressions” are usually built from rational functions and “elementary functions” such as sin, cos, exp, log, etc. (Since it is more common, we denote the natural (base e) logarithm by “log” instead of “ln” in this chapter.) Such integrals need not exist: Liouville's (1835) theorem implies that $\exp(x^2)$ has no integral involving only rational functions, sin, cos, exp, and log.
A practical approach to the symbolic integration problem is to use a plethora of formulas for special functions, tricks from basic calculus like substitutions and integration by parts, and table lookups. There are projects that load the whole contents of existing printed integral tables into computer algebra systems, using optical character recognition, and modern computer algebra systems can solve practically all integration exercises in calculus textbooks. In the following, we discuss a systematic algorithm in the case of rational and “hyperexponential” functions as integrands. This approach can be extended—with plenty of new ideas and techniques—to more general functions, but we do not pursue this topic further.
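As a quick illustration of this practical state of affairs (using the third-party SymPy library as a stand-in; this is not the algorithm developed in this chapter):

```python
# Illustration only: SymPy's integrate() stands in for a full computer
# algebra system; it is not the algorithm developed in this chapter.
import sympy as sp

x = sp.Symbol('x')
print(sp.integrate(1 / (x**2 + 1), x))   # rational integrand: atan(x)
print(sp.integrate(sp.exp(x**2), x))     # no elementary antiderivative
# exists (Liouville); SymPy answers sqrt(pi)*erfi(x)/2, in terms of the
# non-elementary imaginary error function erfi.
```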
We want to know whether a given integer is prime or not. Certainly we can find out by factoring it. Can you think of any other way? Well, there is, and the major discovery in this area is that primality testing is much easier than factoring, at least to current knowledge. One can test integers with many thousands of digits, but factoring numbers with only 300 digits is in general not feasible.
In this chapter, we provide an efficient probabilistic algorithm to test primality; factorization is the subject of the next chapter. As an easy application, we can also find large prime numbers, as they are required in some modular algorithms and in modern cryptography. We conclude with brief discussions of other primality testing algorithms. The long-standing quest for a deterministic polynomial-time primality test, stated as a Research Problem in the first two editions of this book, was resolved by Agrawal, Kayal & Saxena (2004).
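One well-known probabilistic test of the kind meant here is the strong pseudoprime (Miller–Rabin) test; the following minimal Python sketch (the function name and the default number of rounds k are our choices for illustration) declares a composite number prime with probability at most $4^{-k}$:

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin: returns False for composites with prob. >= 1 - 4**(-k)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):            # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                          # write n - 1 = 2^s * d, d odd
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)         # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                       # a is a witness: n is composite
    return True
```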
For numbers of a special form, such as the Mersenne numbers $M_n = 2^n - 1$, particularly efficient methods have been known since the 19th century. Indeed, throughout history the largest known prime has usually been a Mersenne prime.
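The efficient 19th-century method alluded to here is presumably the Lucas–Lehmer test; a minimal sketch, valid when the exponent p is an odd prime:

```python
def lucas_lehmer(p):
    """For an odd prime p, M_p = 2^p - 1 is prime iff the test returns True."""
    m = (1 << p) - 1                  # the Mersenne number M_p
    s = 4
    for _ in range(p - 2):            # iterate s -> s^2 - 2 modulo M_p
        s = (s * s - 2) % m
    return s == 0

assert lucas_lehmer(13)               # M_13 = 8191 is prime
assert not lucas_lehmer(11)           # M_11 = 2047 = 23 * 89
```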
This chapter presents several applications of the Extended Euclidean Algorithm: modular arithmetic, in particular modular inverses; linear Diophantine equations; and continued fractions. The latter in turn are useful for problems outside of computer algebra: devising astronomical calendars and musical scale systems.
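For concreteness, here is a minimal Python sketch of the Extended Euclidean Algorithm and the modular inverse it yields (the function names are our own illustration):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g == s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t     # back-substitute the Bezout coefficients

def mod_inverse(a, m):
    """Inverse of a modulo m; requires gcd(a, m) == 1."""
    g, s, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError("a is not invertible modulo m")
    return s % m

assert (3 * mod_inverse(3, 7)) % 7 == 1
```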
Modular arithmetic
We start with some applications. The first one is checking programs for correctness. In Part II of this book, we will see extremely fast algorithms for multiplication of large integers. These methods are also considerably more complicated than classical multiplication, and an implementation is quite error-prone. So we may want to test correctness on many inputs. We take inputs a and b, say positive integers of 10000 words each, and the output c of 20000 words. Can we check that a·b = c without using our own software?
The solution is a modular test. We take a single-precision prime p and check whether a·b ≡ c mod p (read “a·b and c are congruent modulo p”), which means that a·b−c is divisible by p, or equivalently, a·b and c have the same remainder on division by p. By (1) below, it is sufficient for this purpose to compute the remainders a* = a rem p, b* = b rem p, c* = c rem p and check whether a*·b* ≡ c* mod p, since a·b ≡ a*·b* mod p.
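In Python, this modular test might look as follows (the specific single-precision prime $p = 2^{31} - 1$ is our choice for illustration):

```python
def modular_check(a, b, c, p=2**31 - 1):
    """Check whether a*b == c modulo the single-precision prime p."""
    a_star = a % p                    # a rem p
    b_star = b % p                    # b rem p
    c_star = c % p                    # c rem p
    return (a_star * b_star) % p == c_star
```

If a·b ≠ c, then a·b − c is a fixed nonzero integer with only finitely many prime divisors, so repeating the check with several independent primes makes an undetected error extremely unlikely.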