Pierre Fermat (c. 1601–1665) has been called the greatest amateur mathematician. After growing up in Beaumont-de-Lomagne in Gascony (where his home now houses an interesting museum), he studied in Orléans and Toulouse, became “commissioner of requests” in 1631, and conseiller du roi in the local parlement, through which any petitions to the king had to pass. He died in Castres, where he served on the commission implementing the Édit de Nantes, which gave some protection to the persecuted Protestant Huguenots. Fermat never left the area, never published a paper, and still became the second-best mathematician of his century (after Newton). Fermat communicated his mathematical discoveries in numerous letters, usually without proof and often in the form of challenges, to his contemporaries. (Among them was René Descartes, who could only be reached through his friend Marin Mersenne in Paris, because for many years he lived in Holland without a fixed address—a Flying Dutchman of mathematics, like the late Pál Erdős in our own time.)
Fermat was a pioneer in several areas. His method for drawing a tangent to certain plane curves was an early step toward the invention of calculus, which Newton and Leibniz later brought to fruition.
David Hilbert (1862–1943) grew up in Königsberg, then capital of East Prussia and now Kaliningrad in Russia, in an upper middle-class family; his father was a judge. The town had been home to the philosopher Immanuel Kant and to C. G. J. Jacobi, and its seven bridges across the river Pregel posed the famous riddle of how to cross all of them without using any bridge twice; Leonhard Euler's solution became a starting point for graph theory and topology.
After an unimpressive school career, he studied at the university of his home town and graduated with a doctoral thesis on invariant theory in 1885. He worked in this area until 1893, proving among other things the Hilbert basis theorem, which says that any ideal in a polynomial ring (in finitely many variables over a field) is finitely generated (Theorem 21.23), and introducing the Hilbert function of algebraic varieties.
Two further results from his “multivariate polynomial phase” are relevant to the subject matter of this text. Firstly, Hilbert's Nullstellensatz (1890) says that if a polynomial g vanishes on the set of common roots of some multivariate polynomials f₁,…,fₛ over ℂ, then some power gᵉ lies in the ideal 〈f₁,…,fₛ〉 (see Section 21.7). Secondly, Hilbert's irreducibility theorem (1892) states that for an irreducible polynomial f ∈ ℚ[x,y], the univariate polynomial f(x,a) ∈ ℚ[x] is irreducible for “most” a ∈ ℤ. This sounds useful for reducing bivariate to univariate factorization. Unfortunately, no efficient versions of “most” are known; fortunately, such versions are known for reducing from many to two variables (Section 16.6).
The basic task in this chapter is, given an “expression” f, say f ∈ F(x), where F is a field, to compute the indefinite integral ∫ f = ∫ f(x)dx, that is, another “expression” (possibly in a larger domain) g with g′ = f, where ′ denotes differentiation with respect to the variable x. “Expressions” are usually built from rational functions and “elementary functions” such as sin, cos, exp, log, etc. (Since it is more common, we denote the natural (base e) logarithm by “log” instead of “ln” in this chapter.) Such integrals need not exist: Liouville's (1835) theorem implies that exp(x²) has no integral involving only rational functions, sin, cos, exp, and log.
A practical approach to the symbolic integration problem is to use a plethora of formulas for special functions, tricks from basic calculus like substitutions and integration by parts, and table lookups. There are projects that load the whole contents of existing printed integral tables into computer algebra systems, using optical character recognition, and modern computer algebra systems can solve practically all integration exercises in calculus textbooks. In the following, we discuss a systematic algorithm in the case of rational and “hyperexponential” functions as integrands. This approach can be extended—with plenty of new ideas and techniques—to more general functions, but we do not pursue this topic further.
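Both situations above can be observed in any modern computer algebra system. As a small illustration only (using the third-party SymPy library, not the algorithm developed in this chapter): a rational integrand stays within rational functions and logarithms, while exp(x²) forces a non-elementary answer.

```python
# Illustration with SymPy's symbolic integrator (not this chapter's algorithm).
import sympy as sp

x = sp.symbols('x')

# A rational integrand: the antiderivative needs only rational functions
# and logarithms.
g = sp.integrate(1 / (x**2 - 1), x)
print(g)   # logarithmic terms

# exp(x**2): by Liouville's theorem there is no elementary antiderivative;
# SymPy answers in terms of the non-elementary function erfi.
h = sp.integrate(sp.exp(x**2), x)
print(h)   # involves erfi(x)
```

Differentiating the first result recovers 1/(x² − 1), confirming g′ = f; the second result leaves the class of elementary expressions, exactly as Liouville's theorem predicts.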
We want to know whether a given integer is prime or not. Certainly we can find out by factoring it. Can you think of any other way? Well, there is, and the major discovery in this area is that primality testing is much easier than factoring, at least to current knowledge. One can test integers with many thousands of digits, but factoring numbers with only 300 digits is in general not feasible.
In this chapter, we provide an efficient probabilistic algorithm to test primality; factorization is the subject of the next chapter. As an easy application, we can also find large prime numbers, as they are required in some modular algorithms and in modern cryptography. We conclude with brief discussions of other primality testing algorithms. The long-standing quest for a deterministic polynomial-time primality test, stated as a Research Problem in the first two editions of this book, was resolved by Agrawal, Kayal & Saxena (2004).
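To give a flavor of such a probabilistic test, here is a minimal Python sketch in the spirit of the strong pseudoprime (Miller–Rabin) test; the function name, the number of rounds, and the trial-division shortcut for small factors are choices of this sketch, not the book's pseudocode.

```python
# A minimal sketch of a strong pseudoprime (Miller-Rabin) primality test.
import random

def is_probable_prime(n, k=20):
    """Return False if n is composite (almost surely detected), True if n
    is a strong probable prime to k random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # shortcut: trial division
        if n % p == 0:
            return n == p
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # a witnesses that n is composite
    return True
```

A composite n passes a single random round with probability at most 1/4, so k rounds drive the error probability below 4⁻ᵏ; this is why testing numbers with thousands of digits is feasible while factoring them is not.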
For numbers of a special form, such as the Mersenne numbers Mₙ = 2ⁿ − 1, particularly efficient methods have been known since the 19th century. Indeed, throughout history the largest known prime has usually been a Mersenne prime.
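The classical such method is the Lucas–Lehmer test from the 19th century; a minimal sketch (the function name is this sketch's choice), which decides primality of Mₚ = 2ᵖ − 1 for an odd prime p with only p − 2 modular squarings:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for an odd prime p: M_p = 2**p - 1 is prime
    if and only if s_{p-2} ≡ 0 (mod M_p), where s_0 = 4 and
    s_{k+1} = s_k**2 - 2."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, lucas_lehmer(13) reports that M₁₃ = 8191 is prime, while lucas_lehmer(11) exposes M₁₁ = 2047 = 23 · 89 as composite; this deterministic criterion is why record primes are usually Mersenne primes.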
This chapter presents several applications of the Extended Euclidean Algorithm: modular arithmetic, in particular modular inverses; linear Diophantine equations; and continued fractions. The latter in turn are useful for problems outside of computer algebra: devising astronomical calendars and musical scale systems.
Modular arithmetic
We start with some applications. The first one is checking programs for correctness. In Part II of this book, we will see extremely fast algorithms for multiplication of large integers. These methods are also considerably more complicated than classical multiplication, and an implementation is quite error-prone. So we may want to test its correctness on many inputs. We take inputs a and b, say positive integers of 10000 words each, and the output c of 20000 words. Can we check that a·b = c without using our own software?
The solution is a modular test. We take a single-precision prime p and check whether a·b ≡ c mod p (read “a·b and c are congruent modulo p”), which means that a·b−c is divisible by p, or equivalently, a·b and c have the same remainder on division by p. By (1) below, it is sufficient for this purpose to compute the remainders a* = a rem p, b* = b rem p, c* = c rem p and check whether a*·b* ≡ c* mod p, since a·b ≡ a*·b* mod p.
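In Python, the modular test might look as follows; the particular prime 2³¹ − 1 and the function name are choices of this sketch.

```python
# Sketch of the modular correctness check described above.
p = 2**31 - 1            # a single-precision prime

def check_product(a, b, c, p=p):
    """Return True if a*b ≡ c (mod p).  A False result proves a*b != c;
    a True result only shows p does not divide a*b - c."""
    return (a % p) * (b % p) % p == c % p
```

A failing check proves an error, while a passing check is not a proof of correctness: an error that happens to be a multiple of p goes undetected. Repeating the test with several different primes makes an undetected error very unlikely.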
In science and engineering, a successful attack on a problem will usually lead to some equations that have to be solved. There are many types of such equations: differential equations, linear or polynomial equations or inequalities, recurrences, equations in groups, tensor equations, etc. In principle, there are two ways of solving such equations: approximately or exactly. Numerical analysis is a well-developed field that provides highly successful mathematical methods and computer software to compute approximate solutions.
Computer algebra is a more recent area of computer science, where mathematical tools and computer software are developed for the exact solution of equations.
Why use approximate solutions at all if we can have exact solutions? The answer is that in many cases an exact solution is not possible. This may have various reasons: for certain (simple) ordinary differential equations, one can prove that no closed form solution (of a specified type) is possible. More important are questions of efficiency: any system of linear equations, say with rational coefficients, can be solved exactly, but for the huge linear systems that arise in meteorology, nuclear physics, geology or other areas of science, only approximate solutions can be computed efficiently. The exact methods, even run on a supercomputer, would not yield answers within days or weeks (which is not really acceptable for weather prediction).
In this chapter, we present two modular algorithms for factoring in ℚ[x] and F[x, y] for a field F. The first one uses factorization modulo a “big” prime and is conceptually easier, and the second one uses factorization modulo a “small” prime and then “lifts” it to a factorization modulo a power of that prime. The latter is computationally faster and comprises our most powerful employment of the prime power modular approach introduced in Chapter 5.
Factoring in ℤ[x] and ℚ[x]: the basic idea
Our first goal is to understand the difference between “factoring in ℤ[x]” and “factoring a polynomial with integer coefficients in ℚ[x]”. The basic fact is that the latter corresponds to factoring primitive polynomials in ℤ[x], while the former requires in addition the factoring of an integer, namely the polynomial's content. We rely on the following notions which were introduced in Section 6.2.
Let R be a Unique Factorization Domain (our two main applications are, as usual, R = ℤ and R = F[y] for a field F). The content cont(f) of a polynomial f ∈ R[x] is the greatest common divisor of its coefficients (with the convention that the gcd is positive if R = ℤ and monic if R = F[y]).
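For R = ℤ, with a polynomial stored as a list of its integer coefficients (a representation assumed only for this sketch, not the book's data structure), content and primitive part can be computed as follows:

```python
# Content and primitive part over R = Z, with f given as a coefficient list.
from math import gcd
from functools import reduce

def content(f):
    """gcd of the coefficients, normalized to be positive
    (by convention, the content of the zero polynomial is 0)."""
    return reduce(gcd, (abs(a) for a in f), 0)

def primitive_part(f):
    """f divided by its content, so that cont(primitive_part(f)) = 1."""
    c = content(f)
    return [a // c for a in f] if c else list(f)
```

For example, f = 6 − 12x + 18x², stored as [6, -12, 18], has content 6 and primitive part [1, -2, 3]; factoring f in ℤ[x] thus means factoring the integer 6 in addition to factoring the primitive polynomial in ℚ[x].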
Carl Friedrich Gauß (1777–1855), the Prince of Mathematicians, was the last, after Archimedes and Newton, of a trio of great men whose ideas shaped mathematics for centuries after their work (and two of whom figure prominently in this book).
Born on April 30, 1777, and registered as Johann Friderich Carl Gauß, he grew up in a poor bricklayer's family in Braunschweig. His father, an honest but tough and simple-minded person, did not succeed in keeping his son as uneducated as himself, mainly because of the efforts of Gauß' mother Dorothea and his uncle Friederich.
Gauß loved to tell the story of how—at ten years of age—one of the first flashes of his genius surprised his unsuspecting teacher Büttner. The class had been given the task of summing the numbers 1,…,100. (What a useless task!) Gauß figured out the corresponding summation formula (see Section 23.1), wrote down the correct answer almost immediately, and waited while the other boys took the full hour to get their answers—all wrong. (Such stupidities have not vanished from German schools: the first author had a high-school geography teacher who would set similarly useless tasks in order to have some time for serious study—of the current Playboy issue.)