An important general concept in computer algebra is the idea of using various types of representation for the objects at hand. As an example, we can represent a polynomial either by a list of its coefficients or by its values at sufficiently many points. In fact, this is just computer algebra lingo for the ubiquitous quest for efficient data structures for computational problems.
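To make the two representations tangible, here is a small Python sketch (ours, purely illustrative; the function names belong to no library) converting between the coefficient representation and the value representation, using Lagrange interpolation over ℚ:

```python
from fractions import Fraction

def evaluate(coeffs, x):
    # Horner evaluation of a polynomial given by its coefficient list;
    # coeffs[i] is the coefficient of x**i.
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def interpolate(points):
    # Lagrange interpolation: recover the coefficient list from
    # len(points) values -- the two representations carry the same data.
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        # build the basis polynomial L_i with L_i(xj) = 0 for j != i
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            basis = [Fraction(0)] + basis       # multiply basis by x ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]   # ... minus xj
            denom *= (xi - xj)
        for k in range(len(basis)):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

f = [Fraction(c) for c in (2, 0, 1)]            # the polynomial 2 + x**2
pts = [(Fraction(x), evaluate(f, Fraction(x))) for x in range(3)]
assert interpolate(pts) == f                    # values determine coefficients
```

Which representation is preferable depends on the operation: multiplication is cheap on values, while reading off the degree is immediate from coefficients.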
One successful instantiation of this general concept is the family of modular algorithms, where instead of solving an integer problem (more generally, an algebraic computation problem over a Euclidean domain R) directly, one solves it modulo one or several integers m. The general principle is illustrated in Figure 5.1. There are three variants: big prime (Figure 5.1 with m = p for a prime p), small primes (Figure 5.2 with m = p1…pr for pairwise distinct primes p1,…,pr), and prime power modular algorithms (Figure 5.3 with m = pl for a prime p). The first one is conceptually the simplest, and the basic issues are most visible in that variant. However, the other two variants are computationally superior.
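As an illustration of the small primes variant, the following Python sketch (our own toy example, with integer multiplication standing in for "an integer problem") solves the problem modulo several primes and reconstructs the answer by Chinese remaindering; the primes and names are ours:

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder reconstruction: the unique x modulo prod(moduli)
    # with x ≡ residues[i] (mod moduli[i]), for pairwise coprime moduli.
    m = prod(moduli)
    x = 0
    for r, mi in zip(residues, moduli):
        ni = m // mi
        # pow(ni, -1, mi) is the inverse of ni modulo mi (Python 3.8+)
        x += r * ni * pow(ni, -1, mi)
    return x % m

def modular_product(a, b, primes):
    # Small primes modular algorithm: solve mod each prime, then lift.
    # The primes must multiply to more than the (nonnegative) result.
    residues = [(a % p) * (b % p) % p for p in primes]
    return crt(residues, primes)

primes = [10007, 10009, 10037, 10039]
a, b = 123456, 789012
assert prod(primes) > a * b          # the "how large must m be" question
assert modular_product(a, b, primes) == a * b
```

The final assertion shows the two technical points in miniature: the product of the moduli must exceed the true answer, and the reconstruction step must be cheap.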
In each case, two technical problems have to be addressed: how to choose the modulus m, and how to recover the solution of the original problem from its modular images.
We start by discussing the computer representation and fundamental arithmetic algorithms for integers and polynomials. We will keep this discussion fairly informal and avoid all the intricacies of actual computer arithmetic—that is a topic on its own. The reader must be warned that modern-day processors do not represent numbers and operate on them as we describe now, but to describe the tricks they use would distract us from our current goal: a simple description of how one could, in principle, perform basic arithmetic.
Although our straightforward approach can be improved in practice for arithmetic on small objects, say double-precision integers, it is quite appropriate for large objects, at least as a start. Much of this book deals with polynomials, and we will use some of the notions of this chapter throughout. A major goal is to find algorithmic improvements for large objects.
The algorithms in this chapter will be familiar to the reader, but she can refresh her memory of the analysis of algorithms with our simple examples.
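For concreteness, here is one possible Python sketch of such a representation and the classical multiplication routine (ours; as noted above, real processors and libraries use far more refined techniques). A nonnegative integer is stored as a little-endian list of digits in base 2^16:

```python
B = 2 ** 16  # digit base; an integer is a little-endian list of base-B digits

def to_digits(n):
    digits = []
    while n:
        n, r = divmod(n, B)
        digits.append(r)
    return digits or [0]

def from_digits(digits):
    n = 0
    for d in reversed(digits):
        n = n * B + d
    return n

def classical_mul(a, b):
    # schoolbook multiplication: quadratically many digit operations
    result = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            carry, result[i + j] = divmod(result[i + j] + ai * bj + carry, B)
        result[i + len(b)] += carry
    return result

x, y = 2 ** 100 + 12345, 3 ** 80 + 67
assert from_digits(classical_mul(to_digits(x), to_digits(y))) == x * y
```

A major theme of the later chapters is replacing this quadratic-time routine by asymptotically faster multiplication.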
This appendix presents some of the basic notions used throughout the text, for the reader's reference. By necessity, this is kept rather short and without proofs; we indicate, however, reference texts where these can be found. The reader is required to either have previous acquaintance with the material or be willing to read up on it. Our presentation is too concise for self-study; its purpose is to fix the language and point the reader to those areas, if any, where she needs brushing up.
The first five sections deal with algebra: groups, rings, polynomials and fields, finite fields, and linear algebra. Then we discuss finite probability spaces. After this mathematical background come some fundamentals from computer science: O-notation and a modicum of complexity theory.
Groups
The material of the first three sections can be found in any basic algebra text, such as Hungerford (1990) or the latest edition of van der Waerden's (1930b, 1931) classic on Modern Algebra.
DEFINITION 25.1. A group is a nonempty set G with a binary operation ·: G × G → G satisfying
◦ Associativity: ∀ a, b, c ∈ G (a · b) · c = a · (b · c),
◦ Identity: ∃ e ∈ G ∀ a ∈ G e · a = a · e = a,
◦ Inverses: ∀ a ∈ G ∃ b ∈ G a · b = b · a = e.
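For a finite set the axioms can be checked exhaustively; the following Python sketch (purely illustrative, not an efficient method) does so for ℤ/6ℤ under addition:

```python
def is_group(elements, op):
    # exhaustive check of the group axioms on a finite set
    elements = list(elements)
    # associativity: (a . b) . c == a . (b . c) for all triples
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elements for b in elements for c in elements)
    # an identity element e with e . a == a == a . e for all a
    ident = [e for e in elements
             if all(op(e, a) == a == op(a, e) for a in elements)]
    if not assoc or not ident:
        return False
    e = ident[0]
    # every element has a two-sided inverse
    return all(any(op(a, b) == e == op(b, a) for b in elements)
               for a in elements)

n = 6
assert is_group(range(n), lambda a, b: (a + b) % n)      # (Z/6Z, +) is a group
assert not is_group(range(n), lambda a, b: (a * b) % n)  # 0 has no inverse
```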
We mentioned on pages 218–219 Newton's method for approximating roots of polynomials. It has become a staple of numerical computation, and seen many generalizations and improvements over the years. But what does this decidedly continuous, approximative method, computing values that are closer and closer to some real root, have to do with the discrete, exact calculations prevalent in computer algebra? There is a somewhat counter-intuitive notion of closeness for integers (and polynomials), corresponding to divisibility by higher and higher powers of a fixed prime. Newton iteration works just beautifully in this purely algebraic setting.
We start by using it to find a custom-Taylored division algorithm that is about as fast as multiplication, and then describe its use for finding roots of polynomials. Finally, we describe a common framework—valuations—into which both the analytical method over the real numbers and our symbolic version fit. In Chapter 15, we will apply Newton's method to the factorization of polynomials; it is then called Hensel lifting.
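The algebraic Newton iteration for inverting a power series f modulo x^n uses the standard step g ← g · (2 − f · g), which doubles the number of correct coefficients in each round. The following Python sketch (ours; dense coefficient lists over ℚ, with only quadratic-time truncated multiplication) illustrates the idea:

```python
from fractions import Fraction

def mul_trunc(a, b, n):
    # product of two coefficient lists, truncated modulo x**n
    c = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def inverse_mod_xn(f, n):
    # Newton iteration g <- g*(2 - f*g): each step doubles the number of
    # correct coefficients, so O(log n) multiplication rounds suffice.
    assert f[0] != 0                       # f must be invertible mod x
    g = [Fraction(1) / f[0]]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = mul_trunc(f, g, prec)
        two_minus_fg = [2 - fg[0]] + [-c for c in fg[1:]]
        g = mul_trunc(g, two_minus_fg, prec)
    return g

f = [Fraction(c) for c in (1, 1)]                          # 1 + x
g = inverse_mod_xn(f, 6)
assert g == [Fraction(c) for c in (1, -1, 1, -1, 1, -1)]   # 1/(1+x) mod x**6
assert mul_trunc(f, g, 6) == [Fraction(1)] + [Fraction(0)] * 5
```

Combined with fast multiplication, this yields division with remainder in softly linear time.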
We may always depend upon it that algebra, which cannot be translated into good English and sound common sense, is bad algebra.
William Kingdon Clifford (1885)
Angling may be said to be so like the Mathematicks, that it can never be fully learnt.
Izaak Walton (1653)
At Kent he was curious about computer science but in just the introductory course Math 10 061 in Merrill Hall the math got to be too much for him.
John Updike (1981)
At the mathematical school, the proposition and demonstration were fairly written on a thin wafer, with ink composed of a cephalic tincture. This the student was to swallow upon a fasting stomach, and for three days following eat nothing but bread and water. As the wafer digested, the tincture mounted to his brain, bearing the proposition along with it.
Jonathan Swift (1726)
In this chapter, we present several algorithms for the factorization of univariate polynomials over finite fields. The two central steps are distinct-degree factorization, where irreducible factors of distinct degrees are separated from each other, and equal-degree factorization, where all irreducible factors of the input polynomial have the same degree. The reader who is happy with the basic result of probabilistic polynomial-time factorization need only read up to Section 14.4. The remaining sections discuss root finding (14.5), squarefree factorization (14.6), faster algorithms (14.7), methods using a different approach based on linear algebra (14.8), and the construction of irreducible polynomials and BCH codes (14.9 and 14.10). The implementations, briefly described in Section 15.7, show that this is an area where computer algebra has been tremendously successful: we can now factor enormously large polynomials.
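As a sketch of distinct-degree factorization, here is our own toy Python implementation over F_5, with dense coefficient lists (a serious implementation would use fast arithmetic). For a squarefree monic f, gcd(f, x^(p^d) − x) collects the irreducible factors of degree dividing d; removing the lower degrees first leaves exactly the degree-d part:

```python
p = 5  # the prime field F_p; a polynomial is its little-endian coefficient list

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poldivmod(a, f):
    # division with remainder by a monic polynomial f, over F_p
    a = [c % p for c in a]
    q = [0] * max(len(a) - len(f) + 1, 1)
    while len(a) >= len(f) and any(a):
        c, k = a[-1], len(a) - len(f)
        q[k] = c
        for i in range(len(f)):
            a[k + i] = (a[k + i] - c * f[i]) % p
        trim(a)
    return trim(q), trim(a)

def polmulmod(a, b, f):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return poldivmod(c, f)[1]

def make_monic(a):
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]

def polgcd(a, b):
    a, b = trim([c % p for c in a]), trim([c % p for c in b])
    while any(b):
        b = make_monic(b)
        a, b = b, poldivmod(a, b)[1]
    return make_monic(a)

def distinct_degree(f):
    # split a squarefree monic f into the products of its irreducible
    # factors of equal degree d, via g = gcd(f, x**(p**d) - x)
    factors, h, d = [], [0, 1], 0        # h starts out as x
    while len(f) - 1 >= 2 * (d + 1):     # f could still contain two factors
        d += 1
        hp = [1]                         # h <- h**p mod f, so h = x**(p**d)
        for _ in range(p):
            hp = polmulmod(hp, h, f)
        h = hp
        diff = h + [0] * (2 - len(h))
        diff[1] = (diff[1] - 1) % p      # h - x
        g = polgcd(f, diff)
        if len(g) > 1:
            factors.append((g, d))
            f = poldivmod(f, g)[0]       # exact division
    if len(f) > 1:                       # what remains is irreducible
        factors.append((f, len(f) - 1))
    return factors

# f = x*(x + 1)*(x**2 + 2) over F_5: a degree-1 part and a degree-2 part
f = [0, 2, 2, 1, 1]                      # x**4 + x**3 + 2*x**2 + 2*x
assert distinct_degree(f) == [([0, 1, 1], 1), ([2, 0, 1], 2)]
```

Each output pair still has to be split into its individual irreducible factors; that is the task of equal-degree factorization.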
Factorization of polynomials
The fundamental theorem of arithmetic states that every integer can be (essentially uniquely) factored as a product of primes. Similarly, for any field F the polynomials in F[x1,…,xn] can be (essentially uniquely) factored into a product of irreducible polynomials. In other words, ℤ and F[x1,…,xn] are Unique Factorization Domains (Sections 6.2, 25.2).
Isaac Newton (1642–1727) had a rather tough childhood. His father died during his mother's pregnancy and his mother remarried when he was three years old—and left little Isaac in the care of his grandmother.
In 1661, Newton entered Trinity College in Cambridge, and graduated with a BA in 1664, after an unimpressive student career. But then the university shut down for two years because of the Great Plague, and Newton, back in his native Woolsthorpe, laid the ground for much of his future work in the anni mirabiles 1664–1666. He invented calculus (his method of fluxions) and the law of gravitation, and showed by experiment the prismatic composition of white light. All this before he turned 25. (Inventing calculus means that he developed a widely applicable theory; its roots go back, of course, to the work of many people, Archimedes and Fermat among them.)
Back at Cambridge, Newton became Lucasian Professor of Mathematics, at the age of 26. His former teacher, Isaac Barrow, resigned from that position to make way for the greater scientist (and to prepare his own move into a better position as chaplain to King Charles II). At that time, Newton was the prototype of the “forgetful professor”, rather negligent about trifles such as his appearance. His nephew Humphrey Newton wrote: He very rarely went to Dine in ye Hall unless upon some Publick Dayes, & then, if He has not been minded, would go very carelesly, wth Shooes down at Heels, Stockins unty'd, surplice on, & his Head scarcely comb'd.
Recently, Constantinescu and Ilie proved a variant of the well-known periodicity theorem of Fine and Wilf in the case of two relatively prime abelian periods and conjectured a result for the case of two non-relatively prime abelian periods. In this paper, we answer some open problems they suggested. We show that their conjecture is false, but we give bounds that depend on the two abelian periods such that the conjecture is true for all words having length at least those bounds, and show that some of them are optimal. We also extend their study to the context of partial words, giving optimal lengths and describing an algorithm for constructing optimal words.
In this chapter, we present an important algorithmic approach to dealing with polynomials in several variables. Hironaka (1964) introduced in his work on resolution of singularities over ℂ—for which he received the Fields Medal, the “Nobel prize” in mathematics—a special type of basis for polynomial ideals, called a “standard basis”. Bruno Buchberger (1965) invented them independently in his Ph.D. thesis, and named them Gröbner bases after his thesis advisor Wolfgang Gröbner. They are a vital tool in modern computational algebraic geometry.
We start with two examples, one from robotics and one illustrating “automatic” proofs of theorems in geometry. We then introduce the basic notions of orders on monomials and the resulting division algorithm. Next come two important theorems, by Dickson and by Hilbert, that guarantee finite bases for certain ideals. Then we can define Gröbner bases and Buchberger's algorithm to compute them.
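The effect of choosing a monomial order can be illustrated directly on exponent vectors; the following Python sketch (ours, purely illustrative) compares the lexicographic and graded lexicographic orders on a few monomials in x, y, z:

```python
def lex_key(alpha):
    # lexicographic order: compare exponent vectors entry by entry
    return tuple(alpha)

def grlex_key(alpha):
    # graded lexicographic order: first by total degree, ties broken by lex
    return (sum(alpha), tuple(alpha))

# exponent vectors for monomials in x, y, z: (a, b, c) stands for x^a y^b z^c
monomials = [(1, 2, 0), (3, 0, 0), (0, 0, 4), (1, 1, 1)]

# under lex, x^3 dominates every monomial with a lower power of x
assert max(monomials, key=lex_key) == (3, 0, 0)
# under grlex, z^4 wins: it has the largest total degree
assert max(monomials, key=grlex_key) == (0, 0, 4)
```

The leading term of a polynomial, and hence the outcome of the division algorithm, depends on this choice of order.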
The end of this chapter presents two “geometric” applications: implicitization of algebraic varieties and solution of systems of polynomial equations. While these fall naturally into the realm of manipulating polynomials, the examples in Sections 24.1 and 24.2 below are less expected: logical proof systems and analysis of parallel processes. We cannot even mention numerous other applications, for example, in tiling problems and term rewriting. We finish with some facts—without proof—on the cost of computing Gröbner bases.
We compute the first three terms of the 1/d expansions for the growth constants and one-point functions of nearest-neighbour lattice trees and lattice (bond) animals on the integer lattice $\mathbb{Z}^d$, with rigorous error estimates. The proof uses the lace expansion, together with a new expansion for the one-point functions based on inclusion–exclusion.
One of the first graph-theoretical problems to be given serious attention (in the 1950s) was deciding whether a given integer sequence is the degree sequence of a simple graph (such a sequence is called graphical, for short). One method to solve this problem is the greedy algorithm of Havel and Hakimi, which is based on the swap operation. Another, closely related question is to find a sequence of swap operations to transform one graphical realization into another of the same degree sequence. This latter problem has received particular attention in the context of rapidly mixing Markov chain approaches to uniform sampling of all possible realizations of a given degree sequence. (This becomes a matter of interest in the context of the study of large social networks, for example.) Previously there were only crude upper bounds on the shortest possible length of such swap sequences between two realizations. In this paper we develop formulae (Gallai-type identities) for the swap-distances of any two realizations of simple undirected or directed degree sequences. These identities considerably improve the known upper bounds on the swap-distances.
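The greedy algorithm of Havel and Hakimi mentioned above can be sketched in a few lines of Python (our own illustrative version): repeatedly connect the vertex of largest remaining degree to the next-largest ones and check that no degree goes negative:

```python
def is_graphical(seq):
    # Havel-Hakimi test: a sequence of nonnegative integers is the degree
    # sequence of a simple graph iff the reduced sequence is
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False          # not enough remaining vertices to attach to
        # connect the highest-degree vertex to the d next-highest ones
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True

assert is_graphical([3, 3, 2, 2, 2])     # e.g. a 5-cycle plus one chord
assert not is_graphical([3, 3, 3, 1])    # three vertices cannot all avoid
                                         # overloading the degree-1 vertex
```

Recording the connections made along the way also yields one concrete realization, which is the starting point for the swap-sequence questions studied in the paper.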