I have often ventured in my life to advance propositions of which I was not sure; but all that I have written here has been in my head for nearly a year, and it is too much in my interest not to be mistaken for anyone to suspect me of stating theorems of which I do not have the complete proof.
You will publicly ask Jacobi and Gauss to give their opinion, not on the truth, but on the importance of these theorems.
After that, there will, I hope, be people who will find it to their advantage to decipher all this mess.
E. Galois
This chapter is devoted to the Galois approach to solving polynomial equations.
After introducing the setting of this theory, i.e. normal separable extensions K ⊃ k and the group G(K/k) of the k-automorphisms of K (Section 14.1), I discuss the correspondence between the intermediate fields F, K ⊇ F ⊇ k, and the subgroups of G(K/k): in this one-to-one correspondence, a field F corresponds to the subgroup of the k-automorphisms which leave F invariant, and a subgroup G corresponds to the subfield of the elements which are left invariant by every element of G (Section 14.2); we also characterize the subgroups which correspond to the normal extensions F ⊃ k.
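For reference, the correspondence just described can be stated in its standard form as follows (this display is my summary, not the chapter's own notation):

```latex
% Standard form of the Galois correspondence for a normal separable
% extension K \supset k (summary; not quoted from the chapter).
\[
  F \;\longmapsto\; G(K/F) := \{\sigma \in G(K/k) : \sigma(x) = x \ \text{for all } x \in F\},
  \qquad
  H \;\longmapsto\; K^{H} := \{x \in K : \sigma(x) = x \ \text{for all } \sigma \in H\}
\]
\[
  \text{are mutually inverse, inclusion-reversing bijections, and }
  F \supset k \ \text{is normal} \iff G(K/F) \trianglelefteq G(K/k),
  \ \text{in which case } G(F/k) \cong G(K/k)/G(K/F).
\]
```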
This chapter is devoted to two of Gauss' important contributions to equation solving:
Section 12.1 is devoted to a proof of the Fundamental Theorem of Algebra: I use the second proof by Gauss, which is the most algebraic of his four proofs;
Section 12.2 presents a résumé of the section of the Disquisitiones Arithmeticae devoted to the solution of the cyclotomic equation: I consider these results to be among the best pages of Computational Algebra, and I hope to be able to transmit this feeling to the reader.
These two sections also play the rôle of introducing the topics discussed in the last two chapters: the generalization of Kronecker's Method to real algebraic numbers and Galois Theory.
The Fundamental Theorem of Algebra
In order to present a proof of the Fundamental Theorem of Algebra and, mainly, to give a statement and a proof which can be easily generalized to an interesting setting (real closed fields), I must start by discussing the elementary and well-known difference between ℝ and ℂ, namely that the former is ‘ordered’ and the latter is not:
Definition 12.1.1. A field K is said to be ordered if there is a subset P ⊂ K, the positive cone, which satisfies the following conditions:
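The conditions themselves do not appear in this excerpt; in the standard formulation of an ordered field they read as follows (supplied here as an assumption about the chapter's own list):

```latex
% Standard positive-cone axioms (the usual formulation, supplied because
% the chapter's own list of conditions is missing from this excerpt).
\[
  a, b \in P \implies a + b \in P,
  \qquad
  a, b \in P \implies a \cdot b \in P,
\]
\[
  \text{and, for each } a \in K, \text{ exactly one of }\;
  a \in P, \quad a = 0, \quad -a \in P \;\text{ holds.}
\]
```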
Obviously the definition generalizes the trivial property of the ‘positiveness’ relation over ℝ, where P is the set of the positive numbers.
In this generalization, it is clearer if we work the other way around: a positive cone P ⊂ K induces on K the total ordering <_P defined by:
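The defining relation is not reproduced in this excerpt; the usual definition is:

```latex
% Usual definition of the ordering induced by the positive cone P
% (supplied as the standard formulation).
\[
  x <_P y \;\iff\; y - x \in P .
\]
```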
From now on I will write < omitting the dependence on P.
To conclude this part, I want to give a résumé of the state of the art of polynomial factorization over K[X], K a field:
The Berlekamp and Cantor–Zassenhaus Algorithms allow factorization if K is a finite field;
The Berlekamp–Hensel–Zassenhaus and Lenstra–Lenstra–Lovász factorization algorithms allow us to lift a factorization over ℤp to one over ℤ and, by the Gauss Lemma, to one over ℚ, so that factorization is available over the prime fields.
Algebraic extensions K = F(α) are dealt with by the Kronecker Algorithm (Section 16.3) if F is infinite, and by the Berlekamp Algorithm otherwise, while Hensel–Zassenhaus allows us to factorize multivariate polynomials in F[X1, …, Xn] once factorization over F is available; the Gauss Lemma then allows us to deal with transcendental extensions K = F(X), so that factorization is available over every field explicitly given in Kronecker's Model (a small computational illustration follows).
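As an illustration from the user's side, the sketch below exercises several of the cases listed above through the sympy library, which implements variants of the algorithms cited; this is my own example, not part of the chapter:

```python
# Illustration only (not from the chapter): polynomial factorization over
# several of the fields mentioned above, via sympy.
from sympy import symbols, factor_list, sqrt

x, y = symbols('x y')

# Over the finite field GF(5): Berlekamp / Cantor-Zassenhaus style methods.
print(factor_list(x**4 + 1, x, modulus=5))

# Over Q: a modular factorization is lifted (Hensel/Zassenhaus, or LLL)
# and the Gauss Lemma reduces the problem over Q to the problem over Z.
print(factor_list(x**4 - 1, x))

# Over the algebraic extension Q(sqrt(2)).
print(factor_list(x**2 - 2, x, extension=sqrt(2)))

# A multivariate example over Q, reduced to the univariate case.
print(factor_list(x**2*y - y**3, x, y))
```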
Van der Waerden's Example
Within the development of computational techniques for polynomial ideal theory, initiated by the benchmark work of G. Hermann, van der Waerden pointed out a fascinating limitation of the ability to build fields within Kronecker's Model.
A lambda-free logical framework takes parameterisation and definitions as the basic notions to provide schematic mechanisms for specification of type theories and their use in practice. The framework presented here, PAL+, is a logical framework for specification and implementation of type theories, such as Martin-Löf's type theory or UTT. As in Martin-Löf's logical framework (Nordström et al., 1990), computational rules can be introduced and are used to give meanings to the declared constants. However, PAL+ only allows one to talk about the concepts that are intuitively in the object type theories: types and their objects, and families of types and families of objects of types. In particular, in PAL+, one cannot directly represent families of families of entities, which could be done in other logical frameworks by means of lambda abstraction. PAL+ is in the spirit of de Bruijn's PAL for Automath (de Bruijn, 1980). Compared with PAL, PAL+ allows one to represent parametric concepts such as families of types and families of non-parametric objects, which can be used by themselves as totalities as well as when they are fully instantiated. Such parametric objects are represented by local definitions (let-expressions). We claim that PAL+ is a correct meta-language for specifying type theories (e.g., dependent type theories), as it has the advantage of exactly capturing the intuitive concepts in object type theories, and that its implementation reflects the actual use of type theories in practice. We shall study the meta-theory of PAL+ by developing its typed operational semantics and showing that it has nice meta-theoretic properties.
We show how to incorporate rewriting into the Calculus of Constructions and we prove that the resulting system is strongly normalizing with respect to beta and rewrite reductions. An important novelty of this paper is the possibility of defining rewriting rules over dependently typed function symbols. We prove strong normalization for any term rewriting system such that all function symbols satisfy the so-called star dependency condition and every rule is accepted by the Higher Order Recursive Path Ordering (which is an extension of the method created by Jouannaud and Rubio for the setting of the simply typed lambda calculus). The proof of strong normalization is done by using a typed version of reducibility candidates due to Coquand and Gallier. Our criterion is general enough to accept definitions by rewriting of many well-known higher order functions, for example dependent recursors for inductive types or proof-carrying functions. This makes it a very good candidate for inclusion in a proof assistant based on the Curry-Howard isomorphism.
Formalising mathematics in dependent type theory often requires representing sets as setoids, i.e. types with an explicit equality relation. This paper surveys some possible definitions of setoids and assesses their suitability as a basis for developing mathematics. According to whether the equality relation is required to be reflexive or not, we have total or partial setoids, respectively. There is only one definition of total setoid, but four different definitions of partial setoid, depending on four different notions of setoid function. We prove that one approach to partial setoids is unsuitable, and that the other approaches can be divided into two equivalence classes. One class contains definitions of partial setoids that are equivalent to total setoids; the other class contains an inherently different definition, which has been useful in the modeling of type systems. We also provide some elements of discussion on the merits of each approach from the viewpoint of formalizing mathematics. In particular, we exhibit a difficulty with the common definition of subsetoids in the partial setoid approach.
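For orientation, the two basic notions being compared can be written down in a proof assistant; the sketch below (my own, in Lean, not from the paper) records only the standard definitions, while the paper's four competing notions of setoid function are not reproduced here.

```lean
-- Sketch (not from the paper): the two basic notions compared.
-- A total setoid: a carrier with an equivalence relation as its equality.
structure TotalSetoid where
  carrier : Type
  eq      : carrier → carrier → Prop
  refl    : ∀ x, eq x x
  symm    : ∀ {x y}, eq x y → eq y x
  trans   : ∀ {x y z}, eq x y → eq y z → eq x z

-- A partial setoid: reflexivity is dropped, so an element x with ¬ eq x x
-- can be read as "undefined"; the paper's four variants differ in how
-- functions between such structures are required to behave.
structure PartialSetoid where
  carrier : Type
  eq      : carrier → carrier → Prop
  symm    : ∀ {x y}, eq x y → eq y x
  trans   : ∀ {x y z}, eq x y → eq y z → eq x z
```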
There is both a great unity and a great diversity in presentations of logic. The diversity is staggering indeed – propositional logic, first-order logic, higher-order logic belong to one classification; linear logic, intuitionistic logic, classical logic, modal and temporal logics belong to another one. Logical deduction may be presented as a Hilbert-style system of combinators, as a natural deduction system, as a sequent calculus, as proof nets of one variety or another, etc. Logic, originally a field of philosophy, turned into algebra with Boole, and more generally into meta-mathematics with Frege and Heyting. Professional logicians such as Gödel and later Tarski studied mathematical models, consistency and completeness, computability and complexity issues, set theory and foundations, etc. Logic became a very technical area of mathematical research in the last half century, with fine-grained analysis of the expressiveness of subtheories of arithmetic or set theory, detailed analysis of well-foundedness through ordinal notations, logical complexity, etc. Meanwhile, computer modelling developed a need for concrete uses of logic, first for the design of computer circuits, then more widely for increasing the reliability of software through the use of formal specifications and proofs of correctness of computer programs. This gave rise to more exotic logics, such as dynamic logic, Hoare-style logic of axiomatic semantics, logics of partial values (such as Scott's denotational semantics and Plotkin's domain theory) or of partial terms (such as Feferman's free logic), etc. The first actual attempts at mechanisation of logical reasoning through the resolution principle (automated theorem proving) had been disappointing, but their shortcomings gave rise to a considerable body of research, developing detailed knowledge about equational reasoning through canonical simplification (rewriting theory) and proofs by induction (following Boyer and Moore's successful integration of primitive recursive arithmetic within the LISP programming language). The special case of Horn clauses gave rise to a new paradigm of non-deterministic programming, called Logic Programming, developing later into Constraint Programming, blurring further the scope of logic. In order to study knowledge acquisition, researchers in artificial intelligence and computational linguistics studied exotic versions of modal logics such as Montague's intensional logic, epistemic logic, dynamic logic or hybrid logic. Some others tried to capture common sense, and modelled the revision of beliefs with so-called non-monotonic logics. For the careful craftsmen of mathematical logic, this was the final outrage, and Girard pronounced his anathema on such “montres à moutarde” (mustard watches).
TinkerType is a pragmatic framework for compact and modular description of formal systems (type systems, operational semantics, logics, etc.). A family of related systems is broken down into a set of clauses – individual inference rules – and a set of features controlling the inclusion of clauses in particular systems. Simple static checks are used to help maintain consistency of the generated systems. We present TinkerType and its implementation and describe its application to two substantial repositories of typed lambda-calculi. The first repository covers a broad range of typing features, including subtyping, polymorphism, type operators and kinding, computational effects, and dependent types. It describes both declarative and algorithmic aspects of the systems, and can be used with our tool, the TinkerType Assembler, to generate calculi either in the form of typeset collections of inference rules or as executable ML typecheckers. The second repository addresses a smaller collection of systems, and provides modularized proofs of basic safety properties.
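To make the clause-and-feature idea concrete, here is a toy sketch (my own, in Python; it does not reflect TinkerType's actual input format, its feature algebra, or the TinkerType Assembler): a repository of named inference-rule clauses, each guarded by the features it requires, from which the rule set of a particular system is assembled by selecting features.

```python
# Toy illustration (not TinkerType's actual format): clauses guarded by
# features, and assembly of a system by feature selection.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clause:
    name: str            # e.g. "T-Abs"
    requires: frozenset  # features that must be enabled for inclusion
    rule: str            # the inference rule, here just pretty-printed text

REPOSITORY = [
    Clause("T-Var", frozenset(), "x:T in G  =>  G |- x : T"),
    Clause("T-Abs", frozenset(), "G,x:T1 |- t : T2  =>  G |- \\x:T1.t : T1->T2"),
    Clause("T-Sub", frozenset({"subtyping"}), "G |- t : S, S <: T  =>  G |- t : T"),
    Clause("T-TAbs", frozenset({"polymorphism"}), "G,X |- t : T  =>  G |- \\X.t : All X.T"),
]

def assemble(features):
    """Select the clauses whose feature requirements are satisfied."""
    enabled = frozenset(features)
    return [c for c in REPOSITORY if c.requires <= enabled]

# A simply typed system and an F-sub-like system differ only in the
# features passed to assemble().
for clause in assemble({"subtyping", "polymorphism"}):
    print(clause.name, ":", clause.rule)
```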
This paper discusses an application of the higher-order abstract syntax technique to general-purpose theorem proving, yielding shallow embeddings of the binders of formalized languages. Higher-order abstract syntax has been applied with success in specialized logical frameworks which satisfy a closed-world assumption. As more general environments (like Isabelle/HOL or Coq) do not support this closed-world assumption, higher-order abstract syntax may yield exotic terms, that is, datatypes may produce more terms than there should actually be in the language. The work at hand demonstrates how such exotic terms can be eliminated by means of a two-level well-formedness predicate, further preparing the ground for an implementation of structural induction in terms of rule induction, and hence providing fully-fledged syntax analysis. In order to apply and justify well-formedness predicates, the paper develops a proof technique based on a combination of instantiations and reabstractions of higher-order terms. As an application, syntactic principles like the theory of contexts (as introduced by Honsell, Miculan, and Scagnetto) are derived, and adequacy of the predicates is shown, both within a formalization of the π-calculus in Isabelle/HOL.
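As a toy illustration of what an 'exotic term' is (my own sketch in Python, not the paper's Isabelle/HOL development), consider a higher-order representation of untyped λ-terms where the body of a binder is a function of the metalanguage; nothing stops such a function from inspecting the shape of its argument, which no genuine λ-term can do, and this is exactly what a well-formedness predicate is meant to rule out.

```python
# Toy higher-order abstract syntax for the untyped lambda-calculus
# (illustration only; the paper works in Isabelle/HOL, not Python).
from dataclasses import dataclass
from typing import Callable

class Term: ...

@dataclass
class App(Term):
    fun: Term
    arg: Term

@dataclass
class Lam(Term):
    body: Callable[[Term], Term]   # the binder is a metalanguage function

@dataclass
class Var(Term):                   # free variables, used when printing
    name: str

# A genuine term: the identity \x. x
identity = Lam(lambda x: x)

# An "exotic" term: the body inspects the shape of its argument, something
# no lambda-term can do; a well-formedness predicate should exclude it.
exotic = Lam(lambda x: Var("yes") if isinstance(x, App) else Var("no"))
```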
We investigate the asymptotic behaviour of the average displacement of the simple random walk on the Sierpiński graph. The existence of an oscillating factor in this asymptotics is shown rigorously. The proof depends mainly on the analysis of the corresponding generating function. Using a functional equation and techniques from complex analysis we obtain the desired properties of this generating function.
In this chapter, we develop a nearly linear-time library for constructing certain important subgroups of a given group. All algorithms are of the Monte Carlo type, since they are based on results of Section 4.5. However, if a base and SGS are known for the input group, then all algorithms in this chapter are of Las Vegas type (and in most cases there are even deterministic versions).
A current research project of great theoretical and practical interest is the upgrading of Monte Carlo permutation group algorithms to Las Vegas type. The claim we made in the previous paragraph implies that it is enough to upgrade the SGS constructions; we shall present a result in this direction in Section 8.3. To prove that all algorithms in this chapter are of Las Vegas type or deterministic, we always suppose that the input is an SGS S for some G ≤ Sym(Ω) relative to some base B, S satisfies S = S⁻¹, and transversals corresponding to the point stabilizer chain defined by B are coded in shallow Schreier trees. Throughout this chapter, shallow Schreier tree means a Schreier tree of depth at most 2 log |G|. We remind the reader that, by Lemma 4.4.2, given an arbitrary SGS for G, a new SGS S satisfying S = S⁻¹ and defining a shallow Schreier tree data structure can be computed in nearly linear time by a deterministic algorithm.
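To fix ideas, the sketch below (my own illustration in Python, not the book's data structure or its nearly linear-time routines) builds a Schreier tree for the orbit of a point by breadth-first search and recovers transversal elements by multiplying the edge labels along the path back to the root; the depth of this tree is what the 'shallow' condition bounds by 2 log |G|.

```python
# Illustration only: a breadth-first Schreier tree for the orbit of a point.
# Permutations of {0, ..., n-1} are stored as tuples of images.
from collections import deque

def compose(p, q):
    """Return the permutation 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(len(q)))

def schreier_tree(point, gens):
    """BFS orbit computation; parent[b] = (a, g) records the edge b = g(a)."""
    parent = {point: None}
    queue = deque([point])
    while queue:
        a = queue.popleft()
        for g in gens:
            b = g[a]
            if b not in parent:
                parent[b] = (a, g)
                queue.append(b)
    return parent

def transversal_element(point, b, parent, degree):
    """Multiply the edge labels on the path from b back to the root."""
    u = tuple(range(degree))      # identity permutation
    while b != point:
        a, g = parent[b]
        u = compose(u, g)         # labels nearer the root act first
        b = a
    return u

# Example: G = <(0 1 2 3), (0 1)> acting on 4 points.
gens = [(1, 2, 3, 0), (1, 0, 2, 3)]
tree = schreier_tree(0, gens)
for b in sorted(tree):
    u = transversal_element(0, b, tree, 4)
    assert u[0] == b              # u maps the base point 0 to b
```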
The central theme of this book is the design and analysis of nearly linear-time algorithms. Given a permutation group G = 〈S〉 ≤ Sym(Ω), with |Ω| = n, such algorithms run in O(|S|n log^c |G|) time for some constant c. If, as happens most of the time in practice, log |G| is bounded from above by a polylogarithmic function of n, then these algorithms run in time that is nearly linear as a function of the input length |S|n. In most cases, we did not make an effort to minimize the exponent c of log |G| in the running time, since achieving the smallest possible exponent c was not the most pressing problem from either the theoretical or the practical point of view. However, in families of groups where log |G| or, equivalently, the minimal base size of G is large, the nearly linear-time algorithms do not run as fast as their name indicates; in fact, for certain tasks, they may not be the asymptotically fastest algorithms at all. Another issue is the memory requirement of the algorithms, which again may be unnecessarily high. The purpose of this chapter is to describe algorithms that may be used in the basic handling of large-base groups. The practical limitation is their memory requirement, which is in most cases a quadratic function of n.
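The 'nearly linear' terminology can be made explicit with one substitution (this display is my own, under the stated assumption that log |G| is polylogarithmic in n):

```latex
% If log|G| <= (log n)^d for some constant d, then
\[
  O\bigl(|S|\, n \,\log^{c}|G|\bigr)
  \;\subseteq\;
  O\bigl(|S|\, n \,(\log n)^{c d}\bigr),
\]
% which is the input length |S| n times a polylogarithmic factor.
```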