New C++ implementations of Berlekamp's classical algorithms for factoring polynomials over finite fields, and of Niederreiter's new ones, are presented. Their performance on various types of inputs is compared.
Introduction
The basic problem of factoring univariate polynomials over the finite field Fq has received new impetus in the past few years from a new linearization technique developed by Niederreiter in [8], [9], [10]. Unlike Berlekamp's classical approach, which uses the Frobenius fixed point algebra in A := Fq[X]/(f) (where f is the polynomial to be factored), Niederreiter's method is based on the analysis of the solution space of certain differential equations in the field of rational functions Fq(X).
From the very beginning there have been several striking similarities between Niederreiter's and Berlekamp's algorithms in each step. Suppose for simplicity that the polynomial is monic and squarefree. Then in both algorithms a certain system of linear equations has to be set up and solved, leading to an Fq-subspace S of A whose dimension coincides with the number of irreducible factors of f. The elements of S can then be used to extract the irreducible factors of f by suitable gcd operations.
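As a concrete illustration of the Berlekamp side of this comparison, the following sketch factors a monic squarefree polynomial over F2. It is our own illustrative code, not the paper's C++ implementation; polynomials are encoded as Python ints with bit i holding the coefficient of X^i.

```python
def deg(a):                        # degree; deg(0) = -1 by this convention
    return a.bit_length() - 1

def pmod(a, m):                    # remainder of a modulo m in F2[X]
    while a and deg(a) >= deg(m):
        a ^= m << (deg(a) - deg(m))
    return a

def pgcd(a, b):                    # gcd in F2[X] by Euclid's algorithm
    while b:
        a, b = b, pmod(a, b)
    return a

def xpow_mod(e, m):                # X^e mod m, by repeated multiplication by X
    r = 1
    for _ in range(e):
        r = pmod(r << 1, m)
    return r

def nullspace_f2(rows, n):
    """Basis of {v : M v = 0} for the F2 matrix M given as n-bit row masks."""
    rows, piv, r = rows[:], [], 0
    for c in range(n):
        p = next((i for i in range(r, len(rows)) if (rows[i] >> c) & 1), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and (rows[i] >> c) & 1:
                rows[i] ^= rows[r]
        piv.append(c)
        r += 1
    basis = []
    for c in range(n):
        if c in piv:
            continue
        v = 1 << c                 # free variable c set to 1
        for i, pc in enumerate(piv):
            if (rows[i] >> c) & 1:
                v |= 1 << pc
        basis.append(v)
    return basis

def berlekamp_split(f):
    """Nontrivial factor of a monic squarefree f over F2, or f if irreducible."""
    n = deg(f)
    # Row i of Q - I: coefficient vector of X^(2i) mod f, with bit i flipped.
    qi = [xpow_mod(2 * i, f) ^ (1 << i) for i in range(n)]
    # h^2 = h (mod f)  <=>  h (Q - I) = 0: take the nullspace of the transpose.
    qt = [sum(((qi[i] >> j) & 1) << i for i in range(n)) for j in range(n)]
    for v in nullspace_f2(qt, n):
        if deg(v) < 1:             # the constants always lie in the subspace
            continue
        for u in (v, v ^ 1):       # over F2: f = gcd(f, h) * gcd(f, h + 1)
            g = pgcd(f, u)
            if 0 < deg(g) < deg(f):
                return g
    return f
```

For example, f = X^3 + 1 = (X + 1)(X^2 + X + 1) is encoded as 0b1001, and `berlekamp_split(0b1001)` returns one of the two factors (0b11 or 0b111). The dimension of the computed nullspace equals the number of irreducible factors of f, matching the subspace S described above.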
Niederreiter's algorithm has the following practical advantage: in the case of small fields the linear equations to be solved can be set up very efficiently; in particular, over F2 they can be read off directly from the coefficients of f.
Abstract. This paper is a written version of the talk of the same title given by the author at the Third International Conference on Finite Fields and Their Applications in Glasgow, 1995. We give a survey of recent results on the characterization, the structure, the enumeration, and the construction of completely free elements and normal bases in finite dimensional extensions over finite fields.
A Strengthening of the Normal Basis Theorem. If E is a finite dimensional Galois extension over a field F with Galois group G, then the Normal Basis Theorem states that the additive group (E, +) of E is a cyclic module over the group algebra FG, i.e., there exists an element w in E such that the set {g(w) | g ∈ G} of G-conjugates of w is an F-basis of E. Such a basis is called a normal basis in E over F. Every generator w of E as FG-module is called a normal basis generator in E over F. For the sake of simplicity such an element is also called free in E over F.
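To make the definition concrete, here is a small numerical check (our illustration with an assumed setup, not part of the survey) that an element is free in GF(8) over F2: the three Frobenius conjugates w, w^2, w^4 must be linearly independent over F2. We model GF(8) as F2[X]/(X^3 + X + 1), with field elements as 3-bit integers.

```python
MOD = 0b1011                        # X^3 + X + 1, irreducible over F2

def deg(a):
    return a.bit_length() - 1

def pmod(a, m):                     # reduce a modulo m in F2[X]
    while a and deg(a) >= deg(m):
        a ^= m << (deg(a) - deg(m))
    return a

def frob(a):                        # Frobenius a -> a^2; squaring is F2-linear
    r = 0
    for i in range(3):
        if (a >> i) & 1:
            r ^= 1 << (2 * i)       # (sum a_i X^i)^2 = sum a_i X^(2i) over F2
    return pmod(r, MOD)

def rank_f2(rows, n=3):             # rank of an F2 matrix given as bit masks
    rows, r = rows[:], 0
    for c in range(n):
        p = next((i for i in range(r, len(rows)) if (rows[i] >> c) & 1), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and (rows[i] >> c) & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

def is_free(w):
    """True iff {w, w^2, w^4} is an F2-basis of GF(8), i.e. w is free over F2."""
    return rank_f2([w, frob(w), frob(frob(w))]) == 3
```

For instance, w = X (encoded 0b010) is not free, since X, X^2 and X^4 = X^2 + X are linearly dependent over F2, whereas w = X + 1 (encoded 0b011) is free.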
If H is a subgroup of G, and Fix(H) is the intermediate field of E over F corresponding to H under the Galois correspondence, i.e., the subfield of E fixed elementwise by H, then (E, +) likewise carries the structure of a module over the group algebra Fix(H)H.
Abstract – A general algebraic method for decoding all cyclic codes up to their actual minimum distance d is presented. The full error-correcting capability t = ⌊(d − 1)/2⌋ of the codes is therefore achieved. In contrast to the decoding method recently suggested by Chen et al., our method uses for the first time characteristic sets instead of Gröbner bases as the algebraic tool to solve the system of multivariate syndrome equations. The characteristic sets method is generally faster than the Gröbner bases method.
A new strategy called “Fill-Holes” method is also presented. It uses Gröbner bases or characteristic sets to find certain unknown syndromes and then combines the computational methods with the well-implemented BCH decoding algorithm.
One important objective in coding theory has always been the construction of algebraic algorithms that are capable of decoding all cyclic codes up to their actual minimum distance. Full error-correcting capabilities of the codes can only be achieved when such algorithms are available. For many years, algebraic decoding of cyclic codes has been constrained by the lower bound on the minimum distance of the codes. For example, the commonly used Berlekamp-Massey algorithm is known to be restricted to the BCH bound when it is used to decode general cyclic codes. This restriction can be traced to the fact that the algorithm requires the syndromes to be contiguous in Newton's identities.
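To make the mention of the Berlekamp-Massey algorithm concrete, here is a textbook sketch of it over F2 (our illustration, not the decoder discussed in this paper). Given a binary sequence, it returns the linear complexity L, i.e. the length of the shortest LFSR generating the sequence; in BCH decoding the sequence of contiguous syndromes plays this role.

```python
def berlekamp_massey(s):
    """Linear complexity of the binary sequence s (list of 0/1) over F2."""
    n = len(s)
    C = [0] * (n + 1)                 # current connection polynomial
    B = [0] * (n + 1)                 # copy from the last length change
    C[0] = B[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does C correctly predict s[i] from the previous L bits?
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            continue
        T = C[:]
        shift = i - m
        for j in range(n + 1 - shift):
            C[j + shift] ^= B[j]      # C(x) += x^shift * B(x) over F2
        if 2 * L <= i:                # the register length must grow
            L, m, B = i + 1 - L, i, T
    return L
```

For example, the sequence 1,0,1,1,0,1 produced by the recurrence s_n = s_{n-1} + s_{n-2} has linear complexity 2, while a run of n-1 zeros followed by a single 1 has the maximal complexity n.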
ABSTRACT. We present a survey of recent work of the authors in which sequences of quasirandom points are constructed by new methods based on global function fields. These methods yield significant improvements on all earlier constructions. The most powerful of these methods employ global function fields with many rational places, or equivalently algebraic curves over finite fields with many rational points. With the help of class field theory for global function fields, it can be shown that our constructions are best possible in the sense of the order of magnitude of quality parameters. The paper also contains a new construction of sequences of quasirandom points and new facts about the earlier constructions designed by the authors.
Introduction
The motivation for the work that we want to present here stems from the theory of uniform distribution of sequences in number theory and from quasi-Monte Carlo methods in numerical analysis. A key problem in these areas is how to distribute points as uniformly as possible over an s-dimensional unit cube Is = [0, 1]s, s ≥ 1. A precise formulation of this problem will be given below. The essence of our work is that methods based on global function fields (or, equivalently, on algebraic curves over finite fields) yield excellent constructions of (finite) point sets and (infinite) sequences with strong uniformity properties. In fact, these methods are so powerful that they lead to constructions which are, in a sense to be explained later, best possible.
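The simplest classical example of such a uniformly spread sequence, added here purely for illustration (the function-field constructions surveyed in the paper are far more sophisticated), is the one-dimensional van der Corput sequence: the n-th point is the radical inverse of n, obtained by mirroring the base-b digits of n about the radix point.

```python
def radical_inverse(n, b=2):
    """Van der Corput radical inverse: reflect the base-b digits of n."""
    x, f = 0.0, 1.0 / b
    while n:
        n, d = divmod(n, b)
        x += d * f
        f /= b
    return x

# An s-dimensional Halton point pairs radical inverses in pairwise coprime
# bases, e.g. (radical_inverse(n, 2), radical_inverse(n, 3)) for the square.
```

In base 2 the first points are 1/2, 1/4, 3/4, 1/8, 5/8, ...: every dyadic subinterval of [0, 1] receives its fair share of points, which is the kind of uniformity property that the notion of discrepancy makes precise.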
Abstract – Recently, a new direction in coding theory has been to apply the Gray map to codes that are linear over Z4 to obtain binary nonlinear codes better than comparable binary linear codes. The distance properties of these codes as well as the correlation properties of sequences obtained from Z4-linear codes depend in many cases on exponential sums over Galois rings. We present a survey of recent results on exponential sums over Galois rings and their applications to coding theory and sequence designs.
In an important paper, Hammons et al. show how to construct well-known binary nonlinear codes, such as the Kerdock and Delsarte-Goethals codes, by applying the Gray map to linear codes over Z4. Further, they resolve an old open problem in coding theory by explaining why the weight enumerators of the nonlinear Kerdock and Preparata codes satisfy the MacWilliams identities. Nechaev has shown that the Kerdock code punctured in two coordinates is equivalent to a cyclic (but still nonlinear) code. The coordinate permutation that yields the binary cyclic code is identified by making a connection between the Kerdock code and a Z4-linear code. These discoveries have led to a strong interest in Z4-linear codes, and recently several other binary nonlinear codes that are better than comparable binary linear codes have been found by applying the Gray map to Z4-linear codes.
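The Gray map itself is tiny, and the following sketch (our illustration, with assumed function names) shows the property that makes it useful: it is an isometry from Z4^n with the Lee metric to F2^(2n) with the Hamming metric, but it is not linear, which is why Gray images of Z4-linear codes are binary nonlinear codes.

```python
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # the Gray map on Z4

def gray_map(word):
    """Map a Z4 word of length n to a binary word of length 2n."""
    out = []
    for z in word:
        out.extend(GRAY[z % 4])
    return out

def lee_weight(word):
    """Lee weight on Z4: wt(0) = 0, wt(1) = wt(3) = 1, wt(2) = 2."""
    return sum(min(z % 4, 4 - z % 4) for z in word)
```

For any Z4 word the Hamming weight of its Gray image equals its Lee weight; for instance [0, 1, 2, 3] maps to 00 01 11 10, of Hamming weight 4 = 0 + 1 + 2 + 1.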
Many of the new codes are constructed from extended cyclic codes over Z4.
Bayesian approaches have enjoyed a great deal of recent success in their application to problems in computer vision (Grenander, 1976-1981; Bolle & Cooper, 1984; Geman & Geman, 1984; Marroquin et al., 1985; Szeliski, 1989; Clark & Yuille, 1990; Yuille & Clark, 1993; Madarasmi et al., 1993). This success has led to an emerging interest in applying Bayesian methods to modeling human visual perception (Bennett et al., 1989; Kersten, 1990; Knill & Kersten, 1991; Richards et al., 1993). The chapters in this book represent to a large extent the fruits of this interest: a number of new theoretical frameworks for studying perception and some interesting new models of specific perceptual phenomena, all founded, to varying degrees, on Bayesian ideas. As an introduction to the book, we present an overview of the philosophy and fundamental concepts which form the foundation of Bayesian theory as it applies to human visual perception. The goal of the chapter is two-fold: first, to serve as a tutorial on the basics of the Bayesian approach for readers who are unfamiliar with it, and second, to characterize the type of theory of perception the approach is meant to provide. The latter topic, by its meta-theoretic nature, is necessarily subjective. This introduction represents the views of the authors in this regard, not necessarily those held by other contributors to the book.
First, we introduce the Bayesian framework as a general formalism for specifying the information in images which allows an observer to perceive the world.
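As a minimal numerical illustration of that formalism (a toy example of ours, not drawn from any chapter), Bayes' rule combines a prior over scene hypotheses with the likelihood of the observed image data to yield a posterior over interpretations.

```python
def posterior(prior, likelihood):
    """P(scene | image) proportional to P(image | scene) * P(scene)."""
    joint = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(joint.values())                 # P(image), the normalizing constant
    return {s: p / z for s, p in joint.items()}

# Hypothetical numbers: the prior favors convex shapes, but the observed
# shading pattern is far more probable under the concave interpretation.
prior = {"convex": 0.7, "concave": 0.3}
likelihood = {"convex": 0.2, "concave": 0.8}
post = posterior(prior, likelihood)         # the concave reading now wins
```

The example shows the characteristic Bayesian trade-off: a strong enough likelihood can overturn the prior, and the percept corresponds to the hypothesis with maximal posterior probability.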