Finite fields are used in most of the known constructions of pseudorandom sequences and in the analysis of the periods, correlations, and linear spans of linear feedback shift register (LFSR) sequences and nonlinearly generated sequences. They are also important in many cryptographic primitives, such as the Diffie-Hellman key exchange, the Digital Signature Standard (DSS), ElGamal public-key encryption, elliptic curve public-key cryptography, and LFSR-based (or torus-based) public-key cryptography. Finite fields and shift register sequences are also used in algebraic error-correcting codes, in code-division multiple-access (CDMA) communications, and in many other applications beyond the scope of this book. This chapter gives a description of these fields and some properties that are frequently used in sequence design and cryptography. Section 3.1 introduces the algebraic structures of groups, rings, and fields, and polynomials. Section 3.2 shows the construction of the finite field GF(pⁿ). Section 3.3 presents the basic theory of finite fields. Section 3.4 discusses minimal polynomials. Section 3.5 introduces subfields, trace functions, bases, and the computation of minimal polynomials over intermediate subfields. The computation of a power of a trace function is shown in Section 3.6, and the last section presents some counting numbers related to finite fields.
Algebraic structures
In this section, we give the definitions of the algebraic structures of groups, rings and fields, polynomials, and some concepts that will be needed for the study of finite fields in the later sections.
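To make the target of these definitions concrete early on, the following Python sketch builds the small field GF(2³) as GF(2)[x] modulo the irreducible polynomial x³ + x + 1 and checks that the nonzero elements form a cyclic multiplicative group. The bit-vector encoding and function names are illustrative choices for this sketch, not the chapter's notation.

```python
# Sketch: arithmetic in GF(2^3), constructed as GF(2)[x] / (x^3 + x + 1).
# A field element is a 3-bit integer whose bits are polynomial coefficients.
MOD = 0b1011  # x^3 + x + 1, irreducible (in fact primitive) over GF(2)

def gf8_mul(a, b):
    """Multiply two elements of GF(2^3): carry-less product, then reduce."""
    r = 0
    for i in range(3):            # schoolbook product over GF(2)
        if (b >> i) & 1:
            r ^= a << i
    for i in range(4, 2, -1):     # reduce degrees 4 and 3 using x^3 = x + 1
        if (r >> i) & 1:
            r ^= MOD << (i - 3)
    return r

# The 7 nonzero elements form a cyclic group; x (encoded as 0b010) generates it.
powers = []
p = 1
for _ in range(7):
    p = gf8_mul(p, 0b010)
    powers.append(p)
print(sorted(powers))  # every nonzero element appears once, and x^7 = 1
```

The same recipe, with a primitive polynomial of degree n over GF(p), yields GF(pⁿ) in general.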
In this chapter, we introduce constructions for signal sets with low crosscorrelation. These sequences have important applications in wireless CDMA communications. There are three classic constructions for signal sets with low correlation, namely, the Gold-pair construction, the Kasami (small) set construction, and the bent function signal set construction. In Section 10.1, we introduce some basic concepts and properties of the crosscorrelation of sequences or functions, signal sets, and the one-to-one correspondences among sequences, polynomial functions, and Boolean functions. After that, the three classic constructions are presented in Sections 10.2, 10.3, and 10.4, respectively. With the development of new technologies, the demand for controlling other parameters, such as the linear spans of the sequences and the sizes of the signal sets, has increased. We therefore provide two examples of constructions that sacrifice ideal correlation in order to improve other properties, in Sections 10.5 and 10.6, respectively. One is the interleaved construction for large linear spans, and the other uses ℤ₄ sequences to obtain large signal set sizes.
Crosscorrelation, signal sets, and Boolean functions
In this section, we discuss some basic properties of the crosscorrelation of sequences (some of which were discussed in Chapter 1), refine the concept of signal sets, and develop the one-to-one correspondence between sequences and Boolean functions. (Note that the one-to-one correspondence between sequences and functions is discussed in Chapter 6.)
We will use the following notation throughout this section.
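As a concrete reference point for the correlation measures discussed in this chapter, the periodic crosscorrelation of two binary sequences of the same period can be computed directly from its definition; the short Python sketch below uses the {0, 1} alphabet with the usual (−1)ᵇⁱᵗ mapping, and the small example sequence is an illustrative choice, not one of the book's signal sets.

```python
def cross_correlation(a, b):
    """Periodic crosscorrelation C(tau) = sum over t of (-1)^(a_t + b_{t+tau})
    for two {0,1}-valued sequences a and b with the same period N."""
    N = len(a)
    return [sum((-1) ** (a[t] ^ b[(t + tau) % N]) for t in range(N))
            for tau in range(N)]

# Autocorrelation is the special case b = a.
seq = [0, 0, 1, 0, 1, 1, 1]          # one period of the m-sequence from x^3 + x + 1
print(cross_correlation(seq, seq))   # peak 7 at tau = 0, and -1 elsewhere

# Correlating against a cyclic shift moves the peak to the realigning shift.
shifted = seq[2:] + seq[:2]
print(cross_correlation(seq, shifted).index(7))  # -> 5
```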
Before 1997, only two essentially different constructions that were not based on a number theory approach were known for cyclic Hadamard difference sets with parameters (2ⁿ − 1, 2ⁿ⁻¹ − 1, 2ⁿ⁻² − 1) or, equivalently, for binary 2-level autocorrelation sequences of period 2ⁿ − 1 for arbitrary n. One is the Singer construction, which gives m-sequences, and the other is the GMW construction, which produces four types of GMW sequences. Exhaustive searches had been done for n = 7, 8, and 9 in 1971, 1983, and 1992, respectively. However, there was no explanation for several of the sequences found for these lengths that did not follow from then-known constructions. In this chapter, we will describe the remarkable progress in finding new constructions for 2-level autocorrelation sequences of period 2ⁿ − 1 since 1997. (An exhaustive search was also done for n = 10 in 1998.) The order of presentation of these remarkable constructions will follow the history of the developments of this research. Section 9.1 presents constructions of 2-level autocorrelation sequences having multiple trace terms. In Section 9.2, the hyper-oval constructions are introduced. Section 9.3 shows the Kasami power construction. In the last section, we introduce the iterative decimation-Hadamard transform, a method of searching for new sequences with 2-level autocorrelation.
Multiple trace term sequences
In this section, we present 3-term sequences, 5-term sequences, and the Welch-Gong transformation sequences.
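Before turning to the multiple-term constructions, it may help to see the 2-level autocorrelation property itself on a small case. The sketch below generates an m-sequence of period 2⁴ − 1 = 15 from the primitive polynomial x⁴ + x + 1 (an illustrative choice of polynomial and seed) and checks that every out-of-phase autocorrelation value equals −1.

```python
def lfsr_msequence(n_bits=30):
    """Fibonacci LFSR for the primitive polynomial x^4 + x + 1:
    the recurrence is a_{t+4} = a_{t+1} + a_t over GF(2); any nonzero
    seed produces the same m-sequence up to a cyclic shift."""
    state = [1, 0, 0, 0]
    out = []
    for _ in range(n_bits):
        out.append(state[0])
        state = state[1:] + [state[0] ^ state[1]]
    return out

seq = lfsr_msequence()[:15]   # one full period, 2^4 - 1 = 15

def autocorr(s, tau):
    """Periodic autocorrelation of a {0,1} sequence at shift tau."""
    N = len(s)
    return sum((-1) ** (s[t] ^ s[(t + tau) % N]) for t in range(N))

print(autocorr(seq, 0))                              # in-phase value: 15
print({autocorr(seq, tau) for tau in range(1, 15)})  # out-of-phase: {-1}
```

The sequences of this chapter achieve exactly this 2-level profile while not being m-sequences.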
The prehistory of our subject can be backdated to 1202, with the appearance of Leonardo Pisano's Liber Abaci (Fibonacci 1202), containing the famous problem about breeding rabbits that leads to the linear recursion fₙ₊₁ = fₙ + fₙ₋₁ for n ≥ 2, f₁ = f₂ = 1, which yields the Fibonacci sequence. Additional background can be attributed to Euler, Gauss, Kummer, and especially Edouard Lucas (Lucas 1876). For the history proper, the earliest milestones are papers by O. Ore (Ore 1934), R.E.A.C. Paley (Paley 1933), and J. Singer (Singer 1938). Ore started the systematic study of linear recursions over finite fields (including GF(2)), Paley inaugurated the search for constructions yielding Hadamard matrices, and Singer discovered the Singer difference sets that are mathematically equivalent to binary maximum length linear shift register sequences (also known as pseudorandom sequences, pseudonoise (PN) sequences, or m-sequences).
It appears that by the early 1950s devices that performed the modulo 2 sum of two positions on a binary delay line were being considered as key generators for stream ciphers in cryptographic applications. The question of what the periodicity of the resulting output sequence would be seemed initially mysterious. This question was explored outside the cryptographic community by researchers at a number of locations in the 1953–1956 time period, resulting in company reports by E. N. Gilbert at Bell Laboratories, by N. Zierler at Lincoln Laboratories, by L. R. Welch at the Jet Propulsion Laboratory, by S. W. Golomb at the Glenn L. Martin Company (now part of Lockheed-Martin), and probably by others as well.
We show that the standard normalization-by-evaluation construction for the simply-typed λβη-calculus has a natural counterpart for the untyped λβ-calculus, with the central type-indexed logical relation replaced by a “recursively defined” invariant relation, in the style of Pitts. In fact, the construction can be seen as generalizing a computational-adequacy argument for an untyped, call-by-name language to normalization instead of evaluation. In the untyped setting, not all terms have normal forms, so the normalization function is necessarily partial. We establish its correctness in the senses of soundness (the output term, if any, is in normal form and β-equivalent to the input term); identification (β-equivalent terms are mapped to the same result); and completeness (the function is defined for all terms that do have normal forms). We also show how the semantic construction enables a simple yet formal correctness proof for the normalization algorithm, expressed as a functional program in an ML-like, call-by-value language. Finally, we generalize the construction to produce an infinitary variant of normal forms, namely Böhm trees. We show that the three-part characterization of correctness, as well as the proofs, extend naturally to this generalization.
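To make the shape of such a construction concrete, here is a minimal normalization-by-evaluation sketch for the untyped λβ-calculus: de Bruijn-indexed terms, semantic closures as host-language functions, and neutral values for stuck applications. This is only an illustration of the general recipe in Python, not the paper's construction or its ML program, and like any untyped normalizer it diverges on terms without normal forms.

```python
from dataclasses import dataclass

# Syntax: de Bruijn-indexed lambda terms.
@dataclass(frozen=True)
class Var: idx: int
@dataclass(frozen=True)
class Lam: body: object
@dataclass(frozen=True)
class App: fn: object; arg: object

# Semantics: closures wrap host functions; neutral values (free variables
# and stuck applications) use de Bruijn *levels*.
@dataclass(frozen=True)
class NVar: lvl: int
@dataclass
class NApp: fn: object; arg: object
@dataclass
class Clo: run: object

def evaluate(t, env):
    if isinstance(t, Var):
        return env[t.idx]
    if isinstance(t, Lam):
        return Clo(lambda v, t=t, env=env: evaluate(t.body, [v] + env))
    f, a = evaluate(t.fn, env), evaluate(t.arg, env)
    return f.run(a) if isinstance(f, Clo) else NApp(f, a)

def readback(v, depth):
    if isinstance(v, Clo):                 # go under the binder with a fresh variable
        return Lam(readback(v.run(NVar(depth)), depth + 1))
    if isinstance(v, NVar):
        return Var(depth - 1 - v.lvl)      # level-to-index conversion
    return App(readback(v.fn, depth), readback(v.arg, depth))

def normalize(t):
    """Partial: diverges on terms with no normal form (e.g. Omega)."""
    return readback(evaluate(t, []), 0)

I = Lam(Var(0))
K = Lam(Lam(Var(1)))
print(normalize(App(App(K, I), Lam(Var(0)))) == I)  # K I I beta-reduces to I
```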
A grammar formalism based upon CHR is proposed, analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. These grammars execute as robust bottom-up parsers with an inherent treatment of ambiguity and a high flexibility to model various linguistic phenomena. The formalism extends previous logic-programming-based grammars with a form of context-sensitive rules and the possibility to include extra-grammatical hypotheses in both the head and the body of grammar rules. Among the applications are straightforward implementations of Assumption Grammars and abduction under integrity constraints for language analysis. CHR grammars emerge as a powerful tool for the specification and implementation of language processors and may be proposed as a new standard for bottom-up grammars in logic programming.
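The bottom-up flavor of such grammars can be imitated in a few lines of ordinary code: lexical constituents enter a constraint store, and rules fire whenever two adjacent spans match, until a fixpoint is reached. The Python sketch below is only an analogy for readers without a CHR system at hand; the category names, lexicon, and binary-rule encoding are invented for the example.

```python
def recognize(tokens, lexicon, rules):
    """Naive bottom-up recognizer: the store holds facts (Category, i, j)
    meaning tokens[i:j] forms that category; binary rules combine
    adjacent spans until no new fact can be added."""
    store = {(lexicon[w], i, i + 1) for i, w in enumerate(tokens)}
    changed = True
    while changed:
        changed = False
        for (a, i, j) in list(store):
            for (b, j2, k) in list(store):
                if j2 == j and (a, b) in rules:
                    fact = (rules[(a, b)], i, k)
                    if fact not in store:
                        store.add(fact)
                        changed = True
    return store

lexicon = {"the": "Det", "cat": "N", "sleeps": "V"}
rules = {("Det", "N"): "NP", ("NP", "V"): "S"}
store = recognize("the cat sleeps".split(), lexicon, rules)
print(("S", 0, 3) in store)  # the whole input is recognized as a sentence
```

Unlike this toy fixpoint loop, CHR grammars inherit from CHR the ability to keep, consume, or add hypotheses selectively, which is what enables the context-sensitive and abductive rules mentioned above.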
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) allows the application of intelligent search strategies to solve Constraint Satisfaction Problems (CSPs). This presentation compares an improved version of conflict-directed backjumping and two variants of dynamic backtracking with chronological backtracking on some of the AIM instances, a benchmark set of random 3-SAT problems. A CHR implementation of a Boolean constraint solver combined with these different search strategies in Java is thus compared with a CHR implementation of the same Boolean constraint solver combined with chronological backtracking in SICStus Prolog. This comparison shows that adding “intelligence” to the search process may reduce the number of search steps dramatically. Furthermore, the Java implementations of the intelligent strategies are in most cases faster at runtime than the implementations of chronological backtracking. More specifically, conflict-directed backjumping is even faster than the SICStus Prolog implementation of chronological backtracking, although our Java implementation of CHR lacks the optimisations made in the SICStus Prolog system.
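For readers who want a concrete sense of what counting “search steps” means here, a bare-bones chronological backtracking solver for CNF formulas can count its decisions explicitly. The Python sketch below illustrates only the baseline strategy (conflict-directed backjumping and dynamic backtracking require extra bookkeeping of conflict sets) and is unrelated to the CHR implementations being benchmarked; the clause encoding is an invented convention for the example.

```python
def backtrack(clauses, n):
    """Chronological backtracking for CNF over variables 1..n.
    A clause is a list of nonzero ints; a negative int is a negated variable.
    Returns (satisfying assignment or None, number of decision steps)."""
    steps = 0

    def falsified(assign):
        # A clause fails only if all of its literals are assigned and false.
        return any(all(abs(l) <= len(assign) and assign[abs(l) - 1] != (l > 0)
                       for l in clause)
                   for clause in clauses)

    def extend(assign):
        nonlocal steps
        if len(assign) == n:
            return assign
        for value in (True, False):
            steps += 1
            trial = assign + [value]
            if not falsified(trial):
                result = extend(trial)
                if result is not None:
                    return result
        return None  # chronological: undo only the most recent choice

    return extend([]), steps

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model, steps = backtrack([[1, 2], [-1, 2], [-2, 3]], n=3)
print(model, steps)
```

Backjumping strategies improve on this loop by returning past several decision levels at once when the conflicting variables are older than the most recent choice.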
FLUX is a programming method for the design of agents that reason logically about their actions and sensor information in the presence of incomplete knowledge. The core of FLUX is a system of Constraint Handling Rules, which enables agents to maintain an internal model of their environment by which they control their own behavior. The general action representation formalism of the fluent calculus provides the formal semantics for the constraint solver. FLUX exhibits excellent computational behavior due to both a carefully restricted expressiveness and the inference paradigm of progression.
We introduce adhesive categories, which are categories with structure ensuring that pushouts along monomorphisms are well-behaved, as well as quasiadhesive categories, which restrict attention to regular monomorphisms. Many examples of graphical structures used in computer science are shown to be examples of adhesive and quasiadhesive categories. Double-pushout graph rewriting generalizes well to rewriting on arbitrary adhesive and quasiadhesive categories.
This paper provides a framework to address termination problems in term rewriting by using orderings induced by algebras over the reals. The generation of such orderings is parameterized by concrete monotonicity requirements which are connected with different classes of termination problems: termination of rewriting, termination of rewriting by using dependency pairs, termination of innermost rewriting, top-termination of infinitary rewriting, termination of context-sensitive rewriting, etc. We show how to define term orderings based on algebraic interpretations over the real numbers which can be used for these purposes. From a practical point of view, we show how to automatically generate polynomial algebras over the reals by using constraint-solving systems to obtain the coefficients of a polynomial in the domain of the real or rational numbers. Moreover, as a consequence of our work, we argue that software systems which are able to generate constraints for obtaining polynomial interpretations over the naturals which prove termination of rewriting (e.g., AProVE, CiME, and TTT) are potentially able to obtain suitable interpretations over the reals by just solving the constraints in the domain of the real or rational numbers.
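As a toy instance of the general method, consider the single rewrite rule f(s(x)) → s(f(x)) with the real-valued interpretations [f](x) = 2x and [s](x) = x + 1; both the rule and the interpretation are invented for illustration, not taken from the paper. The left-hand side maps to 2x + 2 and the right-hand side to 2x + 1, so every rewrite step strictly decreases the interpreted value over the nonnegative reals, which proves termination of the rule. A few lines of Python can spot-check the inequality, though an actual tool discharges it symbolically over the constraint domain:

```python
# Polynomial interpretations over the nonnegative reals (illustrative only).
f = lambda x: 2 * x        # [f](x) = 2x   (monotone on x >= 0)
s = lambda x: x + 1        # [s](x) = x + 1

lhs = lambda x: f(s(x))    # [f(s(x))] = 2x + 2
rhs = lambda x: s(f(x))    # [s(f(x))] = 2x + 1

# Strict decrease lhs > rhs for all x >= 0 orients f(s(x)) -> s(f(x)).
samples = [0.0, 0.5, 1.0, 3.14, 100.0]
print(all(lhs(x) > rhs(x) for x in samples))  # numeric spot check only
```

Note that with coefficients restricted to the naturals no linear interpretation with [s](x) = x + 1 could use a fractional slope; allowing real or rational coefficients is exactly the extra freedom the paper advocates.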
In this paper we discuss the optimizing compilation of Constraint Handling Rules (CHRs). CHRs are a multi-headed committed choice constraint language, commonly applied for writing incremental constraint solvers. CHRs are usually implemented as a language extension that compiles to the underlying language. In this paper we show how we can use different kinds of information in the compilation of CHRs to obtain efficient access and a better translation of the CHR rules into the underlying language, which in this case is HAL. The kinds of information used include the types, modes, determinism, functional dependencies, and symmetries of the CHR constraints. We also show how to analyze CHR programs to infer information, such as functional dependencies and symmetries, that supports these optimizations.
More than a decade ago, Moller and Tofts published their seminal work on relating processes, which are annotated with lower time bounds, with respect to speed. Their paper has left open many questions regarding the semantic theory for the suggested bisimulation-based faster-than preorder, the MT-preorder, which have not been addressed since. The encountered difficulties concern a general compositionality result, a complete axiom system for finite processes, a convincing intuitive justification of the MT-preorder, and the abstraction from internal computation. This article solves these difficulties by developing and employing a novel commutation lemma relating the sequencing of action and clock transitions in discrete-time process algebra. Most importantly, it is proved that the MT-preorder is fully abstract with respect to a natural amortized preorder that uses a simple bookkeeping mechanism for deciding whether one process is faster than another. Together these results reveal the intuitive roots of the MT-preorder as a faster-than relation, while testifying to its semantic elegance. This lifts some of the barriers that have so far hampered progress in semantic theories for comparing the speed of processes.
During the last decade, Constraint Handling Rules (CHR) have become a major specification and implementation language for logical, constraint-based algorithms and intelligent applications (as witnessed for example by several hundred publications available online that mention CHR). Algorithms are often specified using inference rules, rewrite rules, sequents, proof rules, or logical axioms that can be almost directly written in CHR.