Finite fields are used in most of the known constructions of pseudorandom sequences and in the analysis of periods, correlations, and linear spans of linear feedback shift register (LFSR) sequences and nonlinearly generated sequences. They are also important in many cryptographic primitives, such as the Diffie-Hellman key exchange, the Digital Signature Standard (DSS), ElGamal public-key encryption, elliptic curve public-key cryptography, and LFSR-based (or torus-based) public-key cryptography. Finite fields and shift register sequences are also used in algebraic error-correcting codes, in code-division multiple-access (CDMA) communications, and in many other applications beyond the scope of this book. This chapter gives a description of these fields and some of their properties that are frequently used in sequence design and cryptography. Section 3.1 introduces the algebraic structures of groups, rings, fields, and polynomials. Section 3.2 shows the construction of the finite field GF(p^n). Section 3.3 presents the basic theory of finite fields. Section 3.4 discusses minimal polynomials. Section 3.5 introduces subfields, trace functions, bases, and the computation of minimal polynomials over intermediate subfields. The computation of a power of a trace function is shown in Section 3.6. The last section presents some counting numbers related to finite fields.
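As a concrete preview of the construction in Section 3.2, the following sketch (an illustration only; the choice of the primitive polynomial x^3 + x + 1 and all names are ours) represents the elements of GF(2^3) as binary polynomials and multiplies them modulo x^3 + x + 1:

```python
# Sketch: arithmetic in GF(2^3), with each element encoded as a 3-bit
# integer (coefficients of a binary polynomial of degree < 3),
# reduced modulo the primitive polynomial x^3 + x + 1 (mask 0b1011).
MOD = 0b1011   # x^3 + x + 1
DEG = 3

def gf_mul(a, b):
    """Multiply two elements of GF(2^3)."""
    r = 0
    while b:
        if b & 1:
            r ^= a         # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & (1 << DEG):  # reduce modulo x^3 + x + 1
            a ^= MOD
    return r

# x (encoded 0b010) is a primitive element: its successive powers
# run through all 7 nonzero elements before repeating.
powers = []
e = 1
for _ in range(7):
    powers.append(e)
    e = gf_mul(e, 0b010)
print(powers)   # each of the 7 nonzero elements appears exactly once
```

This is exactly the "polynomial basis" view of GF(p^n) developed in Section 3.2, specialized to p = 2, n = 3.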
Algebraic structures
In this section, we give the definitions of the algebraic structures of groups, rings and fields, polynomials, and some concepts that will be needed for the study of finite fields in the later sections.
In this chapter, we introduce constructions for signal sets with low crosscorrelation. These sequences have important applications in wireless CDMA communications. There are three classic constructions for signal sets with low correlation, namely, the Gold-pair construction, the Kasami (small) set construction, and the bent function signal set construction. In Section 10.1, we introduce some basic concepts and properties of the crosscorrelation of sequences or functions, signal sets, and the one-to-one correspondences among sequences, polynomial functions, and boolean functions. The three classic constructions are then presented in Sections 10.2, 10.3, and 10.4, respectively. With the development of new technologies, the demand for constraints on other parameters, such as the linear spans of sequences and the sizes of signal sets, has increased. We therefore provide two examples of constructions that sacrifice ideal correlation in order to improve other properties, in Sections 10.5 and 10.6, respectively. One is the interleaved construction for large linear spans, and the other uses ℤ4 sequences to obtain signal sets of large size.
Crosscorrelation, signal sets, and boolean functions
In this section, we discuss some basic properties of crosscorrelation of sequences (some of them have been discussed in Chapter 1), refine the concept of signal sets, and develop the one-to-one correspondence between sequences and boolean functions. (Note that the one-to-one correspondence between sequences and functions is discussed in Chapter 6.)
We will use the following notation throughout this section.
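As a small numerical illustration of these concepts (the helper name and the particular pair of sequences are our choices, not notation from the text), the periodic crosscorrelation of two binary sequences under the ±1 convention can be computed directly:

```python
# Sketch: periodic crosscorrelation of two binary sequences of period N,
# using the +/-1 convention (bit b contributes (-1)^b).
def crosscorrelation(a, b, tau):
    """C_{a,b}(tau) = sum over one period t of (-1)^(a_t + b_{t+tau})."""
    N = len(a)
    return sum((-1) ** (a[t] ^ b[(t + tau) % N]) for t in range(N))

# A Gold pair of m-sequences of period 7 (generated by x^3 + x + 1
# and x^3 + x^2 + 1); their crosscorrelation is three-valued.
a = [1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 0, 1, 1, 1, 0]
vals = [crosscorrelation(a, b, tau) for tau in range(7)]
print(vals)   # every value lies in {-5, -1, 3}
```

The three-valued spectrum {−1, −1 − 2^((n+1)/2), −1 + 2^((n+1)/2)} seen here for n = 3 is the hallmark of the Gold-pair construction discussed later in the chapter.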
Before 1997, only two essentially different constructions that were not based on a number-theoretic approach were known for cyclic Hadamard difference sets with parameters (2^n − 1, 2^(n−1) − 1, 2^(n−2) − 1) or, equivalently, for binary 2-level autocorrelation sequences of period 2^n − 1 for arbitrary n. One is the Singer construction, which gives m-sequences, and the other is the GMW construction, which produces four types of GMW sequences. Exhaustive searches had been done for n = 7, 8, and 9 in 1971, 1983, and 1992, respectively. However, several of the sequences found for these lengths did not follow from the then-known constructions, and no explanation for them was available. In this chapter, we describe the remarkable progress made since 1997 in finding new constructions for 2-level autocorrelation sequences of period 2^n − 1. (An exhaustive search was also done for n = 10 in 1998.) The order of presentation of these remarkable constructions follows the history of the development of this research. Section 9.1 presents constructions of 2-level autocorrelation sequences having multiple trace terms. In Section 9.2, the hyper-oval constructions are introduced. Section 9.3 shows the Kasami power construction. In the last section, we introduce the iterative decimation-Hadamard transform, a method of searching for new sequences with 2-level autocorrelation.
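The 2-level autocorrelation property itself is easy to check numerically; the following sketch (the sequence and all names are our choices) verifies it for an m-sequence of period 2^3 − 1 = 7:

```python
# Sketch: the 2-level autocorrelation property of an m-sequence of
# period 2^n - 1, here n = 3 (one period of the sequence generated
# by the recursion from x^3 + x + 1).
s = [1, 0, 0, 1, 0, 1, 1]
N = len(s)   # N = 7

def autocorrelation(tau):
    # +/-1 convention: bit b contributes (-1)^b
    return sum((-1) ** (s[t] ^ s[(t + tau) % N]) for t in range(N))

vals = [autocorrelation(tau) for tau in range(N)]
print(vals)   # 2-level: N at tau = 0, and -1 at every other shift
```

Every construction in this chapter produces sequences whose out-of-phase autocorrelation is identically −1, exactly as this small case exhibits.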
Multiple trace term sequences
In this section, we present 3-term sequences, 5-term sequences, and the Welch-Gong transformation sequences.
The prehistory of our subject can be traced back to 1202, with the appearance of Leonardo Pisano's Liber Abaci (Fibonacci 1202), containing the famous problem about breeding rabbits that leads to the linear recursion f_{n+1} = f_n + f_{n−1} for n ≥ 2, with f_1 = f_2 = 1, which yields the Fibonacci sequence. Additional background can be attributed to Euler, Gauss, Kummer, and especially Edouard Lucas (Lucas 1876). For the history proper, the earliest milestones are papers by O. Ore (Ore 1934), R.E.A.C. Paley (Paley 1933), and J. Singer (Singer 1938). Ore started the systematic study of linear recursions over finite fields (including GF(2)), Paley inaugurated the search for constructions yielding Hadamard matrices, and Singer discovered the Singer difference sets that are mathematically equivalent to binary maximum length linear shift register sequences (also known as pseudorandom sequences, pseudonoise (PN) sequences, or m-sequences).
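Fibonacci's rabbit recursion can be computed in a few lines (a simple illustration; the function name is ours):

```python
# The rabbit-breeding recursion from Liber Abaci:
# f(1) = f(2) = 1, and f(n+1) = f(n) + f(n-1) for n >= 2.
def fibonacci(k):
    """Return the first k Fibonacci numbers."""
    f = [1, 1]
    while len(f) < k:
        f.append(f[-1] + f[-2])
    return f[:k]

print(fibonacci(10))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Reducing the same recursion modulo 2 already gives a linear recursion over GF(2), the setting whose systematic study Ore began.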
It appears that by the early 1950s, devices that performed the modulo 2 sum of two positions on a binary delay line were being considered as key generators for stream ciphers in cryptographic applications. The question of what the periodicity of the resulting output sequence would be initially seemed mysterious. This question was explored outside the cryptographic community by researchers at a number of locations in the 1953–1956 time period, resulting in company reports by E. N. Gilbert at Bell Laboratories, by N. Zierler at Lincoln Laboratories, by L. R. Welch at the Jet Propulsion Laboratory, by S. W. Golomb at the Glenn L. Martin Company (now part of Lockheed-Martin), and probably by others as well.
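The periodicity question can be explored directly by simulation; the following sketch (the register length, tap positions, and names are our choices) models such a device as a linear feedback shift register and measures the period of its state sequence:

```python
# Sketch: a "modulo 2 sum of two positions on a binary delay line"
# modeled as an n-stage linear feedback shift register; we count
# steps until the state first repeats.
def lfsr_period(taps, n, state):
    """Period of the state sequence of an n-stage LFSR whose feedback
    is the mod-2 sum of the two stages listed in taps."""
    seen = tuple(state)
    count = 0
    while True:
        fb = state[taps[0]] ^ state[taps[1]]   # mod-2 sum of two stages
        state = state[1:] + [fb]               # shift and feed back
        count += 1
        if tuple(state) == seen:
            return count

# A 4-stage register summing stages 0 and 1 realizes the recursion
# s(t+4) = s(t+1) + s(t), i.e. the primitive polynomial x^4 + x + 1,
# so any nonzero seed attains the maximum period 2^4 - 1 = 15.
print(lfsr_period([0, 1], 4, [1, 0, 0, 0]))
```

Which tap choices yield the maximum period 2^n − 1 is precisely the question those 1950s reports answered: the feedback polynomial must be primitive.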
The basic tools for describing and analyzing random processes have all been developed in the preceding chapters, along with a variety of examples of random processes with and without memory. The goal of this chapter is to use these tools to describe a menagerie of useful random processes, usually by taking a simple random process and applying some form of signal processing, such as linear filtering, in order to produce a more complicated random process. In Chapter 5 the effect of linear filtering on second-order moments was considered. In this chapter we look in more detail at the resulting output process and consider other forms of signal processing as well. In the course of the development, a few new tools and several variations on old tools for deriving distributions are introduced. Much of this chapter can be considered as practice in the methods developed in the previous chapters, with names often being given to the specific examples developed. In fact, several processes with memory have been encountered previously: the binomial counting process and the discrete time Wiener process, in particular. The goal now is to extend the techniques used in these special cases to more general situations and to introduce a wider variety of processes.
The development of examples begins with a continuation of the study of the output processes of linear systems with random process inputs.
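As a minimal numerical example of this idea (the filter coefficient, sample count, and names are our choices, not an example from the text), passing an iid Gaussian sequence through a first-order recursive filter produces an output process with memory:

```python
import random

# Sketch: building a process with memory by linearly filtering a
# memoryless one. The first-order recursive filter
#     Y(n) = a * Y(n-1) + X(n)
# is driven by iid Gaussian X(n) with unit variance (a = 0.9 is an
# arbitrary choice of coefficient for illustration).
random.seed(1)
a = 0.9
y = 0.0
ys = []
for _ in range(10_000):
    x = random.gauss(0.0, 1.0)   # memoryless (white) input
    y = a * y + x
    ys.append(y)

# In steady state the output variance is 1 / (1 - a^2) ~ 5.26,
# larger than the input variance because past inputs persist.
var = sum(v * v for v in ys) / len(ys)
print(round(var, 2))
```

The sample variance only approximates the steady-state value for a finite run, but it illustrates how a simple linear system turns a memoryless input into a correlated output, the pattern followed throughout this chapter.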
In this appendix we provide some suggestions for supplementary reading. Our goal is to provide some leads for the reader interested in pursuing the topics treated in more depth. Admittedly we only scratch the surface of the large literature on probability and random processes. The books are selected based on our own tastes — they are books from which we have learned and from which we have drawn useful results, techniques, and ideas for our own research.
A good history of the theory of probability may be found in Maistrov, who details the development of probability theory from its gambling origins through its combinatorial and relative frequency theories to the development by Kolmogorov of its rigorous axiomatic foundations. A somewhat less serious historical development of elementary probability is given by Huff and Geis. Several early papers on the application of probability are given in Newman. Of particular interest are the papers by Bernoulli on the law of large numbers and the paper by George Bernard Shaw comparing the vice of gambling and the virtue of insurance.
An excellent general treatment of the theory of probability and random processes may be found in Ash, along with treatments of real analysis, functional analysis, and measure and integration theory. Ash is a former engineer turned mathematician, and his book is one of the best available for someone with an engineering background who wishes to pursue the mathematics beyond the level treated in this book.
A random or stochastic process is a mathematical model for a phenomenon that evolves in time in an unpredictable manner from the viewpoint of the observer. The phenomenon may be a sequence of real-valued measurements of voltage or temperature, a binary data stream from a computer, a modulated binary data stream from a modem, a sequence of coin tosses, the daily Dow–Jones average, radiometer data or photographs from deep space probes, a sequence of images from cable television, or any of an infinite number of possible sequences, waveforms, or signals of any imaginable type. It may be unpredictable because of such effects as interference or noise in a communication link or storage medium, or it may be an information-bearing signal, deterministic from the viewpoint of an observer at the transmitter but random to an observer at the receiver.
The theory of random processes quantifies the above notions so that one can construct mathematical models of real phenomena that are both tractable and meaningful in the sense of yielding useful predictions of future behavior. Tractability is required in order for the engineer (or anyone else) to be able to perform analyses and syntheses of random processes, perhaps with the aid of computers. The “meaningful” requirement is that the models must provide a reasonably good approximation of the actual phenomena. An oversimplified model may provide results and conclusions that do not apply to the real phenomenon being modeled.