In this chapter, we introduce constructions for signal sets with low crosscorrelation. Such sequences have important applications in wireless CDMA communications. There are three classic constructions for signal sets with low correlation, namely, the Gold-pair construction, the Kasami (small) set construction, and the bent function signal set construction. In Section 10.1, we introduce some basic concepts and properties of the crosscorrelation of sequences or functions, of signal sets, and of the one-to-one correspondences among sequences, polynomial functions, and Boolean functions. After that, the three classic constructions are presented in Sections 10.2, 10.3, and 10.4, respectively. With the development of new technologies, the demand for control of other parameters, such as the linear spans of the sequences and the sizes of the signal sets, has increased. We therefore provide, in Sections 10.5 and 10.6 respectively, two examples of constructions that sacrifice ideal correlation in order to improve other properties: the interleaved construction, which achieves large linear spans, and a construction based on ℤ₄ sequences, which yields signal sets of large size.
Crosscorrelation, signal sets, and Boolean functions
In this section, we discuss some basic properties of the crosscorrelation of sequences (some of which were discussed in Chapter 1), refine the concept of signal sets, and develop the one-to-one correspondence between sequences and Boolean functions. (Note that the one-to-one correspondence between sequences and functions is discussed in Chapter 6.)
We will use the following notation throughout this section.
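As a concrete, if elementary, illustration of these notions, the following Python sketch computes the periodic crosscorrelation of two binary sequences of the same period using the ±1 (bipolar) convention; the example sequences and the helper name `periodic_crosscorrelation` are illustrative assumptions, not material from the text.

```python
# A minimal sketch (not from the text): periodic crosscorrelation of two
# binary sequences of the same period N, using the +/-1 (bipolar) convention.
def periodic_crosscorrelation(a, b):
    """Return [C_{a,b}(tau) for tau = 0..N-1], where
    C_{a,b}(tau) = sum_t (-1)^(a_t + b_{t+tau})."""
    n = len(a)
    assert len(b) == n, "sequences must share the same period"
    out = []
    for tau in range(n):
        c = sum((-1) ** (a[t] ^ b[(t + tau) % n]) for t in range(n))
        out.append(c)
    return out

# Example with two short (hypothetical) binary sequences of period 7.
a = [1, 0, 0, 1, 0, 1, 1]
b = [0, 0, 1, 0, 1, 1, 1]   # a cyclic shift of a, so the peak value 7 appears at one shift
print(periodic_crosscorrelation(a, b))
```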
Before 1997, only two essentially different constructions that were not based on a number-theoretic approach were known for cyclic Hadamard difference sets with parameters (2ⁿ − 1, 2ⁿ⁻¹ − 1, 2ⁿ⁻² − 1) or, equivalently, for binary 2-level autocorrelation sequences of period 2ⁿ − 1 for arbitrary n. One is the Singer construction, which gives m-sequences, and the other is the GMW construction, which produces four types of GMW sequences. Exhaustive searches had been done for n = 7, 8, and 9 in 1971, 1983, and 1992, respectively. However, several of the sequences found for these lengths did not follow from the then-known constructions and had no explanation. In this chapter, we will describe the remarkable progress made since 1997 in finding new constructions for 2-level autocorrelation sequences of period 2ⁿ − 1. (An exhaustive search was also done for n = 10 in 1998.) The order of presentation of these remarkable constructions follows the history of the developments of this research. Section 9.1 presents constructions of 2-level autocorrelation sequences having multiple trace terms. In Section 9.2, the hyper-oval constructions are introduced. Section 9.3 presents the Kasami power construction. In the last section, we introduce the iterative decimation-Hadamard transform, a method of searching for new sequences with 2-level autocorrelation.
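To make the 2-level autocorrelation property concrete, here is a minimal sketch (not a construction from the chapter) that generates an m-sequence of period 2ⁿ − 1 with a small linear feedback shift register and checks that every out-of-phase periodic autocorrelation value equals −1; the feedback recursion used, corresponding to the primitive polynomial x⁵ + x² + 1, is an assumed choice for the example.

```python
# A minimal sketch: generate an m-sequence of period 2^n - 1 with a Fibonacci
# LFSR and verify its 2-level autocorrelation.  The recursion assumed here,
# s[t+5] = s[t] ^ s[t+2], corresponds to the primitive polynomial x^5 + x^2 + 1.

def m_sequence(recursion_taps, n):
    """Generate one period of a binary m-sequence of period 2^n - 1.

    recursion_taps lists the offsets j such that s[t+n] is the XOR of s[t+j];
    e.g. taps (0, 2) with n = 5 encode s[t+5] = s[t] ^ s[t+2]."""
    period = 2 ** n - 1
    s = [1] + [0] * (n - 1)          # any nonzero initial state works
    for t in range(period - n):
        s.append(0)
        for j in recursion_taps:
            s[-1] ^= s[t + j]
    return s

def periodic_autocorrelation(seq, tau):
    n = len(seq)
    return sum((-1) ** (seq[t] ^ seq[(t + tau) % n]) for t in range(n))

seq = m_sequence((0, 2), 5)                      # period 31
values = {periodic_autocorrelation(seq, tau) for tau in range(1, len(seq))}
print(len(seq), values)                          # expect: 31 {-1}, i.e. 2-level autocorrelation
```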
Multiple trace term sequences
In this section, we present 3-term sequences, 5-term sequences, and the Welch-Gong transformation sequences.
The prehistory of our subject can be backdated to 1202, with the appearance of Leonardo Pisano's Liber Abaci (Fibonacci 1202), containing the famous problem about breeding rabbits that leads to the linear recursion f_{n+1} = f_n + f_{n−1} for n ≥ 2, with f₁ = f₂ = 1, which yields the Fibonacci sequence. Additional background can be attributed to Euler, Gauss, Kummer, and especially Edouard Lucas (Lucas 1876). For the history proper, the earliest milestones are papers by O. Ore (Ore 1934), R.E.A.C. Paley (Paley 1933), and J. Singer (Singer 1938). Ore started the systematic study of linear recursions over finite fields (including GF(2)), Paley inaugurated the search for constructions yielding Hadamard matrices, and Singer discovered the Singer difference sets that are mathematically equivalent to binary maximum length linear shift register sequences (also known as pseudorandom sequences, pseudonoise (PN) sequences, or m-sequences).
It appears that by the early 1950s devices that performed the modulo 2 sum of two positions on a binary delay line were being considered as key generators for stream ciphers in cryptographic applications. The question of what the periodicity of the resulting output sequence would be initially seemed mysterious. This question was explored outside the cryptographic community by researchers at a number of locations in the 1953–1956 time period, resulting in company reports by E. N. Gilbert at Bell Laboratories, by N. Zierler at Lincoln Laboratories, by L. R. Welch at the Jet Propulsion Laboratory, by S. W. Golomb at the Glenn L. Martin Company (now part of Lockheed-Martin), and probably by others as well.
The basic tools for describing and analyzing random processes have all been developed in the preceding chapters along with a variety of examples of random processes with and without memory. The goal of this chapter is to use these tools to describe a menagerie of useful random processes, usually by taking a simple random process and applying some form of signal processing such as linear filtering in order to produce a more complicated random process. In Chapter 5 the effect of linear filtering on second-order moments was considered. In this chapter we look in more detail at the resulting output process and consider other forms of signal processing as well. In the course of the development a few new tools and several variations on old tools for deriving distributions are introduced. Much of this chapter can be considered as practice of the methods developed in the previous chapters, with names often being given to the specific examples developed. In fact several processes with memory have been encountered previously: the binomial counting process and the discrete time Wiener process, in particular. The goal now is to extend the techniques used in these special cases to more general situations and to introduce a wider variety of processes.
The development of examples begins with a continuation of the study of the output processes of linear systems with random process inputs.
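As a minimal sketch of this idea (assuming NumPy and an arbitrarily chosen filter coefficient, not an example taken from the chapter), the code below passes an i.i.d. Gaussian process through a simple first-order linear filter with feedback, turning a memoryless input into an output process with memory.

```python
# A minimal sketch (assumptions: NumPy available, coefficient 0.9 chosen
# arbitrarily): create a process with memory by linearly filtering an i.i.d.
# Gaussian process X_n, producing Y_n = 0.9 * Y_{n-1} + X_n.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)      # memoryless input process
y = np.empty_like(x)
prev = 0.0
for n, xn in enumerate(x):
    prev = 0.9 * prev + xn           # simple linear filter with feedback
    y[n] = prev

# The output inherits correlation between successive samples that the
# input does not have.
print(np.corrcoef(x[:-1], x[1:])[0, 1])   # approximately 0
print(np.corrcoef(y[:-1], y[1:])[0, 1])   # approximately 0.9
```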
In this appendix we provide some suggestions for supplementary reading. Our goal is to provide some leads for the reader interested in pursuing the topics treated in more depth. Admittedly we only scratch the surface of the large literature on probability and random processes. The books are selected based on our own tastes — they are books from which we have learned and from which we have drawn useful results, techniques, and ideas for our own research.
A good history of the theory of probability may be found in Maistrov, who details the development of probability theory from its gambling origins through its combinatorial and relative frequency theories to the development by Kolmogorov of its rigorous axiomatic foundations. A somewhat less serious historical development of elementary probability is given by Huff and Geis. Several early papers on the application of probability are given in Newman. Of particular interest are the papers by Bernoulli on the law of large numbers and the paper by George Bernard Shaw comparing the vice of gambling and the virtue of insurance.
An excellent general treatment of the theory of probability and random processes may be found in Ash, along with treatments of real analysis, functional analysis, and measure and integration theory. Ash is a former engineer turned mathematician, and his book is one of the best available for someone with an engineering background who wishes to pursue the mathematics beyond the level treated in this book.
A random or stochastic process is a mathematical model for a phenomenon that evolves in time in an unpredictable manner from the viewpoint of the observer. The phenomenon may be a sequence of real-valued measurements of voltage or temperature, a binary data stream from a computer, a modulated binary data stream from a modem, a sequence of coin tosses, the daily Dow–Jones average, radiometer data or photographs from deep space probes, a sequence of images from cable television, or any of an infinite number of possible sequences, waveforms, or signals of any imaginable type. It may be unpredictable because of such effects as interference or noise in a communication link or storage medium, or it may be an information-bearing signal, deterministic from the viewpoint of an observer at the transmitter but random to an observer at the receiver.
The theory of random processes quantifies the above notions so that one can construct mathematical models of real phenomena that are both tractable and meaningful in the sense of yielding useful predictions of future behavior. Tractability is required in order for the engineer (or anyone else) to be able to perform analyses and syntheses of random processes, perhaps with the aid of computers. The “meaningful” requirement is that the models must provide a reasonably good approximation of the actual phenomena. An oversimplified model may provide results and conclusions that do not apply to the real phenomenon being modeled.
In engineering practice we are often interested in the average behavior of measurements on random processes. The goal of this chapter is to link the two distinct types of averages that are used – long-term time averages taken by calculations on an actual physical realization of a random process and averages calculated theoretically by probabilistic calculus at some given instant of time, averages that are called expectations. As we shall see, both computations often (but by no means always) give the same answer. Such results are called laws of large numbers or ergodic theorems.
At first glance from a conceptual point of view, it seems unlikely that long-term time averages and instantaneous probabilistic averages would be the same. If we take a long-term time average of a particular realization of the random process, say {X(t, ω₀); t ∈ T}, we are averaging for a particular ω₀ which we cannot know or choose; we do not use probability in any way, and we are ignoring what happens for other values of ω. Here the averages are computed by summing the sequence or integrating the waveform over t while ω₀ stays fixed. If, on the other hand, we take an instantaneous probabilistic average, say at the time t₀, we are taking a probabilistic average and summing or integrating over ω for the random variable X(t₀, ω).
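A small simulation can make the contrast concrete; the following sketch (an illustrative assumption, not the book's example) compares a long-term time average computed along one realization with a probabilistic average taken across many realizations at a fixed time, for an i.i.d. process where the two happen to agree.

```python
# A minimal sketch (NumPy assumed): compare a time average computed along one
# realization {X(t, omega_0)} with an ensemble (probabilistic) average taken
# across realizations at a fixed time t_0.  For this i.i.d. process with mean 1
# the two agree, as the laws of large numbers / ergodic theorems suggest.
import numpy as np

rng = np.random.default_rng(1)
num_realizations, num_times = 2000, 2000
x = 1.0 + rng.standard_normal((num_realizations, num_times))  # X(t, omega)

time_average = x[0, :].mean()        # average over t, with omega_0 fixed (one realization)
ensemble_average = x[:, 0].mean()    # average over omega, with t_0 = 0 fixed
print(time_average, ensemble_average)   # both approximately 1.0
```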
The theory of random processes is a branch of probability theory and probability theory is a special case of the branch of mathematics known as measure theory. Probability theory and measure theory both concentrate on functions that assign real numbers to certain sets in an abstract space according to certain rules. These set functions can be viewed as measures of the size or weight of the sets. For example, the precise notion of area in two-dimensional Euclidean space and volume in three-dimensional space are both examples of measures on sets. Other measures on sets in three dimensions are mass and weight. Observe that from elementary calculus we can find volume by integrating a constant over the set. From physics we can find mass by integrating a mass density or summing point masses over a set. In both cases the set is a region of three-dimensional space. In a similar manner, probabilities will be computed by integrals of densities of probability or sums of “point masses” of probability.
Both probability theory and measure theory consider only nonnegative real-valued set functions. The value assigned by the function to a set is called the probability or the measure of the set, respectively. The basic difference between probability theory and measure theory is that the former considers only set functions that are normalized in the sense of assigning the value of 1 to the entire abstract space, corresponding to the intuition that the abstract space contains every possible outcome of an experiment and hence should happen with certainty or probability 1.
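The two ways of computing probability mentioned above can be illustrated with a short sketch (hypothetical numbers, not from the text): the probability of an event is obtained once by numerically integrating a density over a set and once by summing point masses of a discrete distribution.

```python
# A minimal sketch (illustrative values only): probability as an integral of a
# density versus a sum of point masses.
import math

# Continuous case: exponential density f(x) = exp(-x) on [0, infinity).
# P(X in [1, 2]) computed by numerically integrating the density over the set.
def density(x):
    return math.exp(-x)

dx = 1e-5
p_continuous = sum(density(1.0 + k * dx) * dx for k in range(int(1.0 / dx)))
print(p_continuous, math.exp(-1) - math.exp(-2))   # numerical vs. exact value

# Discrete case: a fair die; P(outcome is even) is a sum of point masses.
pmf = {k: 1 / 6 for k in range(1, 7)}
p_discrete = sum(p for k, p in pmf.items() if k % 2 == 0)
print(p_discrete)                                   # 0.5
```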
The theory of random processes is constructed on a large number of abstractions. These abstractions are necessary to achieve generality with precision while keeping the notation used manageably brief. Students will probably find learning facilitated if, with each abstraction, they keep in mind (or on paper) a concrete picture or example of a special case of the abstraction. From this the general situation should rapidly become clear. Concrete examples and exercises are introduced throughout the book to help with this process.
Set theory
In this section the basic set theoretic ideas that are used throughout the book are introduced. The starting point is an abstract space, or simply a space, consisting of elements or points, the smallest quantities with which we shall deal. This space, often denoted by Ω, is sometimes referred to as the universal set. To describe a space we may use braces notation with either a list or a description contained within the braces { }. Examples are:
[A.0] The abstract space consisting of no points at all, that is, an empty (or trivial) space. This possibility is usually excluded by assuming explicitly or implicitly that the abstract space is nonempty, that is, it contains at least one point.
In Chapter 4 we saw that the second-order moments of a random process – the mean and covariance or, equivalently, the autocorrelation – play a fundamental role in describing the relation of limiting sample averages and expectations. We also saw, for example in Section 4.6.1 and Problem 4.26, that these moments also play a key role in signal processing applications of random processes, especially in linear least squares estimation. Because of the fundamental importance of these particular moments, this chapter considers their properties in greater depth and their evaluation for several important examples. A primary focus is on a second-order moment analog of a derived distribution problem. Suppose we are given the second-order moments of one random process and this process is then used as an input to a linear system. What are the resulting second-order moments of the output random process? These results are collectively known as second-order moment input/output or I/O relations for linear systems.
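To indicate what such an input/output relation looks like in the simplest discrete-time case, the following sketch (with an assumed FIR impulse response and white input, details not drawn from the chapter) compares the output autocorrelation predicted by R_Y(k) = σ² Σₙ h(n) h(n + k) with an estimate obtained by simulation.

```python
# A minimal sketch (NumPy assumed; filter taps chosen arbitrarily): for a
# discrete-time linear filter driven by white noise with variance sigma^2, the
# output autocorrelation satisfies R_Y(k) = sigma^2 * sum_n h(n) h(n+k).
# We compare this second-order I/O relation against a simulation estimate.
import numpy as np

h = np.array([1.0, 0.5, 0.25])       # hypothetical FIR impulse response
sigma2 = 1.0

def predicted_ry(k):
    return sigma2 * sum(h[n] * h[n + k] for n in range(len(h) - k))

rng = np.random.default_rng(2)
x = rng.standard_normal(200_000) * np.sqrt(sigma2)   # white input process
y = np.convolve(x, h, mode="valid")                  # filter output process

for k in range(len(h)):
    estimate = np.mean(y[:-k or None] * y[k:])       # sample autocorrelation at lag k
    print(k, predicted_ry(k), round(float(estimate), 3))
```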
Linear systems may seem to be a very special case. As we will see, their most obvious attribute is that they are easier to handle analytically, which leads to more complete, useful, and stronger results than can be obtained for the class of all systems. This special case, however, plays a central role and is by far the most important class of systems.