Subsequent to the mathematical theory of electromagnetic waves formulated by James Clerk Maxwell in 1873 and the demonstration of the existence of these waves by Heinrich Hertz in 1887, Guglielmo Marconi made history by using radio waves for transatlantic wireless communications in 1901. In 1906, amplitude modulation (AM) radio was invented by Reginald Fessenden for music broadcasting. In 1913, Edwin H. Armstrong invented the superheterodyne receiver, based on which the first broadcast radio transmission took place in Pittsburgh in 1920. Land-mobile wireless communication was first used in 1921 by the Detroit Police Department. In 1929, Vladimir Zworykin performed the first experiment in TV transmission. In 1933, Edwin H. Armstrong invented frequency modulation (FM). The first public mobile telephone service was introduced in 1946 in five American cities. It was a half-duplex system that used 120 kHz of FM bandwidth. In 1958, the launch of the SCORE (Signal Communication by Orbital Relay Equipment) satellite ushered in a new era of satellite communications. By the mid-1960s, the FM bandwidth had been cut to 30 kHz. Automatic channel trunking was introduced in the 1950s and 1960s, which also brought full-duplex operation. The most important breakthrough for modern mobile communications was the concept of cellular mobile systems, developed at AT&T Bell Laboratories in the 1970s.
The 1G mobile cellular systems were analog speech communication systems, deployed mainly before 1990. They were characterized by FDMA (frequency division multiple access) coupled with FDD (frequency division duplexing), analog FM (frequency modulation) for speech, and FSK (frequency shift keying) for control signaling, and they provided analog voice services. The 1G systems operated mainly in frequency bands from 450 MHz to 1 GHz, with cell radii between 2 km and 40 km.
The AMPS (Advanced Mobile Phone Services) technique was developed by Bell Labs in the 1970s and was first deployed in late 1983. Each channel occupies 30 kHz. Speech is modulated by FM with a frequency deviation of ±12 kHz, and the control signal is modulated by FSK with a frequency deviation of ±8 kHz. The control channel transmits data streams at 10 kbits/s. AMPS was deployed in the USA, South America, Australia, and China. In 1991, Motorola introduced N-AMPS (narrowband AMPS) to support three users in a 30 kHz AMPS channel, each occupying a 10 kHz channel, thus tripling the capacity.
The European TACS (Total Access Communication System) was first deployed in 1985. TACS is identical to AMPS except that the channel bandwidth is 25 kHz. Speech is modulated by FM with a frequency deviation of ±12 kHz, and the control signal is modulated by FSK with a frequency deviation of ±6.4 kHz, achieving a data rate of 8 kbits/s.
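These channel spacings translate directly into capacity arithmetic. A minimal sketch (the 10 MHz allocation below is illustrative, not a historical figure) counts how many FDMA channels fit in a band under the AMPS, N-AMPS, and TACS spacings:

```python
# FDMA channel-packing arithmetic for the 1G systems described above.
# The 10 MHz total allocation is a hypothetical example.

def channels(total_bandwidth_hz: float, channel_bw_hz: float) -> int:
    """Number of FDMA channels that fit in a given allocation."""
    return int(total_bandwidth_hz // channel_bw_hz)

BAND = 10e6  # hypothetical 10 MHz allocation

amps = channels(BAND, 30e3)    # AMPS: 30 kHz per channel
namps = channels(BAND, 10e3)   # N-AMPS: 10 kHz per channel
tacs = channels(BAND, 25e3)    # TACS: 25 kHz per channel

print(amps, namps, tacs)   # 333 1000 400
print(namps // amps)       # 3 -- the threefold N-AMPS capacity gain
```

The threefold capacity increase claimed for N-AMPS falls straight out of the 30 kHz / 10 kHz ratio, independent of the total allocation.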
Probability is an essential idea in both information theory and quantum mechanics. It is a highly developed mathematical and philosophical subject of its own, worthy of serious study. In this brief appendix, however, we can only sketch a few elementary concepts, tools, and theorems that we use elsewhere in the text.
In discussing the properties of a collection of sets, it is often useful to suppose that they are all subsets of an overall “universe” set U. The universe serves as a frame within which unions, intersections, complements, and other set operations can be described. In much the same way, the ideas of probability exist within a frame called a probability space Σ. For simplicity, we will consider only the discrete case. Then Σ consists of an underlying set of points and an assignment of probabilities. The set is called a sample space and its elements are events. The probability function, or probability distribution, assigns to each event e a real number P(e) between 0 and 1, such that the sum of all the probabilities is 1.
The probability P(e) is a measure of the likelihood that event e occurs. An impossible event has P(e) = 0 and a certain event would have P(e) = 1; in other cases, P(e) has some intermediate value.
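These axioms are easy to state in code. A minimal sketch (the function and variable names are ours, not the text's) represents a discrete distribution as a mapping from events to probabilities and checks the two conditions above:

```python
# A discrete probability distribution as a mapping event -> P(e).
P = {"heads": 0.5, "tails": 0.5}   # a fair coin

def is_probability_distribution(P: dict) -> bool:
    """Check the axioms: each 0 <= P(e) <= 1, and the total is 1."""
    each_in_range = all(0.0 <= p <= 1.0 for p in P.values())
    sums_to_one = abs(sum(P.values()) - 1.0) < 1e-12
    return each_in_range and sums_to_one

assert is_probability_distribution(P)

# An impossible event has P(e) = 0 and a certain event has P(e) = 1:
P_certain = {"heads": 1.0, "tails": 0.0}
assert is_probability_distribution(P_certain)
```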
The probability space itself contains all possible events that may occur, identified in complete detail, which makes it too elaborate for actual use. Suppose we flip a coin.
We cannot always assign a definite state vector |ψ〉 to a quantum system Q. It may be that Q is part of a composite system RQ that is in an entangled state |Ψ(RQ)〉. Or it may be that our knowledge of the preparation of Q is insufficient to determine a particular state |ψ〉. Consider, for instance, a qubit sent from Alice to Bob during the BB84 key distribution protocol from Section 4.4. The state of this qubit could be |0〉, |1〉, |+〉 or |−〉, each with equal likelihood. In either case – whether Q is a subsystem of an entangled system, or the state of Q is determined by a probabilistic process – we cannot specify a quantum state vector |ψ〉 for Q.
Nevertheless, in either case we are in a position to make statistical predictions about the outcomes of measurements on Q. In this chapter we describe the mathematical machinery for doing this.
Mixtures of states
Suppose the state of Q arises by a random process, so that the state |ψα〉 is prepared with probability pα. The possible states |ψα〉 need not be orthogonal (as you can see in the BB84 example above). We call this situation a mixture of the states |ψα〉, or a mixed state for short.
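Such statistical predictions can already be sketched numerically for the BB84 mixture above. The sketch below (our illustration; the variable names are not the text's) computes the probability of obtaining outcome |0〉 in a standard-basis measurement as the ensemble average Σ_α p_α |〈0|ψ_α〉|²:

```python
import numpy as np

# The four BB84 states in the standard basis, each prepared with p = 1/4.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)    # |+>
ketm = np.array([1.0, -1.0]) / np.sqrt(2)   # |->

states = [ket0, ket1, ketp, ketm]
probs = [0.25, 0.25, 0.25, 0.25]

# P(outcome |0>) = sum over alpha of p_alpha * |<0|psi_alpha>|^2
p_zero = sum(p * abs(np.dot(ket0, psi)) ** 2 for p, psi in zip(probs, states))
print(p_zero)  # 0.5
```

Note that the prediction depends only on the probabilities p_α and the states |ψ_α〉, even though the |ψ_α〉 are not mutually orthogonal.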
One way to interpret a mixed state is to return to the idea of an ensemble of systems, which we introduced in Section 3.3.
If the state of a quantum system is a kind of information, then the dynamics of that system is a kind of information processing. This is the basic idea behind quantum computing, which seeks to exploit the physics of quantum systems to do useful information processing. In this chapter we will acquaint ourselves with a few of the ideas of quantum computing, using the idealized quantum circuit model. Then we will turn our attention to an actual quantum process, nuclear magnetic resonance, that can be understood as a realization of quantum information processing.
In a quantum circuit, we have a set of n qubit systems whose dynamical evolution is completely under our control. We represent the evolution as a sequence of unitary operators, each of which acts on one or more of the qubits. Graphically, we represent the qubits as a set of horizontal lines, and the various stages of their evolution as a series of boxes or other symbols showing the structure of the sequence of operations. In such a diagram, time runs from left to right (see Fig. 18.1). The n qubits form a kind of “computer,” and its overall evolution amounts to a “computation.” The key idea is that very complicated unitary operations on the n qubits can be built up step-by-step from many simple operations on one, two, or a few qubits.
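As a concrete instance of building up a multi-qubit unitary from simple gates, the sketch below (our example, not the circuit of Fig. 18.1) composes a Hadamard on the first qubit with a CNOT, acting on |00〉. Tensor (Kronecker) products lift one-qubit gates to the full n-qubit space, and left-to-right order in the diagram becomes right-to-left order in the matrix product:

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                   # control = first qubit,
                 [0, 1, 0, 0],                   # target = second qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

step1 = np.kron(H, I)   # H acts on the first qubit, identity on the second
U = CNOT @ step1        # diagram order left-to-right = matrix order right-to-left

psi = U @ np.array([1.0, 0.0, 0.0, 0.0])   # apply the circuit to |00>
print(psi)  # (|00> + |11>)/sqrt(2), an entangled (Bell) state
```

Two one- and two-qubit gates thus suffice to produce entanglement between the qubits, illustrating how simple operations compose into nontrivial n-qubit unitaries.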
… [E]ven the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism …. The physicist rightly dreads precise argument, since an argument which is only convincing if precise loses all its force if the assumptions upon which it is based are slightly changed, while an argument which is convincing though imprecise may well be stable under small perturbations of its underlying axioms.
(Jack Schwartz, The Pernicious Influence of Mathematics on Science)
In this appendix, we review some techniques of mathematical physics. We present the mathematics in “physics style” – that is, with apparent disregard for the mathematical niceties. We will use “functions” whose properties cannot be matched by any actual function. We will exchange the order of limit operations by commuting integrals, derivatives, and infinite sums, all without any apparent consideration of the deep analytical issues involved. If the math police gave out tickets for reckless deriving, we would probably get one.
Why risk it? Often, the “reckless” derivation is a useful shorthand for a more sophisticated (and rigorous) chain of mathematical reasoning. An ironclad proof of a result may have to deal with many technical issues that, though necessary to close all of the logical loopholes, act to obscure the central ideas. The less formal approach is therefore both briefer and more revealing.
Angular momentum is one of the fundamental quantities of Newtonian physics, and in quantum physics its importance is at least as great. In quantum mechanics we often distinguish between two types of angular momentum: orbital angular momentum, which a system of particles possesses due to particle motion through space; and spin angular momentum, which is an intrinsic property of a particle. The distinction will be important later, but for now we will ignore it. We will here refer to angular momentum of any sort as “spin” and develop general-purpose mathematical tools for its description.
We have already dealt with spin systems, particularly the example of a spin-½ particle. Our approach began with the empirical observation that a measurement of any spin component of a spin-½ particle could yield only the results +ħ/2 or −ħ/2. We introduced the basis states |z±〉 for the two-dimensional Hilbert space ℋ. We also gave other basis states such as {|x±〉} and {|y±〉} in terms of the |z±〉 states. From basis states and measurement values we constructed operators for the spin components Sx, Sy, and Sz. With the operators in hand, we could then examine the algebraic relations between them (such as the commutation relation in Exercise 3.56).
Our job here is to generalize our analysis to systems of arbitrary spin. To do this, we will reverse our chain of logic.
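Before generalizing, the spin-½ machinery recalled above can be checked numerically. A minimal sketch, working in units where ħ = 1 (the symbol is kept explicit), builds Sx, Sy, and Sz as matrices in the |z±〉 basis and verifies the commutation relation [Sx, Sy] = iħSz:

```python
import numpy as np

hbar = 1.0  # units where hbar = 1; kept symbolic for clarity

# Spin-1/2 component operators in the |z+->, |z--> basis:
# S_k = (hbar/2) * (Pauli matrix sigma_k).
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

# Each component has eigenvalues +hbar/2 and -hbar/2, as observed.
print(sorted(np.linalg.eigvalsh(Sx)))   # [-0.5, 0.5]

# The commutation relation [Sx, Sy] = i*hbar*Sz:
comm = Sx @ Sy - Sy @ Sx
print(np.allclose(comm, 1j * hbar * Sz))   # True
```

The generalization to arbitrary spin reverses this logic: the commutation relations are taken as the starting point, and the matrices and eigenvalues are derived from them.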
In this chapter we prove the Ornstein isomorphism theorem, which states that any two finitely determined processes of the same entropy are isomorphic. In particular, since Bernoulli processes are finitely determined, this establishes that any two Bernoulli processes of equal entropy are isomorphic. (An earlier, related result of Sinai (1964) has, as a consequence, that any two Bernoulli processes of equal entropy are weakly isomorphic; i.e. each is a factor of the other. Both may be viewed as limiting, stationary versions of Shannon's (1948) noiseless coding theorem.) Several (substantively different) proofs of Ornstein's theorem have been published; to the best of our knowledge, the one presented here adds to the variety. The closest match to our proof may be the proof of J. Kieffer (1984). Like Kieffer, we avoid the use of a marriage lemma.
Comment. In this chapter it is necessary to assume that our process is invertible (see, however, Hoffman and Rudolph (2002), in which a condition is given for some non-invertible processes to be isomorphic to a one-sided Bernoulli shift). Also, though we restrict attention to the finite entropy case, the theorem has an infinite entropy version; see Ornstein (1970b).
Copying in distribution
In this section, we show that, given a process on a finite alphabet and a second system, we can construct a partition of the second system so that the second system with this partition is close in distribution to the first process. The reader should think of this as a very minor step in the direction of establishing an isomorphism. (It has to be minor, because you can do it for non-isomorphic systems!)
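The single-letter (time-zero) version of this copying step can be made concrete. Assuming, for illustration, that the second system is the unit interval with Lebesgue measure (a hypothetical stand-in, with an alphabet and probabilities of our choosing), partitioning the interval into consecutive cells whose lengths equal the letter probabilities yields a partition whose distribution matches the process's one-dimensional distribution exactly:

```python
from itertools import accumulate

# An illustrative process distribution on the alphabet {a, b, c}.
p = {"a": 0.5, "b": 0.3, "c": 0.2}

# Cut [0, 1) at the cumulative probabilities: cells of Lebesgue
# measure exactly p("a"), p("b"), p("c").
edges = [0.0] + list(accumulate(p.values()))
cells = {letter: (edges[i], edges[i + 1]) for i, letter in enumerate(p)}

# Each cell's measure equals the target letter probability.
for letter, (lo, hi) in cells.items():
    assert abs((hi - lo) - p[letter]) < 1e-12
print(cells["b"])  # (0.5, 0.8)
```

Copying the joint distribution of n-blocks works the same way, with one cell per block; the hard part of the isomorphism theorem is making such partitions cohere with the transformation itself, which this step does not attempt.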