Solid-state systems, by their very nature, have a vast number of different possible quantum degrees of freedom. In Chapter 11, we saw that some of these degrees of freedom make good qubits. However, there are plenty more which are less suitable, since they cannot easily be localized and externally controlled. Once the qubit has been chosen, it is important to think about how it interacts with the other, uncontrolled quantum excitations in its environment. Such an interaction leads to unpredictable behaviour and can cause decoherence – the irretrievable loss of quantum information from the qubit – and this will be the topic of this chapter. The most obvious decoherence mechanism for any optical manipulation scheme is the spontaneous emission of photons. The theory behind this follows analogously from the theory we discussed in Chapter 7, with a suitable definition of a transition dipole for the relevant transitions. However, solid-state systems bring with them lattice vibrations, or phonons, which have no direct atomic analogue. We will therefore focus on phonons in this chapter, first discussing how we model them, and second how they interact with the electron-based qubit that we discussed in the last chapter. Later, we will see how this leads to a loss of coherence, and how optical methods can be used to slow the rate of coherence loss. Phonon interactions are complex and not easy to model exactly, but we will show that with certain approximations very successful theories can be developed.
We have so far spoken almost exclusively of photons and linear-optical elements, and seen just how powerful those two components can be for information processing. They provide unbreakable cryptographic tools, and allow for efficient quantum computing. However, many more possibilities become available when we allow photons to interact with atoms and solid matter in a quantum mechanical way. In particular, a quantum memory, the principal difficulty for linear-optical quantum computing, can be created. In this chapter we will take the first steps towards a full understanding of a photon's interaction with atoms. We will show how to describe the interactions within a system consisting of photons and few-level atoms and show how this interaction can be manipulated and exploited to provide quantum information processors based on both atomic and photonic qubits. We will also show that photon emission from atoms can degrade the quantum information contained within atoms, and we will present a formalism to model this effect. We begin with a general discussion of atom-photon interactions.
Atomic systems as qubits
Let us first consider an electron in an isolated atom. It is bound there by the Coulomb force due to the charge distribution of all the other electrons and the nucleus. The potential that describes this coupling is given by V(r).
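The excerpt leaves V(r) general. For the simplest concrete case, a single electron bound to a point nucleus of charge Ze with the other electrons ignored (an illustrative assumption, not the book's specific model), it is the bare Coulomb potential:

```latex
V(r) = -\frac{Z e^{2}}{4 \pi \epsilon_{0} r}
```

whose discrete bound eigenstates supply the energy levels from which qubit states can be chosen.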
In optical quantum information processing, two of the most basic elements are the sources of quantum mechanical states of light, and the devices that can detect these states. In this chapter, we narrow this down to photon sources and photodetectors. We will first describe how detectors work, starting from abstract ideal detectors, via a complete description of realistic detectors in terms of POVMs, to a brief overview of current photodetectors. Subsequently, we will define what a single-photon source is, and how we can determine experimentally whether a source produces single photons or something else. Having laid down the ground rules, we will survey some of the most popular ways photons are produced in the laboratory. Finally, we take a look at the production of entangled-photon sources and quantum non-demolition measurements of photons.
A mathematical model of photodetectors
Photodetectors are devices that produce a macroscopic signal when triggered by one or more photons. In the ideal situation, every photon that hits the detector contributes to the macroscopic signal, and there are no ‘ghost’ signals, or so-called dark counts. In this situation we can define two types of detector, namely the ‘photon-number detector’, and ‘detectors without number resolution’.
First, the photon-number detector is a (largely hypothetical) device that tells us how many photons there are in a given optical mode that is properly localized in space and time. This property is called ‘photon-number resolution’.
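In the POVM language invoked above, both ideal detector types have a standard compact form; the following is a sketch of the usual textbook expressions:

```latex
% Ideal photon-number detector: projection onto the Fock states
E_n = |n\rangle\langle n| , \qquad n = 0, 1, 2, \ldots
% Ideal detector without number resolution: only 'no click' vs 'click'
E_{\mathrm{no\,click}} = |0\rangle\langle 0| , \qquad
E_{\mathrm{click}} = \hat{I} - |0\rangle\langle 0|
```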
Wireless channels suffer from time-varying impairments such as multipath fading, interference, and noise. Diversity in time, frequency, space, polarization, or angle is typically used to mitigate these impairments. Diversity gain is achieved by receiving independently fading replicas of the signal.
A multiple antenna system employs multiple antennas at the transmitter, the receiver, or both. Depending on the numbers of transmit and receive antennas, it can be multiple-input single-output (MISO), for beamforming or transmit diversity at the transmitter; single-input multiple-output (SIMO), for diversity combining at the receiver; or multiple-input multiple-output (MIMO). The MISO, SIMO, and MIMO channel models can be generated by using the angle-delay scattering function.
Multiple antenna systems are generally grouped into smart antenna systems and MIMO systems. A smart antenna system is a subsystem that contains multiple antennas; based on spatial diversity and signal processing, it significantly increases the performance of wireless communication systems. Direction-finding and beamforming are the two most fundamental topics of smart antennas. Direction-finding is used to estimate the number of emitting sources and their directions of arrival (DoAs), while beamforming is used to estimate the signal-of-interest (SOI) in the presence of interference.
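As an illustration of conventional (delay-and-sum) beamforming, the sketch below steers an 8-element uniform linear array toward 20 degrees; the array size, spacing, and angles are assumptions for illustration, not values from the text:

```python
import numpy as np

def steering_vector(theta_deg: float, n: int, d: float = 0.5) -> np.ndarray:
    """Steering vector of an n-element uniform linear array.

    d is the element spacing in wavelengths; theta is measured from broadside.
    """
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(n) * np.sin(theta))

n = 8
w = steering_vector(20.0, n) / n      # delay-and-sum weights steered to 20 degrees
for look in (20.0, 0.0, -40.0):
    gain = abs(np.vdot(w, steering_vector(look, n)))
    print(f"response at {look:+5.1f} deg: {gain:.3f}")  # ~1 at 20 deg, small elsewhere
```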
A MIMO system consists of multiple antennas at both the transmitter and the receiver. They are typically used for transmit diversity and spatial multiplexing. Spatial multiplexing can maximize the system capacity by transmitting at each transmit antenna a different bitstream.
The term microwaves describes electromagnetic waves with frequencies from 300 MHz to 300 GHz, corresponding to free-space wavelengths from 1 m to 1 mm. Within the microwave range, from 30 GHz to 300 GHz, the wavelengths are between 10 mm and 1 mm, and hence these waves are known as millimeter waves. Below 300 MHz the electromagnetic spectrum is known as the radio frequency (RF) spectrum, while above the microwave spectrum lie the infrared, visible, ultraviolet, and X-ray spectra. Wireless communications uses only the electromagnetic waves in the microwave and RF ranges. In the wireless communications literature, the term RF is often used to represent the entire RF and microwave spectrum.
Receiver performance requirements
The requirements on RF receivers are typically more demanding than those on transmitters. In addition to the requirements on gain and noise figure, the receiver must have:
A good sensitivity, defined as the minimum power at the antenna for a given BER requirement. For example, the GSM standard requires a reception dynamic range from −102 dBm to −15 dBm, IEEE 802.11g requires a reception range of −92 dBm to −20 dBm, for WCDMA it is −117 dBm to −25 dBm (before spreading), for CDMA2000 it is −117 dBm to −30 dBm, and for WiMedia it is −80.8 dBm/MHz (or −72.4 dBm/MHz at the highest speed) to −41.25 dBm/MHz. For multiple data rates, a higher data rate requires a larger SNR, and hence a higher sensitivity level.
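Sensitivity figures of this kind follow from the standard link-budget relation: sensitivity equals the thermal noise floor plus the noise figure plus the required SNR. A minimal sketch, with illustrative values that are assumptions rather than numbers from the text:

```python
import math

# sensitivity = thermal noise floor + noise figure + required SNR
bandwidth_hz = 3.84e6     # WCDMA chip bandwidth, assumed
noise_figure_db = 7.0     # receiver noise figure, assumed
snr_required_db = 7.0     # SNR needed for the target BER, assumed

noise_floor_dbm = -174 + 10 * math.log10(bandwidth_hz)  # kT at 290 K = -174 dBm/Hz
sensitivity_dbm = noise_floor_dbm + noise_figure_db + snr_required_db
print(f"Sensitivity ~ {sensitivity_dbm:.1f} dBm")  # ~ -94.2 dBm, before despreading gain
```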
UWB technology, also known as impulse radio, was first used by Marconi, whose spark-gap transmitters carried Morse code across the Atlantic in 1901. Modern UWB technology has been used for radar and communications since the 1960s. Like CDMA systems, early UWB systems were designed for military covert radar and communications. The early applications of UWB technology were primarily related to radar, driven by the fine ranging resolution that comes with large bandwidth. UWB technology for wireless communications was pioneered by Scholtz. With the intent of operating UWB in an unlicensed mode that overlaps licensed bands, the FCC issued rules under the FCC Rules and Regulations Part 15 for UWB operation in February 2002.
The FCC defined a UWB transmitter as “an intentional radiator that, at any point in time, has a fractional bandwidth equal to or greater than 0.20, or has a UWB bandwidth equal to or greater than 500 MHz, regardless of the fractional bandwidth”. “The UWB bandwidth is the frequency band bounded by the points that are 10 dB below the highest radiated emission, as based on the complete transmission system including the antenna.”
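In this definition, the fractional bandwidth is computed from the upper and lower −10 dB emission points f_H and f_L:

```latex
B_{\mathrm{frac}} = \frac{2\,(f_H - f_L)}{f_H + f_L}
```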
According to the FCC regulations, the transmitter sends pulses with a bandwidth of at least 500 MHz that is within the band 3.1 to 10.6 GHz, for output power densities below −41.25 dBm/MHz. The FCC Part 15 limit of 500 µV/m at 3 meters is equivalent to an effective isotropic radiated power (EIRP) of −41.25 dBm/MHz.
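The equivalence between the field-strength limit and −41.25 dBm/MHz can be checked directly: the free-space power density at distance d is E²/η with η = 120π Ω, and the EIRP is 4πd² times that density, referred to the 1 MHz measurement bandwidth. A short verification:

```python
import math

E = 500e-6            # V/m, FCC Part 15 field-strength limit
d = 3.0               # m, measurement distance
eta = 120 * math.pi   # ohms, free-space wave impedance

power_density = E**2 / eta               # W/m^2 at distance d
eirp_w = 4 * math.pi * d**2 * power_density
eirp_dbm = 10 * math.log10(eirp_w * 1e3)
print(f"EIRP = {eirp_dbm:.2f} dBm per MHz")  # -> -41.25 dBm/MHz
```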
Spread spectrum communications was originally used in the military for the purpose of interference rejection and enciphering. In digital cellular communications, spread spectrum modulation is used as a multiple-access technique. Spectrum spreading is mainly performed by one of the following three schemes.
Direct sequence (DS): Data is spread and the carrier frequency is fixed.
Frequency hopping (FH): Data is directly modulated and the carrier frequency is spread by channel hopping.
Time hopping (TH): Signal transmission is randomized in time.
The first two schemes are known as spectral spreading, and are introduced in this chapter. Time hopping is known as temporal spreading, and will be introduced in Chapter 20. Spectrum spreading provides frequency diversity, low PSD of the transmitted signal, and reduced band-limited interference, while temporal spreading has the advantage of time diversity, low instantaneous power of the transmitted signals, and reduced impulse interference.
CDMA is a spread spectrum modulation technology in which all users occupy the same time and frequency, and they are separated by their specific codes. In DS-CDMA systems, at the BS, the baseband bitstream for each MS is first mapped onto M-ary symbols such as QPSK symbols; each of the I and Q signals is then spread by multiplying it by a spreading code and then by a scrambling code. The spread signals for all MSs are then amplified to their respective power levels, summed, modulated onto the specified band, and transmitted.
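A minimal sketch of this spreading/despreading chain, with BPSK in place of QPSK and random ±1 sequences standing in for the standardized spreading (e.g., Walsh) and scrambling codes; the spreading factor and data length are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
SF = 8                                   # spreading factor (chips per symbol), assumed
bits = rng.integers(0, 2, 16)
symbols = 1 - 2 * bits                   # BPSK mapping: 0 -> +1, 1 -> -1
code = 1 - 2 * rng.integers(0, 2, SF)    # +/-1 spreading code (stand-in for Walsh code)
scramble = 1 - 2 * rng.integers(0, 2, len(symbols) * SF)  # +/-1 scrambling sequence

# Transmit: repeat each symbol over SF chips, spread, then scramble
chips = np.repeat(symbols, SF) * np.tile(code, len(symbols)) * scramble

# Receive: undo scrambling, then correlate each symbol period with the code
despread = (chips * scramble).reshape(-1, SF) @ code / SF
assert np.array_equal(np.sign(despread).astype(int), symbols)
```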
Two channels with different frequencies, polarizations, or physical locations experience fading independently of each other. By combining two or more such channels, fading can be reduced. This is called diversity. Diversity ensures that the same information reaches the receiver from statistically independent channels. There are two types of diversity: microdiversity that mitigates the effect of multipath fading, and macrodiversity that mitigates the effect of shadowing.
For a fading channel, if we use two well-separated antennas, the probability of both the antennas being in a fading dip is low. Diversity is most efficient when multiple diversity channels carry independently fading copies of the same signal. This leads to a joint pdf being the product of the marginal pdfs for the channels. Correlation between the fading of the channels reduces the effectiveness of diversity, and correlation is characterized by the correlation coefficient, as discussed in Section 3.4.2. Note that for an AWGN channel, diversity does not improve performance.
Common diversity methods for dealing with small-scale fading are spatial diversity (multiple antennas with space separation), temporal diversity (time division), frequency diversity (frequency division), angular diversity (multiple antennas using different antenna patterns), and polarization diversity (multiple antennas with different polarizations). Macrodiversity is usually implemented by combining signals received by multiple BSs, repeaters, or access points, and the coordination between them is part of the networking protocols.
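The product-of-marginals argument above can be checked numerically. For N independent Rayleigh branches with selection combining, the outage probability is the product of the per-branch outage probabilities, (1 − e^(−γth/γ̄))^N. A Monte Carlo sketch with assumed SNR values:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_bar = 10.0      # mean branch SNR (linear), assumed
gamma_th = 1.0        # outage threshold (linear), assumed
trials = 200_000

for N in (1, 2, 4):
    # Rayleigh fading -> exponentially distributed branch SNR
    snr = rng.exponential(gamma_bar, size=(trials, N))
    p_sim = np.mean(snr.max(axis=1) < gamma_th)         # selection combining outage
    p_theory = (1 - np.exp(-gamma_th / gamma_bar))**N   # product of marginal CDFs
    print(f"N={N}: simulated {p_sim:.4f}, theory {p_theory:.4f}")
```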
A digital image is a rectangular array of picture elements (pixels), arranged in m rows and n columns. The resolution of the image is m × n. Images can be categorized into bi-level, grayscale, and color images. A natural scene, such as a picture taken by a digital camera or obtained by using a scanner, is typically a continuous-tone image, where the colors vary continuously to the eye and there is a lot of noise in the picture. An artificial image, such as a graphical image, does not have the noise or blurring of a natural image. A cartoon-like image consists of uniform color in each area, but adjacent areas have different colors.
The features in each type of image can be exploited to achieve better compression. For example, in a bi-level image each pixel is represented by one bit. A pixel has a high probability of being the same as its neighboring pixels, and thus RLE is suitable for compressing such an image. The image can be scanned column by column or in a zigzag pattern. In a grayscale image, each pixel is represented by n bits, and a pixel tends to be similar, but not necessarily identical, to its immediate neighbors; thus RLE is not directly suitable. By representing pixel values with a Gray code, which differs in only one bit between two consecutive integers, a grayscale image can be separated into n bi-level images, each of which can be compressed by using RLE.
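A sketch of this Gray-code bit-plane decomposition (the helper name and toy image are illustrative assumptions):

```python
import numpy as np

def gray_bit_planes(img: np.ndarray, nbits: int = 8) -> list:
    """Split a grayscale image into nbits bi-level images via Gray coding."""
    gray = img ^ (img >> 1)   # binary-reflected Gray code of each pixel value
    return [(gray >> k) & 1 for k in range(nbits)]

# Verify the defining property: consecutive integers differ in exactly one Gray bit
g = np.arange(256, dtype=np.uint16)
g = g ^ (g >> 1)
assert all(bin(int(g[i] ^ g[i + 1])).count("1") == 1 for i in range(255))

img = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)  # toy gradient image
planes = gray_bit_planes(img)
# A +/-1 change in pixel value flips only one plane, so each bi-level plane
# has long runs of identical bits and compresses well with RLE.
```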
The cellular concept was a major breakthrough in mobile communications, and it initiated the era of modern wireless communications. It helped to solve the problem of spectral congestion and user capacity. For wireless communications, the antennas are typically required to be omnidirectional. The cellular structure divides the geographical area into many cells. A BS equipped with an omnidirectional antenna is installed at the center of each cell. Neighboring cells use different frequency bands to avoid co-channel interference (CCI).
The same frequency bands can be used by different cells that are sufficiently far apart. This leads to frequency reuse. For a distance D between two cells that use the same frequency bands and a cell radius R, the relative distance D/R is the reuse distance. There is no CCI within a cluster of cells, where each cell uses a different frequency spectrum. The number of cells in a cluster is called the cluster size. The cluster size determines the capacity of the cellular system: a smaller cluster size leads to a larger capacity.
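For hexagonal cells, the admissible cluster sizes N and the corresponding reuse distance follow from the geometry; this standard result (not derived in the excerpt) reads:

```latex
N = i^2 + ij + j^2, \quad i, j = 0, 1, 2, \ldots, \qquad \frac{D}{R} = \sqrt{3N}
```

A smaller N thus brings co-channel cells closer together, but lets each band be reused more often per unit area, which is why capacity grows as the cluster size shrinks.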
The cell usually takes the shape of a hexagon, so the overall division of space resembles a beehive pattern. This is illustrated in Fig. 4.1 for cluster size 4 and reuse distance D/R ∼ 4. This hexagonal cell shape is suitable when the antennas of the BSs are placed on top of buildings with a coverage radius of a few miles, such as in the 1G mobile systems.
A channel is an abstract model describing how the received (or retrieved) data is associated with the transmitted (or stored) data. Channel coding starts with Claude Shannon's mathematical theory of communication.
Error detection/correction coding
Channel coding can be either error detection coding or error correction coding. When only error detection coding is employed, the receiver can request a transmission repeat, and this technique is known as automatic repeat request (ARQ). This requires two-way communications. An ARQ system requires a code with good error-detecting capability so that the probability of an undetected error is very small.
Forward error correction (FEC) coding allows errors to be corrected based on the received information, and it is more important for achieving highly reliable communications at rates approaching channel capacity. For example, with turbo coding, an uncoded BER of 10⁻³ corresponds to a BER of 10⁻⁶ after turbo decoding. For applications that use simplex (one-way) channels, FEC coding must be used, since the receiver must detect and correct errors without a reverse channel for retransmission requests.
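As a minimal FEC illustration (the turbo codes discussed here are far more powerful; the classic (7,4) Hamming code is used below only because it fits in a few lines), a single bit error can be located and corrected from the syndrome:

```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code, G = [I|P], H = [P^T|I]
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2

received = codeword.copy()
received[2] ^= 1                        # inject a single bit error

syndrome = H @ received % 2             # nonzero syndrome <-> error detected
# For a single error, the syndrome equals the column of H at the error position
err_pos = np.where((H.T == syndrome).all(axis=1))[0][0]
received[err_pos] ^= 1                  # correct the flipped bit
assert np.array_equal(received, codeword)
```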
Another method using error detection coding is error concealment. This method processes data in such a way that the effect of errors is minimized. Error concealment is especially useful for applications that carry data for subjective appreciation, such as speech, music, image, and video. Loss of a part of the data is acceptable, since there is still some inherent redundancy in the data.
Source coding or data compression is used to remove redundancy in a message so as to maximize the efficiency of storage and transmission of information. In Chapter 14, we introduced Shannon's source-coding and rate-distortion theorems, as well as lossless data compression based on the source-coding theorem. In Chapters 16 and 17, we will address speech/audio and image/video coding. Lossy data compression is obtained by quantizing the analog signals, and the performance of quantization is characterized by the rate-distortion bound. Source coding is the procedure used to convert an analog or digital signal into a bitstream; both quantization and noiseless data compression may be part of source coding.
Coding for analog sources
Source coding can be either lossy or lossless. For discrete sources, a lossless coding technique such as entropy coding is used; Huffman coding is a popular entropy coding scheme. Lossless coding achieves a lower compression ratio, and hence uses more radio spectrum. For analog sources, lossy coding techniques are usually used.
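A compact sketch of Huffman coding (the function name and toy string are illustrative assumptions): the two least frequent subtrees are repeatedly merged, so frequent symbols end up with short codewords.

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Return a prefix-free code table built from symbol frequencies."""
    # Each heap entry: [frequency, tiebreaker, {symbol: partial codeword}]
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f0, _, lo = heapq.heappop(heap)   # least frequent subtree -> prefix '0'
        f1, _, hi = heapq.heappop(heap)   # next least frequent   -> prefix '1'
        merged = {s: "0" + c for s, c in lo.items()}
        merged.update({s: "1" + c for s, c in hi.items()})
        heapq.heappush(heap, [f0 + f1, i, merged])
        i += 1
    return heap[0][2]

table = huffman_code("abracadabra")
encoded = "".join(table[s] for s in "abracadabra")
# The frequent symbol 'a' gets a shorter codeword than the rare 'c' and 'd'.
```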
PCM is a digital representation of an analog signal. The signal magnitude, sampled regularly at uniform intervals, is quantized to a series of symbols in a digital, usually binary, code. This can be performed by using A/D converters; the demodulation of PCM signals can be performed by D/A converters (DACs). The PCM code serves as the original waveform representation on which source coding operates. There are three approaches to source coding.
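A minimal PCM sketch: uniform sampling and quantization of a sine wave, then reconstruction. The sampling rate, tone, and resolution are assumed values. For a full-scale sinusoid, the SQNR of an n-bit uniform quantizer is approximately 6.02n + 1.76 dB:

```python
import numpy as np

fs, f, nbits = 8000, 440.0, 8             # sampling rate, tone, resolution (assumed)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f * t)             # 'analog' signal, normalized to [-1, 1]

levels = 2 ** nbits
step = 2.0 / levels
codes = np.clip(np.floor((x + 1) / step), 0, levels - 1).astype(int)  # A/D: quantize
xq = (codes + 0.5) * step - 1             # D/A: reconstruct mid-rise levels

sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
print(f"SQNR = {sqnr:.1f} dB (theory: 6.02*n + 1.76 = {6.02 * nbits + 1.76:.1f} dB)")
```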
MIMO systems are wireless systems with multiple antenna elements at both ends of the link. They can be used for beamforming, diversity combining, or spatial multiplexing. The first two applications are the same as for smart antennas, while spatial multiplexing is the transmission of multiple data streams on multiple antennas in parallel, leading to a substantial increase in capacity. MIMO technology and turbo coding are the two most prominent recent breakthroughs in wireless communications.
MIMO systems have the ability to exploit, rather than combat, multipath propagation. The separability of the MIMO channel relies on the presence of rich multipath, which makes the channel spatially selective. Thus, MIMO effectively exploits multipath. In contrast, some smart antenna systems perform better in the LOS case, and their optimization criteria are based on the DoA/DoD. Although some smart antenna systems generate good results in the non-LOS channel, they mitigate multipath rather than exploit it.
The maximum spatial diversity obtained for a non-frequency-selective fading MIMO channel is proportional to the product of the numbers of receive and transmit antennas. In the uncorrelated Rayleigh fading channel, the MIMO channel capacity/throughput limit grows linearly with the number of transmit or receive antennas, whichever is smaller, i.e., min(Nt, Nr). According to analysis and simulation reported in the literature, MIMO can provide a spectral efficiency as high as 20–40 bits/s/Hz.
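The linear scaling with min(Nt, Nr) can be illustrated with the standard i.i.d. Rayleigh capacity expression C = log2 det(I + (ρ/Nt) H Hᴴ); the SNR and antenna counts below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
snr = 100.0           # 20 dB, linear, assumed
trials = 2000

for n in (1, 2, 4):   # Nt = Nr = n
    caps = []
    for _ in range(trials):
        # i.i.d. Rayleigh channel: circularly symmetric complex Gaussian entries
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        caps.append(np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real))
    print(f"{n}x{n}: ergodic capacity ~ {np.mean(caps):.1f} bits/s/Hz")
# The mean capacity grows roughly in proportion to n, i.e., to min(Nt, Nr).
```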
A wireless ad hoc network is an autonomous, self-organized, distributed, peer-to-peer network of fixed, nomadic, or mobile users that communicate over bandwidth-constrained wireless links. There is no preexisting infrastructure. Relay, mesh, and star networks are special cases of wireless ad hoc networks. When the nodes are connected in a mesh topology, the network is also known as a wireless mesh network; when the nodes are mobile, it is known as a mobile ad hoc network (MANET).
A wireless ad hoc network relies on multihop relaying of packets, as shown in Fig. 22.1. It can be easily and rapidly deployed, and expensive infrastructure can be avoided. Without an infrastructure, the nodes handle the necessary control and networking tasks by themselves, generally by distributed control. Typical applications are wireless PANs for emergency operations and for civilian and military use. The wireless ad hoc network is playing an increasing role in wireless networking, and an ad hoc networking mode has been, or is being, standardized in most IEEE families of wireless networks.
MANETs and wireless sensor networks (WSNs) are the two major types of wireless ad hoc networks. Both are distributed, multihop systems. A MANET is an autonomous collection of mobile routers (and associated hosts) connected by wireless links. Each node is an information appliance, such as a personal digital assistant (PDA), equipped with a radio transceiver. The nodes are fully mobile.