Classically, light is an electromagnetic phenomenon, described by Maxwell's equations. However, under certain conditions, such as low intensity or in the presence of certain nonlinear optical materials, light starts to behave differently, and we have to construct a ‘quantum theory of light’. We can exploit this quantum behaviour of light for quantum information processing, which is the subject of this book. In this chapter, we develop the quantum theory of the free electromagnetic field. This means that we do not yet consider the interaction between light and matter; we postpone that to Chapter 7. We start from first principles, using the canonical quantization procedure in the Coulomb gauge: we derive the field equations of motion from the classical Lagrangian density for the vector potential, promote the field and its canonical momentum to operators, and impose the canonical commutation relations. This will lead to the well-known creation and annihilation operators, and ultimately to the concept of the photon. After quantization of the free electromagnetic field we consider transformations of the mode functions of the field. We will demonstrate the intimate relation between these linear mode transformations and the effect of beam splitters, phase shifters, and polarization rotations, and show how they naturally give rise to the concept of squeezing. Finally, we introduce coherent and squeezed states.
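As a preview of both ingredients, the single-mode operators will obey the canonical commutation relation, and a lossless beam splitter will act as a unitary mode transformation; in a standard form,

\[ [\hat a_k, \hat a_{k'}^\dagger] = \delta_{kk'}, \qquad \begin{pmatrix} \hat a \\ \hat b \end{pmatrix} \mapsto \begin{pmatrix} t & r \\ -r^* & t^* \end{pmatrix} \begin{pmatrix} \hat a \\ \hat b \end{pmatrix}, \qquad |t|^2 + |r|^2 = 1, \]

where t and r are the transmission and reflection amplitudes of the beam splitter.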
Before we begin our detailed discussion of optical quantum information processing, we need to introduce certain figures of merit that quantify how well our information processor is performing. For a quantum computer, the time and resources it takes to complete a task are good measures, but we are faced with the problem that we do not know what the final design for a quantum computer will be. We therefore need additional figures of merit that are applicable in a wide range of situations. We first introduce the concept of the ‘density operator’, which will be used to describe quantum states about which we have incomplete knowledge. We then define the ‘fidelity’, which is used for assessing how close we are to a particular desired quantum state, and we discuss different measures of entanglement. The later part of the chapter will focus on figures of merit that are particularly relevant for assessing optical states, namely the first-order correlation functions, and the visibility of interference phenomena.
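To make the fidelity concrete before it is formally defined, here is a minimal numerical sketch of our own (not part of the text) that evaluates the Uhlmann fidelity F(ρ,σ) = [Tr √(√ρ σ √ρ)]² for single-qubit density matrices:

import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    # Uhlmann fidelity F = (Tr sqrt(sqrt(rho) @ sigma @ sqrt(rho)))^2
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # pure state |+><+|
mixed = np.eye(2) / 2                       # maximally mixed state

print(fidelity(plus, plus))    # 1.0 for identical states
print(fidelity(plus, mixed))   # 0.5: overlap with the maximally mixed state

Note that some authors define the fidelity without the final square; the squared convention is used here purely for illustration.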
Density operators and superoperators
Classical physics often confronts us with situations where we can say only a limited amount about the state of a system: we can measure certain ‘bulk’ variables such as the temperature and pressure, but we do not know all of the details of the microscopic make-up of the state. For example, we typically have very little knowledge of the positions and velocities of all the atoms that constitute the system.
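Quantum mechanically, this kind of incomplete knowledge is described by the density operator: if the system is known to be in the pure state |ψ_i⟩ with probability p_i, its state is

\[ \hat\rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i| , \qquad \mathrm{Tr}\,\hat\rho = 1 , \]

and expectation values are computed as ⟨A⟩ = Tr(ρ̂Â).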
In this chapter we consider the physical limits to information extraction. This is an important aspect of optical quantum information processing, because many high-precision experiments (such as gravitational wave detection) are implemented in optical systems, in particular interferometers. It is not surprising that, just as in computation and communication, the use of quintessentially quantum mechanical properties allows us to improve the sensitivity of interferometry. We start this chapter with a derivation of the Fisher information and the Cramér–Rao bound, which tell us how much information we can extract about a parameter in a set of measurements. In Section 13.2 we introduce the statistical distance between two probability distributions. This can in turn be used to determine how many times the system needs to be queried before we can determine which probability distribution governs the system. In addition, we make a connection between the statistical distance and the angle between states in Hilbert space. In Sections 13.3 and 13.4 we derive bounds on how fast quantum states evolve to orthogonal states, and how entangled states can be used to improve parameter estimation. Finally, in Section 13.5 we present a number of approaches for implementing quantum metrology in optical systems, most importantly in optical interferometers.
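For later reference, the two central quantities take the following standard form. If a measurement has outcome probabilities p(x|θ) that depend on the parameter θ, the Fisher information and the Cramér–Rao bound on any unbiased estimator after N independent measurements are

\[ F(\theta) = \sum_x \frac{1}{p(x|\theta)} \left( \frac{\partial p(x|\theta)}{\partial\theta} \right)^{\!2} , \qquad (\Delta\theta)^2 \ge \frac{1}{N F(\theta)} . \]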
Parameter estimation and Fisher information
In the theory of computation, discrete variables have the benefit that a practically perfect readout is often possible.
In this chapter we will discuss solid-state quantum computing, concentrating on systems where qubit manipulation, initialization or readout is performed optically. We will begin with a discussion of crystals with a periodic lattice and derive Bloch's theorem, which sets constraints on the form of electronic wave functions in crystals. We will then introduce semiconductor heterostructures and show that these have a discrete energy-level structure with transitions corresponding to the optical region of the electromagnetic spectrum. The discrete levels can be used as several different kinds of qubit, and we will discuss two that can be manipulated optically, namely an electron spin and an exciton. We will touch upon crystal defects and their importance in optical quantum computing. The emphasis will be on the NV− centre in diamond, which has produced some of the most important experimental results in recent years. Towards the end of the chapter, we will discuss specific implementations of single- and two-qubit gates in solid-state structures, before concluding with some methods for scaling up a solid-state device to a full-scale quantum computer.
Basic concepts of solid-state systems
In order to understand the optical characteristics of semiconductors, we must first review some basic concepts from solid-state physics. In particular, we will need the form and properties of the electronic wave functions in a periodic crystal structure. Unfortunately, the electronic states of a solid cannot be calculated exactly.
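What we can state exactly is the constraint imposed by the lattice. Bloch's theorem says that in a potential with the periodicity of the crystal, the electronic wave functions take the form

\[ \psi_{n\mathbf{k}}(\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}) , \qquad u_{n\mathbf{k}}(\mathbf{r} + \mathbf{R}) = u_{n\mathbf{k}}(\mathbf{r}) \]

for every lattice vector R, where n is the band index and ħk the crystal momentum.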
In the previous chapter we developed some of the basic aspects of quantum information processing with single photons as qubits. Apart from noting the obvious benefit of using light for quantum communication, we identified some difficulties in manipulating quantum information that is encoded in photons. In particular, it is difficult to construct two-qubit gates for photonic qubits. In this chapter we will have to face this difficulty head-on in our discussion of quantum computation with single photons and linear-optical elements. The possibility of a quantum computer based on single photons, linear optics, and photon counting was demonstrated in a landmark paper by Knill, Laflamme, and Milburn in 2001. Their protocol is commonly referred to as the ‘KLM protocol’. In subsequent years, the KLM protocol has been dramatically slimmed down in terms of complexity and the necessary resources to create the universal set of quantum gates. For pedagogical reasons we give the streamlined version here, and we briefly describe the original KLM protocol in Appendix 2. We will make extensive use of the results in Section 1.4 about mode transformations, and Section 2.3 about cluster-state quantum computing. We start this chapter, however, with a description of linear-optical networks, and how they fail as deterministic quantum computers. In Section 6.2 we discuss the principle of post-selection, and how this can be used to create probabilistic gates with deterministic feed-forward control.
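The two-photon interference that makes such probabilistic gates possible can be seen in a minimal numerical sketch of our own (not part of the original protocol): a 50:50 beam splitter acting on the input state |1,1⟩ in a truncated Fock basis exhibits the Hong–Ou–Mandel effect, in which the coincidence probability vanishes.

import numpy as np
from scipy.linalg import expm

d = 3                                        # Fock truncation: 0, 1, 2 photons per mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)     # single-mode annihilation operator
I = np.eye(d)

A = np.kron(a, I)                            # annihilation operator, mode a
B = np.kron(I, a)                            # annihilation operator, mode b

# 50:50 beam splitter: U = exp[i (pi/4) (A†B + A B†)]
U = expm(1j * (np.pi / 4) * (A.conj().T @ B + A @ B.conj().T))

psi_in = np.zeros(d * d, dtype=complex)
psi_in[1 * d + 1] = 1.0                      # input state |1,1>
probs = np.abs(U @ psi_in) ** 2

print(round(probs[1 * d + 1], 6))            # P(1,1) = 0: the photons bunch
print(round(probs[2 * d + 0], 6))            # P(2,0) = 0.5
print(round(probs[0 * d + 2], 6))            # P(0,2) = 0.5

Because the beam-splitter Hamiltonian conserves the total photon number, the truncation at two photons per mode is exact for this input.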
After the preceding four introductory chapters, we are finally ready to discuss optical quantum information processing. While in principle quantum communication can be implemented with atoms or electrons, in practice the implementation of choice for long-distance quantum communication will almost certainly be optical. In this chapter, we will develop some of the key topics in quantum communication with single photons, and we will discuss continuous-variable quantum communication in Chapter 8. In some ways, quantum communication is the most technologically advanced part of quantum information processing. Most protocols in this chapter, such as teleportation and cryptography, have been convincingly demonstrated in the lab, and there are already several commercial organizations that sell cryptographic systems based on quantum mechanical principles. In particular, the promise of secure communication with quantum cryptography is one of the driving forces behind the development of single-photon sources and detectors. In this chapter, we first construct several optical representations of the qubit, including polarized photons and time-bin encoding. Since the single-photon implementation of a qubit is central to optical quantum information processing, this constitutes a large part of the chapter. In Section 5.2 we discuss quantum teleportation and entanglement swapping with single photons. We also present a method for entanglement distillation using entanglement swapping. In Section 5.3 we introduce decoherence-free subspaces, which can be used to reduce noise in the transmission of quantum information, and even establish communication channels in the absence of shared reference frames.
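In the polarization encoding, for example, the computational basis states are identified with horizontal and vertical polarization, and entangled photon pairs take the familiar Bell form:

\[ |0\rangle \equiv |H\rangle , \quad |1\rangle \equiv |V\rangle , \qquad |\Phi^+\rangle = \frac{1}{\sqrt 2} \left( |H\rangle_A |H\rangle_B + |V\rangle_A |V\rangle_B \right) . \]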
Solid-state systems, by their very nature, have a vast number of different possible quantum degrees of freedom. In Chapter 11, we saw that some of these degrees of freedom make good qubits. However, there are plenty more which are less suitable, since they cannot easily be localized and externally controlled. Once the qubit has been chosen, it is important to think about how it interacts with the other, uncontrolled quantum excitations in its environment. Such an interaction leads to unpredictable behaviour and can cause decoherence – the irretrievable loss of quantum information from the qubit – and this will be the topic of this chapter. The most obvious decoherence mechanism for any optical manipulation scheme is the spontaneous emission of photons. The theory behind this follows analogously from the theory we discussed in Chapter 7, with a suitable definition of a transition dipole for the relevant transitions. However, solid-state systems bring with them lattice vibrations, or phonons, which have no direct atomic analogue. We will therefore focus on phonons in this chapter, first discussing how we model them, and second how they interact with the electron-based qubit that we discussed in the last chapter. Later, we will see how this leads to a loss of coherence, and how optical methods can be used to slow the rate of coherence loss. Phonon interactions are complex and not easy to model exactly, but we will show that with certain approximations very successful theories can be developed.
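As a schematic indication of what such a model looks like, the simplest qubit-phonon coupling has the form of the independent boson model, written here in a generic form in which the coupling constants g_k depend on the material and on the choice of qubit:

\[ \hat H = \frac{\epsilon}{2}\,\hat\sigma_z + \sum_{\mathbf k} \hbar\omega_{\mathbf k}\, \hat b^\dagger_{\mathbf k} \hat b_{\mathbf k} + \hat\sigma_z \sum_{\mathbf k} \hbar \left( g_{\mathbf k}\, \hat b^\dagger_{\mathbf k} + g^*_{\mathbf k}\, \hat b_{\mathbf k} \right) , \]

where σ̂_z acts on the qubit and b̂_k annihilates a phonon in mode k. A coupling of this form causes dephasing without energy relaxation, which is one of the coherence-loss mechanisms discussed in this chapter.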
We have so far spoken almost exclusively of photons and linear-optical elements, and seen just how powerful those two components can be for information processing. They provide unbreakable cryptographic tools, and allow for efficient quantum computing. However, many more possibilities become available when we allow photons to interact with atoms and solid matter in a quantum mechanical way. In particular, it becomes possible to create a quantum memory, the lack of which is the principal difficulty for linear-optical quantum computing. In this chapter we will take the first steps towards a full understanding of a photon's interaction with atoms. We will show how to describe the interactions within a system consisting of photons and few-level atoms, and show how this interaction can be manipulated and exploited to provide quantum information processors based on both atomic and photonic qubits. We will also show that photon emission from atoms can degrade the quantum information contained within atoms, and we will present a formalism to model this effect. We begin with a general discussion of atom-photon interactions.
Atomic systems as qubits
Let us first consider an electron in an isolated atom. It is bound there by the Coulomb force due to the charge distribution of all the other electrons and the nucleus. The potential that describes this coupling is given by V(r).
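For a hydrogen-like atom, for instance, this potential takes the Coulomb form

\[ V(r) = -\frac{Z_{\mathrm{eff}}\, e^2}{4\pi\epsilon_0 r} , \]

where Z_eff is the effective charge seen by the electron once the screening by the other electrons is taken into account.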
In optical quantum information processing, two of the most basic elements are the sources of quantum mechanical states of light, and the devices that can detect these states. In this chapter, we narrow this down to photon sources and photodetectors. We will describe first how detectors work, starting from abstract ideal detectors, via a complete description of realistic detectors in terms of POVMs, to a brief overview of current photodetectors. Subsequently, we will define what a single-photon source is, and how we can determine experimentally whether a source produces single photons or something else. Having laid down the ground rules, we will survey some of the most popular ways photons are produced in the laboratory. Finally, we take a look at sources of entangled photons and quantum non-demolition measurements of photons.
A mathematical model of photodetectors
Photodetectors are devices that produce a macroscopic signal when triggered by one or more photons. In the ideal situation, every photon that hits the detector contributes to the macroscopic signal, and there are no ‘ghost’ signals, or so-called dark counts. In this situation we can define two types of detector, namely the ‘photon-number detector’ and ‘detectors without number resolution’.
First, the photon-number detector is a (largely hypothetical) device that tells us how many photons there are in a given optical mode that is properly localized in space and time. This property is called ‘photon-number resolution’.
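In the language of generalized measurements, these two ideal detectors correspond to simple POVMs: the photon-number detector projects onto the Fock states, while a detector without number resolution can distinguish only ‘no click’ from ‘click’:

\[ \hat E_n = |n\rangle\langle n| \quad (n = 0, 1, 2, \ldots) , \qquad \hat E_{\mathrm{no\ click}} = |0\rangle\langle 0| , \quad \hat E_{\mathrm{click}} = \hat{\mathbb{1}} - |0\rangle\langle 0| . \]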
Wireless channels suffer from time-varying impairments such as multipath fading, interference, and noise. Diversity techniques, exploiting time, frequency, space, polarization, or angle diversity, are typically used to mitigate these impairments. Diversity gain is achieved by receiving independently fading replicas of the signal.
A multiple-antenna system employs multiple antennas at the transmitter, the receiver, or both. It can be multiple-input single-output (MISO) for beamforming or transmit diversity at the transmitter, single-input multiple-output (SIMO) for diversity combining at the receiver, or multiple-input multiple-output (MIMO), depending on the numbers of transmit and receive antennas. The MISO, SIMO, and MIMO channel models can be generated by using the angle-delay scattering function.
Multiple-antenna systems are generally grouped into smart antenna systems and MIMO systems. A smart antenna system is a subsystem that contains multiple antennas; based on spatial diversity and signal processing, it significantly increases the performance of wireless communication systems. Direction-finding and beamforming are the two most fundamental topics of smart antennas. Direction-finding is used to estimate the number of emitting sources and their directions of arrival (DoAs), while beamforming is used to estimate the signal of interest (SOI) in the presence of interference.
A MIMO system consists of multiple antennas at both the transmitter and the receiver. MIMO systems are typically used for transmit diversity and spatial multiplexing. Spatial multiplexing can maximize the system capacity by transmitting a different bitstream from each transmit antenna.
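The capacity claim can be made quantitative with the standard expression C = log₂ det(I + (ρ/n_t) H H†) for a channel matrix H at signal-to-noise ratio ρ. A short numerical sketch of our own, with assumed parameters, estimates the ergodic capacity of an i.i.d. Rayleigh MIMO channel:

import numpy as np

rng = np.random.default_rng(0)
nt, nr, snr = 4, 4, 10.0     # transmit/receive antennas, linear SNR (10 dB)

caps = []
for _ in range(2000):
    # i.i.d. Rayleigh channel: complex Gaussian entries, unit average power
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    # Equal power per transmit antenna, no channel knowledge at the transmitter
    caps.append(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real))

print(np.mean(caps), "bits/s/Hz")

At high SNR the capacity scales roughly linearly with min(n_t, n_r), which is the spatial-multiplexing gain.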
The term microwaves describes electromagnetic waves with frequencies from 300 MHz to 300 GHz, corresponding to wavelengths in free space from 1 m to 1 mm. Within the microwave range, the waves from 30 GHz to 300 GHz have wavelengths between 1 mm and 10 mm, and are hence known as millimeter waves. Below 300 MHz the spectrum of electromagnetic waves is known as the radio frequency (RF) spectrum, while above the microwave spectrum lie the infrared, visible, ultraviolet, and x-ray spectra. Wireless communications uses only the electromagnetic waves in the microwave and RF spectra. In the wireless communications literature, the term RF is often used to represent the entire RF and microwave spectra.
Receiver performance requirements
The requirements on RF receivers are typically more demanding than those on transmitters. In addition to the requirements on gain and noise figure, the receiver must have:
Good sensitivity, that is, the ability to detect the minimum signal power at the antenna for a given BER requirement. For example, the GSM standard requires a reception dynamic range from −102 dBm to −15 dBm, IEEE 802.11g requires a reception range of −92 dBm to −20 dBm, for WCDMA it is −117 dBm to −25 dBm (before spreading), for CDMA2000 it is −117 dBm to −30 dBm, and for WiMedia it is −80.8 dBm/MHz (or −72.4 dBm/MHz at the highest speed) to −41.25 dBm/MHz. For multiple data rates, a higher data rate requires a larger SNR and hence a higher minimum detectable power at the receiver.
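These sensitivity figures follow from the standard relation between the thermal noise floor, channel bandwidth B, receiver noise figure NF, and the minimum SNR required for the given BER:

\[ \mathrm{Sensitivity\ (dBm)} = -174\ \mathrm{dBm/Hz} + 10 \log_{10} B + \mathrm{NF} + \mathrm{SNR}_{\min} . \]

For example, with illustrative (not standard-mandated) values for GSM of B = 200 kHz, NF = 10 dB, and SNR_min = 9 dB, this gives −174 + 53 + 10 + 9 = −102 dBm, matching the figure quoted above.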
UWB technology, also known as impulse radio, was first used by Marconi, whose spark transmitters carried Morse code across the Atlantic in 1901. Modern UWB technology has been used for radar and communications since the 1960s. Like CDMA systems, early UWB systems were designed for covert military radar and communications. The early applications of UWB technology were primarily related to radar, driven by the fine ranging resolution that comes with large bandwidth. UWB technology for wireless communications was pioneered by Scholtz. With the intent of operating UWB in an unlicensed mode that overlaps licensed bands, the FCC issued rules under the FCC Rules and Regulations Part 15 for UWB operation in February 2002.
The FCC defined a UWB transmitter as “an intentional radiator that, at any point in time, has a fractional bandwidth equal to or greater than 0.20, or has a UWB bandwidth equal to or greater than 500 MHz, regardless of the fractional bandwidth”. “The UWB bandwidth is the frequency band bounded by the points that are 10 dB below the highest radiated emission, as based on the complete transmission system including the antenna.”
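In terms of the upper and lower −10 dB frequencies f_H and f_L of this definition, the fractional bandwidth is

\[ B_{\mathrm{frac}} = \frac{2 (f_H - f_L)}{f_H + f_L} \ge 0.20 . \]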
According to the FCC regulations, the transmitter sends pulses occupying a bandwidth of at least 500 MHz within the band 3.1 to 10.6 GHz, with output power density below −41.25 dBm/MHz. The FCC Part 15 limit of 500 µV/m at 3 meters is equivalent to an effective isotropic radiated power (EIRP) of −41.25 dBm/MHz.
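The equivalence of the field-strength and EIRP limits can be verified directly. The following sketch of our own converts 500 µV/m at 3 meters into radiated power using the free-space wave impedance of approximately 377 Ω:

import math

E = 500e-6     # field-strength limit, V/m
d = 3.0        # measurement distance, m
eta0 = 377.0   # free-space wave impedance, ohms

S = E**2 / eta0                    # power density at distance d, W/m^2
eirp_w = S * 4 * math.pi * d**2    # total power of an equivalent isotropic radiator, W
print(10 * math.log10(eirp_w / 1e-3))   # about -41.25 dBm

Since the field strength is measured in a 1 MHz bandwidth, the result is the power density of −41.25 dBm/MHz quoted above.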