Throughout this text we assume that the reader is familiar with elementary quantum mechanics and the properties of complex vector spaces, and in this appendix we provide a brief reminder of these topics. In particular, we introduce Dirac's notation for describing quantum mechanical systems. Many areas of quantum mechanics studied in undergraduate degrees can be described without using Dirac notation, and so its importance may not be immediately obvious. In other areas, however, the advantages of Dirac notation are huge, and it is essentially the only notation in use. This is particularly true of quantum information theory.
Dirac's notation is closely related to that used to describe abstract vector spaces known as Hilbert spaces, and many formal arguments about the properties of quantum systems are in fact arguments about the properties of Hilbert spaces. Here we aim to steer a careful course between the twin perils of excessive mathematical sophistication and of taking too much on trust. We will not prove elementary results whose proofs can be found elsewhere, but will instead concentrate on how these results can be used.
Hilbert space
A Hilbert space is an abstract vector space. As such, it has many properties in common with the space of ordinary three-dimensional vectors, but it also differs in several important ways. Firstly, the space need not be three-dimensional, but can have any number of dimensions. (The description below largely assumes that the number of dimensions is finite, but it is also possible to extend these results to infinite-dimensional spaces.) Secondly, when the vectors are multiplied by scalar numbers, these numbers can be complex.
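Both differences, arbitrary dimension and complex scalars, are easy to exercise numerically. The sketch below (plain NumPy, ours rather than the text's) works in a four-dimensional complex space, where the inner product conjugates its first argument:

```python
import numpy as np

# Two vectors in a four-dimensional complex Hilbert space (dimension and
# entries chosen arbitrarily for illustration).
u = np.array([1 + 1j, 0, 2j, 1])
v = np.array([0, 1j, 1, -1])

# The inner product conjugates its first argument; np.vdot does exactly this.
inner = np.vdot(u, v)

# The squared norm <u|u> is always real and non-negative, even for complex
# entries, so lengths behave just as they do for ordinary vectors.
norm_u = np.sqrt(np.vdot(u, u).real)
print(inner, norm_u)
```

Note that np.vdot, unlike np.dot, takes the complex conjugate of its first argument, which is exactly the Hilbert-space inner product convention used in quantum mechanics.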
In Part II of this book we show how computations can be implemented using quantum systems. As we will see, the differences between bits and qubits, briefly outlined in Part I, lead to some important consequences for quantum computing. We begin with a brief introduction to the fundamental principles underlying quantum computing: specific implementations will be considered in later chapters.
Reversible computing
While quantum computation in its modern form is still a relatively young discipline, researchers have been interested in the relationship between quantum mechanics and computing for a long time. Early workers were not interested in the ideas of quantum parallelism, which will be explored in the next section, but rather in the question of whether explicitly quantum mechanical systems could be used to implement classical computations. In addition to its intrinsic interest, there are two technological reasons why this might be considered an important question.
The first reason is a direct consequence of Moore's law. After the development of integrated circuits, computing technology began its headlong dash down the twin roads of ever-faster and ever-smaller devices. These two phenomena are closely related: as computing devices must communicate within themselves, and as the speed of information transfer cannot exceed the speed of light, faster computers must indeed be smaller. There is, however, a limit to this process, set by the atomic scale: once the size of individual transistors becomes comparable with that of atoms, the old-fashioned approach of micro-electronics becomes completely untenable.
Classical information processing is performed using bits, which are just two-state systems, with the two states called 0 and 1. By grouping bits together we can represent arbitrary pieces of information, and by manipulating these bits we can perform arbitrary computations. The corresponding basic element used in quantum information is the quantum bit, or qubit. This is simply a quantum system with two orthonormal basis states, which we shall call |0⟩ and |1⟩.
There are many possible physical implementations of a qubit, such as spin states of electrons or atomic nuclei, charge states of quantum dots, atomic energy levels, vibrational states of groups of atoms, polarization states of photons, or paths in an interferometer. At this stage the physical implementation is not important: the idea of a qubit is to abstract the discussion away from physical details. Taking the standard approach of quantum information theory, we shall begin by not worrying too much about the properties of these states, or even what their energies are; we shall simply assume that they are eigenstates of the system's Hamiltonian with known eigenvalues (that is, known energies). This approach allows us to concentrate on the fundamental properties of the system, without considering all the tedious details.
We can in principle perform classical information processing on our quantum system by using the two states |0⟩ and |1⟩ as our logical states 0 and 1 and proceeding in the usual fashion, giving rise to the field of reversible computing, which will be explored briefly in Part II. This, however, misses the point.
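As an illustration (a minimal NumPy sketch, not taken from the text), the two basis states can be represented as orthonormal column vectors, with a general qubit state being a normalized complex superposition of them:

```python
import numpy as np

# Computational basis states |0> and |1> as orthonormal column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A general qubit state a|0> + b|1> must satisfy |a|^2 + |b|^2 = 1; this
# particular equal superposition is chosen purely for illustration.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = a * ket0 + b * ket1

# Probabilities of finding 0 or 1 in a computational-basis measurement.
p0 = abs(np.vdot(ket0, psi)) ** 2
p1 = abs(np.vdot(ket1, psi)) ** 2
print(p0, p1)  # each is approximately 0.5 for this state
```

It is the ability to place the qubit in such superpositions, rather than just in the logical states 0 and 1, that classical reversible computing fails to exploit.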
Cryptographic protocols can be classified by the type of security against eavesdropping which they provide. There exist mathematically secure schemes whose security relies on mathematical proofs (like the Vernam cipher discussed below) or conjectures (like public key RSA encryption) about the complexity of deciphering the message without possessing the correct key. The majority of current secure public Internet connections rely on such schemes. Alternatively, a cryptographic setup may provide a physically secure method for communicating, where the security is guaranteed by the physical laws governing the communication protocol. Here we first discuss a provably secure classical communication protocol and then quantum methods for distributing the necessary keys.

The BB84 protocol, invented in 1984 and the first of its kind, relies on the impossibility of perfectly distinguishing non-orthogonal quantum states from a single copy of the quantum system. In contrast, the Ekert91 protocol makes use of Bell correlations between entangled pairs of photons. These correlations are destroyed when Eve attempts to make a measurement on one of the particles, since her measurement assigns a definite value (an “element of reality”) to the measured observable; they are also degraded by imperfections, such as decoherence processes, affecting the scheme. As long as no such elements of reality are introduced, and Bell correlations violating local realism can be generated by Alice and Bob, no eavesdropper can be present.
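The basis-sifting step at the heart of BB84 can be sketched with a purely classical toy model (illustrative code, not from the text; the function name and the ideal, untapped channel are our assumptions):

```python
import random

def bb84_sift(n, seed=0):
    """Classical toy model of BB84 sifting on an ideal, untapped channel.
    Alice sends each bit in a random basis ('+' or 'x'); Bob measures in a
    random basis and recovers the bit only when the bases happen to agree."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases = [rng.choice("+x") for _ in range(n)]
    # Mismatched basis: Bob's measurement outcome is random.
    bob_bits = [bit if ab == bb else rng.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Publicly comparing bases (never bits), both keep only matching slots.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(100)
print(len(alice_key), alice_key == bob_key)  # about half survive; keys agree
```

The quantum content of the real protocol is, of course, precisely what this classical model leaves out: an eavesdropper measuring in the wrong basis disturbs the states, and the resulting errors reveal her presence.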
One-time pads and the Vernam cipher
The Vernam cipher is a cryptographic protocol in which the encryption and decryption procedures are publicly known. The security of the protocol relies entirely on the key, which is private and not publicly known. Alice and Bob share identical n-bit secret key strings, known as one-time pads.
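Encryption and decryption with a one-time pad are the same XOR operation, as the following sketch illustrates (illustrative code; the helper name xor_bytes is ours, not the text's):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Vernam cipher: XOR each message byte with the matching key byte.
    Security requires the key to be truly random, at least as long as the
    message, kept secret, and never reused."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # the shared one-time pad
ciphertext = xor_bytes(message, pad)
recovered = xor_bytes(ciphertext, pad)   # XOR with the same pad decrypts
print(recovered)
```

Because each key bit is used exactly once, every possible message of the same length is an equally plausible decryption of the ciphertext, which is why the scheme is provably secure; the hard part, addressed by quantum key distribution, is getting the pad to Bob in the first place.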
Why yet another book on quantum information theory? Like many lecturers we began writing this text because none of the alternatives seemed quite right. This book is aimed squarely at undergraduate physics students who want a brief but reasonably thorough introduction to the exciting ideas of quantum information, including its applications in computation and communication. It is based on a short course we have taught to fourth-year students at Oxford University since 2004; for the most part it only assumes knowledge of elementary quantum mechanics and linear algebra, and so could even be taught to third-year undergraduates. A brief revision guide to quantum mechanics is provided as an appendix, which should cover any minor points that have been missed.
As the title implies the book is structured in three parts, starting with the basics of quantum information and then applying this to quantum computation and quantum communication. Part I is self-contained, but contains only the barest hints of the exciting applications which attract many people to this field and so might prove unsatisfying on its own. Parts II and III draw heavily on Part I, but are largely independent of each other, and it would be perfectly possible to study only one of these two without the other.
As this text is aimed at physics undergraduates, we believe that it is vital to cover experimental techniques, rather than merely presenting quantum information as a series of abstract quantum operations. We have, however, concentrated on the basic ideas underlying each approach, rather than worrying about particular experimental details.
The natural choice of physical qubit for quantum communication is the photon: a photon can be transmitted quickly from the sender to a distant receiver and the technology for creating, manipulating, distributing, and measuring light pulses is well established. Many of these classical techniques can also be employed for quantum communication protocols. We will therefore focus our attention on optical setups for quantum communication.
We will study a number of optical setups in the next few sections and use the conventions introduced in Chapter 4 to analyze them. In particular, we will assume that the qubits are encoded in the degrees of freedom of a single photon. Some of the main technical challenges in quantum communication arise from this need to work with single-photon pulses. For instance, photons do not interact with each other in vacuum or in linear media, and interactions between two photons in non-linear optical media are relatively weak, so that realizing entangling two-photon gates via coherent interactions is a challenging task.
The quantum communication schemes discussed here circumvent this technical problem to a large extent. They only use non-linear optical materials for parametric down-conversion to create Bell-pairs of photons and then exploit standard linear optical devices and photo-detectors to manipulate them. No further non-linear entangling gates are required.
Parametric down-conversion
In non-linear optical media a single photon can be down-converted into a pair of photons. In this coherent process the incoming photon is destroyed and two photons of lower energy are created.
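Energy conservation fixes the frequencies of the two created photons: the pump photon's energy must equal the sum of the signal and idler photon energies, which in wavelength terms reads 1/λ_pump = 1/λ_signal + 1/λ_idler. A minimal numerical check (the 405 nm pump producing a degenerate 810 nm pair is a common laboratory example, not a figure taken from the text):

```python
# Energy conservation in parametric down-conversion: the pump photon's
# energy equals the sum of the two created photons' energies, i.e.
# 1/lambda_pump = 1/lambda_signal + 1/lambda_idler.
lambda_pump = 405e-9    # pump wavelength in metres (illustrative value)
lambda_signal = 810e-9  # one down-converted photon
lambda_idler = 1 / (1 / lambda_pump - 1 / lambda_signal)
print(lambda_idler)     # the partner photon: also 810 nm (degenerate case)
```

Momentum conservation (phase matching) imposes a similar constraint on the photons' wavevectors, which is what determines the directions in which the pair emerges.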
Trapped ions, trapped atoms, and NMR spin systems are all fine ways of building small “toy” quantum computers, each with its own advantages and disadvantages. There are also many other techniques which have been suggested, although these three so far remain in the lead for general-purpose quantum computing. However, the most powerful general-purpose quantum computers constructed to date have only about a dozen qubits, and this is not nearly large enough to make quantum computers useful rather than merely interesting. (Larger systems have been used to demonstrate particular quantum information processing techniques, but these cannot as yet be used to implement arbitrary quantum algorithms.)
Although it is not completely clear how complex a general-purpose quantum computer needs to be, it is clear that such a device will involve thousands or even millions of qubits, rather than the dozens involved today. It is, therefore, important to consider whether there is any hope of scaling up these technologies to useful sizes, and we will consider each of the three approaches in turn, before turning briefly to alternative technologies which have not been discussed so far.
Trapped ions
Trapped ions initially look very promising as a candidate for scaling up, as it is possible to trap thousands of ions while keeping a reasonable distance between them. Early experiments relied on particular tricks which only work with systems of two ions, but this is not true of more recent work, and there is no reason in principle why these large strings of ions could not be controlled.
Deutsch's algorithm and Grover's quantum search are theoretically interesting, but it is not obvious that these two algorithms are actually useful for anything important. We next consider a selection of more advanced algorithms, several of which may have real-life applications. Some of these will be too complicated to explain fully, and their properties will only be sketched briefly.
The Deutsch–Jozsa algorithm
Deutsch's algorithm is simple, but important, as it shows that a quantum algorithm can find a property of an unknown function (its parity) with a smaller number of queries than any possible classical algorithm (one rather than two). For this reason we can say that quantum computing is more efficient than classical computing within the oracle model of function evaluation. (It is widely believed that quantum computing is more efficient than classical computing in general, but this is a surprisingly hard thing to prove.) The simplicity of the algorithm is also an advantage, as it can be implemented on very primitive quantum computers. Beyond this, however, Deutsch's algorithm is also important as the simplest member of a large family of quantum algorithms, including most notably Shor's quantum factoring algorithm.
The second simplest algorithm in the family is the Deutsch–Jozsa algorithm, which solves a very closely related problem. Consider an unknown binary function with n input bits, giving N = 2ⁿ possible inputs, and a single output bit.
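The Deutsch–Jozsa problem asks whether such a function is constant (the same output for every input) or balanced (each output for exactly half the inputs). A small state-vector simulation (illustrative Python, not from the text; it uses the phase-oracle form |x⟩ → (−1)^f(x)|x⟩) shows how a single oracle query settles this:

```python
import numpy as np

def deutsch_jozsa(f, n):
    """State-vector sketch of the Deutsch-Jozsa circuit, using the phase
    oracle |x> -> (-1)^f(x) |x>. Returns True for constant, False for
    balanced (the promise is that f is one or the other)."""
    N = 2 ** n
    # Hadamards on |0...0> give the uniform superposition over all N inputs.
    state = np.full(N, 1 / np.sqrt(N))
    for x in range(N):          # apply the phase oracle exactly once
        state[x] *= (-1) ** f(x)
    # Final Hadamards: the amplitude of |0...0> is the mean of all amplitudes.
    amp_zero = state.sum() / np.sqrt(N)
    # This amplitude has magnitude 1 for constant f and 0 for balanced f.
    return bool(abs(amp_zero) ** 2 > 0.5)

def constant(x):
    return 1

def balanced(x):
    return x & 1                # output is the lowest input bit

print(deutsch_jozsa(constant, 3))   # True
print(deutsch_jozsa(balanced, 3))   # False
```

A deterministic classical algorithm may need N/2 + 1 queries in the worst case to distinguish the two cases, whereas the quantum circuit above always succeeds with one.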