Until now we have dealt almost solely with the dynamics of closed quantum systems, that is, with quantum systems that do not suffer any unwanted interactions with the outside world. Although fascinating conclusions can be drawn about the information processing tasks which may be accomplished in principle in such ideal systems, these observations are tempered by the fact that in the real world there are no perfectly closed systems, except perhaps the universe as a whole. Real systems suffer from unwanted interactions with the outside world. These unwanted interactions show up as noise in quantum information processing systems. We need to understand and control such noise processes in order to build useful quantum information processing systems. This is a central topic of the third part of this book, which begins in this chapter with the description of the quantum operations formalism, a powerful set of tools enabling us to describe quantum noise and the behavior of open quantum systems.
What is the distinction between an open and a closed system? A swinging pendulum like that found in some mechanical clocks can be a nearly ideal closed system. A pendulum interacts only very slightly with the rest of the world – its environment – mainly through friction. However, to properly describe the full dynamics of the pendulum and why it eventually ceases to move one must take into account the damping effects of air friction and imperfections in the suspension mechanism of the pendulum.
Understanding some elementary number theory is necessary if we are to understand cryptosystems and how quantum computers can be used to break them. In this appendix we review some basic facts about number theory.
Fundamentals
Let's start off by agreeing about a few conventions for nomenclature and notation. The set of integers is the set {…, −2, −1, 0, 1, 2, …}, denoted Z. We may occasionally refer to the natural numbers, meaning non-negative integers, but more often we'll say non-negative integer or positive integer, in order to make the distinction between the case when zero is included, and when zero is not included.
Suppose n is an integer. An integer d divides n (written d|n) if there exists an integer k such that n = dk. We say in this case that d is a factor or divisor of n. Notice that 1 and n are always factors of n. When d does not divide (is not a factor of) n we write d∤n. For example, 3|6 and 3|18, but 3∤5 and 3∤7.
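For readers who like to experiment, the definition is easy to check numerically. The helper below is our own throwaway illustration (the name `divides` is not standard notation), restricted to a nonzero divisor:

```python
def divides(d, n):
    """Return True when d | n, i.e. n = d*k for some integer k (d != 0)."""
    return n % d == 0

# The examples from the text:
assert divides(3, 6) and divides(3, 18)
assert not divides(3, 5) and not divides(3, 7)

# 1 and n are always factors of n:
assert divides(1, 42) and divides(42, 42)
```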
Exercise A4.1: (Transitivity) Show that if a|b and b|c then a|c.
Exercise A4.2: Show that if d|a and d|b then d also divides linear combinations of a and b, ax + by, where x and y are integers.
Exercise A4.3: Suppose a and b are positive integers. Show that if a|b then a ≤ b. Conclude that if a|b and b|a then a = b.
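The three exercises above ask for general proofs, but a quick numeric spot-check (not a proof) may help fix the statements in mind; the helper `divides` and the particular numbers below are our own illustration:

```python
def divides(d, n):
    """Return True when d | n (d != 0)."""
    return n % d == 0

# A4.1 (transitivity): 3|6 and 6|24, so 3|24.
assert divides(3, 6) and divides(6, 24) and divides(3, 24)

# A4.2 (linear combinations): 5|10 and 5|35, so 5 | 10x + 35y for any integers x, y.
assert all(divides(5, 10 * x + 35 * y)
           for x in range(-3, 4) for y in range(-3, 4))

# A4.3: for positive a, b with a|b we have a <= b; here 7|21 and 7 <= 21.
assert divides(7, 21) and 7 <= 21
```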
Entropy is a key concept of quantum information theory. It measures how much uncertainty there is in the state of a physical system. In this chapter we review the basic definitions and properties of entropy in both classical and quantum information theory. In places the chapter contains rather detailed and lengthy mathematical arguments. On a first reading these sections may be read lightly and returned to later for reference purposes.
Shannon entropy
The key concept of classical information theory is the Shannon entropy. Suppose we learn the value of a random variable X. The Shannon entropy of X quantifies how much information we gain, on average, when we learn the value of X. An alternative view is that the entropy of X measures the amount of uncertainty about X before we learn its value. These two views are complementary; we can view the entropy either as a measure of our uncertainty before we learn the value of X, or as a measure of how much information we have gained after we learn the value of X.
Intuitively, the information content of a random variable should not depend on the labels attached to the different values that may be taken by the random variable. For example, we expect that a random variable taking the values ‘heads’ and ‘tails’ with respective probabilities ¼ and ¾ contains the same amount of information as a random variable that takes the values 0 and 1 with respective probabilities ¼ and ¾.
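This label-independence is immediate from the standard formula H(X) = −Σₓ p(x) log₂ p(x), which depends only on the probabilities, never on the labels. A minimal sketch (the function name is ours):

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum_x p(x) log2 p(x), in bits; the term 0*log(0) is taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same distribution with different labels carries the same entropy.
coin = {"heads": 0.25, "tails": 0.75}
bits = {0: 0.25, 1: 0.75}

h_coin = shannon_entropy(coin.values())
h_bits = shannon_entropy(bits.values())
print(h_coin)            # ~0.8113 bits
assert h_coin == h_bits  # labels are irrelevant
```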
What does it mean to say that two items of information are similar? What does it mean to say that information is preserved by some process? These questions are central to a theory of quantum information processing, and the purpose of this chapter is the development of distance measures giving quantitative answers to these questions. Motivated by our two questions we will be concerned with two broad classes of distance measures, static measures and dynamic measures. Static measures quantify how close two quantum states are, while dynamic measures quantify how well information has been preserved during a dynamic process. The strategy we take is to begin by developing good static measures of distance, and then to use those static measures as the basis for the development of dynamic measures of distance.
There is a certain arbitrariness in the way distance measures are defined, both classically and quantum mechanically, and the community of people studying quantum computation and quantum information has found it convenient to use a variety of distance measures over the years. Two of those measures, the trace distance and the fidelity, have particularly wide currency today, and we discuss both these measures in detail in this chapter. For the most part the properties of both are quite similar; however, for certain applications one may be easier to deal with than the other. It is for this reason, and because both are widely used within the quantum computation and quantum information community, that we discuss both measures.
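As a rough preview, under the standard definitions D(ρ, σ) = ½ Tr|ρ − σ| and, for a pure state, F(|ψ⟩, ρ) = √⟨ψ|ρ|ψ⟩, both measures can be computed in a few lines. The sketch below assumes these conventions (fidelity conventions differ between authors, some squaring the quantity) and is our own illustration, not the chapter's development:

```python
import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) Tr|rho - sigma|; for a Hermitian matrix the
    # trace norm is the sum of the absolute values of its eigenvalues.
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def fidelity_pure(psi, rho):
    # For a pure state |psi>, F(|psi>, rho) = sqrt(<psi|rho|psi>).
    return np.sqrt(np.real(np.vdot(psi, rho @ psi)))

rho = np.diag([0.25, 0.75])       # classical mixture of |0> and |1>
sigma = np.diag([0.5, 0.5])       # maximally mixed qubit
print(trace_distance(rho, sigma)) # 0.25 for these diagonal states

psi = np.array([1.0, 0.0])        # the pure state |0>
print(fidelity_pure(psi, rho))    # sqrt(0.25) = 0.5
```

For diagonal (mutually commuting) states like these, both quantities reduce to their classical counterparts on the eigenvalue distributions.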
Science offers the boldest metaphysics of the age. It is a thoroughly human construct, driven by the faith that if we dream, press to discover, explain, and dream again, thereby plunging repeatedly into new terrain, the world will somehow come clearer and we will grasp the true strangeness of the universe. And the strangeness will all prove to be connected, and make sense.
– Edward O. Wilson
Information is physical.
– Rolf Landauer
What are the fundamental concepts of quantum computation and quantum information? How did these concepts develop? To what uses may they be put? How will they be presented in this book? The purpose of this introductory chapter is to answer these questions by developing in broad brushstrokes a picture of the field of quantum computation and quantum information. The intent is to communicate a basic understanding of the central concepts of the field, perspective on how they have been developed, and to help you decide how to approach the rest of the book.
Our story begins in Section 1.1 with an account of the historical context in which quantum computation and quantum information has developed. Each remaining section in the chapter gives a brief introduction to one or more fundamental concepts from the field: quantum bits (Section 1.2), quantum computers, quantum gates and quantum circuits (Section 1.3), quantum algorithms (Section 1.4), experimental quantum information processing (Section 1.5), and quantum information and communication (Section 1.6).
Quantum mechanics has the curious distinction of being simultaneously the most successful and the most mysterious of our scientific theories. It was developed in fits and starts over a remarkable period from 1900 to the 1920s, maturing into its current form in the late 1920s. In the decades following the 1920s, physicists had great success applying quantum mechanics to understand the fundamental particles and forces of nature, culminating in the development of the standard model of particle physics. Over the same period, physicists had equally great success in applying quantum mechanics to understand an astonishing range of phenomena in our world, from polymers to semiconductors, from superfluids to superconductors. But, while these developments profoundly advanced our understanding of the natural world, they did only a little to improve our understanding of quantum mechanics.
This began to change in the 1970s and 1980s, when a few pioneers were inspired to ask whether some of the fundamental questions of computer science and information theory could be applied to the study of quantum systems. Instead of looking at quantum systems purely as phenomena to be explained as they are found in nature, they looked at them as systems that can be designed. This seems a small change in perspective, but the implications are profound. No longer is the quantum world taken merely as presented, but instead it can be created.
This book provides an introduction to the main ideas and techniques of the field of quantum computation and quantum information. The rapid rate of progress in this field and its cross-disciplinary nature have made it difficult for newcomers to obtain a broad overview of the most important techniques and results of the field.
Our purpose in this book is therefore twofold. First, we introduce the background material in computer science, mathematics and physics necessary to understand quantum computation and quantum information. This is done at a level comprehensible to readers with a background at least the equal of a beginning graduate student in one or more of these three disciplines; the most important requirements are a certain level of mathematical maturity, and the desire to learn about quantum computation and quantum information. The second purpose of the book is to develop in detail the central results of quantum computation and quantum information. With thorough study the reader should develop a working understanding of the fundamental tools and results of this exciting field, either as part of their general education, or as a prelude to independent research in quantum computation and quantum information.
Structure of the book
The basic structure of the book is depicted in Figure 1. The book is divided into three parts. The general strategy is to proceed from the concrete to the more abstract whenever possible.
In Chapter 4 we showed that an arbitrary unitary operation U may be implemented on a quantum computer using a circuit consisting of single qubit and controlled-not gates. Such universality results are important because they ensure the equivalence of apparently different models of quantum computation. For example, the universality results ensure that a quantum computer programmer may design quantum circuits containing gates which have four input and output qubits, confident that such gates can be simulated by a constant number of controlled-not and single qubit unitary gates.
An unsatisfactory aspect of the universality of controlled-not and single qubit unitary gates is that the single qubit gates form a continuum, while the methods for fault-tolerant quantum computation described in Chapter 10 work only for a discrete set of gates. Fortunately, also in Chapter 4 we saw that any single qubit gate may be approximated to arbitrary accuracy using a finite set of gates, such as the controlled-not gate, Hadamard gate H, phase gate S, and π/8 gate. We also gave a heuristic argument that approximating the chosen single qubit gate to an accuracy ε required only Θ(1/ε) gates chosen from the finite set. Furthermore, in Chapter 10 we showed that the controlled-not, Hadamard, phase and π/8 gates may be implemented in a fault-tolerant manner.
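For concreteness, the discrete gate set just named can be written down and sanity-checked numerically. The matrices below use the standard computational-basis conventions (an assumption on our part, since the text does not reproduce them here):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard gate
S = np.array([[1, 0], [0, 1j]])                      # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # pi/8 gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                      # controlled-not

# Sanity checks: every gate is unitary, and the phase gate is the
# square of the pi/8 gate, S = T^2.
for U in (H, S, T, CNOT):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))
assert np.allclose(T @ T, S)
```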
As we emphasized back in Chapter 1, atmospheres are not static. The mass and composition of an atmosphere evolve over time, as a result of a great variety of chemical, physical, and biological processes. Now it is time to survey those processes in greater detail, and to put numbers on them to the extent possible in the limited space available in this chapter.
Throughout the following we will need to refer to some constituents of a planet as volatiles. These are “not rocks” – things that can become gases to a significant extent. The concept of a volatile is relative to the temperature of a planet. On Earth, water is a volatile but on Titan it is basically a rock, as is CO2, though N2 and CH4 remain as volatiles even at the low temperatures of Titan. On Earth, sand (SiO2) is a rock, but on a roaster – a hot extrasolar Jupiter in a close orbit – it could be a volatile.
For planets in which some atmospheric volatiles exchange with a condensed reservoir, as in the case of Earth's ocean and glaciers, the whole atmosphere–ocean–cryosphere system is best treated as a unit for many purposes, and we will refer to this as the volatile envelope. In other cases, the portion of the volatile envelope which resides in the atmosphere plays a distinguished role.
The atmospheres which are our principal objects of study are made of compressible gases. The compressibility has a profound effect on the vertical profile of temperature in these atmospheres. As things progress it will become clear that the vertical temperature variation in turn strongly influences the planet's climate. To deal with these effects it will be necessary to know some thermodynamics – though just a little. This chapter does not purport to be a complete course in thermodynamics. It can only provide a summary of the key thermodynamic concepts and formulae needed to treat the basic problems of planetary climate. It is assumed that the student has obtained (or will obtain) a more fundamental understanding of the general subject of thermodynamics elsewhere.
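As a taste of what compressibility does to the vertical temperature profile, consider the dry adiabat T = T_s (p/p_s)^(R/c_p) for an ideal gas lifted adiabatically. The sketch below is our own preview, not a result derived at this point in the text; it uses standard dry-air gas constants and an illustrative surface temperature:

```python
# Standard constants for dry Earth air; T_surf is illustrative.
R_air = 287.0    # specific gas constant, J/(kg K)
cp_air = 1004.0  # specific heat at constant pressure, J/(kg K)
T_surf = 300.0   # assumed surface temperature, K
p_surf = 1000.0  # surface pressure, mb

# Temperature falls with altitude (lower pressure) purely because
# rising air expands and cools, with no heat exchanged.
for p in (1000.0, 700.0, 500.0, 300.0):
    T = T_surf * (p / p_surf) ** (R_air / cp_air)
    print(f"{p:6.0f} mb  {T:6.1f} K")
```

At 500 mb (roughly half the atmosphere's mass above the surface removed) the parcel has already cooled by about 54 K, which conveys the magnitude of the effect.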
A FEW OBSERVATIONS
The temperature profile in Fig. 2.1, measured in the Earth's tropics, introduces most of the features that are of interest in the study of general planetary atmospheres. It was obtained by releasing an instrumented balloon (radiosonde) which floats upward from the ground, and sends back data on temperature and pressure as it rises. Pressure goes down monotonically with height, so the lower pressures represent greater altitudes. The units of pressure used in the figure are millibars (mb). One bar is very nearly the mean sea-level pressure on Earth, and there are 1000 mb in a bar.
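To translate the figure's pressure axis into rough altitudes, one can use the isothermal scale-height approximation z ≈ H ln(p_s/p) with H = RT/g. The numbers below are representative Earth values chosen by us for illustration, not taken from the figure:

```python
import math

R_air = 287.0   # specific gas constant for dry air, J/(kg K)
T_mean = 255.0  # a representative mean atmospheric temperature, K
g = 9.8         # surface gravity, m/s^2

H = R_air * T_mean / g  # scale height, about 7.5 km for Earth

# Altitude corresponding to a few pressure levels, assuming an
# isothermal atmosphere (a crude but useful approximation).
for p_mb in (1000, 500, 100):
    z_km = H * math.log(1000.0 / p_mb) / 1000.0
    print(f"{p_mb:5d} mb  ~{z_km:5.1f} km")
```

This is why a radiosonde's pressure readings serve directly as a vertical coordinate: each halving of pressure corresponds to climbing roughly one scale height times ln 2, about 5 km on Earth.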
This chapter will survey a few of the major questions raised by observed features of present and past Earth and planetary climates. Some of these questions have been answered to one extent or another, but many remain largely unresolved. This will not be a comprehensive synopsis of Earth and planetary climate evolution; we will be content to point out a few striking facts about climate that demand a physical explanation. Then, in subsequent chapters, we'll develop the physics necessary to think about these problems. Although we hope not to be too Earth-centric in this book, in the present chapter we will perforce talk at greater length about Earth's climate than about those of other planets, because so much more is known about Earth's past climate than is known about the past climates of other planets. A careful study of Earth history suggests generalities that may apply to other planets, and also raises interesting questions about how things might have happened differently elsewhere, and it is with this goal in mind that we begin our journey.
CLOSE TO HOME
When the young Carl Linnaeus set off on his journey of botanical discovery to Lapland in 1732, he left on foot from his home in Uppsala. He didn't wait until he reached his destination to start making observations, but found interesting things to think about all along the way, even in the plant life at his doorstep.
The results of this chapter are pertinent to a planet with a distinct surface, which may be defined as an interface across which the density increases substantially and discontinuously. The typical interface would be between a gaseous atmosphere and a solid or liquid surface. In the Solar System, there are only four examples of bodies having both a distinct surface and a thick enough atmosphere to significantly affect the surface temperature. These are Venus, Earth, Titan, and Mars; among these, the present Martian atmosphere is so thin that it only marginally affects the surface temperature, though this situation was probably different early in the planet's history when the atmosphere may have been thicker. Although thin atmospheres have little effect on the surface temperature, the atmosphere itself can still have interesting behavior, and the flux of energy from the surface to the atmosphere provides a crucial part of the forcing which drives the atmospheric circulation. This is the case, for example, for the thin nitrogen atmosphere of Neptune's moon Triton. Apart from the examples we know, it is worth thinking of the surface balance in general terms, because of the light it sheds on the possible nature of the climates of extrasolar planets already detected or awaiting discovery.
The exchange of energy between the surface and the overlying atmosphere determines the surface temperature relative to the air temperature.
So far, we have studiously avoided discussing the circulations of atmospheres or oceans, or indeed fluid mechanics of any type, with the exception of a brief foray into compressible one-dimensional hydrodynamics in Section 8.7.4. This is not because the subject is unimportant, but rather because the subject is too important to be relegated to the kind of superficial discussion we could accord it while doing justice to the rest of the physics governing the fluid envelopes of planets. This chapter provides a glimpse at what the reader has been missing. It highlights what the reader needs to keep in mind when learning atmosphere/ocean fluid dynamics, and, for the student who has already acquired some familiarity with that subject, connects fluid mechanical effects with the key planetary climate phenomena that have been the subject of this book. It is, in essence, a sampler of some of the many ways that large-scale fluid dynamics affects planetary climate.
This being the final chapter (for now) of a long journey, we will also take stock of how well we have done at coming to an understanding of the Big Questions introduced in Chapter 1. We wrap up with a reminder of the great breadth of largely unexplored problems the reader is already equipped to take on. The universe of problems becomes all the richer once planetary fluid dynamics is brought into the picture.