The state of a quantum system is a vector in a complex vector space. (Technically, if the dimension of the vector space is infinite, then it is a separable Hilbert space.) Here we will always assume that our systems are finite-dimensional. We do this because everything we will discuss transfers without change to infinite-dimensional systems. Further, when one actually simulates a system on a computer, one must always truncate an infinite-dimensional space so that it is finite.
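As a concrete illustration (a standard example, not specific to this text), the smallest nontrivial case is a two-dimensional system, a qubit, whose state is a normalized vector of two complex numbers:

$$
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle \;\longleftrightarrow\; \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad |\alpha|^2 + |\beta|^2 = 1 .
$$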
Consider two separate quantum systems. If we take them together, we should be able to describe them as one big quantum system. That is, instead of describing them by two separate state vectors, one for each system, we should be able to represent the states of both of them together using one single state vector. (This state vector will naturally need to have more elements in it than the separate state vectors for each of the systems, and we will work out exactly how many below.) In fact, to describe a situation in which the two systems have affected each other – by interacting in some way – and have become correlated with each other as a result of this interaction, we will need to go beyond using a separate state vector for each. If each system is in a pure state, described by a state vector for it alone, then there are no correlations between the two systems. Using a single state vector to describe the joint state of the two systems is the natural way to describe all possible states that the two systems could be in (correlated or uncorrelated). We now show how this is done.
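A minimal numerical sketch of how the joint state vector is built (using numpy's Kronecker product; the dimension counting referred to above is simply that the dimensions multiply):

```python
import numpy as np

# A qubit (dimension 2) and a qutrit (dimension 3), each in a pure state.
state_a = np.array([1, 0], dtype=complex)       # |0> of system A
state_b = np.array([0, 1, 0], dtype=complex)    # |1> of system B

# One single state vector for both systems together: the tensor (Kronecker)
# product. Its dimension is the product 2 * 3 = 6 of the individual dimensions.
joint = np.kron(state_a, state_b)
print(joint.shape)   # -> (6,)
```

Such product states are uncorrelated; the correlated joint states discussed above are exactly those that cannot be written as a single Kronecker product of two state vectors.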
Input–output theory, developed by Collett and Gardiner in 1984, is a way to treat the interaction of a system with a thermal bath, in which the bath is modeled as a quantum field [128, 199]. It applies to exactly the same situation as Markovian master equations, such as the Redfield and Lindblad equations, and these master equations can easily be derived from it. But input–output theory goes much further than the equivalent master equations: (i) it allows one to calculate the output that flows from the system into the bath; (ii) it makes the physical connection between thermal baths and continuous measurements; (iii) its derivation avoids the obscure approximations used in the standard derivation of master equations; (iv) it can be used to connect systems together in networks by connecting the outputs of some systems to the inputs of others; (v) more generally, the quantum Langevin approach of input–output theory can be applied outside the regime of validity of the Markovian master equations, to explicitly treat open systems in which non-Markovian effects are significant [211, 715].
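For orientation, the central equations of the theory in their most common form (sign and normalization conventions differ between treatments, and the derivation below fixes ours) are the quantum Langevin equation for a cavity mode $a$ damped at rate $\kappa$, and the input–output relation for the bath field:

$$
\dot a = -\frac{i}{\hbar}[a, H_{\mathrm{sys}}] - \frac{\kappa}{2}\, a + \sqrt{\kappa}\, b_{\mathrm{in}}(t), \qquad b_{\mathrm{out}}(t) = \sqrt{\kappa}\, a(t) - b_{\mathrm{in}}(t).
$$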
Input–output theory was originally derived by considering the damping of a mode of an optical cavity. We use this example here in our derivation as a concrete reference, but the theory provides a model for damping and thermalization of any quantum system that is weakly coupled to a large environment. If you are not familiar with the concept of an optical cavity, a quick review is given in Section 3.3. The transmission of light through one of the end-mirrors of the optical cavity will be modeled as an interaction of the cavity mode with the electromagnetic field outside the cavity. In the original treatment the cavity has just two mirrors, and thus the output light comes back out through the same mirror that the input light enters. Because of this the electromagnetic field with which the cavity interacts fills only half of the real line (see Fig. F.1). There are other treatments in which the output field goes in the same direction as the input field [100, 669, 686]. While the original method requires a bit more work, we prefer it because it teaches one how to analyze this experimentally natural situation. Our derivation is a pedagogically expanded version of that given by Gardiner and Zoller [201].
In 1948 Claude Shannon realized that there was a way to quantify the intuitive notion that some messages contain more information than others. He approached this problem by saying that a message provides information when it reveals which of a set of possibilities has occurred. This concept certainly makes sense given our intuitive notion of uncertainty: information should reduce our uncertainty about a set of possibilities. He then asked, how long must a message be to convey one of M possibilities?
To answer this question we must first specify the alphabet we are using. Specifically, we need to fix the number of letters, or symbols, in our alphabet. It is simplest to take the smallest workable alphabet, one that contains only two symbols, and we will call these 0 and 1. Imagine now that there is a new movie showing, and you are trying to decide whether to go see it. You call a friend who has seen it, Alice, and ask her whether she liked it. To answer your question yes or no she need only send you one symbol, so long as you have agreed beforehand that 1 will represent yes and 0 no. Note that two people must always agree beforehand on the meaning of the symbols they use to communicate – the only reason you can read this text is because we have a prior agreement about the meaning of each word.
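The counting behind the answer is simple (the text develops the probabilistic refinement later): a string of $n$ binary symbols can distinguish $2^n$ alternatives, so to convey one of $M$ equally likely possibilities the message must have length at least

$$
n = \lceil \log_2 M \rceil .
$$

For the yes/no question above, $M = 2$ and a single symbol suffices; to convey one of $M = 8$ possibilities, three symbols are needed.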
Superconducting circuits, photonic cavities, and tiny mechanical oscillators are versatile systems in which quantum dynamics can be observed and controlled. These various technologies can be combined, and the resulting “circuits” are often referred to as nano-electromechanical or optomechanical systems. In the context of quantum physics, such devices are referred to as “mesoscopic” because they contain a large number of atoms. This terminology distinguishes them from the “microscopic” systems of atom-optics, such as single ions [364] and atoms [545, 233, 26], and the spins of individual nuclei [412]. Nano-engineered systems provide an especially powerful arena for exploring quantum measurement and control because of their potential to realize complex circuits. In this chapter we explain how to treat these systems and describe in detail a common method used to measure them.
Superconducting circuits
The electrical circuits that we describe in this chapter are quite different from those that use normal conductors, such as copper wire at room temperature. Normal conductors are subject to strong dissipation and decoherence from the environment. As a result the dynamical degrees of freedom of the circuits, the currents and voltages, behave as classical variables. In superconducting circuits, the resistance, and thus the dissipation, is so small that the currents and voltages behave as quantum mechanical degrees of freedom. Of course, the current in a wire is actually composed of many elementary particles that each obey quantum mechanics. Why then should the current act like a single quantum degree of freedom? A mechanical analogy is useful here. Recall that even though a macroscopic pendulum is composed of many particles, when one talks about the motion of the pendulum one means the motion of the center-of-mass of its constituent particles. Since the center of mass is merely a linear combination of the coordinates of each particle, it is itself a valid quantum mechanical coordinate with a conjugate momentum. In a similar way the total current in a superconducting wire is a collective degree of freedom that obeys quantum mechanics.
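A minimal sketch of such a collective quantum degree of freedom, assuming the textbook quantization of a lossless LC circuit (details not given in this paragraph): the flux $\Phi$ through the inductor and the charge $Q$ on the capacitor play the roles of position and momentum, so the circuit behaves as a harmonic oscillator,

$$
H = \frac{Q^2}{2C} + \frac{\Phi^2}{2L}, \qquad [\Phi, Q] = i\hbar, \qquad \omega = \frac{1}{\sqrt{LC}} .
$$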
The notion of feedback control comes to us from the classical world of mechanical and electrical engineering. Machines designed to operate a certain way may deviate because of small errors in their physical construction, because of inherent instability, or because of external sources of noise. Such deviations can be controlled by monitoring a machine’s behavior, and using this information to apply forces that periodically or continuously correct the motion. This is called feedback control, and the device that receives the measurement signal, and translates it into the appropriate correcting forces, is called the “feedback controller” (or “controller” for short). We will usually refer to the system to be controlled as the primary, and the system that acts as a feedback controller as the auxiliary. The device(s) that actually apply the forces to the system are often called the actuator(s), and the prescription by which the control forces are chosen based on the measurement results is variously called the control algorithm, law, strategy, or protocol. Here we exclusively use the terms strategy and protocol.
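A minimal classical sketch of the loop just described, with hypothetical numbers chosen purely for illustration: a scalar primary system drifts under noise, the controller receives a noisy measurement, and a proportional control strategy sets the force applied by the actuator.

```python
import numpy as np

rng = np.random.default_rng(0)
a, gain = 1.02, 0.8      # slightly unstable dynamics and feedback gain (illustrative values)
x = 1.0                  # initial deviation of the primary system

for _ in range(200):
    y = x + 0.05 * rng.standard_normal()           # noisy measurement sent to the controller
    u = -gain * a * y                              # control strategy: proportional correction
    x = a * x + u + 0.05 * rng.standard_normal()   # actuator applies u; process noise enters

print(f"residual deviation after 200 steps: {x:.3f}")   # small: the loop suppresses the drift
```

Without the correction (gain set to zero) the deviation grows exponentially; the loop is "closed" because the measurement record feeds back into the dynamics.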
Feedback control is also known as closed-loop control, because the flow of information to the controller, and the flow back to the system via actuators, are thought of as forming a loop. As a historical note, Maxwell appears to have been the first to perform a mathematical analysis of a feedback control system. He studied “governors,” mechanical devices that control the speed of an engine by using the centrifugal force generated by the engine to control the flow of fuel [413]. In fact the use of centrifugal governors goes back even further, as they were used to control the pressure between millstones in windmills in the 17th century [253].
While the term metrology refers to measurement techniques in general, it is used more specifically to mean the study of techniques for measuring classical quantities. Metrology is concerned with how to make measurements as accurately as possible, and with quantifying how accurate a given measurement technique is. Precise measurements of quantities such as frequency and mass are important for establishing and maintaining standard units. Other effectively classical quantities that one may wish to measure are acceleration and the strengths of magnetic or electric fields.
Metrology is concerned not only with measurements of single fixed quantities but also with quantities that vary in time, usually called signals or waveforms. Since all signals have some maximum rate at which they change, they can be discretized by sampling with sufficient frequency. Thus all signals can be regarded as finite sets of values to be measured. Nevertheless, the fact that these values arrive in a sequence with a given duration places practical limitations on their measurement. Optimal methods for measuring signals are important in communication, and in many scientific applications where data is time-varying and noisy.
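The discretization step alluded to here is the standard Nyquist–Shannon criterion: a signal containing no frequencies above $f_{\max}$ is completely determined by samples taken at a rate

$$
f_s \ge 2 f_{\max}, \qquad \text{i.e.} \quad \Delta t \le \frac{1}{2 f_{\max}} .
$$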
This is an exceptionally accessible, accurate, and non-technical introduction to quantum mechanics. After briefly summarizing the differences between classical and quantum behaviour, this engaging account considers the Stern-Gerlach experiment and its implications, treats the concepts of probability, and then discusses the Einstein-Podolsky-Rosen paradox and Bell's theorem. Quantal interference and the concept of amplitudes are introduced and the link revealed between probabilities and the interference of amplitudes. Quantal amplitude is employed to describe interference effects. Final chapters explore exciting new developments in quantum computation and cryptography, discover the unexpected behaviour of a quantal bouncing-ball, and tackle the challenge of describing a particle with no position. Thought-provoking problems and suggestions for further reading are included. Suitable for use as a course text, The Strange World of Quantum Mechanics enables students to develop a genuine understanding of the domain of the very small. It will also appeal to general readers seeking intellectual adventure.
The study of computational processes based on the laws of quantum mechanics has led to the discovery of new algorithms, cryptographic techniques, and communication primitives. This book explores quantum computation from the perspective of the branch of theoretical computer science known as semantics, as an alternative to the better-known studies of algorithmics, complexity theory, and information theory. It collects chapters from leading researchers in the field, discussing the theory of quantum programming languages, logics and tools for reasoning about quantum systems, and novel approaches to the foundations of quantum mechanics. This book is suitable for graduate students and researchers in quantum information and computation, as well as those in semantics who want to learn about a new field arising from the application of semantic techniques to quantum information and computation.
Quantum information processing offers fundamental improvements over classical information processing in computing power, secure communication, and high-precision measurement. However, the best way to create practical devices is not yet known. This textbook describes the techniques that are likely to be used in implementing optical quantum information processors. After developing the fundamental concepts in quantum optics and quantum information theory, the book shows how optical systems can be used to build quantum computers according to the most recent ideas. It discusses implementations based on single photons and linear optics, optically controlled atoms and solid-state systems, atomic ensembles, and optical continuous variables. This book is ideal for graduate students beginning research in optical quantum information processing. It presents the most important techniques of the field using worked examples and over 120 exercises.
Reviewing macroscopic quantum phenomena and quantum dissipation, from the phenomenology of magnetism and superconductivity to the presentation of alternative models for quantum dissipation, this book develops the basic material necessary to understand the quantum dynamics of macroscopic variables. Macroscopic quantum phenomena are presented through several examples in magnetism and superconductivity, developed from general phenomenological approaches to each area. Dissipation naturally plays an important role in these phenomena, and therefore semi-empirical models for quantum dissipation are introduced and applied to the study of a few important quantum mechanical effects. The book also discusses the relevance of macroscopic quantum phenomena to the control of meso- or nanoscopic devices, particularly those with potential applications in quantum computation or quantum information. It is ideal for graduate students and researchers.
Providing a pedagogical introduction to the essential principles of path integrals and Hamiltonians, this book describes cutting-edge quantum mathematical techniques applicable to a vast range of fields, from quantum mechanics, solid state physics, statistical mechanics, quantum field theory, and superstring theory to financial modeling, polymers, biology, chemistry, and quantum finance. Eschewing use of the Schrödinger equation, the powerful and flexible combination of Hamiltonian operators and path integrals is used to study a range of different quantum and classical random systems, succinctly demonstrating the interplay between a system's path integral, state space, and Hamiltonian. With a practical emphasis on the methodological and mathematical aspects of each derivation, this is a perfect introduction to these versatile mathematical methods, suitable for researchers and graduate students in physics and engineering.
The chapters of this book have examined various aspects of path integrals and Hamiltonians, which in turn exemplify different facets of quantum mathematics. The principles of quantum mathematics, stated in Chapter 2, can be summarized as follows:
• The fundamental degrees of freedom F form the bedrock of the quantum system.
• A linear vector state space V based on the degrees of freedom F provides an exhaustive description of the quantum system.
• Operators O, which include the Hamiltonian H, represent the physical properties of the degrees of freedom and act on the state space V. Observable quantities are the matrix elements of the physical operators.
• A spacetime description of quantum indeterminacy is encoded in the Lagrangian L, and the Dirac–Feynman formula relates it to the Hamiltonian (see the formula following this list).
• The path integral provides a representation of all the physical properties of a quantum system. In particular, the path integral yields the correlation functions of the degrees of freedom as well as the probability amplitudes for quantum transitions.
• The interconnection of the path integral with the underlying Hamiltonian and state space is a specific feature of quantum mathematics that distinguishes path integration from functional integration in general.
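For reference, the Dirac–Feynman formula mentioned in the list can be written, for an infinitesimal time step $\epsilon$ and up to a normalization $\mathcal{N}(\epsilon)$ (real-time form shown; the book also works in Euclidean time), as

$$
\langle x_{k+1} |\, e^{-i\epsilon H/\hbar} \,| x_k \rangle \;\simeq\; \mathcal{N}(\epsilon)\, \exp\!\left[\frac{i\epsilon}{\hbar}\, L\!\left(x_k, \tfrac{x_{k+1}-x_k}{\epsilon}\right)\right],
$$

and composing many such factors yields the path integral $\langle x_f |\, e^{-iHt/\hbar} \,| x_i \rangle = \int \mathcal{D}x\; e^{iS[x]/\hbar}$.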
Several path integrals are evaluated exactly here using Gaussian path integration. A few general ideas are illustrated by taking advantage of the fact that Gaussian path integrals can be evaluated in closed form.
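The identity underlying Gaussian path integration is the elementary Gaussian integral and its (formal) functional generalization, with $\mathcal{N}$ a normalization constant:

$$
\int_{-\infty}^{+\infty} dx\; e^{-\frac{a}{2}x^2 + jx} = \sqrt{\frac{2\pi}{a}}\; e^{j^2/(2a)} \quad (a > 0), \qquad \int \mathcal{D}x\; e^{-\frac{1}{2}x^{T}A\,x + J^{T}x} = \mathcal{N}\, e^{\frac{1}{2}J^{T}A^{-1}J}.
$$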
Path integrals defined over a particular collection of allowed indeterminate paths can sometimes be represented by a Fourier expansion of the paths. This leads to two important techniques for performing path integrations:
• Expanding the action about the classical solution of the Lagrangian;
• Expanding the degree of freedom in a Fourier expansion of the allowed paths.
Various cases are considered to illustrate the usage of classical solutions and Fourier expansions, and these also provide a set of relatively simple examples to familiarize oneself with the nuts and bolts of the path integral. The Lagrangian of the simple harmonic oscillator is used for all of the following examples; all the computations are carried out explicitly and exactly.
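Schematically, and following the standard setup for a quadratic action (boundary conditions as fixed in the earlier chapters), the two techniques combine as follows for the oscillator Lagrangian $L = \frac{m}{2}\dot{x}^2 - \frac{m\omega^2}{2}x^2$: writing $x(t) = x_c(t) + \xi(t)$, with $x_c$ the classical solution and $\xi$ vanishing at the endpoints, the action splits with no cross term,

$$
S[x] = S[x_c] + S[\xi], \qquad \xi(t) = \sum_{n=1}^{\infty} \xi_n \sin\!\left(\frac{n\pi t}{\tau}\right),
$$

so the path integral over $\xi$ reduces to ordinary Gaussian integrals over the Fourier coefficients $\xi_n$.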
The following different cases are considered:
• Correlators of exponential functions of the degree of freedom are discussed in Section 12.1.
• The generating functional for periodic paths is evaluated in Section 12.2.
• The path integral required for evaluating the normalization constant for the oscillator evolution kernel is discussed in Section 12.3. The path integral entails summing over all paths that start from and return to the same fixed position.
• Section 12.4 discusses the evolution kernel for a particle starting at an initial position xi and, after time τ, having a final position that is indeterminate.