The aim of this chapter is to illustrate the basic physical principles and mathematical framework of dynamical decoupling (DD) techniques for open quantum systems, as relevant to quantum information processing (QIP) applications. Historically, the physical origins of DD date back to the idea of coherent averaging of interactions, as pioneered in high-resolution solid-state nuclear magnetic resonance (NMR) by Haeberlen and Waugh using elegantly designed multiple-pulse sequences [HW68, WHH68, H76]. It was in the same landmark work [HW68] that average Hamiltonian theory was developed as a formalism on which the design and analysis of DD sequences has largely relied since then. In the original context of NMR spectroscopy, decoupling serves the purpose of enhancing resolution by simplifying complex spectra. This is achieved by realizing that an otherwise static spin Hamiltonian “can be made to appear time-dependent in a controlled way,” so that “as the characteristic repetition period of the pulses becomes [sufficiently] small, the spin system comes to behave over long times as though under the influence of a time-independent average Hamiltonian” [HW68]. Some basic insight may be gained by revisiting the paradigmatic example offered by the so-called Hahn echo [H50] and Carr–Purcell (CP) sequences [CP54] in the simplest setting of a two spin-1/2 system.
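As a minimal illustration of the refocusing idea behind the Hahn echo, the following sketch (not taken from the chapter; plain NumPy, with an assumed Gaussian distribution of static detunings) simulates an ensemble of spin-1/2 coherences dephasing under random but time-independent frequency offsets. A single π pulse at the midpoint of the evolution negates the phase accumulated so far, so the static dephasing cancels exactly at the echo time, whereas free evolution alone leaves the ensemble coherence washed out.

```python
# Minimal sketch (not from the chapter): Hahn-echo refocusing of static dephasing.
# Each spin precesses with a random but time-independent detuning delta_i.
# A pi pulse at time tau/2 negates the accumulated phase, so all spins rephase at tau.
import numpy as np

rng = np.random.default_rng(0)
n_spins = 1000
deltas = rng.normal(0.0, 2 * np.pi, n_spins)   # static detunings (rad/s), assumed Gaussian
tau = 5.0                                       # total evolution time (s)

# Free induction decay: ensemble coherence |<exp(i*delta*tau)>| without any pulse
fid = abs(np.mean(np.exp(1j * deltas * tau)))

# Hahn echo: the phase accumulated in [0, tau/2] is negated by the pi pulse and then
# cancelled by the phase accumulated in [tau/2, tau], since the detunings are static
echo_phase = -deltas * (tau / 2) + deltas * (tau / 2)
echo = abs(np.mean(np.exp(1j * echo_phase)))

print(f"coherence without echo: {fid:.3f}")    # close to 0: inhomogeneous dephasing
print(f"coherence with echo:    {echo:.3f}")   # 1.0: static terms refocused exactly
```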
In this chapter we will consider some of the practical difficulties in building a large-scale quantum computing device. This discussion relies heavily upon the prior chapters that introduced fault-tolerant quantum computation. Assuming that fault tolerance is possible allows us to focus on the physical realization of these ideas with a few specific examples.
Before going into detail, we review the governing ideas behind any fault-tolerant architecture. These are the necessary components that we analyze in this chapter: good quantum memory, high-fidelity quantum operations, long-range quantum gates, and highly parallel operation, such that error correction in different sections of the device can be accomplished at the same time. While a wide variety of potential implementations may be possible, the goal of a fault-tolerant architecture is not only to be scalable, i.e., to be able to run an arbitrarily large computation with at most polynomial overhead [S95, S96e], but also to be as efficient as possible. Efficiency in this context means using the fewest physical resources (quantum bits, time, control operations) necessary to accomplish the desired computation [S03b].
From this perspective, the recipe for fault tolerance is well established. First, we need to identify what quantum operations are available for the various quantum bits at our disposal. Developing error models for these operations forms the bulk of this chapter. Important questions about operations include noise, implementation time, and bandwidth (how many such operations may be performed in parallel).
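The bookkeeping this entails can be made concrete with a small, purely illustrative data structure; the operation names and numbers below are hypothetical placeholders, not values from the chapter.

```python
# Illustrative only: a hypothetical record of the questions asked about each operation
# (error rate, implementation time, and bandwidth), with made-up numbers.
from dataclasses import dataclass

@dataclass
class OperationModel:
    name: str
    error_rate: float      # probability of a fault per application
    duration_us: float     # implementation time in microseconds
    max_parallel: int      # how many such operations can run simultaneously

operations = [
    OperationModel("memory (idle, per step)", 1e-5, 1.0, 10_000),
    OperationModel("single-qubit gate",       1e-4, 1.0, 10_000),
    OperationModel("two-qubit gate",          1e-3, 10.0, 5_000),
    OperationModel("measurement",             5e-3, 100.0, 5_000),
]

for op in operations:
    # errors accumulated per microsecond of wall-clock time, a crude figure of merit
    print(f"{op.name:28s} error/us = {op.error_rate / op.duration_us:.2e}")
```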
One of the applications of quantum error correction is protecting quantum computers from noise. Certainly, there is nothing particularly quantum mechanical in the idea of protecting information by encoding it. Even ordinary digital computers use various fault tolerance methods at the software level to correct errors during the storage or the transmission of information; e.g., the integrity of the bits stored in hard disks is verified by using parity checks (checksums). In addition, for critical computing systems such as those inside airplanes or nuclear reactors, software fault tolerance methods are also applied during the processing of information; e.g., airplane control computers compare the results from multiple parallel processors to detect faults. In general, however, the hardware of modern computers is remarkably robust to noise, so that for most applications the processing of information can be executed with high reliability without using software error correction.
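As a toy version of the classical idea mentioned above (illustrative only), the sketch below appends a single parity bit to a block of bits; checking the parity detects any single bit flip, although it cannot locate or correct it.

```python
# Toy illustration of a classical parity check: one extra bit detects any single bit flip.
def add_parity(bits):
    return bits + [sum(bits) % 2]          # append even-parity bit

def parity_ok(word):
    return sum(word) % 2 == 0              # total parity must stay even

stored = add_parity([1, 0, 1, 1, 0, 0, 1])
print(parity_ok(stored))                   # True: no error

corrupted = stored.copy()
corrupted[3] ^= 1                          # flip one bit
print(parity_ok(corrupted))                # False: single error detected (not corrected)
```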
In contrast to the ease and robustness with which classical information can be processed, the processing of quantum information appears at present to be much more challenging. Although constructing reliable quantum computing hardware is certainly a daunting task, we nevertheless have strong hopes that large-scale quantum computers, capable of implementing long and useful computations, can in fact be realized. This optimism is founded on the methods of quantum fault tolerance, which show that scalable quantum computation is, in principle, possible in the presence of a variety of noise processes.
We began this book with a short example about targeted drug delivery, an important application area for molecular communication; we now elaborate on this significant and motivating example for molecular communication. We also discuss other potential application areas of molecular communication, such as tissue engineering, lab-on-a-chip technology, and unconventional computation.
For each application area, we start with a brief introduction to the area and describe potential application scenarios where bio-nanomachines communicate through molecular communication to achieve the goal of an application. We then describe in detail selected designs of molecular communication systems as well as experimental results in the area.
Drug delivery
Drug delivery provides novel methodologies for drug administration that can maximize the therapeutic effect of drug molecules [1,2]. One goal of drug delivery is to develop drug delivery carriers that can carry and deliver drug molecules to a target site in the body. Such carriers are made from synthetic or natural particles (e.g., pathogens or blood cells) and are typically nanometers to micrometers in size, so that they can be injected into the circulatory system and propagate through the body. Targeting of drug delivery carriers can be performed by exploiting pathological conditions that appear at a target site (e.g., tumor tissues). For instance, tumor tissues develop small gaps between nearby endothelial cells in a blood vessel, so drug delivery carriers, if small enough, can pass through these gaps and accumulate in the tumor tissues.
For most of human history we maneuvered our way through the world based on an intuitive understanding of physics, an understanding wired into our brains by millions of years of evolution and constantly bolstered by our everyday experience. This intuition has served us very well, and functions perfectly at the typical scales of human life – so perfectly, in fact, that we rarely even think about it. It took many centuries before anyone even tried to formulate this understanding; centuries more before the slightest evidence suggested that these assumptions might not always hold. When the twin revolutions of relativity and quantum mechanics overturned classical physics at the start of the twentieth century, they also overturned this intuitive notion of the world.
In spite of this, the direct effect at the human scale has been small. Our cars do not run on Szilard engines. Very few freeways, even in Los Angeles, have signs saying “Speed Limit 300,000 km/s.” And human intuition remains rooted in its evolutionary origins. It takes years of training for scientists to learn the habits of thought appropriate to quantum mechanics; and even then, surprises still come along in the areas we think we understand the best.
Technology has transformed how we live our lives. Computers and communications depend on the amazingly rapid developments of electronics, which in turn derive from our understanding of quantum mechanics: we use the microscopic movements of electrons in solids to do work and play games; pulses of coherent light in optical fibers tie the world together.
Many of the quantum codes discussed in this book are quantum block codes. Quantum block codes are useful both in quantum computing and in quantum communication, but one of their drawbacks for quantum communication is that, in general, the sender must have her entire block of qubits ready before encoding takes place. This preparation may place a heavy demand on the sender when the block length is large.
Quantum convolutional coding theory [OT03, OT04, FGG07] offers a paradigm different from quantum block coding and has numerous benefits for quantum communication. The convolutional structure is useful when a sender possesses a stream of qubits to transmit to a receiver. It is possible to encode and decode quantum convolutional codes with quantum shift-register circuits [W09b], the natural “quantization” of a classical shift register circuit. These quantum circuits ensure a low complexity for encoding and decoding while also providing higher performance than a block code with equivalent encoding complexity [FGG07]. Quantum shift-register circuits have the property that the sender Alice and the receiver Bob can respectively send and receive qubits in an “online” fashion. Alice can encode an arbitrary number of information qubits without worrying beforehand how many she may want to send over the quantum communication channel.
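To make the classical object being "quantized" concrete, here is a hedged sketch of a standard rate-1/2 classical convolutional encoder realized as a shift register; the generator taps (7, 5 in octal) are a common textbook choice, not ones taken from this chapter. Each input bit shifts into a small memory and two output bits are computed from fixed parity taps, so the encoding proceeds online, bit by bit, just as described above for the quantum case.

```python
# Classical analogue only: a rate-1/2 convolutional encoder realized as a shift register.
# Generators g0 = 111 (7 octal) and g1 = 101 (5 octal); constraint length 3.
def convolutional_encode(bits, g0=(1, 1, 1), g1=(1, 0, 1)):
    state = [0, 0]                       # shift-register memory (two previous bits)
    out = []
    for b in bits:
        window = [b] + state             # current input followed by register contents
        out.append(sum(x * y for x, y in zip(g0, window)) % 2)
        out.append(sum(x * y for x, y in zip(g1, window)) % 2)
        state = [b] + state[:-1]         # shift the register
    return out

print(convolutional_encode([1, 0, 1, 1]))   # two output bits per input bit, produced online
```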
Most quantum error-correcting codes (QECCs) that are designed to correct local errors are stabilizer codes that correspond to additive classical codes. Because of this correspondence, stabilizer codes are also referred to as additive codes, while nonstabilizer codes are called nonadditive codes. Shortly after the invention of stabilizer codes by Gottesman [G96a] and independently Calderbank et al. [CRS+98], the nonadditive ((5, 6, 2)) code was found [RHS+97], which constitutes the first example of a nonadditive quantum code with a higher dimension than any stabilizer code of equal length and minimum distance. About ten years later, the first one-error-correcting ((9, 12, 3)) code and the optimal ((10, 24, 3)) code were found [YCL+09, YCO07], both better than any stabilizer code of the same length.
Here we describe a framework that not only allows a joint description of these codes, but also enables the construction of new nonadditive codes with better parameters than the best known stabilizer codes. These codes are based on the classical nonlinear Goethals and Preparata codes, which themselves are better than any classical linear code.
In Section 10.3, the framework will be presented from two different points of view, namely union stabilizer codes and codeword stabilized codes, each highlighting different aspects of the construction. To illustrate the relationship of these nonadditive codes to stabilizer codes we first recall the main aspects of the latter. Section 10.4 is devoted to methods for obtaining new codes, including the aforementioned quantum Goethals–Preparata codes.
The purpose of quantum error correction (QEC) is to preserve a quantum state despite the presence of decoherence. As we see throughout this book, the desired robustness exacts a cost in resources, most commonly the inclusion of redundant qubits. The following are reasonable engineering queries: How much will it cost to provide the desired robustness? For a fixed cost, what is the best performance I can achieve? In this chapter, we present some numerical tools to illuminate these kinds of questions.
To understand the cost/performance trade-off quantitatively, we need a clear measure of performance and a model for permissible operations. With this in mind, we will revisit the concepts of fidelity and quantum operations. As it turns out, this quantitative approach can yield very well-structured convex optimization problems. Using powerful numerical tools, we determine optimal encodings and recovery operations. Even if the optimal results are not directly implemented, the optimization tools can provide insight into practical solutions by revealing the ultimate performance limits.
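As a hedged illustration of how such a convex program can look in practice (a sketch under assumptions, not the chapter's formulation), the following code uses the cvxpy package to maximize the entanglement fidelity of recovery-after-noise over all CPTP recovery maps, for a single qubit under amplitude damping with no encoding. The fidelity is linear in the Choi matrix of the recovery, and complete positivity plus trace preservation are a semidefinite and an affine constraint, so the whole problem is a small SDP.

```python
# Hedged sketch (not the chapter's code): cast "find the best recovery channel" as an SDP.
# We maximize the entanglement fidelity of R∘N over all CPTP maps R, for a single qubit
# under amplitude damping and no encoding, using the Choi matrix of R as the variable.
import numpy as np
import cvxpy as cp

gamma = 0.3                                         # damping probability (illustrative value)
E = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),   # Kraus operators of amplitude damping
     np.array([[0, np.sqrt(gamma)], [0, 0]])]
d = 2

# Entanglement fidelity (maximally mixed input): F = (1/d^2) * sum_{j,k} |tr(R_j E_k)|^2,
# which is linear in the Choi matrix X = sum_j vec(R_j) vec(R_j)^† (row-stacking vec):
# F = (1/d^2) tr(C X) with C = sum_k vec(E_k^†) vec(E_k^†)^†.
C = sum(np.outer(Ek.conj().T.reshape(-1), Ek.conj().T.reshape(-1).conj()) for Ek in E)

X = cp.Variable((d * d, d * d), hermitian=True)
# Trace preservation: the partial trace of X over the output factor equals the identity,
# i.e., the sum of the d diagonal blocks of X is the d x d identity.
tr_out = sum(X[m * d:(m + 1) * d, m * d:(m + 1) * d] for m in range(d))
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(C @ X)) / d**2),
                  [X >> 0, tr_out == np.eye(d)])
prob.solve()

F_identity = sum(abs(np.trace(Ek))**2 for Ek in E) / d**2   # do-nothing recovery, for comparison
print(f"identity recovery F_e = {F_identity:.4f}")
print(f"optimal  recovery F_e = {prob.value:.4f}")
```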
Limitation of the independent arbitrary errors model
As pointed out in Chapter 2, our rich history of classical error correction has provided a significant foundation for QEC methods. As such, the initial QEC breakthroughs involved importing classical coding concepts into frameworks that yield robust quantum codes (such as CSS codes or the stabilizer formalism). We learned that we could create general-purpose codes that make minimal assumptions about the structure of the decoherence process.
We were inspired to put this book together during the process of organizing the First International Conference on Quantum Error Correction at the University of Southern California (in December 2007, with a sequel in December 2011). With many of the world's foremost experts in the various branches of quantum error correction gathered together in Los Angeles, we solicited chapters on what we thought were the most important topics in the broad field. As editors, we then faced the difficult challenge of integrating material from many expert authors into one comprehensive and yet coherent volume. To achieve this feeling of coherence, we asked all our contributors to work with a single, common notation, and to work with the authors of other chapters in order to minimize overlap and maximize synchronicity. This proved hard to enforce, and while we made every effort to achieve consistency among the different chapters, this goal was surely only partly met. To the extent that the reader discovers inconsistencies, we as editors take full responsibility. The resulting book is not a textbook; for one, it doesn't include any exercises, and figures are not abundant. Moreover, it can only be a snapshot of such a rapidly evolving subject as quantum error correction. Nevertheless, we believe that the basic results in the field are now well enough established that this book, with its extensive index and list of references to the literature, will serve both as a reference for experts and as a guidebook for new researchers, for some years to come.
Cluster states were introduced in Chapter 18, along with some of the approaches to achieving fault tolerance in measurement-based quantum computing. In this chapter, we describe an extremely promising fault-tolerant cluster state quantum computing scheme [RH07, RHG07] with a threshold error rate of 7.46 × 10⁻³, low-overhead, arbitrarily long-range logical gates, and novel adjustable-strength error correction capable of correcting general errors through the correction of Z errors only. Detailed proposed implementations of this scheme exist for ion traps [SJ09] and for single photons with cavity-mediated interactions [DGI+07].
The discussion is organized as follows. In Section 20.2, we describe the topological cluster state, which is a specific three-dimensional (3D) cluster state, and give a brief overview of what topological cluster state quantum computing involves. Section 20.3 describes logical qubits in more detail, including how to initialize them to |0_L⟩ and |+_L⟩ and how to measure them in the Z_L and X_L bases. State injection, the non-fault-tolerant construction of arbitrary logical states, is covered in Section 20.4. Logical gates, namely the logical identity gate and the logical CNOT gate, are discussed in Section 20.5 along with their byproduct operators. Section 20.6 describes the error-correction procedure. In Section 20.7, we calculate an estimate for the threshold of this scheme.
Section 20.8 presents an analysis of the overhead as a function of both the circuit size and the error rate.
In Chapter 16 it was shown how holonomic quantum computation (HQC) can be combined with the method of decoherence-free subspaces (DFSs), leading to passive protection against certain types of correlated errors. However, this is not enough for fault tolerance since other types of errors can accumulate detrimentally unless corrected. Scalability of HQC therefore requires going beyond that scheme, e.g., by combining the holonomic approach with active error correction. One way of combining HQC with active quantum error-correcting codes, which is similar to the way HQC is combined with DFSs, was also mentioned in Chapter 16. This approach, however, is not scalable since it requires Hamiltonians that commute with the stabilizer elements, and when the code increases in size, this necessitates couplings that become increasingly nonlocal.
In this chapter, we will show how HQC can be made fault tolerant by combining it with the techniques for fault-tolerant quantum error correction (FTQEC) on stabilizer codes using Hamiltonians of finite locality. The fact that the holonomic method can be mapped directly to the circuit model allows us to construct procedures that resort almost entirely to these techniques. We will discuss an approach that makes use of the encoding already present in a stabilizer code and does not require additional qubits [O08, OBL08]. An alternative approach, which requires ancillary qubits for implementing transversal operations between qubits in the code, can be found in [O09].
This comprehensive guide, by pioneers in the field, brings together, for the first time, everything a new researcher, graduate student or industry practitioner needs to get started in molecular communication. Written with accessibility in mind, it requires little background knowledge, and provides a detailed introduction to the relevant aspects of biology and information theory, as well as coverage of practical systems. The authors start by describing biological nanomachines, the basics of biological molecular communication and the microorganisms that use it. They then proceed to engineered molecular communication and the molecular communication paradigm, with mathematical models of various types of molecular communication and a description of the information and communication theory of molecular communication. Finally, the practical aspects of designing molecular communication systems are presented, including a review of the key applications. Ideal for engineers and biologists looking to get up to speed on the current practice in this growing field.
Quantum computation and information is one of the most exciting developments in science and technology of the last twenty years. To achieve large-scale quantum computers and communication networks it is essential to overcome noise not only in stored quantum information but also in faulty quantum operations. Scalable quantum computers require a far-reaching theory of fault-tolerant quantum computation. This comprehensive text, written by leading experts in the field, focuses on quantum error correction and thoroughly covers the theory as well as experimental and practical issues. The book is not limited to a single approach, but reviews many different methods to control quantum errors, including topological codes, dynamical decoupling and decoherence-free subspaces. Basic subjects as well as advanced theory and a survey of topics from cutting-edge research make this book invaluable both as a pedagogical introduction at the graduate level and as a reference for experts in quantum information science.
The term optimal filtering traditionally refers to a class of methods that can be used for estimating the state of a time-varying system which is indirectly observed through noisy measurements. The term optimal in this context refers to statistical optimality. Bayesian filtering refers to the Bayesian way of formulating optimal filtering. In this book we use these terms interchangeably and always mean Bayesian filtering.
In optimal, Bayesian, and Bayesian optimal filtering the state of the system refers to the collection of dynamic variables such as position, velocity, orientation, and angular velocity, which fully describe the system. The noise in the measurements means that they are uncertain; even if we knew the true system state the measurements would not be deterministic functions of the state, but would have a distribution of possible values. The time evolution of the state is modeled as a dynamic system which is perturbed by a certain process noise. This noise is used for modeling the uncertainties in the system dynamics. In most cases the system is not truly stochastic, but stochasticity is used for representing the model uncertainties.
Bayesian smoothing (or optimal smoothing) is often considered to be a class of methods within the field of Bayesian filtering. While Bayesian filters in their basic form only compute estimates of the current state of the system given the history of measurements, Bayesian smoothers can be used to reconstruct states that happened before the current time.
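As a concrete, self-contained sketch of both ideas (illustrative only, not code from the book), the example below runs a Kalman filter, the closed-form Bayesian filter for linear-Gaussian models, on a scalar random walk observed in noise, and then a Rauch-Tung-Striebel smoother pass that revisits earlier states using all of the measurements.

```python
# Minimal sketch (illustrative, not taken from the book): Kalman filtering and
# Rauch-Tung-Striebel smoothing for a scalar random walk observed in Gaussian noise.
#   dynamics:     x_k = x_{k-1} + q_k,   q_k ~ N(0, Q)   (process noise)
#   measurement:  y_k = x_k + r_k,       r_k ~ N(0, R)   (measurement noise)
import numpy as np

rng = np.random.default_rng(1)
Q, R, T = 0.1, 1.0, 100
x_true = np.cumsum(rng.normal(0, np.sqrt(Q), T))       # simulate the latent state
y = x_true + rng.normal(0, np.sqrt(R), T)              # noisy measurements

# --- Kalman filter: p(x_k | y_1..k) = N(m_k, P_k) ---
m, P = 0.0, 10.0                                       # diffuse prior
ms, Ps, mp, Pp = [], [], [], []                        # filtered and predicted moments
for yk in y:
    m_pred, P_pred = m, P + Q                          # predict through the random walk
    K = P_pred / (P_pred + R)                          # Kalman gain
    m = m_pred + K * (yk - m_pred)                     # update with the new measurement
    P = (1 - K) * P_pred
    mp.append(m_pred); Pp.append(P_pred); ms.append(m); Ps.append(P)

# --- RTS smoother: p(x_k | y_1..T), a backward pass over the filter output ---
ms_s, Ps_s = ms.copy(), Ps.copy()
for k in range(T - 2, -1, -1):
    G = Ps[k] / Pp[k + 1]                              # smoother gain
    ms_s[k] = ms[k] + G * (ms_s[k + 1] - mp[k + 1])
    Ps_s[k] = Ps[k] + G**2 * (Ps_s[k + 1] - Pp[k + 1])

print("filter   RMSE:", np.sqrt(np.mean((np.array(ms) - x_true) ** 2)))
print("smoother RMSE:", np.sqrt(np.mean((np.array(ms_s) - x_true) ** 2)))
```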