This chapter presents an introduction to the theory of quantum fault tolerance and quantum error correction, which provide a collection of techniques to deal with imperfect operations and unavoidable noise afflicting the physical hardware, at the expense of moderately increased resource overheads.
This chapter covers the quantum algorithmic primitive called quantum gradient estimation, where the goal is to output an estimate for the gradient of a multivariate function. This primitive features in other primitives, for example, quantum tomography. It also features in several quantum algorithms for end-to-end problems in continuous optimization, finance, and machine learning, among other areas. The size of the speedup it provides depends on how the algorithm can access the function, and how difficult the gradient is to estimate classically.
Unit basis vectors emerged from Hamilton’s quaternions, and quite literally form the basis of rotation and attitude. I begin with their role in the dot product, and then study the matrix determinant. This determines the handedness of any three vectors, which is necessary for building a right-handed Cartesian coordinate system. That idea naturally gives rise to the cross product, which I study in some detail, including in higher dimensions. The chapter ends with comments on matrix multiplication, and in particular the fast multiplication of sparse 3×3 matrices that we use frequently later in the book.
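As a toy illustration of the determinant-as-handedness idea (a sketch, not code from the book), the following hand-rolled helpers compute a cross product and a scalar triple product; the sign of the latter tells us whether three vectors form a right-handed set:

```python
# Plain-Python 3-vector helpers, assuming the usual right-handed convention.

def cross(a, b):
    """Cross product of two 3-vectors (given as length-3 tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c: the scalar
    triple product a . (b x c). Positive for a right-handed triple."""
    bc = cross(b, c)
    return a[0] * bc[0] + a[1] * bc[1] + a[2] * bc[2]

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(cross(e1, e2))     # (0, 0, 1): e1 x e2 recovers e3
print(det3(e1, e2, e3))  # 1: positive, so the triple is right-handed
print(det3(e2, e1, e3))  # -1: swapping two vectors flips the handedness
```

Swapping any two rows negates the determinant, which is exactly the handedness flip the text describes.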
This chapter covers quantum algorithms for numerically solving differential equations and the areas of application where such capabilities might be useful, such as computational fluid dynamics, semiconductor chip design, and many engineering workflows. We focus mainly on algorithms for linear differential equations (covering both partial and ordinary linear differential equations), but we also mention the additional nuances that arise for nonlinear differential equations. We discuss important caveats related to both the data input and output aspects of an end-to-end differential equation solver, and we place these quantum methods in the context of existing classical methods currently in use for these problems.
This chapter covers the quantum algorithmic primitive of approximate tensor network contraction. Tensor networks are a powerful classical method for representing complex classical data as a network of individual tensor objects. To evaluate the tensor network, it must be contracted, which can be computationally challenging. A quantum algorithm for approximate tensor network contraction can provide a quantum speedup for contracting tensor networks that satisfy certain conditions.
This chapter provides an overview of how to perform quantum error correction using the surface code, which is the most well-studied quantum error correcting code for practical quantum computation. We provide formulas for the code distance—which determines the resource overhead when using the surface code—as a function of the desired logical error rate and underlying physical error rate. We discuss several decoders for the surface code and the possibility of experiencing the backlog problem if the decoder is too slow.
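The chapter's own formulas are not reproduced here, but a widely quoted heuristic for the surface code relates the logical error rate p_L, physical error rate p, threshold p_th, and distance d via p_L ≈ A·(p/p_th)^((d+1)/2). A minimal sketch of inverting this relation for d, with illustrative placeholder constants rather than values from the chapter:

```python
def required_distance(p_phys, p_target, p_th=1e-2, A=0.1):
    """Smallest odd code distance d with A * (p_phys / p_th)**((d + 1) / 2)
    <= p_target. The threshold p_th and prefactor A are illustrative
    placeholders, not values from this chapter."""
    ratio = p_phys / p_th
    assert ratio < 1, "physical error rate must be below threshold"
    d = 3
    while A * ratio ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are conventionally odd
    return d

# e.g. physical error rate 1e-3, target logical error rate 1e-12:
print(required_distance(1e-3, 1e-12))
```

Because the error is suppressed exponentially in d, tightening the target logical error rate by several orders of magnitude raises the required distance (and hence the qubit overhead, roughly 2d² physical qubits per logical qubit) only modestly.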
This chapter begins the proper study of the closely related subjects of how to transform vector coordinates across bases, and how to quantify vehicle attitude. The direction-cosine matrix appears, and I discuss its properties. I then cover several in-depth examples of using it to describe aircraft attitude. Transforming coordinates leads to a discussion of the meaning of position, which serves to introduce homogeneous coordinates. I end with an example of calculating the motion of a ship.
This chapter covers quantum tomography, a quantum algorithmic primitive that enables a quantum algorithm to learn a full classical description of a quantum state. Generally, the goal of a quantum tomography procedure is to obtain this description using as few copies of the state as possible. The optimal number of copies may depend on what kind of measurements are allowed and what error metric is being used, and in most cases, quantum tomography procedures have been developed with provably optimal complexity.
This chapter covers the potential use of quantum algorithms for cryptanalysis, that is, the breaking and weakening of cryptosystems. We discuss Shor’s algorithm for factoring and discrete logarithm, which render widely used public-key cryptosystems vulnerable to attack, given access to a sufficiently large-scale quantum computer. We present resource estimates from the literature for running Shor’s algorithm, and we discuss the outlook for postquantum cryptography, which aims to replace existing cryptosystems while being resistant to quantum attack. We also cover quantum approaches for weakening the security of cryptosystems based on Grover’s search algorithm.
Trigonometry is the basis of the book’s subject. I begin with length and angle, and then generalise to coordinates. This requires the important idea of a directed angle, which enables us to relate the sine and cosine of an angle to coordinates in any given orientation of a set of axes. I discuss the details of inverting the sine/cosine/tangent functions, and introduce a new function name to replace the inappropriate name “atan2” that often appears in the literature. The chapter ends with examples of calculating bearing and elevation.
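By way of illustration (using the standard library's `atan2` rather than the chapter's replacement name for it), the two-argument arctangent resolves the quadrant ambiguity that a plain arctan(y/x) cannot, and a compass-style bearing follows directly; `bearing_deg` is a hypothetical helper, not a function from the book:

```python
import math

# atan2(y, x) gives the directed angle from the +x axis to the point
# (x, y), in (-pi, pi], resolving the quadrant that arctan(y/x) loses.
print(math.degrees(math.atan2(1.0, 1.0)))   # first quadrant: 45 degrees
print(math.degrees(math.atan2(1.0, -1.0)))  # second quadrant: 135 degrees

def bearing_deg(east, north):
    """Hypothetical helper: compass bearing in [0, 360), measured
    clockwise from north, from east/north displacement components."""
    return math.degrees(math.atan2(east, north)) % 360.0

print(bearing_deg(1.0, 1.0))   # north-east: 45 degrees
print(bearing_deg(0.0, -1.0))  # due south: 180 degrees
```

Note the argument swap in `bearing_deg`: passing (east, north) instead of (y, x) converts the counterclockwise-from-x convention into the clockwise-from-north convention used for bearings.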
Results of previous chapters come together here in the equations that model a vehicle’s position and attitude given a knowledge of, for example, its angular turn rates. These equations can seem perplexing at first glance, and so I derive them in careful steps, again making strong use of vectors and the frame dependence of the time derivative. I end with a detailed example of applying these equations to a spinning top.
Before considering the business and social impacts of the current data-driven AI revolution, it is worth considering how our use of information has evolved over the last 5,000 or so years. In many ways, the forces that led to the rise of information being transcribed for the first time are not too dissimilar to those driving change today. Commerce and the desire to maintain power over disparate groups of citizens have always required the creation, storage and distribution of information in multiple forms and across a variety of media.
One of the earliest forms of transcription was discovered in 1929 by Julius Jordan, a German archaeologist who excavated a collection of clay tablets over 5,000 years old in what is now Iraq. It took almost 50 years for researchers to decipher the markings on the tablets, which turned out to be records of the sale of commodities such as sheep, grain and honey. What many archaeologists had thought were early forms of poetry or personal correspondence were, in fact, much more mundane, but vital to the orderly flow of goods across the region (Harford, 2017). Further study of the tablets revealed an unexpected sophistication in the way information was transcribed in the clay. As higher quantities of goods needed to be recorded, new ways of representing larger numbers were needed. For example, the sale of three sheep involved the sheep being pressed into the clay three times, but when quantities grew above ten, other symbols were required. These symbols allowed records of ever-larger exchanges to take place.
This chapter covers the quantum algorithmic primitive of Hamiltonian simulation, which aims to digitally simulate the evolution of a quantum state forward in time according to a Hamiltonian. There are several approaches to Hamiltonian simulation, which are best suited to different situations. We cover approaches for time-independent Hamiltonian simulation based on product formulas, the randomized compiling approach called qDRIFT, and quantum signal processing. We also discuss a method that leverages linear combination of unitaries and truncation of Taylor and Dyson series, which is well suited for time-dependent Hamiltonian simulation.
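As a toy illustration of the product-formula idea (a sketch for the single-qubit Hamiltonian H = X + Z, not an example from the chapter), first-order Trotterization approximates e^{-it(X+Z)} by alternating the exponentials of X and Z, with error shrinking as the step count grows:

```python
import math

# Pauli matrices as 2x2 tuples of rows.
I2 = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
Z = ((1, 0), (0, -1))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def expi_pauli(P, theta):
    """e^{-i*theta*P} = cos(theta) I - i sin(theta) P, valid since P^2 = I."""
    c, s = math.cos(theta), math.sin(theta)
    return tuple(tuple(c * I2[i][j] - 1j * s * P[i][j] for j in range(2))
                 for i in range(2))

def trotter(t, n):
    """First-order (Lie-Trotter) approximation to e^{-i t (X+Z)}, n steps."""
    step = matmul(expi_pauli(X, t / n), expi_pauli(Z, t / n))
    out = I2
    for _ in range(n):
        out = matmul(out, step)
    return out

def exact(t):
    """Exact e^{-i t (X+Z)}: H = X+Z satisfies H^2 = 2I."""
    r = math.sqrt(2)
    c, s = math.cos(r * t), math.sin(r * t)
    H = ((1, 1), (1, -1))
    return tuple(tuple(c * I2[i][j] - 1j * (s / r) * H[i][j] for j in range(2))
                 for i in range(2))

def error(t, n):
    a, b = trotter(t, n), exact(t)
    return max(abs(a[i][j] - b[i][j]) for i in range(2) for j in range(2))

print(error(1.0, 10) > error(1.0, 100))  # True: more steps, smaller error
```

The error of this first-order formula scales roughly as t²/n, consistent with the observed decrease as n grows; higher-order formulas and the other methods the chapter covers improve on this scaling.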
This chapter provides an overview of how to perform a universal set of logical gates on qubits encoded with the surface code, via a procedure called lattice surgery. This is the most well-studied approach for practical fault-tolerant quantum computation. We perform a back-of-the-envelope end-to-end resource estimation for the number of physical qubits and total runtime required to run a quantum algorithm in this paradigm. This provides a method for converting logical resource estimates for quantum algorithms into physical resource estimates.
This chapter covers the quantum algorithmic primitive called quantum phase estimation. Quantum phase estimation is an essential quantum algorithmic primitive that computes an estimate for the eigenvalue of a unitary operator, given as input an eigenstate of the operator. It features prominently in many end-to-end quantum algorithms, for example, computing ground state energies of physical systems in the areas of condensed matter physics and quantum chemistry. We carefully discuss nuances of quantum phase estimation that appear when it is applied to a superposition of eigenstates with different eigenvalues.
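As a toy illustration (not from the chapter), the outcome statistics of textbook phase estimation with n ancilla qubits can be computed classically for small n; for a phase exactly representable in n bits, all probability concentrates on the correct outcome:

```python
import cmath

def qpe_distribution(theta, n):
    """Outcome probabilities of textbook phase estimation with n ancilla
    qubits, applied to an eigenstate with eigenvalue e^{2*pi*i*theta}.
    (For a superposition of eigenstates, the observed distribution is the
    corresponding mixture of these single-eigenstate distributions.)"""
    N = 2 ** n
    probs = []
    for k in range(N):
        # Amplitude of outcome k after the inverse quantum Fourier transform.
        amp = sum(cmath.exp(2j * cmath.pi * j * (theta - k / N))
                  for j in range(N)) / N
        probs.append(abs(amp) ** 2)
    return probs

# A phase exactly representable in 3 bits: all weight lands on k = 3.
p = qpe_distribution(3 / 8, 3)
print(max(range(8), key=lambda k: p[k]))  # 3, i.e. the estimate 3/8 = theta
```

For a phase that is not exactly representable, the same function shows the probability spreading over neighboring outcomes, which is one source of the superposition-related nuances the chapter discusses.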