This chapter covers the potential use of quantum algorithms for cryptanalysis, that is, the breaking and weakening of cryptosystems. We discuss Shor’s algorithm for factoring and discrete logarithm, which render widely used public-key cryptosystems vulnerable to attack, given access to a sufficiently large-scale quantum computer. We present resource estimates from the literature for running Shor’s algorithm, and we discuss the outlook for postquantum cryptography, which aims to replace existing cryptosystems while being resistant to quantum attack. We also cover quantum approaches for weakening the security of cryptosystems based on Grover’s search algorithm.
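As a toy illustration (not drawn from the chapter itself), the classical shell of Shor's factoring algorithm can be sketched in a few lines of Python. The quantum subroutine, finding the multiplicative order r of a modulo N, is replaced here by brute force, so this only works for tiny N; everything else is the standard classical reduction from factoring to order finding.

```python
from math import gcd

def order(a, N):
    """Brute-force the multiplicative order of a mod N (the quantum step in Shor)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Attempt to factor N using base a; returns a factor pair or None."""
    g = gcd(a, N)
    if g > 1:
        return g, N // g              # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None                   # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                   # trivial square root of 1: retry
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))          # (3, 5)
```

A large-scale quantum computer would make the `order` step efficient for cryptographically sized N, which is exactly what breaks RSA.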
Trigonometry is the basis of the book’s subject. I begin with length and angle, and then generalise to coordinates. This requires the important idea of a directed angle, which enables us to relate the sine and cosine of an angle to coordinates in any given orientation of a set of axes. I discuss the details of inverting the sine/cosine/tangent functions, and introduce a new function name to replace the inappropriate name “atan2” that often appears in the literature. The chapter ends with examples of calculating bearing and elevation.
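As a small illustration of the bearing and elevation calculations mentioned above (using the standard-library `atan2` rather than the book's replacement function), the key point is the argument order: passing east before north makes 0° point north with angles increasing clockwise, matching compass convention.

```python
from math import atan2, degrees, hypot

def bearing_deg(east, north):
    """Compass bearing (clockwise from north) of the displacement (east, north).
    Note the swapped arguments relative to the mathematical convention."""
    return degrees(atan2(east, north)) % 360.0

def elevation_deg(east, north, up):
    """Elevation angle above the horizontal plane."""
    return degrees(atan2(up, hypot(east, north)))

print(bearing_deg(1.0, 1.0))    # 45.0 (north-east)
print(bearing_deg(-1.0, 0.0))   # 270.0 (due west)
print(elevation_deg(1.0, 0.0, 1.0))   # 45.0
```

Because `atan2` takes the two coordinates separately, it resolves the quadrant ambiguity that inverting `tan` alone cannot.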
Results of previous chapters come together here in the equations that model a vehicle’s position and attitude given a knowledge of, for example, its angular turn rates. These equations can seem perplexing at first glance, and so I derive them in careful steps, again making strong use of vectors and the frame dependence of the time derivative. I end with a detailed example of applying these equations to a spinning top.
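As a numerical sketch of attitude propagation (the standard DCM kinematic equation, not necessarily in the chapter's exact notation): for constant body-frame turn rates omega, the attitude matrix obeys Rdot = R [omega]x, whose closed-form solution is the matrix exponential. A naive Euler integrator, re-orthonormalised at each step, reproduces it:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])

def integrate_attitude(omega_body, dt, steps):
    """Euler-integrate Rdot = R @ skew(omega_body), projecting back onto
    the rotation group (via SVD) after every step."""
    R = np.eye(3)
    for _ in range(steps):
        R = R + R @ skew(omega_body) * dt
        u, _, vt = np.linalg.svd(R)
        R = u @ vt
    return R

omega = np.array([0.1, -0.3, 0.9])      # constant body-frame turn rates, rad/s
t, steps = 0.5, 5000
R_num = integrate_attitude(omega, t / steps, steps)
R_exact = expm(skew(omega) * t)          # closed form for constant rates
print(np.max(np.abs(R_num - R_exact)))   # small integration error
```

In practice, real angular rates vary in time, which is why the careful derivations in the chapter matter; the constant-rate case simply gives a closed form to check against.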
Before considering the business and social impacts of the current data-driven AI revolution, it is worth considering how our use of information has evolved over the last 5,000 or so years. In many ways, the forces that led to the rise of information being transcribed for the first time are not too dissimilar to those driving change today. Commerce and the desire to maintain power over disparate groups of citizens have always required the creation, storage and distribution of information in multiple forms and across a variety of media.
One of the earliest forms of transcription was discovered in 1929 by Julius Jordan, a German archaeologist who excavated a collection of clay tablets over 5,000 years old in what is now Iraq. It took almost 50 years for researchers to decipher the markings on the tablets, which turned out to be records of the sale of commodities such as sheep, grain and honey. What many archaeologists had thought were early forms of poetry or personal correspondence were, in fact, much more mundane, but vital to the orderly flow of goods across the region (Harford, 2017). Further study of the tablets revealed an unexpected sophistication in the way information was transcribed in the clay. As higher quantities of goods needed to be recorded, new ways of representing larger numbers were needed. For example, the sale of three sheep involved a sheep symbol being pressed into the clay three times, but when quantities grew above ten, other symbols were required. These symbols allowed ever-larger exchanges to be recorded.
This chapter covers the quantum algorithmic primitive of Hamiltonian simulation, which aims to digitally simulate the evolution of a quantum state forward in time according to a Hamiltonian. There are several approaches to Hamiltonian simulation, which are best suited to different situations. We cover approaches for time-independent Hamiltonian simulation based on product formulas, the randomized compiling approach called qDRIFT, and quantum signal processing. We also discuss a method that leverages linear combination of unitaries and truncation of Taylor and Dyson series, which is well suited for time-dependent Hamiltonian simulation.
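The product-formula idea can be checked numerically in a few lines (a sketch with small random Hermitian matrices, not a scalable simulation): the first-order Trotter error in approximating exp(-i(A+B)t) shrinks roughly like 1/r as the number of steps r grows.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(dim):
    """A random Hermitian matrix standing in for a Hamiltonian term."""
    M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return (M + M.conj().T) / 2

A, B = rand_herm(4), rand_herm(4)
t = 1.0
exact = expm(-1j * (A + B) * t)

def trotter1(A, B, t, r):
    """First-order product formula with r Trotter steps."""
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    return np.linalg.matrix_power(step, r)

errs = [np.linalg.norm(trotter1(A, B, t, r) - exact, 2) for r in (1, 10, 100)]
print(errs)   # spectral-norm error shrinks roughly like 1/r
```

Higher-order formulas and the other methods mentioned above trade circuit depth for better scaling in the target error.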
This chapter provides an overview of how to perform a universal set of logical gates on qubits encoded with the surface code, via a procedure called lattice surgery. This is the most well-studied approach for practical fault-tolerant quantum computation. We perform a back-of-the-envelope end-to-end resource estimation for the number of physical qubits and total runtime required to run a quantum algorithm in this paradigm. This provides a method for converting logical resource estimates for quantum algorithms into physical resource estimates.
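The flavor of such a back-of-the-envelope estimate can be sketched as follows. The constants here are illustrative assumptions, not figures from the text: a logical error rate per code cycle of roughly 0.1(p/p_th)^((d+1)/2) with threshold p_th = 1e-2, two d^2 physical qubits (data plus syndrome) per logical qubit, one logical timestep of d code cycles, and 1 microsecond per cycle.

```python
def surface_code_estimate(n_logical, logical_depth, p_phys=1e-3):
    """Pick a code distance d so the whole computation fails with
    probability ~1%, then count physical qubits and wall-clock time.
    All modelling constants are illustrative assumptions."""
    budget = 0.01 / (n_logical * logical_depth)   # error budget per logical op
    d = 3
    while 0.1 * (p_phys / 1e-2) ** ((d + 1) / 2) > budget:
        d += 2                                    # surface code distances are odd
    physical_qubits = n_logical * 2 * d ** 2      # data + syndrome qubits
    runtime_s = logical_depth * d * 1e-6          # d cycles per step, 1 us/cycle
    return d, physical_qubits, runtime_s

d, qubits, runtime = surface_code_estimate(n_logical=1000, logical_depth=10**9)
print(d, qubits, runtime / 3600)   # code distance, physical qubits, hours
```

Even this crude model exposes the headline feature of the paradigm: the physical qubit count is dominated by the square of the code distance, which in turn grows only logarithmically with the size of the computation.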
This chapter covers the quantum algorithmic primitive called quantum phase estimation. Quantum phase estimation is an essential quantum algorithmic primitive that computes an estimate for the eigenvalue of a unitary operator, given as input an eigenstate of the operator. It features prominently in many end-to-end quantum algorithms, for example, computing ground state energies of physical systems in the areas of condensed matter physics and quantum chemistry. We carefully discuss nuances of quantum phase estimation that appear when it is applied to a superposition of eigenstates with different eigenvalues.
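For the exact-eigenstate case, the outcome statistics of textbook phase estimation have a closed form, which a short classical calculation can reproduce (a sketch of the standard analysis, not a circuit simulation): the distribution is sharply peaked at the n-bit string closest to the true phase.

```python
import numpy as np

def qpe_distribution(theta, n):
    """Measurement-outcome probabilities of textbook phase estimation with
    n ancilla qubits, applied to an exact eigenstate whose eigenvalue is
    exp(2*pi*1j*theta)."""
    N = 2 ** n
    j = np.arange(N)
    k = np.arange(N)
    # amplitude of outcome k: (1/N) * sum_j exp(2*pi*1j*j*(theta - k/N))
    amps = np.exp(2j * np.pi * np.outer(j, theta - k / N)).sum(axis=0) / N
    return np.abs(amps) ** 2

p = qpe_distribution(theta=0.3, n=5)
print(p.argmax())   # 10, since 10/32 = 0.3125 is the closest 5-bit phase to 0.3
```

When the input is a superposition of eigenstates rather than a single eigenstate, the outcome distribution becomes a mixture of such peaked distributions, which is the source of the nuances discussed in the chapter.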
This chapter covers applications of quantum computing in the area of continuous optimization, including both convex and nonconvex optimization. We discuss quantum algorithms for computing Nash equilibria for zero-sum games and for solving linear, second-order, and semidefinite programs. These algorithms are based on quantum implementations of the multiplicative weights update method or interior point methods. We also discuss general quantum algorithms for convex optimization which can provide a speedup in cases where the objective function is much easier to evaluate than the gradient of the objective function. Finally, we cover quantum algorithms for escaping saddle points and finding local minima in nonconvex optimization problems.
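The classical multiplicative weights update method underlying several of these quantum algorithms fits in a few lines. This is a sketch of the classical method on a small zero-sum game (the game matrix is an illustrative example, not from the text); the quantum versions accelerate the inner loop, not the overall scheme.

```python
import numpy as np

def mwu_zero_sum(A, T=20000, eta=0.01):
    """Multiplicative weights for a zero-sum game with payoff matrix A:
    the row player minimises x^T A y, the column player maximises it.
    Returns time-averaged strategies, which approach an equilibrium."""
    m, n = A.shape
    wx, wy = np.ones(m), np.ones(n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        x, y = wx / wx.sum(), wy / wy.sum()
        x_avg += x / T
        y_avg += y / T
        wx *= np.exp(-eta * (A @ y))      # shrink weight on costly rows
        wy *= np.exp(eta * (A.T @ x))     # grow weight on rewarding columns
    return x_avg, y_avg

A = np.array([[2.0, -1.0], [-1.0, 1.0]])  # mixed equilibrium x = y = (0.4, 0.6)
x, y = mwu_zero_sum(A)
print(x, y, x @ A @ y)                    # near (0.4, 0.6) with game value 0.2
```
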
This chapter covers quantum interior point methods, which are quantum algorithmic primitives for application to convex optimization problems, particularly linear, second-order, and semidefinite programs. Interior point methods are a successful classical iterative technique that solves a linear system of equations at each iteration. Quantum interior point methods replace this step with a quantum linear system solver combined with quantum tomography, potentially offering a polynomial speedup.
One of the aims of this book is to show how the evolution of the data industry and the organisations that use data in all its forms has facilitated the current wave of innovation in AI. The two sectors are inextricably linked, with data acting as the fuel powering the rapidly developing plethora of AI services.
In Chapter 1, we saw how the first transcribing and storing of information had its roots in commerce, with technology evolving to better manage and share data within and between organisations. More recently, data-driven innovation has resulted in the creation of some of the world's largest and most profitable companies. During the second half of the 20th century, AI moved from universities and research centres into commercial applications, with businesses’ and the public's imaginations being captured by GenAI as a tangible and usable product. We have also examined some of the research and frameworks that explain the drivers of innovation with a focus on the impact of digital platforms. Finally, we looked at how the recent wave of data-powered platforms and concerns about the applications of AI are impacting political and legal initiatives around the world.
These developments are all in a state of flux and it will take several years before the economic and social implications of this data-driven AI revolution are fully understood. This chapter considers how this all might play out in the medium term in the context of how AI technologies will evolve (and what this means for the data sector and businesses) as well as the broader social and economic impacts.
Predicting the future is fraught with dangers and, for the most part, inaccurate. However, looking at recent precedents in the ways new technologies have diffused, the unchanging realities of business and the need to generate profits, and the current state of AI technologies, we can make some broad assumptions in this space.
Books on vehicle attitude and motion often use tensors in their analyses, and I have discussed the reasons for that in a previous chapter. But tensors also carry an esotericism arising from being used to quantify the curved spacetime of general relativity. And so I end the book by telling the inquisitive reader how tensors ‘work’ more generally, and how this more advanced topic makes quick work of calculating the gradient, divergence, Laplacian, and curl of vector calculus. I end with a discussion of parallel transport, which has found its way into the exotic ‘wander azimuth’ axes used in some navigation systems.
This chapter covers the quantum algorithmic primitive called Gibbs sampling. Gibbs sampling accomplishes the task of preparing a digital representation of the thermal state, also known as the Gibbs state, of a quantum system in thermal equilibrium. Gibbs sampling is an important ingredient in quantum algorithms to simulate physical systems. We cover multiple approaches to Gibbs sampling, including algorithms that are analogues of classical Markov chain Monte Carlo algorithms.
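The classical analogue referred to above can be illustrated directly. This is a sketch of a Metropolis sampler targeting the Gibbs distribution of a small classical system (the energy levels are arbitrary illustrative values); the quantum algorithms in the chapter generalise this idea to quantum Hamiltonians.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(E, beta, steps=200_000):
    """Metropolis sampler targeting the Gibbs distribution exp(-beta*E)/Z
    over the classical energy levels E; returns empirical frequencies."""
    counts = np.zeros(len(E))
    s = 0
    for _ in range(steps):
        p = rng.integers(len(E))                  # propose a uniform new level
        if rng.random() < np.exp(-beta * (E[p] - E[s])):
            s = p                                 # accept with Metropolis rule
        counts[s] += 1
    return counts / steps

E = np.array([0.0, 1.0, 2.0])                     # toy three-level system
beta = 1.0
emp = metropolis(E, beta)
gibbs = np.exp(-beta * E) / np.exp(-beta * E).sum()
print(emp, gibbs)    # empirical frequencies close to the Gibbs weights
```
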
I derive the important equation that relates the time derivative of a vector computed in one frame to that computed in another frame. I make the point that we must understand the distinction between frames and coordinates to appreciate what the equations are saying. That discussion leads naturally to the concept of centrifugal and Coriolis forces in rotating frames. I use the frame-dependent time derivative to derive some equations for robotics, and finish with a wider discussion of the time derivative for tensors and in fluid flow.
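The transport theorem in question can be checked numerically (a minimal sketch for a frame rotating at constant rate about one axis): for a vector whose components are constant in the rotating frame B, its body-frame derivative vanishes, so its inertial-frame derivative must equal omega cross the vector.

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])        # frame B rotates about z at 2 rad/s

def R(t):
    """Rotation taking B-frame components to inertial components at time t."""
    c, s = np.cos(omega[2] * t), np.sin(omega[2] * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

a_B = np.array([1.0, 0.5, 0.3])          # constant in the rotating frame
t, h = 0.7, 1e-6

# inertial-frame derivative by central difference
dA_inertial = (R(t + h) @ a_B - R(t - h) @ a_B) / (2 * h)
# transport theorem: since a_B is constant in B, dA/dt|_I = omega x A
dA_formula = np.cross(omega, R(t) @ a_B)
print(np.max(np.abs(dA_inertial - dA_formula)))   # near zero
```

The same identity applied twice is what produces the centrifugal and Coriolis terms in the rotating-frame equations of motion.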
We are entering a new phase in the information revolution driven by the introduction of new artificial intelligence (AI) technologies and how they are being used to transform the data amassed within organisations over the previous 30 or so years. This revolution started hundreds of years ago when moveable type was used to print books at a scale and speed not previously possible. The rise of mass media and then mass communications in the form of telecommunication networks, coupled with the data processing capabilities of computers, powered the next phases. Thirty years ago, the expansion of the internet and the widespread adoption of the World Wide Web (WWW) as a means to publish and share information brought the digital revolution into our homes and businesses. Mobile computing has put powerful, always-connected computers in the pockets of most people in the industrialised world and social networks have provided platforms for individuals to reach billions of others with their thoughts and ideas.
A result of these innovations and the ways they have been used is a world awash with information, most of which sits unused, unstructured and hidden away in archives and data silos within the organisations, public and private, that created it (Lange, 2023). Despite significant advances in information management techniques and technologies, only a fraction of the value of this data is being realised. However, we are now at an inflection point where much of the infrastructure is in place to begin the process of changing that.
This chapter covers applications of quantum computing in the area of nuclear and particle physics. We cover algorithms for simulating quantum field theories, where end-to-end problems include computing fundamental physical quantities and scattering cross sections. We also discuss simulations of nuclear physics, which encompasses individual nuclei as well as dense nucleonic matter such as neutron stars.
This chapter starts by showing that the DCM is a rotation matrix, and vice versa. I introduce Euler matrices as important examples of rotation matrices. I give examples extracting angle–axis information from a DCM. This chapter includes a study of what tensors are, and their role in this subject.
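The angle-axis extraction mentioned above follows a standard recipe (sketched here with NumPy, independent of the book's notation): the rotation angle comes from the trace of the DCM, and the axis from its skew-symmetric part.

```python
import numpy as np

def angle_axis(R):
    """Extract the rotation angle and unit axis from a 3x3 rotation matrix
    (DCM). Assumes 0 < angle < pi, so the skew part determines the axis."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis = w / (2.0 * np.sin(angle))
    return angle, axis

# rotation by 90 degrees about the z axis
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
angle, axis = angle_axis(Rz)
print(np.degrees(angle), axis)   # 90 degrees about [0, 0, 1]
```

The degenerate cases (angle equal to 0 or pi, where the skew part vanishes) need separate handling, which is part of what makes the careful treatment in the chapter worthwhile.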