This chapter covers applications of quantum computing in the area of condensed matter physics. We discuss algorithms for simulating the Fermi-Hubbard model, which is used to study high-temperature superconductivity and other physical phenomena. We also discuss algorithms for simulating spin models such as the Ising model and Heisenberg model. Finally, we cover algorithms for simulating the Sachdev-Ye-Kitaev (SYK) model of strongly interacting fermions, which is used to model quantum chaos and has connections to black holes.
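As a concrete illustration of what simulating a spin model involves (a minimal sketch assuming a small transverse-field Ising chain and dense matrices, not an algorithm from the chapter), the following compares first-order Trotterization, one of the simplest quantum simulation primitives, against exact time evolution:

```python
# A minimal sketch: first-order Trotterized time evolution for a 3-qubit
# transverse-field Ising chain, H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i,
# simulated exactly with dense matrices for illustration only.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def op_at(op, site, n):
    return kron_all([op if i == site else I2 for i in range(n)])

n, J, h, t, steps = 3, 1.0, 0.5, 1.0, 100
H_zz = -J * sum(op_at(Z, i, n) @ op_at(Z, i + 1, n) for i in range(n - 1))
H_x = -h * sum(op_at(X, i, n) for i in range(n))

dt = t / steps
trotter_step = expm(-1j * H_zz * dt) @ expm(-1j * H_x * dt)
U_trotter = np.linalg.matrix_power(trotter_step, steps)
U_exact = expm(-1j * (H_zz + H_x) * t)
print("Trotter error:", np.linalg.norm(U_trotter - U_exact, 2))
```

On a quantum computer the two exponentials would be implemented as circuits of one- and two-qubit gates; the dense-matrix version here only illustrates the approximation being made.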
This chapter covers applications of quantum computing in the area of combinatorial optimization. This area is related to operations research, and it encompasses many tasks that appear in science and industry, such as scheduling, routing, and supply chain management. We cover specific problems where a quadratic quantum speedup may be available via Grover’s quantum algorithm for unstructured search. We also cover several more recent proposals for achieving superquadratic speedups, including the quantum adiabatic algorithm, the quantum approximate optimization algorithm (QAOA), and the short-path algorithm.
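As a rough illustration of where the quadratic speedup comes from (a sketch that simulates the algorithm's amplitudes classically; the index and problem size are arbitrary choices):

```python
# A minimal sketch: simulating Grover's search for one marked item among
# N = 2^n, illustrating the ~ (pi/4) * sqrt(N) iteration count behind the
# quadratic speedup.
import numpy as np

n = 10                      # qubits
N = 2 ** n
marked = 123                # index of the marked item (arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                     # oracle: phase-flip marked item
    mean = state.mean()
    state = 2 * mean - state                # diffusion: inversion about mean

print(f"{iterations} iterations, success probability "
      f"{abs(state[marked])**2:.3f}")       # close to 1
```

A classical exhaustive search needs on the order of N queries, whereas the amplitude of the marked item peaks after roughly (π/4)√N Grover iterations.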
An important set of coordinates to understand is that of our oblate Earth. I derive the equations transforming latitude/longitude/height to and from the ECEF Cartesian axes. I use the model aircraft of a previous chapter as an aid to visualise the rotation sequences that are useful for calculating NED or ENU coordinates at a given point on or near Earth’s surface. I use these in a detailed example of sighting a distant aircraft. This leads to a description of the ‘DIS standard’ designed for such scenarios. I also use these ideas in a detailed example of estimating Earth’s gravity at a given point, which is necessary for implementing inertial navigation systems.
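A minimal sketch of the forward (geodetic-to-ECEF) direction, assuming the standard WGS-84 ellipsoid constants; the test point is an arbitrary choice:

```python
# A minimal sketch (assuming WGS-84 ellipsoid constants) of the standard
# geodetic-to-ECEF conversion.
import numpy as np

A = 6378137.0               # WGS-84 semi-major axis (m)
F = 1 / 298.257223563       # WGS-84 flattening
E2 = F * (2 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Latitude/longitude (degrees) and height (m) to ECEF x, y, z (m)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)   # prime-vertical radius
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1 - E2) + h) * np.sin(lat)
    return x, y, z

print(geodetic_to_ecef(51.4778, -0.0015, 45.0))  # near Greenwich, for example
```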
The initial excitement as well as considerable hype from software companies, AI developers and technology commentators following the launch of ChatGPT and other GenAI products in late 2022 and into 2023 has died down to a large extent. The ‘magic’ of seeing text and images being created in seconds from a few simple prompts is now just another clever thing that computers can do, and it is becoming part of many people’s daily workflows. It was the same with e-mail, the World Wide Web (WWW), mobile phones and social media when they first became available. They are all now part of the warp and weft of everyday life. In time, GenAI and the applications that incorporate it will be no different. However, ‘time’ is the watchword here. This will not happen overnight, for all the reasons discussed in this book. Developers need to demonstrate the value AI offers to organisations through real use cases and solid evidence of a return on investment. Adopting organisations need to be confident that the benefits outweigh the risks, and this requires further work from developers and vendors in removing problems such as hallucinations and privacy breaches. If agentic AI is to take hold, then trust in such systems will be key. Alongside this, regulators and public policy makers will need to adapt their approaches as the technology evolves and its opportunities and risks become clearer. Finally, education will be a vital factor in helping workers, both existing and yet to enter the workforce, adapt to this transformative technology, as well as teaching all individuals what they can and cannot trust online. This last requirement is, perhaps, the most important as it touches on foundational issues such as literacy and democracy.
A short chapter that describes the book’s content. It covers the core principles and discusses some ways in which the book’s treatment of them differs from less technical accounts.
This chapter covers variational quantum algorithms, which act as a primitive ingredient for larger quantum algorithms in several application areas, including quantum chemistry, combinatorial optimization, and machine learning. Variational quantum algorithms are parameterized quantum circuits where the parameters are trained to optimize a certain cost function. They are often shallow circuits, which potentially makes them suitable for near-term devices that are not error corrected.
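A minimal single-parameter illustration of the training loop (not from the chapter): the circuit is one Ry rotation, the cost is an expectation value, and the gradient comes from the parameter-shift rule:

```python
# A minimal sketch: a one-qubit variational circuit |psi(theta)> = Ry(theta)|0>,
# trained by gradient descent with the parameter-shift rule to minimize
# the cost <psi|H|psi> for H = X.
import numpy as np

H = np.array([[0, 1], [1, 0]], dtype=float)      # cost Hamiltonian (Pauli X)

def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(theta):
    psi = state(theta)
    return psi @ H @ psi                          # equals sin(theta)

theta, lr = 0.1, 0.2
for _ in range(100):
    # Parameter-shift rule: exact gradient from two shifted evaluations.
    grad = 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
    theta -= lr * grad
print(f"theta = {theta:.3f}, cost = {cost(theta):.4f}  (optimum -1 at -pi/2)")
```

On hardware, each cost evaluation would be estimated from repeated circuit measurements rather than computed exactly.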
I introduce quaternions by recounting the story of how Hamilton discovered them, but in far more detail than other authors give. This detail is necessary for the reader to understand why Hamilton wrote his quaternion equations in the way that he did. I describe the role of quaternions in rotation, show how to convert between them and matrices, and discuss their role in modern computer graphics. I describe a modern problem in detail whereby Hamilton’s original definition has been ‘hijacked’ in a way that has now produced much confusion. I end by describing how quaternions play a role in topology and quantum mechanics.
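A minimal sketch of the quaternion-to-matrix conversion, assuming the Hamilton convention for unit quaternions (the convention question is exactly where the ‘hijacking’ confusion arises):

```python
# A minimal sketch (Hamilton convention, q = w + xi + yj + zk, assumed unit
# norm): converting a quaternion to a rotation matrix and rotating a vector.
import numpy as np

def quat_to_matrix(w, x, y, z):
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# 90-degree rotation about the z axis: q = cos(45 deg) + sin(45 deg) k.
c = np.cos(np.pi / 4)
R = quat_to_matrix(c, 0.0, 0.0, c)
print(R @ np.array([1.0, 0.0, 0.0]))   # ~ [0, 1, 0]
```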
This chapter covers a number of disparate applications of quantum computing in the area of machine learning. We only consider situations where the dataset is classical (rather than quantum). We cover quantum algorithms for big-data problems relying upon high-dimensional linear algebra, such as Gaussian process regression and support vector machines. We discuss the prospect of achieving a quantum speedup with these algorithms, which face certain input/output caveats and must compete against quantum-inspired classical algorithms. We also cover heuristic quantum algorithms for energy-based models, which are generative machine learning models that learn to produce outputs similar to those in a training dataset. Next, we cover a quantum algorithm for the tensor principal component analysis problem, where a quartic speedup may be available, as well as quantum algorithms for topological data analysis, which aim to compute topologically invariant properties of a dataset. We conclude by covering quantum neural networks and quantum kernel methods, where the machine learning model itself is quantum in nature.
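As a small illustration of the quantum kernel idea (a sketch with a one-qubit angle-encoding feature map, simulated classically; not a construction from the chapter):

```python
# A minimal sketch: a quantum kernel simulated classically. Each input is
# angle-encoded into a one-qubit state |phi(x)> = Ry(x)|0>, and the kernel
# entry is the state overlap k(x, x') = |<phi(x)|phi(x')>|^2, which a
# quantum device would estimate by sampling.
import numpy as np

def feature_map(x):
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(xs):
    states = np.array([feature_map(x) for x in xs])
    return (states @ states.T) ** 2        # pairwise squared overlaps

xs = np.array([0.0, 0.5, np.pi])
print(quantum_kernel(xs))                  # Gram matrix for a kernel method
```

The resulting Gram matrix can be handed to any classical kernel method, such as a support vector machine; the hoped-for advantage lies in feature maps that are hard to simulate classically.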
When using machine learning to model environmental systems, it is often a model’s ability to predict extreme behaviors that yields the highest practical value to policy makers. However, most existing error metrics used to evaluate the performance of environmental machine learning models weigh error equally across test data. Thus, routine performance is prioritized over a model’s ability to robustly quantify extreme behaviors. In this work, we present a new error metric, termed Reflective Error, which quantifies the degree to which model error is distributed around the extremes, in contrast to existing model evaluation methods that aggregate error over all events. The suitability of our proposed metric is demonstrated on a real-world hydrological modeling problem, where extreme values are of particular concern.
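The following is a hedged illustration of extreme-focused evaluation in general, not the paper's Reflective Error formula (which is defined in the text); the data and weighting scheme are invented to contrast with a uniform metric:

```python
# A hedged illustration (NOT the paper's Reflective Error): one simple way
# to weight error toward extremes, contrasted with uniform RMSE.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def extreme_weighted_rmse(y_true, y_pred, q=0.9):
    """RMSE computed only over observations above the q-th quantile."""
    mask = y_true >= np.quantile(y_true, q)
    return np.sqrt(np.mean((y_true[mask] - y_pred[mask]) ** 2))

rng = np.random.default_rng(0)
y = rng.gumbel(loc=10, scale=5, size=1000)            # skewed, flood-like data
yhat = y + rng.normal(scale=0.5 + 0.2 * np.abs(y))    # error grows with y
print(f"RMSE: {rmse(y, yhat):.2f}, "
      f"extreme-weighted: {extreme_weighted_rmse(y, yhat):.2f}")
```

A uniform metric hides the inflated error on the upper tail; a tail-focused metric exposes it, which is the motivation the abstract describes.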
Reduced-order models encapsulating complex whole-body dynamics have facilitated stable walking in various bipedal robots. These models have enabled intermittent control methods that apply control inputs intermittently (alternating between zero input and feedback input), allowing robots to follow their natural dynamics and achieve energetically and computationally efficient walking. However, because closed-form solutions cannot be derived for the angular momentum generated by swing motions and other dynamic actions, constructing a precise model of the zero-input walking phase is challenging, and controlling walking behavior with an intermittent controller remains problematic. This paper proposes an intermittent controller for bipedal robots, modeled as a multi-mass system consisting of an inverted pendulum and an additional mass representing the swing leg. The proposed controller alternates between feedback control during the double support (DS) phase and zero-input control during the single support (SS) phase. A constrained trajectory is derived along which the system behaves as a conservative system during the SS phase, enabling closed-form solutions to the equations of motion. This constraint allows the robot to track the target behavior accurately, intermittently adjusting energy during the DS phase. The effectiveness of the proposed method is validated through simulations and experiments with a bipedal robot, demonstrating its capability to accurately and stably track the target walking velocity using intermittent control.
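A minimal sketch of why zero-input phases become analyzable (the textbook linear inverted pendulum without the paper's added swing-leg mass; the height and initial conditions are arbitrary):

```python
# A minimal sketch: the linear inverted pendulum's zero-input SS phase has
# the closed-form solution x(t) = x0*cosh(t/tau) + tau*v0*sinh(t/tau),
# tau = sqrt(zc/g), which is what makes intermittent (zero-input) control
# analyzable.
import numpy as np

g, zc = 9.81, 0.8                 # gravity, constant pendulum height (m)
tau = np.sqrt(zc / g)

def ss_phase(x0, v0, t):
    """CoM position and velocity after time t with zero control input."""
    x = x0 * np.cosh(t / tau) + tau * v0 * np.sinh(t / tau)
    v = (x0 / tau) * np.sinh(t / tau) + v0 * np.cosh(t / tau)
    return x, v

print(ss_phase(x0=-0.05, v0=0.4, t=0.3))   # CoM state at the end of a step
```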
Tunnel boring machines (TBMs) are essential equipment for tunnel excavation. The main rock-breaking component of a TBM is the disc cutter, whose design and performance directly affect the effectiveness and productivity of TBM operations. This study investigates the effects of confining stress on the breaking force of disc cutters with various diameters. Both saturated and dry specimens of low-strength concrete, medium-strength marble, and high-strength granite are used in the tests. It is found that disc cutters with larger diameters can reduce the influence of the confining stress. Moreover, the influence of confining stress is more notable in higher-strength rocks, especially in the dry condition as opposed to the saturated condition. A multivariate linear regression model relates the failure load to the confining stress, cutter diameter, and compressive strength of the rock, and suggests that the confining stress is the most significant of these variables. These results highlight the importance of considering in-situ stress conditions when excavating tunnels with TBMs.
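A hedged sketch of the regression step with synthetic data (the variable ranges and coefficients are invented for illustration; the paper's data and fitted model are in the text):

```python
# A hedged sketch (synthetic data, hypothetical ranges): fitting a
# multivariate linear model of failure load against confining stress,
# cutter diameter, and compressive strength with least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 60
sigma_c = rng.uniform(0, 30, n)      # confining stress (MPa), assumed range
diameter = rng.uniform(380, 510, n)  # cutter diameter (mm), assumed range
ucs = rng.uniform(30, 200, n)        # compressive strength (MPa)
# Synthetic failure load for illustration only.
load = 50 + 8.0 * sigma_c + 0.4 * diameter + 1.2 * ucs + rng.normal(0, 20, n)

X = np.column_stack([np.ones(n), sigma_c, diameter, ucs])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
print("intercept and coefficients:", np.round(beta, 2))
```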
We investigate causal computations, which take sequences of inputs to sequences of outputs such that the $n$th output depends only on the first $n$ inputs. We model these in category theory via a construction taking a Cartesian category $\mathbb{C}$ to another category $\mathrm{St}(\mathbb{C})$ with a novel trace-like operation called “delayed trace,” which lacks the yanking and dinaturality axioms of the usual trace. The delayed trace operation provides a feedback mechanism in $\mathrm{St}(\mathbb{C})$ with an implicit guardedness guarantee. When $\mathbb{C}$ is equipped with a Cartesian differential operator, we construct a differential operator for $\mathrm{St}(\mathbb{C})$ using an abstract version of backpropagation through time (BPTT), a technique from machine learning based on unrolling of functions. This yields a range of properties for BPTT, including a chain rule and a Schwartz theorem. Our differential operator can also compute the derivative of a stateful network without requiring the network to be unrolled.
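A minimal concrete instance of BPTT by unrolling (a scalar toy computation, not the paper's categorical construction):

```python
# A minimal sketch: backpropagation through time by unrolling a stateful
# map. The stateful computation s' = tanh(theta * s + x) is unrolled over
# the input sequence, and d(final state)/d(theta) is accumulated by the
# chain rule.
import numpy as np

def bptt_grad(theta, xs, s0=0.0):
    s, ds_dtheta = s0, 0.0
    for x in xs:
        pre = theta * s + x
        dpre = s + theta * ds_dtheta       # chain rule through the unrolling
        s = np.tanh(pre)
        ds_dtheta = (1 - s ** 2) * dpre    # tanh'(pre) = 1 - tanh(pre)^2
    return s, ds_dtheta

xs = [0.5, -0.2, 0.8]
s, g = bptt_grad(theta=0.7, xs=xs)
eps = 1e-6                                  # check against finite differences
s_eps, _ = bptt_grad(theta=0.7 + eps, xs=xs)
print(g, (s_eps - s) / eps)                 # should closely agree
```

The loop propagates the derivative alongside the state, which is the "unrolling" view of BPTT that the paper abstracts categorically.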
Peat is formed by the accumulation of organic material in water-saturated soils. Drainage of peatlands and peat extraction contribute to carbon emissions and biodiversity loss. Most peat extracted for commercial purposes is used for energy production or as a growing substrate. Many countries aim to reduce peat usage but this requires tools to detect its presence in substrates. We propose a decision support system based on deep learning to detect peat-specific testate amoeba in microscopy images. We identified six taxa that are peat-specific and frequent in European peatlands. The shells of two taxa (Archerella sp. and Amphitrema sp.) were well preserved in commercial substrate and can serve as indicators of peat presence. Images from surface and commercial samples were combined into a training set. A separate test set exclusively from commercial substrates was also defined. Both datasets were annotated and YOLOv8 models were trained to detect the shells. An ensemble of eight models was included in the decision support system. Test set performance (average precision) reached values above 0.8 for Archerella sp. and above 0.7 for Amphitrema sp. The system processes thousands of images within minutes and returns a concise list of crops of the most relevant shells. This allows a human operator to quickly make a final decision regarding peat presence. Our method enables the monitoring of peat presence in commercial substrates. It could be extended by including more species for applications in restoration ecology and paleoecology.
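A hedged sketch of the ensemble inference step (the weight-file and directory names are hypothetical; the ultralytics package provides the YOLO interface used here):

```python
# A hedged sketch (hypothetical file names): running an ensemble of trained
# YOLOv8 detectors over microscopy images and pooling detections for a
# human operator to review, as the decision support system does.
from ultralytics import YOLO

weight_files = [f"peat_model_{i}.pt" for i in range(8)]   # assumed names
models = [YOLO(w) for w in weight_files]

detections = []
for model in models:
    for result in model.predict("microscopy_images/", conf=0.25):
        for box in result.boxes:
            detections.append((result.path,
                               model.names[int(box.cls)],
                               float(box.conf)))

# Surface the highest-confidence shells first.
detections.sort(key=lambda d: d[2], reverse=True)
print(detections[:10])
```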
Increasing penetration of variable and intermittent renewable energy resources on the energy grid poses a challenge for reliable and efficient grid operation, necessitating the development of algorithms that are robust to this uncertainty. However, standard algorithms incorporating uncertainty for generation dispatch are computationally intractable when costs are nonconvex, and machine learning-based approaches lack worst-case guarantees on their performance. In this work, we propose a learning-augmented algorithm, RobustML, that exploits the good average-case performance of a machine-learned algorithm for minimizing dispatch and ramping costs of dispatchable generation resources while providing provable worst-case guarantees on cost. We evaluate the algorithm on a realistic model of a combined cycle cogeneration plant, where it exhibits robustness to distribution shift while enabling improved efficiency as renewables penetration increases.
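A hedged sketch of a generic learning-augmented pattern (not RobustML itself, whose switching rule and guarantees are developed in the paper): follow the learned policy while it stays competitive with a robust baseline:

```python
# A hedged sketch of a generic learning-augmented dispatch pattern: trust
# the ML-advised action while its running cost stays within (1 + eps) of a
# robust baseline's, and fall back otherwise. This is one standard way to
# obtain worst-case guarantees on top of ML advice (ramping/switching costs
# are ignored here for simplicity).
def dispatch(ml_policy, robust_policy, demands, cost_fn, eps=0.1):
    ml_cost = robust_cost = 0.0
    schedule = []
    for d in demands:
        ml_action, robust_action = ml_policy(d), robust_policy(d)
        ml_cost += cost_fn(ml_action, d)
        robust_cost += cost_fn(robust_action, d)
        # Follow the learned policy only while it remains competitive.
        use_ml = ml_cost <= (1 + eps) * robust_cost
        schedule.append(ml_action if use_ml else robust_action)
    return schedule
```

The eps knob trades off consistency (matching good ML advice) against robustness (never straying far from the baseline's worst-case cost).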
In recent years, passive motion paradigms (PMPs), derived from the equilibrium point hypothesis and impedance control, have been utilised as manipulation methods for humanoid robots and robotic manipulators. These paradigms are typically achieved by creating a kinematic chain that enables the manipulator to perform goal-directed actions without explicitly solving the inverse kinematics. This approach leverages a kinematic model constructed through the training of artificial neural networks, aligning well with principles of cybernetics and cognitive computation by enabling adaptive and flexible control. Specifically, these networks model the relationship between joint angles and end-effector positions, facilitating the computation of the Jacobian matrix. Although this method does not require an accurate robot model, traditional neural networks often suffer from drawbacks such as overfitting and inefficient training, which can compromise the accuracy of the final PMP model. In this paper, we implement the method using a deep neural network and investigate the impact of activation functions and network depth on the performance of the kinematic model. Additionally, we propose a transfer learning approach to fine-tune the pre-trained model, enabling it to be transferred to other manipulator arms with different kinematic properties. Finally, we implement and evaluate the deep neural network-based PMP on a Universal Robots manipulator, comparing it with traditional kinematic controllers and assessing its physical interaction capabilities and accuracy.
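A minimal sketch of the core mechanism (an untrained stand-in network instead of the paper's trained model): differentiate the learned forward map to get a Jacobian, then take damped least-squares steps toward the goal without explicit inverse kinematics:

```python
# A minimal sketch: once a network f maps joint angles to end-effector
# position, its Jacobian (via autograd) drives the joints toward a goal
# without explicitly solving the inverse kinematics.
import torch

torch.manual_seed(0)
f = torch.nn.Sequential(               # untrained stand-in for the model
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 3))

q = torch.zeros(3)                     # joint angles
goal = torch.tensor([0.2, -0.1, 0.3])
print("initial error:", (f(q) - goal).norm().item())

for _ in range(200):
    J = torch.autograd.functional.jacobian(f, q)   # 3x3 Jacobian of f at q
    err = goal - f(q)
    # Damped least-squares step, as in resolved-rate-style control.
    dq = J.T @ torch.linalg.solve(J @ J.T + 1e-3 * torch.eye(3), err)
    q = q + 0.1 * dq

print("final error:", (f(q) - goal).norm().item())  # should drop sharply
```

With a trained forward model in place of the random network, the same loop realizes the goal-directed behavior the abstract describes.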
A topological space has a domain model if it is homeomorphic to the maximal point space $\mathrm{Max}(P)$ of a domain $P$. Lawson proved that every Polish space $X$ has an $\omega$-domain model $P$ and that, for such a model $P$, $\mathrm{Max}(P)$ is a $G_{\delta}$-set of the Scott space of $P$. Martin (2003) then asked whether it is true that for every $\omega$-domain $Q$, $\mathrm{Max}(Q)$ is a $G_{\delta}$-set of the Scott space of $Q$. In this paper, we give a negative answer to Martin’s long-standing open problem by constructing a counterexample. The counterexample actually shows that the answer is no even for $\omega$-algebraic domains. In addition, we construct an $\omega$-ideal domain $\widetilde{Q}$ for the constructed $Q$ such that their maximal point spaces are homeomorphic. Therefore, $\mathrm{Max}(Q)$ is a $G_{\delta}$-set of the Scott space of the new model $\widetilde{Q}$.
Smooth Infinitesimal Analysis (SIA) is a remarkable late twentieth-century theory of analysis. It is based on nilsquare infinitesimals, and does not rely on limits. SIA poses a challenge of motivating its use of intuitionistic logic beyond merely avoiding inconsistency. The classical-modal account(s) provided here attempt to do just that. The key is to treat the identity of an arbitrary nilsquare, e, in relation to 0 or any other nilsquare, as objectually vague or indeterminate—pace a famous argument of Evans [10]. Thus, we interpret the necessity operator of classical modal logic as “determinateness” in truth-value, naturally understood to satisfy the modal system S4 (the accessibility relation on worlds being reflexive and transitive). Then, appealing to the translation due to Gödel et al., and its proof-theoretic faithfulness (“mirroring theorem”), we obtain a core classical-modal interpretation of SIA. Next we observe a close connection with Kripke semantics for intuitionistic logic. However, to avoid contradicting SIA’s non-classical treatment of identity relating nilsquares, we translate “=” with a non-logical surrogate, ‘E,’ with requisite properties. We then take up the interesting challenge of adding new axioms to the core CM interpretation. Two mutually incompatible ones are considered: one being the positive stability of identity and the other being a kind of necessity of indeterminate identity (among nilsquares). Consistency of the former is immediate, but the proof of consistency of the latter is a new result. Finally, we consider moving from CM to a three-valued, semi-classical framework, SCM, based on the strong Kleene axioms. This provides a way of expressing “indeterminacy” in the semantics of the logic, arguably improving on our CM. SCM is also proof-theoretically faithful, and the extensions by either of the new axioms are consistent.
On both global and local levels, one can observe a trend toward the adoption of algorithmic regulation in the public sector, with the Chinese social credit system (SCS) serving as a prominent and controversial example of this phenomenon. Within the SCS framework, cities play a pivotal role in its development and implementation, both as evaluators of individuals and enterprises and as subjects of evaluation themselves. This study engages in a comparative analysis of SCS scoring mechanisms for individuals and enterprises across diverse Chinese cities while also scrutinizing the scoring system applied to cities themselves. We investigate the extent of algorithmic regulation exercised through the SCS, elucidating its operational dynamics at the city level in China and assessing its interventionism, especially concerning the involvement of algorithms. Furthermore, we discuss ethical concerns surrounding the SCS’s implementation, particularly regarding transparency and fairness. By addressing these issues, this article contributes to two research domains: algorithmic regulation and discourse surrounding the SCS, offering valuable insights into the ongoing utilization of algorithmic regulation to tackle governance and societal challenges.