Urban and rural drainage systems are highly complex, and although many types of robots have been designed to inspect them, mainstream pipeline inspection robots are currently dominated by four-wheeled designs. In this study, the shortcomings of four-wheeled pipeline robots were analyzed, including poor passability, difficulties in spatial positioning and orientation, and the limited effectiveness of conventional two-degree-of-freedom observation systems. Based on these issues, a mathematical model of the robot's spatial pose inside the pipeline was developed, along with the spatial geometric constraints and speed characteristics during cornering. This study was intended to reveal the spatial geometric parameter limitations and the kinematic characteristics of the four-wheeled pipeline robot under these constraints, and to provide corresponding design recommendations. To address the limitations of the conventional two-degree-of-freedom vision component, a three-degree-of-freedom vision component was designed, and forward kinematics analysis was conducted using standard Denavit-Hartenberg parametric modeling, revealing its motion speed and characteristics. Based on this vision component, a new concept of in-pipeline robot vision was proposed, providing new references for the design of four-wheeled pipeline robots.
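The forward kinematics step can be illustrated with a standard Denavit-Hartenberg sketch in Python. The 3-DoF chain and its DH parameters below are hypothetical placeholders, not the paper's actual vision-component geometry.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms; returns the 4x4 base-to-tip transform."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 3-DoF pan-tilt-roll camera chain; (d, a, alpha) per joint,
# link lengths in metres are purely illustrative.
dh_table = [(0.05, 0.00, np.pi / 2),
            (0.00, 0.10, 0.0),
            (0.00, 0.04, 0.0)]
T = forward_kinematics([0.0, 0.0, 0.0], dh_table)
print(np.round(T[:3, 3], 3))  # camera position at the zero configuration
```

Joint rates map to tip velocity through the Jacobian of this chain, which is how the motion-speed characteristics of such a component are typically derived.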
This theoretical pearl shows how a graphical, relational, point-free, and calculational approach to linear algebra, known as graphical linear algebra, can be used to reason not only about matrices (and matrix algebra, as can be found in the literature) but also about vector spaces and, more generally, linear relations. Linear algebra is usually seen as the study of vector spaces and linear transformations. However, to reason effectively with subspaces in a point-free and calculational manner, both can be generalized to a unifying concept, much like in relational algebra: linear relations. While the semantics is relational, the syntax is graphical and uses string diagrams: two-dimensional formal diagrams that represent the linear relations. Most importantly, in a number of cases, the relational semantics allows algorithms and properties to be derived calculationally instead of merely verified. Our approach is to proceed primarily by examples, which involve finding inverses, switching from an implicit basis to an explicit basis (solving a homogeneous linear system), and exploring both the exchange lemma and Zassenhaus' algorithm.
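As a conventional numeric counterpart to the diagrammatic treatment (not the calculational derivation the pearl develops), switching from an implicit basis to an explicit one amounts to computing a basis of the kernel of a homogeneous system; the matrix below is an illustrative example.

```python
import numpy as np

# Implicit description: the subspace {x : A x = 0} for a sample matrix A.
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])   # rank 1, so the kernel is 2-dimensional

# Explicit description: an orthonormal basis for the kernel, read off the SVD.
_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int((s > tol).sum())
kernel_basis = Vt[rank:].T          # columns span {x : A x = 0}

assert np.allclose(A @ kernel_basis, 0.0)
print(kernel_basis.shape)           # (3, 2): two basis vectors in R^3
```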
This paper presents a general approach to synthesizing closed-loop robots for machining and manufacturing of complex quadric surfaces, such as toruses, helicoids, and helical tubes. The proposed approach begins by employing finite screw theory to describe the motion sets generated by prismatic, rotational, and helical joints. Subsequently, generatrices and generating lines are put forward and combined for type synthesis of serial kinematic limbs capable of generating single-DoF translations along spatial curves and two-DoF translations on complex quadric surfaces. In this manner, the two-DoF translational motion patterns on these complex quadric surfaces are algebraically defined and expressed as finite screw sets. Type synthesis of closed-loop robots having the newly defined motion patterns can thus be carried out based upon analytical computations of finite screws. As an application of the presented approach, closed-loop robots for machining toruses are synthesized, resulting in four-DoF and five-DoF standard and derived limbs together with their corresponding assembly conditions. Additionally, brief descriptions of robots for machining helicoids and helical tubes are provided, along with a comprehensive list of all the feasible limbs for these kinds of robots. The robots synthesized in this paper have promising applications in machining and manufacturing of spatial curves and surfaces, enabling precise control of machining trajectories ensured by mechanism structures and achieving high precision at low cost.
Simulations of critical phenomena, such as wildfires, epidemics, and ocean dynamics, are indispensable tools for decision-making. Many of these simulations are based on models expressed as Partial Differential Equations (PDEs). PDEs are invaluable inductive inference engines, as their solutions generalize beyond the particular problems they describe. Methods and insights acquired by solving the Navier–Stokes equations for turbulence can be very useful in tackling the Black-Scholes equations in finance. Advances in numerical methods, algorithms, software, and hardware over the last 60 years have enabled simulation frontiers that were unimaginable a couple of decades ago. However, there are increasing concerns that such advances are not sustainable. The energy demands of computers are soaring, while the availability of vast amounts of data, together with Machine Learning (ML) techniques, is challenging classical methods of inference and even the need for PDE-based forecasting of complex systems. I believe that the relationship between ML and PDEs needs to be reset. PDEs are not the only answer to modeling, and ML is not necessarily a replacement but a potent companion of human thinking. Algorithmic alloys of scientific computing and ML present a disruptive potential for the reliable and robust forecasting of complex systems. In order to achieve these advances, we argue for a rigorous assessment of their relative merits and drawbacks and the adoption of probabilistic thinking for developing complementary concepts between ML and scientific computing. The convergence of AI and scientific computing opens new horizons for scientific discovery and effective decision-making.
We introduce the framework FreeCHR, which formalizes the embedding of Constraint Handling Rules (CHR) into a host language using the concept of initial algebra semantics from category theory. We thereby establish a high-level implementation scheme for CHR as well as a common formalization for both theory and practice. We propose a lifting of the syntax of CHR via an endofunctor in the category Set, and a lifting of the very abstract operational semantics of CHR into FreeCHR using the free algebra generated by the endofunctor. We prove soundness and completeness of this lifting with respect to the original definition. We also propose a first abstract execution algorithm and prove its correctness with respect to the operational semantics. Finally, we demonstrate the practicality of our approach by giving two possible implementations of this algorithm, in Haskell and Python. Under consideration in Theory and Practice of Logic Programming.
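For readers unfamiliar with CHR, the sketch below illustrates CHR-style multiset rewriting in plain Python using the classic gcd program; it is an illustration of CHR's rule-based semantics, not the FreeCHR implementation described in the paper.

```python
def apply_gcd_rules(store):
    """Exhaustively apply the two classic CHR gcd rules to a multiset of ints:
         gcd(0) <=> true                          (remove zero constraints)
         gcd(n) \\ gcd(m) <=> m >= n > 0 | gcd(m mod n)
    """
    store = list(store)
    changed = True
    while changed:
        changed = False
        # Rule 1: drop a gcd(0) constraint (keep one element as the result).
        if 0 in store and len(store) > 1:
            store.remove(0)
            changed = True
            continue
        # Rule 2: simpagation, replace the larger constraint by the remainder.
        for i in range(len(store)):
            for j in range(len(store)):
                if i != j and store[i] > 0 and store[j] >= store[i]:
                    store[j] = store[j] % store[i]
                    changed = True
    return store

print(apply_gcd_rules([12, 8, 30]))  # -> [2], the gcd of the inputs
```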
This article introduces a dome-type soft tactile sensor that can autonomously adjust its stiffness to evaluate surface contact characteristics, including the elastic modulus, contact force, and the presence of abnormal hardness within soft materials, using a strain gauge as a single sensing element. The strain sensor element is placed at the tip of the dome to measure the deformations during contact that reflect the properties of the contacted object. Using machine learning techniques, the sensor system can accurately predict these characteristics in various materials with an error rate of less than approximately 8%. A hybrid approach that combines experimental and simulation data enables the sensor to be trained effectively, generating sufficient data for accurate predictions without extensive experiments. The high accuracy of the machine learning models demonstrates that the sensor system can precisely calculate the elastic modulus and the depth of a defect. The adaptability and precision of the proposed sensor make it ideal for applications in medical diagnostics and other fields requiring careful interaction with soft materials. Furthermore, this approach can serve as a reference for exploiting the properties of soft materials to achieve task-specific morphology without redesigning soft sensors or soft robots.
This book is the only scientific biography of the Nobel Prize-winning Indian American chemist Har Gobind Khorana. It begins with the story of Khorana's origins in poverty in rural India and how he managed to emerge from them to be trained in chemistry in Britain and Switzerland before immigrating to Canada and the United States. Science was the dominant focus of Khorana's life, and his biography is treated chronologically in conjunction with his scientific career.
The book explains in detail Khorana's most important scientific achievements, his role in deciphering the genetic code (the reason for his Nobel Prize), the first synthesis of a functional gene in the laboratory, the elucidation of the idea behind the PCR technology that has since become ubiquitous in biotech, and his seminal studies of how structure determines the function of biological macromolecules in membranes. Finally, it focuses on his scientific legacy, and what his career means for future generations of scientists.
Artificial intelligence is dramatically reshaping scientific research and is coming to play an essential role in scientific and technological development by enhancing and accelerating discovery across multiple fields. This book dives into the interplay between artificial intelligence and the quantum sciences, the outcome of a collaborative effort from world-leading experts. After presenting the key concepts and foundations of machine learning, a subfield of artificial intelligence, its applications in quantum chemistry and physics are presented in an accessible way, enabling readers to engage with emerging literature on machine learning in science. By examining its state-of-the-art applications, readers will discover how machine learning is being applied within their own field and appreciate its broader impact on science and technology. This book is accessible to undergraduates and more advanced readers from physics, chemistry, engineering, and computer science. Online resources include Jupyter notebooks that expand and develop upon key topics introduced in the book.
This paper presents a novel simplification calculus for propositional logic derived from Peirce’s existential graphs’ rules of inference and implication graphs. Our rules can be applied to propositional logic formulae in nested form, are equivalence-preserving, guarantee a monotonically decreasing number of variables, clauses and literals, and maximise the preservation of structural problem information. Our techniques can also be seen as higher-level SAT preprocessing, and we show how one of our rules (TWSR) generalises and streamlines most of the known equivalence-preserving SAT preprocessing methods. In addition, we propose a simplification procedure based on the systematic application of two of our rules (EPR and TWSR) which is solver-agnostic and can be used to simplify large Boolean satisfiability problems and propositional formulae in arbitrary form, and we provide a formal analysis of its algorithmic complexity in terms of space and time. Finally, we show how our rules can be further extended with a novel n-ary implication graph to capture all known equivalence-preserving preprocessing procedures.
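The paper's EPR and TWSR rules are not reproduced here, but the flavour of equivalence-preserving preprocessing can be conveyed by one of the simplest standard methods, clause subsumption: deleting any clause that is a superset of another clause leaves the CNF formula logically equivalent. A minimal sketch:

```python
def subsumption_eliminate(cnf):
    """Remove clauses subsumed by another clause (C subsumes D iff C is a
    subset of D). Deleting a subsumed clause preserves logical equivalence;
    duplicates are kept only once. Literals are signed integers (DIMACS style).
    """
    clauses = [frozenset(c) for c in cnf]
    kept = []
    for i, c in enumerate(clauses):
        subsumed = any(j != i and d <= c and (d < c or j < i)
                       for j, d in enumerate(clauses))
        if not subsumed:
            kept.append(sorted(c, key=abs))
    return kept

# The unit clause (x1) subsumes (x1 v ~x2); the duplicate (x2 v x3) is dropped.
print(subsumption_eliminate([[1], [1, -2], [2, 3], [2, 3]]))  # [[1], [2, 3]]
```

This quadratic-scan version is only illustrative; production preprocessors use watched-literal and occurrence-list indexing to make subsumption scale to large instances.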
Session types are type-theoretic specifications of communication protocols in concurrent or distributed systems. By codifying the structure of communication, they make software more reliable and easier to construct. Over recent decades, the topic has become a large and active research area within the field of programming language theory and implementation. Written by leading researchers in the field, this is the first text to provide a comprehensive introduction to the key concepts of session types. The thorough theoretical treatment is complemented by examples and exercises, suitable for use in a lecture course or for self-study. It serves as an entry point to the topic for graduate students and researchers.
The theory of kernels offers a rich mathematical framework for the archetypical tasks of classification and regression. Its core insight is the representer theorem, which asserts that an unknown target function underlying a dataset can be represented by a finite sum of evaluations of a single function, the so-called kernel function. Together with the famous kernel trick, which provides a practical way of incorporating such a kernel function into a machine learning method, a plethora of algorithms can be made more versatile. This chapter first introduces the mathematical foundations required for understanding the distinguished role of the kernel function and its consequences in terms of the representer theorem. Afterwards, we show how selected popular algorithms, including Gaussian processes, can be promoted to their kernel variants. In addition, several ideas on how to construct suitable kernel functions are provided, before demonstrating the power of kernel methods in the context of quantum (chemistry) problems.
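A minimal sketch of the representer theorem in action is kernel ridge regression with a Gaussian (RBF) kernel: the fitted function is exactly a finite sum of kernel evaluations at the training points. The data and hyperparameters below are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    """Representer theorem: the minimiser is f(x) = sum_i alpha_i k(x_i, x),
    with alpha solving the linear system (K + lam * I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
print(float(np.mean((pred - y) ** 2)))  # small training error
```

The kernel trick is visible in that the fit never touches an explicit feature map: only pairwise kernel evaluations enter the computation.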
In this chapter, we change our viewpoint and focus on how physics can influence machine learning research. In the first part, we review how tools of statistical physics can help to understand key concepts in machine learning such as capacity, generalization, and the dynamics of the learning process. In the second part, we explore yet another direction and try to understand how quantum mechanics and quantum technologies could be used to solve data-driven tasks. We provide an overview of the field, going from quantum machine learning algorithms that can be run on ideal quantum computers to kernel-based and variational approaches that can be run on current noisy intermediate-scale quantum devices.
In this chapter, we introduce the field of reinforcement learning and some of its most prominent applications in quantum physics and computing. First, we provide an intuitive description of the main concepts, which we then formalize mathematically. We introduce some of the most widely used reinforcement learning algorithms, starting with temporal-difference algorithms and Q-learning, followed by policy gradient methods and REINFORCE, and concluding with the interplay of both approaches in actor-critic algorithms. Furthermore, we introduce the projective simulation algorithm, which deviates from the aforementioned prototypical approaches and has multiple applications in the field of physics. Then, we showcase some prominent reinforcement learning applications, featuring examples in games; quantum feedback control; quantum computing, error correction, and information; and the design of quantum experiments. Finally, we discuss some potential applications and limitations of reinforcement learning in the field of quantum physics.
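The temporal-difference flavour of these methods can be conveyed with a minimal tabular Q-learning sketch on a toy chain environment; the environment and hyperparameters are illustrative, not taken from the chapter.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D chain: actions 0/1 move left/right, and a
    reward of 1 is given only on reaching the rightmost (terminal) state."""
    random.seed(0)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = int(Q[s][1] >= Q[s][0])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            future = 0.0 if s2 == n_states - 1 else max(Q[s2])
            # temporal-difference update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * future - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
print([int(q[1] > q[0]) for q in Q[:-1]])  # greedy policy per state
```

After training, the greedy policy moves right in every state, and the learned values decay geometrically with distance from the goal, as the discount factor predicts.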
This chapter discusses more specialized examples on how machine learning can be used to solve problems in quantum sciences. We start by explaining the concept of differentiable programming and its use cases in quantum sciences. Next, we describe deep generative models, which have proven to be an extremely appealing tool for sampling from unknown target distributions in domains ranging from high-energy physics to quantum chemistry. Finally, we describe selected machine learning applications for experimental setups such as ultracold systems or quantum dots. In particular, we show how machine learning can help in tedious and repetitive experimental tasks in quantum devices or in validating quantum simulators with Hamiltonian learning.
In this chapter, we describe basic machine learning concepts connected to optimization and generalization. Moreover, we present a probabilistic view on machine learning that enables us to deal with uncertainty in the predictions we make. Finally, we discuss various basic machine learning models such as support vector machines, neural networks, autoencoders, and autoregressive neural networks. Together, these topics form the machine learning preliminaries needed for understanding the contents of the rest of the book.
In this chapter, we review the growing field of research aiming to represent quantum states with machine learning models, known as neural quantum states. We introduce the key ideas and methods and review results about the capacity of such representations. We discuss in detail many applications of neural quantum states, including but not limited to finding the ground state of a quantum system, solving its time-evolution equation, quantum tomography, open-quantum-system dynamics and steady-state solutions, and quantum chemistry. Finally, we discuss the challenges to be solved to fully unleash the potential of neural quantum states.
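A standard neural-quantum-state ansatz is the restricted Boltzmann machine; the sketch below evaluates its unnormalised amplitude for one spin configuration, with randomly chosen illustrative weights (real weights here, so the amplitude is a positive number).

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """Unnormalised RBM wavefunction amplitude for spin configuration sigma:
    psi(sigma) = exp(a . sigma) * prod_j 2 cosh(b_j + (W sigma)_j),
    where the hidden units have been traced out analytically."""
    theta = b + W @ sigma
    return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(1)
n_visible, n_hidden = 4, 8
a = 0.1 * rng.standard_normal(n_visible)        # visible biases
b = 0.1 * rng.standard_normal(n_hidden)         # hidden biases
W = 0.1 * rng.standard_normal((n_hidden, n_visible))  # couplings

sigma = np.array([1, -1, 1, -1])  # one configuration of 4 spins
print(rbm_amplitude(sigma, a, b, W))
```

In variational Monte Carlo, such amplitudes are sampled and the parameters a, b, W are optimised to minimise the energy of a target Hamiltonian; complex-valued weights are used when the wavefunction needs a nontrivial phase.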