In this article, we improve the efficiency of a turbine blade inspection robotic workcell. The workcell consists of a stationary camera and a 6-axis serial robot that holds a blade and presents different zones of the blade to the camera for inspection. The problem at hand combines a 6-DOF (degree-of-freedom) continuous optimization of the camera placement with a discrete combinatorial optimization of the sequence of inspection poses (images). For each image, all robot configurations (up to eight) are taken into consideration. A novel combined approach, called blind dynamic particle swarm optimization (BD-PSO), is introduced to simultaneously obtain the optimal design in both domains. The objective is to minimize the cycle time of the inspection process while avoiding any collisions. Even though PSO is widely used in engineering problems, the novelty of our combinatorial optimization method lies in its ability to handle traveling salesman problems in which the distances between the cities are unknown and subject to change. This highly unpredictable environment arises in the inspection cell, where the cycle time between two images changes with the camera placement.
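The paper's BD-PSO is not reproduced here, but the following minimal Python sketch illustrates the general idea of sequencing poses with a particle swarm when pairwise costs are unknown and may change between queries. It uses a random-key encoding and a generic PSO update; the helper cost_fn is a hypothetical stand-in for a cycle-time evaluator, and none of this should be read as the authors' actual algorithm.

```python
import numpy as np

def pso_sequence(cost_fn, n_poses, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Random-key PSO sketch for sequencing when pairwise costs may change.

    cost_fn(order) returns the cycle time of visiting poses in `order`;
    it may return different values over time, so remembered bests are
    re-evaluated every iteration. Illustrative only, not BD-PSO itself.
    """
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, n_poses))   # continuous "random keys"
    v = np.zeros_like(x)
    pbest_x = x.copy()
    gbest_x = x[0].copy()

    def decode(keys):
        return np.argsort(keys)               # permutation of pose indices

    for _ in range(iters):
        # Re-score remembered bests: the cost landscape may have shifted.
        pbest_f = np.array([cost_fn(decode(p)) for p in pbest_x])
        gbest_f = cost_fn(decode(gbest_x))

        f = np.array([cost_fn(decode(p)) for p in x])
        improved = f < pbest_f
        pbest_x[improved] = x[improved]
        pbest_f[improved] = f[improved]
        if pbest_f.min() < gbest_f:
            gbest_x = pbest_x[pbest_f.argmin()].copy()

        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        x = x + v

    return decode(gbest_x)
```

Because personal and global bests are re-evaluated on every iteration rather than cached, the swarm can track a cost landscape that shifts as the camera placement changes, which is the situation the abstract describes.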
This chapter examines science and technology from a regulatory and theoretical perspective, providing an important background to the substantive issues discussed throughout the text. The first part of the chapter looks at regulatory theory as it relates to technology, beginning with the general approach of John Braithwaite and moving on to more recent approaches to information technology and the internet, specifically the ‘law is code’ approach of Lawrence Lessig and its further development by Andrew Murray. Next, the chapter examines political theory, considering the relationship between individuals and societies – how the behaviour of citizens is best managed according to competing interests, and how governments should legislate to manage these interests. The third part of the chapter examines the basic theories of ethical reasoning, deontology and consequentialism. Lastly, the chapter discusses the nature of the scientific knowledge that underlies technology, and how scientific knowledge becomes established.
This paper considers the task allocation problem under the requirement that the assignments of certain critical tasks must be maximized when the network cannot accommodate all tasks, owing to the limited capacity of each unmanned aerial vehicle (UAV). To solve this problem, this paper proposes an extended performance impact algorithm with critical tasks (EPIAC), based on the traditional performance impact algorithm. A novel task list resizing phase is developed in EPIAC to deal with the capacity constraint of each UAV while maximizing the assignments of critical tasks. Numerical simulations demonstrate the outstanding performance of EPIAC compared with other algorithms.
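EPIAC itself is not reproduced here, but a minimal greedy sketch in Python (with hypothetical inputs tasks, uavs, cost, and capacity) conveys the basic shape of the problem: a hard per-UAV capacity and a preference for assigning critical tasks first.

```python
def allocate(tasks, uavs, cost, capacity):
    """Greedy sketch of capacity-limited allocation favouring critical tasks.

    tasks:    list of (task_id, is_critical) pairs
    uavs:     list of UAV identifiers
    cost:     dict mapping (uav, task_id) -> float (e.g. travel time)
    capacity: maximum number of tasks per UAV
    Illustrative only; EPIAC builds on the performance impact algorithm
    and adds a dedicated task list resizing phase.
    """
    load = {u: 0 for u in uavs}
    assignment = {}
    # Critical tasks first, so capacity never squeezes them out.
    for task_id, is_critical in sorted(tasks, key=lambda t: not t[1]):
        candidates = [u for u in uavs if load[u] < capacity]
        if not candidates:
            break  # total network capacity exhausted
        best = min(candidates, key=lambda u: cost[(u, task_id)])
        assignment[task_id] = best
        load[best] += 1
    return assignment
```

A real allocator would also reason about task ordering and timing along each UAV's route; the sketch only conveys the tension between limited capacity and critical-task priority.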
Anderson and Belnap presented indexed Fitch-style natural deduction systems for the relevant logics R, E, and T. This work was extended by Brady to cover a range of relevant logics. In this paper I present indexed tree natural deduction systems for the Anderson–Belnap–Brady systems and show how to translate proofs in one format into proofs in the other, which establishes the adequacy of the tree systems.
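For readers unfamiliar with the indexed format, the flavour of the subscript discipline in the Fitch-style systems can be conveyed roughly as follows (notation simplified, and stated for R; E and T impose further restrictions, and the precise rules are given in the paper and in Anderson and Belnap's Entailment):
\[
(\to\mathrm{E}):\ \frac{(A \to B)_a \qquad A_b}{B_{\,a \cup b}}
\qquad\qquad
(\to\mathrm{I}):\ \frac{A_{\{k\}} \ \vdash\ B_a}{(A \to B)_{\,a \setminus \{k\}}}\ \ (k \in a,\ k\ \text{fresh}),
\]
where the proviso $k \in a$ enforces relevance: the hypothesis must actually have been used in deriving the conclusion.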
For over a century, the field of forensic science has been applying contemporary technology to the investigation of crime. The imperative to identify offenders, particularly in relation to serious offences, has meant that governments are willing to invest in new technologies to achieve this objective. Fingerprinting, first developed in the late 19th century to identify individuals based on the unique patterns on the fingertips, led the way as one of the earliest means of identifying people, and is still used today in a digitised format.
Dictionary learning has emerged as a powerful method for the data-driven extraction of features from data. The initial focus was algorithmic, but recently there has been increasing interest in the theoretical underpinnings. These rely on information-theoretic analytic tools and help us understand the fundamental limitations of dictionary-learning algorithms. We focus on theoretical aspects and summarize results on dictionary learning from vector- and tensor-valued data. Results are stated in terms of lower and upper bounds on the sample complexity of dictionary learning, defined as the number of samples needed to identify or reconstruct the true dictionary underlying the data from noiseless or noisy samples, respectively. Many of the analytic tools that yield these results come from information theory, including restating the dictionary-learning problem as a channel-coding problem and connecting the analysis of minimax risk in statistical estimation to Fano’s inequality. In addition to highlighting the effects of parameters on the sample complexity of dictionary learning, we show the potential advantages of dictionary learning from tensor data and point to problems that remain unaddressed.
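As a rough illustration of the setting (in notation of my choosing, not necessarily the survey's): each observed sample is modelled as a sparse combination of dictionary columns,
\[
\mathbf{y} = \mathbf{D}\mathbf{x} + \mathbf{w}, \qquad \mathbf{D} \in \mathbb{R}^{m \times p},\ \ \mathbf{x}\ \text{sparse},\ \ \mathbf{w}\ \text{noise},
\]
and Fano's inequality is the generic device that turns a mutual-information bound into a minimax lower bound: for any estimator that must distinguish among $L$ well-separated candidate dictionaries from $N$ samples,
\[
P_{\mathrm{err}} \;\ge\; 1 - \frac{I(\mathbf{D};\mathbf{Y}^{N}) + \log 2}{\log L}.
\]
The exact packing constructions, separation conditions, and constants in the surveyed bounds differ from this schematic statement.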
Technology offers a means of developing new therapies to treat human illness and has great potential to reduce suffering and increase living standards around the world. For this reason, there is a large investment in research and development for new pharmaceuticals and medical devices, and the healthcare sector is rich with new forms of technology and legal issues associated with them. Fields such as genomics, the study of the genome, are providing a more detailed understanding of human health, ranging from cardiovascular diseases to cancer, along with improved methods of prevention and treatment. Assisted reproductive technologies are giving couples who would otherwise not have been able to have children the opportunity to do so, and allowing serious conditions to be identified earlier during, or even prior to, a pregnancy. Stem cell technologies will lead to replacement organs and body parts in coming decades, and already form the basis of treatments for serious conditions such as leukaemia and myeloma. Artificial intelligence is already transforming areas of medicine such as radiology and pathology: screening images for disease and other abnormalities under the supervision of doctors, saving time and improving access to healthcare for patients in rural and remote areas.
We discuss the question of learning distributions over permutations of a given set of choices, options or items based on partial observations. This is central to capturing so-called “choice” in a variety of contexts. The question of learning distributions over permutations also arises beyond capturing “choice”, e.g., in tracking a collection of objects using noisy cameras, or in aggregating rankings of web pages using the outcomes of multiple search engines. Here we focus on learning distributions over permutations from marginal distributions of two types: first-order marginals and pair-wise comparisons. We emphasize the ability to identify the entire distribution over permutations as well as the “best ranking”.
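Concretely (in notation of my choosing), if $p$ is a distribution over permutations $\sigma$ of $n$ items, the two types of marginal data are
\[
M_{ij} \;=\; \Pr_{\sigma \sim p}\bigl[\sigma(i) = j\bigr]
\quad\text{(first-order marginals: item } i \text{ placed in position } j\text{)},
\]
\[
P_{ij} \;=\; \Pr_{\sigma \sim p}\bigl[\sigma(i) < \sigma(j)\bigr]
\quad\text{(pair-wise comparisons: item } i \text{ ranked above item } j\text{)}.
\]
The first-order marginal matrix $M$ is doubly stochastic, since each item occupies exactly one position and each position holds exactly one item.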
Designing hierarchical reinforcement learning algorithms that exhibit safe behaviour is not only vital for practical applications but also facilitates a better understanding of an agent’s decisions. We tackle this problem in the options framework (Sutton, Precup & Singh, 1999), a particular way to specify temporally abstract actions that allow an agent to use sub-policies with start and end conditions. We consider a behaviour safe if it avoids regions of the state space with high uncertainty in the outcomes of actions. We propose an optimization objective that learns safe options by encouraging the agent to visit states with higher behavioural consistency. The proposed objective results in a trade-off between maximizing the standard expected return and minimizing the effect of model uncertainty in the return. We propose a policy gradient algorithm to optimize the constrained objective function. We examine the quantitative and qualitative behaviours of the proposed approach in a tabular grid world, a continuous-state puddle world, and three games from the Arcade Learning Environment: Ms. Pacman, Amidar, and Q*Bert. Our approach achieves a reduction in the variance of return, boosts performance in environments with intrinsic variability in the reward structure, and compares favourably both with primitive actions and with risk-neutral options.
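The abstract does not spell out the objective, but the trade-off it describes can be written generically (a schematic form of my own, not the paper's exact objective) as
\[
\max_{\theta}\; \mathbb{E}_{\pi_\theta}\!\left[G\right] \;-\; \psi\, U_{\pi_\theta}\!\left[G\right],
\]
where $G$ is the return, $U_{\pi_\theta}[G]$ is a penalty capturing the effect of model uncertainty on the return, and $\psi \ge 0$ controls the trade-off; the constrained formulation instead bounds $U_{\pi_\theta}[G]$ and is optimized with a policy gradient method.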
Learn about the latest developments in Automotive Ethernet technology and implementation with this fully revised third edition. Including 20% new material and greater technical depth, coverage is expanded to include detailed explanations of the new PHY technologies 10BASE-T1S (including multidrop) and 2.5, 5, and 10GBASE-T1, discussion of EMC interference models, and description of the new TSN standards for automotive use. The book features details of security concepts, an overview of power-saving possibilities with Automotive Ethernet, and an explanation of functional safety in the context of Automotive Ethernet, and additionally provides an overview of test strategies and the main lessons learned. Industry pioneers share the technical and non-technical decisions that have led to the success of Automotive Ethernet, covering everything from electromagnetic requirements and physical layer technologies, QoS, and the use of VLANs, IP and service discovery, to network architecture and testing. This is the guide for engineers, technical managers and researchers designing components for in-car electronics, and for those interested in the strategy of introducing a new technology.
The size-Ramsey number of a graph F is the smallest number of edges in a graph G with the Ramsey property for F, that is, with the property that any 2-colouring of the edges of G contains a monochromatic copy of F. We prove that the size-Ramsey number of the grid graph on n × n vertices is bounded from above by n^{3+o(1)}.
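In symbols (standard notation, not taken verbatim from the paper), the size-Ramsey number of $F$ is
\[
\hat{r}(F) \;=\; \min\bigl\{\, |E(G)| \;:\; G \to (F)_2 \,\bigr\},
\]
where $G \to (F)_2$ means that every 2-colouring of $E(G)$ contains a monochromatic copy of $F$; the result stated above then reads $\hat{r}(G_{n\times n}) \le n^{3+o(1)}$ for the $n \times n$ grid $G_{n\times n}$.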
Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically, which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types (HITs). This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of HITs. These new primitives allow the direct definition of function and propositional extensionality as well as quotient types, all with computational content. Moreover, thanks to copatterns, bisimilarity is equivalent to equality for coinductive types. The adoption of cubical type theory extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity.
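As a reminder of what these principles assert (stated here informally in type-theoretic notation rather than Agda syntax), function and propositional extensionality are the maps
\[
\mathsf{funExt} : \Bigl(\textstyle\prod_{x:A} f\,x \equiv g\,x\Bigr) \to f \equiv g,
\qquad
\mathsf{propExt} : (P \to Q) \to (Q \to P) \to P \equiv Q \ \ \text{(for propositions } P, Q\text{)},
\]
which, in the cubical setting described above, can be defined with computational content instead of being postulated as axioms.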