The purpose of this chapter is to contribute to the current discussion on what robot laws might look like once artificial intelligence (AI)-enabled robots become more widespread and integrated within society. The starting point of the discussion is Asimov’s three laws of robotics and the realization that while Asimov’s laws are norms formulated in human language, the behavior of robots is fundamentally controlled by algorithms, that is, by code. Three conclusions can be drawn from this starting point. The first is that laws enacted for humans will have to be translated into laws for robots, which, as discussed here, will be a difficult challenge for legal scholars. The second is that, owing to the norms that exist within society, the same rules will be simultaneously present in the natural-language version of laws for humans and the code version of laws for robots. The third is that translating the robots’ actions and outputs back into human language will also be a challenging process. The chapter further argues that the regulation of a robot’s behavior largely overlaps with the current discourse on explainable AI, with the added difficulty of understanding how explaining legal decisions differs from explaining the outputs of AI and AI-enabled robots in general.
In previous research, we considered several novel problems posed by robot accidents and assessed related legal and economic approaches to the creation of optimal incentives for robot manufacturers, operators, and prospective victims. In this chapter, we synthesize our previous work in a unified analysis. We begin with a discussion about the problems and legal challenges posed by robot torts. Next, we describe the novel liability regime we proposed, that is, “manufacturer residual liability,” which blends negligence-based rules and strict manufacturer liability rules to create optimal incentives for robot torts. This regime makes operator and victim liability contingent upon their negligence (incentivizing them to act diligently) and makes manufacturers residually liable for nonnegligent accidents (incentivizing them to make optimal investments in researching safer technologies). This rule has the potential to drive unsafe technology out of the market and also to incentivize operators to adopt optimal activity levels in their use of automated technologies.
In Chapter 7, the history of statistical analysis is reviewed and its legacy discussed. Four situations of interest to machine learning evaluation are subsequently discussed within different statistical paradigms: the comparison of two classifiers on a single domain; the comparison of multiple classifiers on a single domain; the comparison of two classifiers on multiple domains; and the comparison of multiple classifiers on multiple domains. The three statistical paradigms considered for each of these situations are the null hypothesis statistical testing (NHST) setting; an enhanced Fisher-flavored methodology that adds the notions of confidence intervals, effect size, and power analysis to NHST; and a newer approach based on Bayesian reasoning.
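The first situation the chapter lists, comparing two classifiers on a single domain, can be illustrated with a short sketch. This is a generic example of the NHST and "enhanced" paradigms the abstract names, not material from the book; the fold accuracies are invented for illustration.

```python
# Hypothetical sketch: comparing two classifiers on one domain via a
# paired t-test (NHST), plus the effect-size view of the enhanced,
# Fisher-flavored paradigm. The accuracies below are illustrative only.
import numpy as np
from scipy import stats

# Per-fold accuracies of two classifiers on the same 10 CV folds.
acc_a = np.array([0.81, 0.83, 0.79, 0.85, 0.82, 0.80, 0.84, 0.83, 0.81, 0.82])
acc_b = np.array([0.78, 0.80, 0.77, 0.82, 0.79, 0.78, 0.81, 0.80, 0.78, 0.79])

# Paired t-test (classic NHST): is the mean accuracy difference zero?
t_stat, p_value = stats.ttest_rel(acc_a, acc_b)

# Effect size (Cohen's d on the paired differences), which the enhanced
# methodology adds so that significance is not reported in isolation.
diff = acc_a - acc_b
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

A Bayesian treatment of the same comparison would instead report a posterior over the accuracy difference, which is the third paradigm the chapter considers.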
Humans categorize themselves and others on the basis of many attributes, forging a range of social groups. Such group identities can influence our perceptions, attitudes, beliefs, and behaviors toward others and ourselves. While decades of psychological research have examined how dividing the world into “us” and “them” impacts our attitudes, beliefs, and behaviors toward others, a new and emerging area of research considers how humans can ascribe social group memberships to humanoid robots. Specifically, our social perceptions and evaluations of humanoids can be shaped by subtle characteristics of a robot’s appearance or other features, particularly if these characteristics are interpreted through the lens of important human group identities. The current chapter reviews research on the psychology of intergroup relations to consider its manifestations and expressions in the context of human–robot interaction. We first consider how robots, despite being nonliving, can be ascribed certain identities (e.g., race, gender, and national origin). We then consider how this can in turn impact attitudes, beliefs, and behaviors toward such technology. Given the nascency of this field of study, we highlight existing gaps in our knowledge and important directions for future research. The chapter concludes by considering the societal, market, and legal implications of bias in the context of human–robot interaction.
In recent years there has been growing research interest in religion within the robotics community. Along these lines, this chapter provides a case study of the ‘religious robot’ SanTO, the world’s first robot designed to be ‘Catholic’. The robot was created to explore the theoretical basis for applying robot technology in the religious space. While this technology has many potential benefits for users, the use and design of religious or other social robots raises a number of ethical, legal, and social issues (ELSI). The chapter, which is concerned with such issues, starts with a general introduction, offers an ELSI analysis, and finally develops conclusions from an ethical design perspective.
The First Amendment of the U.S. Constitution protects “the freedom of speech.” Courts have also said it protects “freedom of thought.” But does that mean we have a right to speak with or listen to the speech of robots? Does it mean we have a First Amendment right to recruit robots to help us think or change the way we think? By “robots” here, I mean the robots that are the primary focus of this book: Humanoid robots that might not only emulate the way we solve intellectual challenges or express ourselves, but also emulate us physically – by taking on a physical form similar to that of human beings and moving and acting in the physical space of our homes, workplaces, or other spaces, and not simply the virtual space on our computers.
Religion and artificial intelligence are now deeply enmeshed in humanity's collective imagination, narratives, institutions, and aspirations. Their growing entanglement also runs counter to several dominant narratives that engage with long-standing historical discussions regarding the relationship between the ‘sacred’ and the ‘secular’: technology and science. This Cambridge Companion explores the fields of Religion and AI comprehensively and provides an authoritative guide to their symbiotic relationship. It examines established topics, such as transhumanism, together with new and emerging fields, notably computer simulations of religion. Specific chapters are devoted to Judaism, Christianity, Islam, Hinduism, and Buddhism, while others demonstrate that entanglements between religion and AI are not always encapsulated through such a paradigm. Collectively, the volume addresses issues that AI raises for religions, and contributions that AI has made to religious studies, especially the conceptual and philosophical issues inherent in the concept of an intelligent machine, and social-cultural work on attitudes to AI and its impact on contemporary life. The diverse perspectives in this Companion demonstrate how all religions are now interacting with artificial intelligence.
In scenarios such as environmental data collection and traffic monitoring, timely responses to real-time situations are facilitated by persistently accessing nodes with revisiting constraints using unmanned aerial vehicles (UAVs). However, imbalanced task allocation may pose risks to the safety of UAVs and potentially lead to failures in monitoring tasks. For instance, continuous visits to nodes without replenishment may damage UAV batteries, while delays in recharging could result in missed task deadlines, ultimately causing task failures. Therefore, this study investigates the problem of achieving balanced multi-UAV path planning for persistent monitoring tasks, which, to the authors’ knowledge, has not been previously researched. The main contribution of this study is the proposal of two novel indicators to assist in balancing task allocation in multi-UAV path planning for persistent monitoring. The first indicator, the waiting factor, reflects the urgency of a task node waiting to be accessed; the second, the difficulty level, measures the difficulty of the tasks undertaken by a UAV. By minimizing differences in difficulty level among UAVs, we can ensure equilibrium in task allocation. For a single UAV, an ant colony initialized genetic algorithm (ACIGA) is proposed to plan its path and obtain its difficulty level. For multiple UAVs, the K-means clustering algorithm is improved on the basis of difficulty levels to achieve balanced task allocation. Simulation experiments demonstrated that the difficulty level could effectively reflect the difficulty of tasks and that the proposed algorithms could enable UAVs to achieve balanced task allocation.
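The idea of rebalancing a clustering-based task allocation by a per-UAV difficulty measure can be sketched as follows. This is a toy illustration, not the paper's method: the "difficulty level" here is a stand-in proxy (sum of node-to-centroid distances per cluster), and the ACIGA path planner is not reproduced.

```python
# Toy sketch of difficulty-balanced task allocation for multiple UAVs:
# plain K-means clusters the task nodes, then one balancing pass moves a
# node from the hardest cluster toward the easiest one. The difficulty
# proxy and rebalancing rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(30, 2))   # task-node coordinates (toy)
k = 3                                       # number of UAVs

def kmeans(points, k, iters=50):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        # Keep the old center if a cluster happens to be empty.
        centers = np.array([points[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers

def difficulty(points, labels, centers, j):
    """Proxy difficulty of UAV j's tasks: total node-to-centroid distance."""
    members = points[labels == j]
    return np.linalg.norm(members - centers[j], axis=1).sum()

labels, centers = kmeans(nodes, k)
diffs = np.array([difficulty(nodes, labels, centers, j) for j in range(k)])

# Balancing pass: reassign the hardest cluster's node nearest to the
# easiest cluster's centroid, shrinking the difficulty gap.
hard, easy = diffs.argmax(), diffs.argmin()
hard_idx = np.where(labels == hard)[0]
dist_to_easy = np.linalg.norm(nodes[hard_idx] - centers[easy], axis=1)
labels[hard_idx[dist_to_easy.argmin()]] = easy
new_diffs = np.array([difficulty(nodes, labels, centers, j) for j in range(k)])
print("difficulty spread before/after:", np.ptp(diffs), np.ptp(new_diffs))
```

In the paper's setting, the difficulty level would come from the single-UAV ACIGA plan rather than a geometric proxy, but the rebalance-by-difficulty loop follows the same pattern.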
Testing prototypes with users is essential when developing new products. This qualitative study examines and organizes feedback from users (children) on physical toy prototypes at various fidelity levels; the data come from a multidisciplinary design studio class tasked with developing wooden toys for the company PlanToys. In this research, we use an organizational framework to categorize stakeholders’ feedback into affirmative feedback, convergent critical feedback, and divergent critical feedback. In our analysis, the children’s feedback was compared by prototype fidelity, in terms of both form and function, as well as by feedback category. Findings suggest that form fidelity may bias children toward giving more divergent feedback, that function fidelity may have little impact on the feedback given, that children tend to give more types of feedback on lower-fidelity than on higher-fidelity models, and that play value, form, interaction, and function were the most frequently reported categories of feedback. Based on our observations, we encourage toy designers to be aware of their goals for testing with children before beginning to prototype. This paper is a resource for designers seeking to understand how to prototype for children, as well as a resource for researchers studying how specific end users influence the product design process.
Recently, the field of robotics development and control has been advancing rapidly. Even though humans effortlessly manipulate everyday objects, enabling robots to interact with human-made objects in real-world environments remains a challenge despite years of dedicated research. For example, typing on a keyboard requires adapting to various external conditions, such as the size and position of the keyboard, and demands high accuracy from a robot to be able to use it properly. This paper introduces a novel hierarchical reinforcement learning algorithm based on the Deep Deterministic Policy Gradient (DDPG) algorithm to address the dual-arm robot typing problem. In this regard, the proposed algorithm employs a Convolutional Auto-Encoder (CAE) to deal with the associated complexities of continuous state and action spaces at the first stage, and then a DDPG algorithm serves as a strategy controller for the typing problem. Using a dual-arm humanoid robot, we have extensively evaluated our proposed algorithm in simulation and real-world experiments. The results showcase the high efficiency of our approach, boasting an average success rate of 96.14% in simulations and 92.2% in real-world settings. Furthermore, we demonstrate that our proposed algorithm outperforms DDPG and Deep Q-Learning, two frequently employed algorithms in robotic applications.
We prove that any bounded degree regular graph with sufficiently strong spectral expansion contains an induced path of linear length. This is the first such result for expanders, strengthening an analogous result in the random setting by Draganić, Glock, and Krivelevich. More generally, we find long induced paths in sparse graphs that satisfy a mild upper-uniformity edge-distribution condition.
In response to the complex and challenging task of long-distance inspection of small-diameter and variable-diameter mine holes, this paper presents a design for an adaptive small-sized mine hole robot. First, focusing on the environment of small-diameter mine holes, the paper analyzes the robot’s functions and overall structural framework. A two-wheeled wall-pressing robot with good mobility, arranged in a straight line, is designed. Furthermore, an adaptive variable-diameter method is devised, which involves constructing an adaptive variable-diameter model and proposing a control method based on position and force estimators, enabling the robot to perceive external forces. Lastly, to verify the feasibility of the structural design and adaptive variable-diameter method, performance tests and analyses are conducted on the robot’s mobility and adaptive variable-diameter capabilities. Experimental results demonstrate that the robot can move within small-diameter mine holes at any inclination angle, with a maximum horizontal crawling speed of 3.96 m/min. By employing the adaptive variable-diameter method, the robot can smoothly navigate convex platform obstacles and slope obstacles in mine holes with diameters ranging from 70 mm to 100 mm, achieving the function of adaptive variable-diameter within 2 s. Thus, it can meet the requirements of moving inside mine holes under complex conditions such as steep slopes and small and variable diameters.
We study the problem of identifying a small number $k\sim n^\theta$, $0\lt \theta \lt 1$, of infected individuals within a large population of size $n$ by testing groups of individuals simultaneously. All tests are conducted concurrently. The goal is to minimise the total number of tests required. In this paper, we make the (realistic) assumption that tests are noisy, that is, that a group that contains an infected individual may return a negative test result or one that does not contain an infected individual may return a positive test result with a certain probability. The noise need not be symmetric. We develop an algorithm called SPARC that correctly identifies the set of infected individuals up to $o(k)$ errors with high probability with the asymptotically minimum number of tests. Additionally, we develop an algorithm called SPEX that exactly identifies the set of infected individuals w.h.p. with a number of tests that match the information-theoretic lower bound for the constant column design, a powerful and well-studied test design.
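The noisy group-testing setup described above can be made concrete with a small simulation. This sketch does not implement SPARC or SPEX; it only sets up a random pooling design with (possibly asymmetric) noise and runs a naive score-based decoder as a baseline, with all parameter values chosen for illustration.

```python
# Illustrative simulation of noisy group testing (not the SPARC/SPEX
# algorithms): random Bernoulli pools, independent false-positive and
# false-negative test noise, and a naive majority-score decoder.
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 500, 10, 200            # population, infected count, tests
p_fp, p_fn = 0.05, 0.05           # noise may be asymmetric; equal here

infected = rng.choice(n, k, replace=False)
x = np.zeros(n, dtype=bool)
x[infected] = True

A = rng.random((m, n)) < (1.0 / k)     # each individual joins a pool w.p. 1/k
clean = (A & x).any(axis=1)            # noiseless pool outcomes (OR of members)
flip = np.where(clean, rng.random(m) < p_fn, rng.random(m) < p_fp)
y = clean ^ flip                       # observed noisy outcomes

# Naive decoder: score each individual by the fraction of its pools that
# tested positive, and report the k highest-scoring individuals.
counts = A.sum(axis=0)
scores = (A & y[:, None]).sum(axis=0) / np.maximum(counts, 1)
estimate = np.argsort(scores)[-k:]
errors = len(set(estimate.tolist()) ^ set(infected.tolist()))
print("symmetric-difference errors:", errors)
```

SPARC's guarantee of at most $o(k)$ misclassifications, and SPEX's exact recovery at the constant-column-design lower bound, both require far more careful designs and decoders than this baseline.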
When working in homotopy type theory and univalent foundations, the traditional role of the category of sets, $\mathcal{Set}$, is replaced by the category $\mathcal{hSet}$ of homotopy sets (h-sets): types with h-propositional identity types. Many of the properties of $\mathcal{Set}$ hold for $\mathcal{hSet}$ ((co)completeness, exactness, local cartesian closure, etc.). Notably, however, the univalence axiom implies that $\mathsf{Ob}\,\mathcal{hSet}$ is not itself an h-set, but an h-groupoid. This is expected in univalent foundations, but it is sometimes useful to also have a stricter universe of sets, for example, when constructing internal models of type theory. In this work, we equip the type of iterative sets $\mathsf{V}^0$, due to Gylterud ((2018). The Journal of Symbolic Logic 83 (3) 1132–1146) as a refinement of the pioneering work of Aczel ((1978). Logic Colloquium ’77, Studies in Logic and the Foundations of Mathematics, vol. 96, Elsevier, 55–66) on universes of sets in type theory, with the structure of a Tarski universe and show that it satisfies many of the good properties of h-sets. In particular, we organize $\mathsf{V}^0$ into a (non-univalent strict) category and prove that it is locally cartesian closed. This enables us to organize it into a category with families with the structure necessary to model extensional type theory internally in HoTT/UF. We do this in a rather minimal univalent type theory with W-types; in particular, we do not rely on any HITs or other complex extensions of type theory. Furthermore, the construction of $\mathsf{V}^0$ and the model is fully constructive and predicative, while still being very convenient to work with, as the decoding from $\mathsf{V}^0$ into h-sets commutes definitionally for all type constructors. Almost all of the paper has been formalized in $\texttt{Agda}$ using the $\texttt{agda}$-$\texttt{unimath}$ library of univalent mathematics.
Current fault diagnosis (FD) methods for heating, ventilation, and air conditioning (HVAC) systems do not accommodate system reconfigurations throughout the systems’ lifetime. However, system reconfiguration can change the causal relationship between faults and symptoms, which leads to a drop in FD accuracy. In this paper, we present Fault-Symptom Brick (FSBrick), an extension to the Brick metadata schema intended to represent the information necessary to propagate system configuration changes onto FD algorithms, and ultimately to revise fault–symptom relationships (FSRs). We motivate the need to represent FSRs by illustrating their changes when the system reconfigures. Then, we survey FD methods’ representation needs and compare them against existing information modeling efforts within and outside of the HVAC sector. We introduce the FSBrick architecture and discuss which extensions are added to represent FSRs. To evaluate the coverage of FSBrick, we implement FSBrick on (i) the motivational case study scenario, (ii) Building Automation Systems’ representations of FSRs from 3 HVACs, and (iii) FSRs from 12 FD method papers, and find that FSBrick can represent 88.2% of fault behaviors, 92.8% of fault severities, 67.9% of symptoms, and 100% of grouped symptoms, FSRs, and probabilities associated with FSRs. The analyses show that both Brick and FSBrick should be expanded further to cover HVAC component information and the mathematical and logical statements needed to formulate FSRs in real life. As there is currently no generic and extensible information model to represent FSRs in commercial buildings, FSBrick paves the way for future extensions that would aid the automated revision of FSRs upon system reconfiguration.
The COVID-19 pandemic underscored the critical need for timely data and information to aid interventions and decision-making. Efforts by different actors resulted in various data-driven initiatives, constituting experiences of deploying data in the COVID-19 response and valuable lessons that can advance the sharing and use of data for social good beyond COVID-19. This commentary highlights key case studies detailing the experiences and lessons of those who implemented data science solutions for the COVID-19 response, as well as findings from 74 data-centric COVID-19 interventions. These interventions demonstrated successful data access strategies, productive intervention processes, and effective stakeholder engagement, all of which present potential pathways to overcoming data access obstacles across Africa. Additionally, this study briefly explores three areas for action (i.e., institutions, people, and platforms) that can inform future policy development to increase data sharing for societal benefit in the long term.
The inverse dynamics model of an industrial robot can predict and control the robot’s motion and torque output, improving its motion accuracy, efficiency, and adaptability. However, existing inverse rigid-body dynamics models still contain unmodelled residuals, and their calculated results differ significantly from actual industrial robot conditions. To address this, the bootstrap aggregating (bagging) algorithm is combined with a long short-term memory (LSTM) network, a linear layer is introduced as the network optimization layer, and a compensation method for a hybrid inverse dynamics model based on the resulting BLL residual prediction algorithm is proposed. The BLL residual prediction algorithm framework is presented. Based on the rigid-body inverse dynamics of the Newton–Euler method, the BLL residual prediction network performs error compensation on the inverse dynamics model of the Franka robot. The experimental results show that the hybrid inverse dynamics model based on the BLL residual prediction algorithm reduces the average residual of the robot joint torque from 0.5651 N·m to 0.1096 N·m, improving the accuracy of the inverse dynamics model compared with the rigid-body model alone. This study lays the foundation for performing more accurate operation tasks with industrial robots.
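The residual-compensation structure described above can be sketched in a few lines. This is a toy illustration of the hybrid-model idea only: simple bagged linear regressors stand in for the paper's LSTM ensemble, and the torque signals are synthetic.

```python
# Minimal sketch of hybrid inverse dynamics via residual compensation:
# a bagged ensemble learns the gap between measured joint torque and a
# rigid-body model, and its averaged prediction is added back. Bagged
# least-squares fits stand in for the paper's LSTM network members.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 400)
tau_rigid = np.sin(t)                       # rigid-body model output (toy)
tau_true = tau_rigid + 0.3 * np.sin(3 * t)  # "measured" torque with residual
residual = tau_true - tau_rigid             # unmodelled part to learn

# Hand-picked basis features stand in for a learned representation.
X = np.column_stack([np.sin(3 * t), np.cos(3 * t), np.ones_like(t)])

def bagged_fit(X, y, n_models=10):
    """Bootstrap-aggregated least-squares fits (the 'bagging' part)."""
    weights = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))     # bootstrap resample
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        weights.append(w)
    return np.mean(weights, axis=0)               # aggregate by averaging

w = bagged_fit(X, residual)
tau_hybrid = tau_rigid + X @ w                    # compensated prediction

err_rigid = np.abs(tau_true - tau_rigid).mean()
err_hybrid = np.abs(tau_true - tau_hybrid).mean()
print(f"mean |residual|: rigid {err_rigid:.4f} -> hybrid {err_hybrid:.4f}")
```

The paper's reported gain (0.5651 N·m to 0.1096 N·m average residual) comes from this same additive structure, with the LSTM-plus-linear-layer ensemble supplying a far richer residual model than the fixed basis used here.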
The application of data analytics to product usage data has the potential to enhance engineering and decision-making in product planning. To achieve this effectively for cyber-physical systems (CPS), specialized expertise in technical products, innovation processes, and data analytics is required. An understanding of the process from domain knowledge to data analysis is critically important for the successful completion of projects, even for those without expertise in these areas. In this paper, we lay the foundation for a data analytics toolbox that enables the creation of domain-specific pipelines for product planning. The toolbox includes a morphological box covering the necessary pipeline components, based on a thorough analysis of the literature and practitioner surveys. This comprehensive overview is unique, and the toolbox built on it promises to support and enable domain experts and citizen data scientists, enhancing efficiency in product design, speeding up time to market, and shortening innovation cycles.