Simulating religion through computer modelling can demonstrate how fragmentary theories relate, untangle individual lines of causal influence, identify the relative importance of causal factors and enable experimentation that would never be possible (or ethical) in the real world. This chapter reviews the application of computational modelling and simulation to religion, presents findings from specific simulation studies and discusses some of the philosophical issues raised by this type of research. Social simulations are artificial complex systems that we can use to study real-world complex systems. The best of these simulation models are carefully validated in relation to real-world data. Multilevel validation justifies confidence that the causal architecture of the simulation reflects real-world causal processes, thereby delivering an invaluable proxy system into the hands of researchers who study religion.
This chapter explores design guidelines and potential regulatory issues associated with future baby–robot interaction. We coin the term “robot natives,” which we define as the first generation of humans regularly interacting with robots in domestic environments; the term includes babies (0–1 year old) and toddlers (1–3 years old) born in the 2020s. Drawing on the experience of other interactive technologies that became widely available in the home, and the positive and negative impacts they have had on humans, we propose insights into the design of future scenarios for baby–robot interaction, aiming to influence future legislation regulating service robots and social robots used with robot natives. We likewise aim to help designers and developers avoid robot designs that could negatively affect long-term interactions for robot natives. We conclude that a qualitative, multidisciplinary, ethical, human-centered design approach would be beneficial in guiding the design and use of robots in the home and around families, as such an approach is not currently common in studies of child–robot interaction.
Romance between a human and robot will pose many questions for the laws that apply to human–robot interaction and, in particular, family law. Such questions include whether humans and robots can marry and what a subsequent divorce might look like. This chapter considers these issues, organized to track the seasons of romantic relationships, such as cohabitation, engagement, and marriage. Given that marriage is no longer devoid of the possibility of divorce, this chapter also considers issues of property division, alimony, child custody, and child support when a marriage between a human and robot dissolves. Even for skeptics of such a future, given rapid advances in robotics, the applicability of family law to relationships between a human and robot is nonetheless an increasingly relevant thought experiment and intersects with other emerging areas of law, technology, and robotics.
Chapter 8 introduces evaluation procedures for paradigms other than classification. In particular, it discusses evaluation for classical problems such as regression analysis, time-series analysis, outlier detection, and reinforcement learning, along with evaluation approaches for newer tasks such as positive-unlabelled classification, ordinal classification, multi-labeled classification, image segmentation, text generation, data stream mining, and lifelong learning.
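For the regression-analysis setting mentioned above, evaluation typically rests on aggregate error metrics rather than classification accuracy. The sketch below (with made-up numbers, using only the standard library) computes two of the most common metrics, mean absolute error (MAE) and root mean squared error (RMSE):

```python
# Illustrative only: common regression evaluation metrics,
# computed without external libraries. The data values are invented.
import math

def regression_metrics(y_true, y_pred):
    """Return (MAE, RMSE) for paired true/predicted values."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n           # average magnitude of error
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # penalizes large errors more
    return mae, rmse

mae, rmse = regression_metrics([3.0, 5.0, 2.5], [2.5, 5.0, 4.0])
```

RMSE is always at least as large as MAE, and the gap between them indicates how much the error distribution is dominated by a few large mistakes.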
The purpose of this chapter is to contribute to the current discussion on what robot laws might look like once artificial intelligence (AI)-enabled robots become more widespread and integrated within society. The starting point of the discussion is Asimov’s three laws of robotics, and the realization that while Asimov’s laws are norms formulated in human language, the behavior of robots is fundamentally controlled by algorithms, that is, by code. Three conclusions can be drawn from this starting point. One is that laws enacted for humans will be translated into laws for robots, which, as discussed here, will be a difficult challenge for legal scholars. The second is that, owing to the norms that exist within society, the same rules will be simultaneously present in the natural-language version of laws for humans and the code version of laws for robots. The third is that translating robots’ actions and outputs back into human language will also be a challenging process. The chapter further argues that regulating a robot’s behavior largely overlaps with the current discourse on explainable AI, with the added difficulty of understanding how explaining legal decisions differs from explaining the outputs of AI and AI-enabled robots in general.
In previous research, we considered several novel problems posed by robot accidents and assessed related legal and economic approaches to the creation of optimal incentives for robot manufacturers, operators, and prospective victims. In this chapter, we synthesize our previous work in a unified analysis. We begin with a discussion about the problems and legal challenges posed by robot torts. Next, we describe the novel liability regime we proposed, that is, “manufacturer residual liability,” which blends negligence-based rules and strict manufacturer liability rules to create optimal incentives for robot torts. This regime makes operator and victim liability contingent upon their negligence (incentivizing them to act diligently) and makes manufacturers residually liable for nonnegligent accidents (incentivizing them to make optimal investments in researching safer technologies). This rule has the potential to drive unsafe technology out of the market and also to incentivize operators to adopt optimal activity levels in their use of automated technologies.
In Chapter 7, the history of statistical analysis is reviewed and its legacy discussed. Four situations of interest to machine learning evaluation are subsequently discussed within different statistical paradigms: the comparison of two classifiers on a single domain; the comparison of multiple classifiers on a single domain; the comparison of two classifiers on multiple domains; and the comparison of multiple classifiers on multiple domains. The three statistical paradigms considered for each of these situations are the null hypothesis statistical testing (NHST) setting; an enhanced Fisher-flavored methodology that adds the notions of confidence intervals, effect size, and power analysis to NHST; and a newer approach based on Bayesian reasoning.
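As a concrete illustration of the NHST setting for the third situation above (two classifiers compared on multiple domains), one common choice is a paired test on per-domain accuracy differences. The sketch below uses invented accuracy values and a hand-rolled paired t-statistic; the enhanced Fisher-flavored methodology would supplement this with an effect size and a confidence interval:

```python
# A minimal sketch of the NHST setting: a paired t-statistic on the
# per-domain accuracy differences of two classifiers. All accuracy
# numbers are invented for illustration.
import math
from statistics import mean, stdev

acc_a = [0.81, 0.74, 0.90, 0.68, 0.77]  # classifier A on five domains
acc_b = [0.79, 0.70, 0.88, 0.65, 0.76]  # classifier B on the same domains

diffs = [a - b for a, b in zip(acc_a, acc_b)]
n = len(diffs)
# t = mean difference / standard error of the differences
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))
# The statistic would then be compared against the critical value of
# Student's t with n-1 degrees of freedom at the chosen alpha level.
```

Note that a significant t-statistic alone says nothing about whether the difference matters in practice, which is precisely the gap that effect-size and power-analysis extensions, and the Bayesian alternative, aim to close.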
Humans categorize themselves and others on the basis of many attributes, forging a range of social groups. Such group identities can influence our perceptions, attitudes, beliefs, and behaviors toward others and ourselves. While decades of psychological research have examined how dividing the world into “us” and “them” impacts our attitudes, beliefs, and behaviors toward others, a new and emerging area of research considers how humans can ascribe social group memberships to humanoid robots. Specifically, our social perceptions and evaluations of humanoids can be shaped by subtle characteristics of the robot’s appearance or other features, particularly if these characteristics are interpreted through the lens of important human group identities. The current chapter reviews research on the psychology of intergroup relations to consider its manifestations and expressions in the context of human–robot interaction. We first consider how robots, despite being nonliving, can be ascribed certain identities (e.g., race, gender, and national origin). We then consider how this can in turn impact attitudes, beliefs, and behaviors toward such technology. Given the nascency of this field of study, we highlight existing gaps in our knowledge and identify important directions for future research. The chapter concludes by considering the societal, market, and legal implications of bias in the context of human–robot interaction.
In recent years there has been growing research interest in religion within the robotics community. Along these lines, this chapter provides a case study of the ‘religious robot’ SanTO, the world’s first robot designed to be ‘Catholic’. The robot was created with the aim of exploring the theoretical basis for the application of robot technology in the religious space. While this technology has many potential benefits for users, the use and design of religious or other social robots raises a number of ethical, legal, and social issues (ELSI). The chapter, which is concerned with such issues, begins with a general introduction, offers an ELSI analysis, and finally develops conclusions from an ethical design perspective.
The First Amendment of the U.S. Constitution protects “the freedom of speech.” Courts have also said it protects “freedom of thought.” But does that mean we have a right to speak with or listen to the speech of robots? Does it mean we have a First Amendment right to recruit robots to help us think or change the way we think? By “robots” here, I mean the robots that are the primary focus of this book: Humanoid robots that might not only emulate the way we solve intellectual challenges or express ourselves, but also emulate us physically – by taking on a physical form similar to that of human beings and moving and acting in the physical space of our homes, workplaces, or other spaces, and not simply the virtual space on our computers.
Religion and artificial intelligence are now deeply enmeshed in humanity’s collective imagination, narratives, institutions, and aspirations. Their growing entanglement also runs counter to several dominant narratives that engage with long-standing historical discussions regarding the relationship between the ‘sacred’ and the ‘secular’, the latter here represented by technology and science. This Cambridge Companion explores the fields of Religion and AI comprehensively and provides an authoritative guide to their symbiotic relationship. It examines established topics, such as transhumanism, together with new and emerging fields, notably computer simulations of religion. Specific chapters are devoted to Judaism, Christianity, Islam, Hinduism, and Buddhism, while others demonstrate that entanglements between religion and AI are not always encapsulated by such a paradigm. Collectively, the volume addresses issues that AI raises for religions, and contributions that AI has made to religious studies, especially the conceptual and philosophical issues inherent in the concept of an intelligent machine, and social-cultural work on attitudes to AI and its impact on contemporary life. The diverse perspectives in this Companion demonstrate how all religions are now interacting with artificial intelligence.
In scenarios such as environmental data collection and traffic monitoring, timely responses to real-time situations are facilitated by persistently accessing nodes with revisiting constraints using unmanned aerial vehicles (UAVs). However, imbalanced task allocation may pose risks to the safety of UAVs and potentially lead to failures in monitoring tasks. For instance, continuous visits to nodes without replenishment may damage UAV batteries, while delays in recharging could result in missed task deadlines, ultimately causing task failures. Therefore, this study investigates the problem of achieving balanced multi-UAV path planning for persistent monitoring tasks, which, to the authors’ knowledge, has not been previously studied. The main contribution of this study is the proposal of two novel indicators to assist in balancing task allocation in multi-UAV path planning for persistent monitoring. The first is the waiting factor, which reflects how urgently a task node is waiting to be accessed; the second is the difficulty level, which measures the difficulty of the tasks undertaken by a UAV. By minimizing differences in difficulty level among UAVs, we can ensure equilibrium in task allocation. For a single UAV, the ant colony initialized genetic algorithm (ACIGA) is proposed to plan its path and obtain its difficulty level. For multiple UAVs, the K-means clustering algorithm is improved based on difficulty levels to achieve balanced task allocation. Simulation experiments demonstrated that the difficulty level effectively reflects the difficulty of tasks and that the proposed algorithms enable UAVs to achieve balanced task allocation.
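The balancing objective described above (minimizing differences in difficulty level among UAVs) can be illustrated with a much simpler stand-in than the authors’ improved K-means: a greedy longest-processing-time heuristic that always assigns the next-hardest task to the least-loaded UAV. The difficulty scores and the heuristic below are invented for illustration and are not the paper’s algorithm:

```python
# Hypothetical, simplified stand-in for difficulty-balanced task
# allocation. NOT the authors' difficulty-based K-means; this greedy
# longest-processing-time heuristic only illustrates the balancing
# objective: minimize the spread of total difficulty across UAVs.
def balance_tasks(difficulties, num_uavs):
    """Assign task-node difficulty scores to UAVs, balancing total load."""
    loads = [0.0] * num_uavs
    assignment = [[] for _ in range(num_uavs)]
    # Hardest tasks first, each to the currently least-loaded UAV.
    for task_id, d in sorted(enumerate(difficulties), key=lambda t: -t[1]):
        uav = loads.index(min(loads))
        loads[uav] += d
        assignment[uav].append(task_id)
    return assignment, loads

# Six task nodes with invented difficulty scores, split across two UAVs.
assignment, loads = balance_tasks([5.0, 3.0, 8.0, 2.0, 4.0, 6.0], 2)
```

With these inputs the heuristic yields two UAVs carrying equal total difficulty, the equilibrium condition the paper's indicators are designed to promote.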
Testing prototypes with users is essential when developing new products. This qualitative study examines and organizes users’ (children’s) feedback on physical toy prototypes at various fidelity levels; the data come from a multidisciplinary design studio class tasked with developing wooden toys for the company PlanToys. In this research, we use an organizational framework to categorize stakeholders’ feedback, which includes affirmative feedback, convergent critical feedback, and divergent critical feedback. In our analysis, the children’s feedback was compared by prototype fidelity in terms of both form and function, as well as by the feedback categorization. Findings suggest that form fidelity may bias children toward giving more divergent feedback, that function fidelity may have little impact on the feedback given, that children tend to give more types of feedback on lower-fidelity than higher-fidelity models, and that play value, form, interaction, and function were the most frequently reported categories of feedback. Based on our observations, we encourage toy designers to be aware of their goals for testing with children before beginning to prototype. This paper is a resource for designers to understand how to prototype for children, as well as a resource for researchers studying how specific end users influence the product design process.
Recently, the field of robotics development and control has been advancing rapidly. Even though humans effortlessly manipulate everyday objects, enabling robots to interact with human-made objects in real-world environments remains a challenge despite years of dedicated research. For example, typing on a keyboard requires adapting to various external conditions, such as the size and position of the keyboard, and demands high accuracy from a robot to be able to use it properly. This paper introduces a novel hierarchical reinforcement learning algorithm based on the Deep Deterministic Policy Gradient (DDPG) algorithm to address the dual-arm robot typing problem. In this regard, the proposed algorithm employs a Convolutional Auto-Encoder (CAE) to deal with the associated complexities of continuous state and action spaces at the first stage, and then a DDPG algorithm serves as a strategy controller for the typing problem. Using a dual-arm humanoid robot, we have extensively evaluated our proposed algorithm in simulation and real-world experiments. The results showcase the high efficiency of our approach, boasting an average success rate of 96.14% in simulations and 92.2% in real-world settings. Furthermore, we demonstrate that our proposed algorithm outperforms DDPG and Deep Q-Learning, two frequently employed algorithms in robotic applications.
We prove that any bounded degree regular graph with sufficiently strong spectral expansion contains an induced path of linear length. This is the first such result for expanders, strengthening an analogous result in the random setting by Draganić, Glock, and Krivelevich. More generally, we find long induced paths in sparse graphs that satisfy a mild upper-uniformity edge-distribution condition.
In response to the complex and challenging task of long-distance inspection of small-diameter and variable-diameter mine holes, this paper presents the design of an adaptive small-sized mine hole robot. First, focusing on the environment of small-diameter mine holes, the paper analyzes the robot’s functions and overall structural framework. A two-wheeled wall-pressing robot with its wheels arranged in line, offering good mobility, is designed. Furthermore, an adaptive variable-diameter method is devised, which involves constructing an adaptive variable-diameter model and proposing a control method based on position and force estimators, enabling the robot to perceive external forces. Lastly, to verify the feasibility of the structural design and the adaptive variable-diameter method, performance tests and analyses are conducted on the robot’s mobility and adaptive variable-diameter capabilities. Experimental results demonstrate that the robot can move within small-diameter mine holes at any inclination angle, with a maximum horizontal crawling speed of 3.96 m/min. By employing the adaptive variable-diameter method, the robot can smoothly navigate convex platform obstacles and slope obstacles in mine holes with diameters ranging from 70 mm to 100 mm, completing the adaptive diameter change within 2 s. Thus, it can meet the requirements of moving inside mine holes under complex conditions such as steep slopes and small and variable diameters.