With the emergence of complex systems composed of AI-powered autonomous robots, some have argued that robots capable of autonomous behavior should be granted legal personality and be the subject of legal acts and responsibilities. However, these arguments do not depart from the “humanism” that is the root cause of the problems that current legal theory faces in a complex scientific and technological society. This article attempts to provide a possible solution to the legal problems arising in today’s complex scientific and technological society by introducing the concept of the “legal being” and a novel legal system that enables the strategic attribution of legal liability in complex human–robot interactions.
This chapter focuses on the potential impact of anthropomorphic robots on human safety. Regarding the legal protection of human lives, the rise of anthropomorphic robots could pose at least two threats: the “epistemological threat” and the “patient threat.” The epistemological threat concerns humans’ epistemological limitations: because of the way humans gather information about the external world, they may have difficulty distinguishing humans from robots. The patient threat concerns the possibility of prioritizing robots in situations in which human life is in danger, owing to the recognition of robots as moral patients. This chapter also discusses the place of robots in the hierarchy of legally protected values and ways to mitigate the threat that anthropomorphic robots pose to human superiority.
The integration of assisted living technologies in the home is rapidly accelerating. As socially assistive robots (SARs) often operate in the private sphere of life, sometimes in symbiotic relations with the people they assist, they may give rise to privacy concerns. This chapter investigates the potential privacy and data protection issues arising from the increasing deployment of ambient assisted living (AAL) technologies in general and SARs in particular. It addresses privacy concerns related to human–robot interactions, including conversational interfaces and audio- and video-based assistive technologies, and analyzes them within the European context. Since the wide range of privacy concerns resulting from the use of SARs raises particular challenges for the design process, this chapter zooms in on the Privacy by Design concept introduced in the European General Data Protection Regulation (GDPR). Since communication and interaction with robots in therapeutic and care contexts affect data protection, these privacy concerns pose challenges that must be considered across a life cycle that begins with robot design and ends with implementation in care settings, including home care.
Social robots present a novel category of socially interactive technology. There is increasing interest in how people behave toward social robots, how robots can change human behaviors, and what factors influence this interaction. The interaction involves a complex relationship among the robot’s physical embodiment, social behaviors, and capabilities, as well as human factors. People differ in how they behave toward robots, and this chapter examines the role of an individual’s cultural background, the factors interwoven with what we generally define as culture, and how these contribute to a holistic understanding of how robots are perceived.
Robots are an increasingly common feature in public spaces. From regulations permitting broader drone use in public airspace, and autonomous vehicle testing on public roads, to laws permitting or restricting the presence of delivery robots on sidewalks – law often precipitates the introduction of new robotic systems into shared spaces. Laws that permit, regulate, or prohibit robotic systems in public spaces will in many ways determine how this new technology affects public space and the people who inhabit that space. This raises the questions: How should regulators approach the task of regulating robots in public spaces? And should any special considerations apply to the regulation of robots because of the public nature of the spaces they occupy? With a focus on the Canadian legal system, and drawing upon insights from the interdisciplinary field of law and geography, this chapter argues that the laws that regulate robots deployed in public space will affect the public nature of that space, potentially to the benefit of some human inhabitants of the space over others. For this reason, special considerations should apply to the regulation of robots that will operate in public space. In particular, the entry of a robotic system into a public space should never be prioritized over communal access to and use of that space by people. And, where a robotic system serves to make a space more accessible, lawmakers should avoid permitting differential access to that space through the regulation of that robotic system.
We humans are biased – and our robotic creations are biased, too. Bias is a natural phenomenon that drives our perceptions and behavior, including when it comes to socially expressive robots that have humanlike features. Recognizing that we embed bias, knowingly or not, within the design of such robots is crucial to studying its implications for people in modern societies. In this chapter, I consider the multifaceted question of bias in the context of humanoid, AI-enabled, and expressive social robots: Where does bias arise, what does it look like, and what can (or should) we do about it? I offer observations on human–robot interaction (HRI) along two parallel tracks: (1) robots designed in bias-conscious ways and (2) robots that may help us tackle bias in the human world. I outline a curated selection of cases for each track drawn from the latest HRI research and positioned against social, legal, and ethical factors. I also propose a set of critical next steps to tackle the challenges and opportunities surrounding bias within HRI research and practice.
Recent advances in robotics, AI, and human–robot interaction are steadily improving the capabilities of social robots. Yet social robots still seem to lack important capabilities needed for more substantial use. This chapter focuses specifically on mobile social robots intended for the service industry. We introduce some technical advances in the basic social capabilities of mobile social robots, such as how they should respect human personal space, approach people, pass human pedestrians, and move harmoniously within crowds. We then discuss moral issues involving social robots, particularly the problem of robot abuse. The fact that children treat social robots aggressively and even violently indicates a lack of peer respect for them. Here, we discuss what future social robots would need to be equipped with, and argue for the need for a “moral interaction” capability. Two capabilities are discussed: peer respect and peer pressure. That is, social robots would need to lead people to consider them a kind of moral recipient (peer respect) and to encourage people to behave well (peer pressure), much as “human eyes” do. We introduce some recent research on moral interaction capabilities. Finally, we discuss the implications for law and regulation if social robots come to be equipped with richer moral interaction capabilities in the future.
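The idea of a robot respecting human personal space can be made concrete with a simple proxemics-based cost model of the kind used in social navigation planners. The following Python sketch is purely illustrative and is not drawn from the chapter: the function name, zone radii, and penalty weights are hypothetical assumptions, shown only to indicate how a planner might penalize waypoints that intrude on a person's proxemic zones.

```python
import math

# Approximate proxemic zone radii in meters (illustrative values,
# loosely inspired by Hall's intimate/personal distances)
INTIMATE_RADIUS = 0.45
PERSONAL_RADIUS = 1.2

def social_cost(robot_xy, people_xy):
    """Return a penalty for a candidate robot position based on how
    deeply it intrudes into nearby people's proxemic zones. A motion
    planner would add this term to its usual path cost."""
    cost = 0.0
    for px, py in people_xy:
        d = math.hypot(robot_xy[0] - px, robot_xy[1] - py)
        if d < INTIMATE_RADIUS:
            cost += 100.0  # effectively forbid entering the intimate zone
        elif d < PERSONAL_RADIUS:
            # Penalty grows smoothly as the robot nears the person
            cost += 10.0 * (PERSONAL_RADIUS - d) / (PERSONAL_RADIUS - INTIMATE_RADIUS)
    return cost

# Example: a candidate waypoint 0.8 m from one pedestrian
print(social_cost((0.0, 0.0), [(0.8, 0.0)]))  # moderate penalty
```

A planner that minimizes path cost plus this social term will naturally route around people rather than cut through their personal space, which is one simple way the "basic social capabilities" discussed above can be operationalized.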
This chapter discusses the topic of ethics, law, and policy as related to human interaction with robots that are humanoid in appearance, expressive, and AI enabled. The term “robot ethics” (or roboethics) generally concerns ethical problems that occur when humans and robots interact in various social contexts: for example, whether robots pose a threat to humans in warfare, the use of robots as caregivers, or the use of robots to make decisions that could impact historically disadvantaged populations. In each case, the focus of the discussion is predominantly on how to design robots that act ethically toward humans (some refer to this issue as “machine ethics”). However, the topic of robot ethics could also refer to the ethical issues associated with human behavior toward robots, especially as robots become active members of society. It is this latter and less investigated view of robot ethics that the chapter focuses on, and specifically whether robots that are humanoid in appearance, AI enabled, and expressive will be subject to discrimination based on the robot’s perceived race, gender, or ethnicity. This is an emerging topic of interest among scholars within law, robotics, and social science, and there is evidence to suggest that biases and other negative reactions that may be expressed toward people in social contexts may also be expressed toward robots entering society. For these and other reasons presented within the chapter, a discussion of the ethical treatment of robots is an important and timely topic for human–robot interaction.
Chapter 5 explores the interconnected ecosystem enabling feature-rich smart contracts. It first covers oracles, which provide external data to blockchains, outlining use cases, design considerations, and business decisions around oracle solutions. It then discusses interoperability, explaining atomic swaps and various cross-chain bridge designs such as lock/mint, liquidity pools, and zkBridge for trustless transfers. Next, it examines the ecosystem for mitigating Miner Extractable Value (MEV), categorizing solutions into auctions, time/content-based ordering, and application-specific designs. It also highlights other vital components such as user-friendly wallets, performant RPC nodes, governance mechanisms for collective decision-making, and privacy-preserving techniques such as zero-knowledge proofs. By delving into these key building blocks, this chapter offers readers a comprehensive understanding of the dynamic smart contract ecosystem. It emphasizes how components such as oracles, bridges, MEV mitigation, governance, and privacy-preservation enable richer functionality, interoperability, fairness, and user experience, shaping decentralized applications’ future.
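To make the atomic swap mechanism mentioned above concrete, the sketch below models a hashed timelock contract (HTLC) in Python. This is a simplified teaching model, not production smart-contract code, and the class and method names are hypothetical rather than taken from the chapter: funds can be claimed only by revealing the preimage of an agreed hash before a deadline, otherwise the original sender can reclaim them.

```python
import hashlib
import time

class HTLC:
    """Toy model of a hashed timelock contract used in atomic swaps.
    The same hash lock is deployed on both chains, so revealing the
    secret to claim on one chain lets the counterparty claim on the other."""

    def __init__(self, hash_lock: bytes, recipient: str, refund_to: str, timeout: float):
        self.hash_lock = hash_lock  # sha256 digest of the shared secret
        self.recipient = recipient
        self.refund_to = refund_to
        self.deadline = time.time() + timeout
        self.claimed = False

    def claim(self, preimage: bytes, caller: str) -> bool:
        """Claim succeeds only for the recipient, with the correct
        preimage, before the deadline expires."""
        if (not self.claimed and caller == self.recipient
                and time.time() < self.deadline
                and hashlib.sha256(preimage).digest() == self.hash_lock):
            self.claimed = True
            return True
        return False

    def refund(self, caller: str) -> bool:
        """After the deadline, unclaimed funds return to the sender."""
        if not self.claimed and caller == self.refund_to and time.time() >= self.deadline:
            self.claimed = True
            return True
        return False

# Example: Alice locks funds for Bob under a secret only she knows;
# Bob can claim once Alice reveals the secret to him.
secret = b"alice-secret"
contract = HTLC(hashlib.sha256(secret).digest(), recipient="bob",
                refund_to="alice", timeout=3600)
print(contract.claim(secret, "bob"))  # True: claiming reveals the preimage
```

The key property, which the toy model preserves, is that claiming on one chain necessarily reveals the preimage, allowing the counterparty to complete the other leg of the swap; if either party stalls, the timeouts return funds to their original owners.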
This chapter introduces the problem of regulating human–robot interaction (HRI) according to the rule of law at the convergence of the Web of Data, the Internet of Things, and Industry 5.0. It explains strategies fleshed out in the EU H2020 Project OPTIMAI, which develops a data-driven platform for zero-defect manufacturing (ZDM) to deploy a smart industry ecosystem. The chapter defines the notions of legal governance and smart legal ecosystems as a mindset and a toolkit to foster and regulate HRI in iterative cycles.
This chapter reviews progress in the field of artificial intelligence and considers the special case of the android: a human-like robot that people would accept as similar to humans in how it performs and behaves in society. An android as considered here does not have the purpose of deceiving humans into believing that it is a human. Instead, the android self-identifies as a non-human with its own integrity as a person. To make progress on android intelligence, artificial intelligence research needs to develop computer models of how people engage in relationships, how people explain their experience in terms of stories, and how people reason about the things in life that are most significant and meaningful to them. A functional capacity for religious reasoning is important because the intelligent android needs to understand its role and its relationships with other persons. Religious reasoning here does not mean matters of specific confessional faith and belief according to established doctrines, but rather the cognitive processes involved in negotiating significant values and relationships with tangible and intangible others.