Even the most market-oriented approaches to regulating AI-enabled robots presume some governmental regulator to collaborate in setting outcome goals. And the more advanced an AI-enabled robot becomes, the greater the need for oversight. For the past several decades, regulatory oversight boards have grown in use to promote the quality, transparency, and accountability of regulatory rules and policy. And recently, leading administrative law voices in the United States and the European Union have called for the creation of an oversight board to monitor regulators’ use of AI entities. How do we determine if these boards are worth the risks they create? To answer this question, this chapter uses the context of AI-enabled robots, which are increasingly prominent in homes, businesses, and education, to explain both when regulatory oversight boards are valuable and when they can frustrate society’s efforts to reduce the ill effects of emerging smart robots. Regulatory oversight boards create value in this context by conducting impact assessments of regulatory policies with an eye to the technological advancements and social context relevant to AI technologies such as robots, and oversight boards can promote regulatory foresight. However, oversight boards themselves pose risks. Because they influence the methodological approach used by regulators, errors made by oversight boards can have outsized impacts. To determine whether any given oversight board is worth the risk it creates, this chapter sets out a simple cost-based approach for comparing the risks and benefits of regulatory oversight boards. This approach is then applied to emerging regulatory oversight boards charged with reviewing robots entering society.
The chapter examines a classic subject of HRI, social robotics, and the law: the design, manufacture, and use of humanoid AI robots for healthcare. The aim is to illustrate a new set of legal challenges that are unique to these systems when deployed in outer space. Such challenges may require the adoption of new legal standards, in particular, either sui generis standards of space law, or stricter or more flexible standards for HRI in space missions, down to a new “principle of equality” between human standards and robotic standards in outer space. The chapter complements current discussions and initiatives on the development of standards for the use of humanoid AI robots in health law, consumer law, and cybersecurity by considering the realignment of terrestrial standards that we may expect with the increasing use of humanoid AI systems in space journeys. The assumption is that breathtaking advancements in AI and robotics, current trends toward the privatization of space, and the evolution of current regulatory frameworks, in space law and beyond, will put the development of these new legal standards in the spotlight.
This chapter introduces the construct of anthropomorphism and highlights its relevance for human–robot interaction (HRI) research. It reviews existing definitions of anthropomorphism and distinguishes it from anthropomorphization. It further discusses established theoretical models of anthropomorphism and their respective empirical support (or lack thereof). Moreover, we address the consequences of anthropomorphism, especially for HRI. We shed light on different ways to measure anthropomorphism in HRI, discussing the respective advantages and disadvantages of these measurement approaches. Finally, the present overview offers reflections on the added value of taking anthropomorphism and anthropomorphization into account in HRI research.
In recent years, the embodiment of AI in the form of a robot has brought forth new challenges in privacy and transparency. A cognitive robot must be able to integrate multiple tasks in its performance, which requires collecting a great amount of data and using various AI techniques. As robotics technology continues to advance, socially assistive humanoid robots will play a more central role in interacting with humans. However, can we trust robots in social contexts? How can we design embedded AI robots such that they are more transparent and trustworthy? And what technical, legal, and ethical frameworks might we adopt to build a trust relationship with robots? This chapter discusses these fundamental questions concerning privacy and transparency in human–robot interaction. It then proposes possible ethical and regulatory responses to address them.
With technological advancements occurring at a rapid pace in the fields of computers, robotics, and artificial intelligence, major changes have taken place in the robotics industry. These changes have led to what some have termed the “robotics revolution,” which has had a major impact on social organizations, the economy, and, as discussed within this chapter, human rights for industry and service workers. The emergence of AI-enabled robotics has begun to change the world in major ways, challenging the law both within nation-states and internationally among nations. In that context, the use of intelligent service and industrial robots has broad applications for the large and small industries that use robots as a source of labor. For example, intelligent service robots are used in the fields of healthcare and medicine, transportation, and care of the elderly and children. Further, police and security services also use robots for crowd control and surveillance purposes. However, while developments in robotics have provided numerous benefits to society, they have also brought forth many issues that challenge social, moral, and professional norms within society. As a result, the ever-increasing growth and development of robotic technology in various industries is challenging current legal schemes in fundamental ways, one of which is human rights law. As an example, the use of industrial and service robots can lead to employment insecurity, threats to the health and safety of workers, and privacy concerns. Further, the use of robots in industry and for the delivery of services can be inconsistent with other human rights, such as the right to health and safety, the right to equality of opportunity, the right to employment and fair working conditions, the right to life, the right to association, the prohibition against discrimination, and equality, all of which are supported in international and regional human rights documents.
The use of robots and artificial intelligence is expanding and changing every day. These exciting developments, especially in areas such as engineering, industry, education, and health, have begun to influence the legal world and have become the grounds for many important discussions on the future of law and technology. One of these debates, the subject of this chapter, is the question of whether robot judges can take part in a trial. Although this problem was previously described as a “distant dream,” there are important examples showing that it is on the way to becoming a reality today. Considering developments in AI-enabled and humanoid robots, the following question is posed: “Can robot judges replace human judges?” As a current example, in the “Internet Courts” in China, a robot judge appears as a 3D humanoid image of a woman, modeled on human judges. For this reason, it is important to consider the positive and negative aspects of the possible consequences of robot judges, a development in the legal world that is likely to be widespread in the future, to ensure that the law does not fall behind technological developments.
This chapter reflects on the emerging uses of Emotional Artificial Intelligence (EAI) systems in cars and the future regulatory implications for human–machine interactions in vehicles. The car has sociocultural importance as an everyday context of technology deployment, and we begin by reflecting on the ways the automotive sector is changing to incorporate EAI more widely. This change is primarily due to shifts in safety legislation, alongside advances in biometrics and vehicle automation, and it is creating hybrid environments of human/nonhuman interactions, where emotion-sensing systems foster increasingly entangled relationships between humans and cars. These laws, such as the proposed EU Artificial Intelligence Act and EU Vehicle Safety Regulation, are driving the design of more trustworthy, privacy-preserving, and safe AI systems. We reflect on legal implications arising from sensing human in-cabin behaviors and expressions, alongside risks posed by AI systems. We conclude by reflecting on the challenges of designing for trustworthy, human-centered interactions.
In 2017, Microsoft founder Bill Gates recommended taxing robots to slow the pace of automation. It has been estimated that up to 47 percent of U.S. jobs are at risk from advancements in artificial intelligence that have increased the rate of automation. While employment changes due to automation are not new, advances in artificial intelligence embedded within robots threaten many more jobs much more quickly than historic automation did. The chapter discusses how accelerated automation presents a revenue problem for governments. The revenue problem exists because the tax system is designed to tax labor more heavily, as labor is less able to avoid taxation. Capital investment, on the other hand, is taxed more lightly because capital is mobile and can escape taxation. When capital becomes labor, as in robotic automation, the bottom falls out of the system. With this background in mind, the Tax Cuts and Jobs Act (TCJA), enacted in 2017, significantly cut the U.S. corporate tax rate from 35 percent to 21 percent. In addition, the TCJA increased tax benefits for purchasing equipment (which would include automation in the form of robots), significantly enhancing bonus depreciation. The 2017 tax legislation thus continued and deepened the existing tax bias toward automation. This chapter explores policy options for solving the revenue problem.
Humanlike robots, based on their behavior and physical appearance, are becoming an increasingly important part of society, often interacting with individuals in a wide variety of social contexts. One emerging class of robots that socialize with humans comprises robots that are capable of expressing emotions, humanoid in appearance, and anthropomorphized by users. For such robots, I propose that how humans interact with them be represented as a four-way process depending on the following: (1) The context of the situation surrounding the interaction; (2) The effort that users make to comprehend the robotic technology, especially in a particular context; (3) The process of the adaptation of robotic technology (by users or groups of users) to incorporate robots into their lives, practices, and work routines; and (4) The transformation of the technology and its subsequent meaning to the user. One aspect of the above points is that humans receive significant benefits by interacting with robots in different contexts. For example, by applying (or appropriating) the robot’s abilities to perform various tasks, humans may increase their own physical and intellectual abilities. However, unlike human creativity and innovation, which are a product of human cognition, robot creativity is based on the use of algorithms and software in which robots appropriate data to perform tasks. Further, a robot’s ability to express emotion and a personality may influence humans in a variety of ways, for example, the conditions under which they appropriate the robot’s labor or creative output for a particular reason. As I argue in this chapter, “robot appropriation” can lead to legal constraints and regulations governing relations between human and robot. On this point, I note that the continental legal order, to which the Russian legal system belongs, has developed a robust method for the conscious exercise of law.
Based on the Russian legal system and the approach taken by other jurisdictions, an AI-enabled robot can only receive what are described as independent rights that are different in substance from the rights granted to natural persons. Taking a broad scope in this chapter, I propose that regulations are required for various human–robot interactions, and I discuss several examples of this approach.
This chapter focuses on the growing inclusion of social robots in therapy from the perspective of unresolved legal and ethical issues that include risks to patient autonomy, human dignity and trust, the potentially life-threatening effects of inaccurate or malfunctioning technology, diminished privacy due to the reliance on enormous amounts of personal (sensitive health) data, new challenges to data security due to the cyber–physical nature of robots, and the problem of how to obtain informed consent to medical treatments that depend on opaque AI decision-making. From this broad spectrum, the chapter focuses on the protection of the health and safety of patients and care recipients under EU law. A more detailed analysis shows that neither the Medical Device Regulation nor the proposal for an Artificial Intelligence Act adequately addresses the risks to patient health and safety that arise from human–machine interaction. Against this backdrop, the chapter provides recommendations as to which aspects should be regulated in the future and argues for a public discussion about the extent to which we, as a society, should replace human therapists with AI-enabled robotic technology.
In this chapter we review the status of human–robot interaction (HRI) including current research directions within robotics that may impact issues of law, policy, and regulations. While the focus of this book is on HRI experienced in social contexts, to provide a broad review of the legal and policy issues impacted by HRI, we discuss different areas of robotics that require various levels of human interaction and supervisory control of robots. We note that robots have evolved from continuous human-controlled master–slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence (AI), which are under human supervisory control but becoming more autonomous. Further, we note that research on human interaction with robots is a rapidly evolving field and specialized robots under human teleoperation have proven successful in hazardous environments and for medical and other applications. There is also a noticeable trend for more humanoid-appearing and AI-enabled robots interacting with humans in social contexts, and for this class of robots we discuss emerging issues of law, regulations, and policy.
Could robots be recognized as legal persons? Should they? Much of the discussion of these topics is distorted by fictional representations of what form true artificial intelligence (AI) might take – in particular that it would be of human-level intellect and be embodied in humanoid form. Such robots are the focus of this volume, with the possibility that external appearance and its echoes in science fiction may shape debate over their “rights.” Most legal systems would be able to grant some form of personality, yet early considerations of whether they should do so conflate two discrete rationales. The first is instrumental, analogous to the economic reasons why corporations are granted personality. The second is inherent, linked to the manner in which human personality is recognized. Neither is sufficient to justify legal personality for robots today. A third reason, which may become more pressing in the medium term, is tied to the possibility of AI systems that far surpass humans in terms of ability. In the event that such entities are created, the question may shift from whether we recognize them under the law, to whether they recognize us.
Modern law has developed based on the fundamental principle of respecting each person as an autonomous individual. However, the meaning of autonomous is not necessarily unambiguous even in jurisprudence, which has resulted in its use in various contexts. In this chapter, I examine the differences between personal autonomy and machine autonomy. Then I attempt to reconstruct the concept by exploring legal issues concerning privacy and freedom of expression when people use robots. As this chapter discusses, personal autonomy and machine autonomy differ considerably from each other, even when using the same concept of autonomy. Given the state of human–robot interaction (HRI) technology, it is appropriate to understand machine autonomy as having instrumental value as a means of serving personal autonomy. Furthermore, it is argued that the development of HRI will likely call for the implementation of a system of checks and balances among multiple robots to ensure autonomy as collective self-regulation for groups of robots connected to each other through information communication networks, rather than ensuring the autonomy of individual robots.
This chapter discusses the use of AI ethics standardizations for robot governance. Specifically, the chapter considers challenges to the regulation of AI-enabled technology due to slow legislative processes that have not been able to keep pace with the rapid speed of technological advances. In addition to considering the regulation of critical AI technologies, the chapter also argues for a regulatory framework that relies on nonbinding and flexible AI ethics standards to ensure that stakeholders manage the ethical, legal, and social implication (ELSI) risks that are inherent in daily human–robot interactions. By incorporating AI ethics standards into the development process for humanoid and expressive robots, robot developers will be able to apply principles of responsible innovation and research without conflicting with “hard laws” enacted for robot regulation. In this chapter, through two case studies, I explore the approach of ethical robot design, examine its potential and limitations, and demonstrate the utility of “ethically aligned design” and “social system design” frameworks in implementing legal human–robot interaction (L-HRI).
This chapter introduces issues of law, policy, and regulations for human interaction with robots that are AI-enabled, expressive, humanoid in appearance, and anthropomorphized by users. These features are leading to a class of robots that are beginning to pose unique challenges to courts, legislators, and the robotics industry as they consider how the behavior of robots operating with sophisticated social skills and increasing levels of intelligence should be regulated. In this chapter, we introduce basic terms, definitions, and concepts that relate to human interaction with AI-enabled and social robots, and we review some of the regulations, statutes, and case law that apply to such robots, specifically in the context of human–robot interaction. Our goal in this chapter is to provide a conceptual framework for the chapters that follow, which focus on human interaction with robots that are becoming more like us in form and behavior.
This chapter on human–robot interaction (HRI) focuses on the ways humans respond to and interact with social robots. It begins by delineating recent research into HRI and the factors influencing user interaction with social robots. The main interest of the chapter is anthropomorphism, that is, the observed human tendency to assign human traits and characteristics to technology, and the implications this has for robot design. After critically analyzing anthropomorphic robot design and its implications for HRI, the chapter discusses the role of ethics in shaping the development of technology in general and social robots in particular. Drawing on the European Ethics Guidelines for Trustworthy AI, human agency, transparency, communication, and individual and societal well-being are presented as concepts and principles of importance for the design of present and future social robots.
Based primarily on the law of the Russian Federation, this chapter reviews the current state of intersectoral relations between the different legal schemes that apply to the regulation of robots. To do so, the chapter discusses the complementarity of these schemes and the consistent attempts of legal scholars to integrate legal processes occurring within civil, administrative, and criminal law into a single comprehensive framework for robot regulation. As discussed in this chapter, the connection between the civil law principles of indemnification, liability insurance, and the consideration of criminal law provisions is essential for the establishment of an effective regulatory system for robotic devices, as well as for the construction of norms for the regulation of robots. The chapter argues that there are currently no well-accepted mechanisms in civil law to hold the developers of robots’ software accountable for resulting harm, which is a serious omission given the growing autonomy of robotic devices. Further, the chapter argues that it is essential to recognize controlled and semicontrolled robots as sources of increased danger to individuals and that responsibility for the damage caused by such robots should be assigned to the robot owners. The chapter concludes that the main criteria distinguishing civil law torts from criminal law offenses, when considering harms to individuals resulting from interaction with robots, are the degree of public danger and the extent of damage caused by the robotic technology.
This chapter briefly reviews various ways in which ethics have been implemented in AI-enabled devices such as humanoid and expressive robots, and how care for other forms of embedded AI is an important part of creating trustworthy and ethical AI systems. We look at nudging in particular: how robots experienced in social contexts could implement nudging, and the implications of nudging for four fundamental ethical values that underpin trust. We discuss this topic using examples and by presenting important questions that creators of AI-based nudging systems should ask themselves, before and after creating such systems.