This chapter introduces social scientific perspectives and methods applicable to observing the relationship between artificial intelligence (AI) and religion. It discusses the contributions that anthropological and sociological approaches can make to this entanglement of two modern social phenomena, while also drawing attention to the inherent biases and perspectives that both fields bring with them due to their histories. Examples of research on religion and AI are highlighted, especially when they demonstrate agile and new methodologies for engaging with AI in its many applications, including but not limited to online worlds, multimedia formats, games, social media and the new spaces made by technological innovation, such as the platforms underpinning the gig economy. All these AI-enabled spaces can be entangled with religious and spiritual conceptions of the world. This chapter also aims to expand upon the relationship between AI and religion as it is perceived as a general concept or object within human society and civilisation. It explains how both anthropology and sociology can provide frameworks for conceptualising that relationship and give us ways to account for our narratives of secularisation – informed by AI development – that see religion as a remnant of a prior, less rational stage of human civilisation.
The use of care robots can reduce demands for manpower in long-term care facilities. Further, care robots serve the needs of both the elders residing in care facilities and the staff of the facilities. This chapter considers the following issues for care robots. First, should long-term care robots be required to meet the high standards for the use of medical devices found in current regulations? Second, how should standards of use be developed for care robots based on the characteristics of the robots? On this question, I note that in Japan, a public–private partnership has shown success in the regulation of care robots. Third, how should we protect the privacy of elders, and of the relatives or friends in contact with care robots, given that the elderly may have reduced cognitive ability? And lastly, what legal and ethical concerns apply to the design of the interfaces between care robots and elders?
When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot has the ability to form, act on, and explain its moral decisions. If such a robot falls short of the basic moral standards expected by society, labeling it as a criminal can serve criminal law’s function of censuring wrongful conduct and ease the emotional harm suffered by human victims. Moreover, imposing criminal liability on robots could have significant instrumental value in certain cases, such as in identifying culpable humans. However, this does not exempt the manufacturers, trainers, or owners of the robots from any potential criminal liability.
Even the most market-oriented approaches to regulating AI-enabled robots presume some governmental regulator to collaborate in setting outcome goals. And the more advanced an AI-enabled robot becomes, the greater the need for oversight. For the past several decades, regulatory oversight boards have grown in use to promote the quality, transparency, and accountability of regulatory rules and policy. And recently, leading administrative law voices in the United States and the European Union have called for the creation of an oversight board to monitor regulator use of AI entities. How do we determine if these boards are worth the risks they create? To answer this question, this chapter uses the context of AI-enabled robots, which are increasingly prominent in homes, businesses, and education, to explain both when regulatory oversight boards are valuable and when they can frustrate society’s efforts to reduce the ill effects of emerging smart robots. Regulatory oversight boards create value in this context by conducting impact assessments of regulatory policies with an eye to the technological advancements and social context relevant to AI technologies such as robots, and oversight boards can promote regulatory foresight. However, oversight boards themselves pose risks: by influencing the methodological approach used by regulators, errors made by oversight boards can have outsized impacts. To determine whether any given oversight board is worth the risk it creates, this chapter sets out a simple cost-based approach for comparing the risks and benefits of regulatory oversight boards. This approach is then applied to emerging regulatory oversight boards concerned with robots entering society.
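The chapter's simple cost-based approach can be illustrated with a toy expected-value comparison. Everything below is a hypothetical sketch: the probabilities, dollar amounts, and the `expected` helper are invented for illustration and do not come from the chapter itself.

```python
# Hypothetical expected-value comparison for one oversight board.
# All probabilities and dollar figures are invented toy numbers.
benefits = [
    (0.6, 5_000_000.0),   # chance the board improves rule quality via impact assessment
    (0.4, 2_000_000.0),   # chance the board delivers regulatory foresight on emerging robots
]
costs = [
    (1.0, 1_500_000.0),   # certain administrative overhead of running the board
    (0.2, 8_000_000.0),   # risk of outsized harm if the board errs methodologically
]

def expected(value_pairs):
    """Expected value of a list of (probability, amount) pairs."""
    return sum(p * v for p, v in value_pairs)

# The board is "worth the risk" under this sketch only if net > 0.
net = expected(benefits) - expected(costs)
print(net)  # → 700000.0
```

Under these invented numbers the board narrowly justifies itself; the point of the sketch is only that the comparison reduces to weighing probability-weighted benefits against probability-weighted costs, including low-probability but outsized errors.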
The chapter examines a classic subject of HRI, social robotics, and the law, namely the design, manufacture, and use of humanoid AI robots for healthcare. The aim is to illustrate a new set of legal challenges that are unique to these systems when deployed in outer space. Such challenges may require the adoption of new legal standards, in particular, either sui generis standards of space law, or stricter or more flexible standards for HRI in space missions, down to a new “principle of equality” between human standards and robotic standards in outer space. The chapter complements current discussions and initiatives on the development of standards for the use of humanoid AI robots in health law, consumer law, and cybersecurity by considering the realignment of terrestrial standards that we may thus expect with the increasing use of humanoid AI systems in space journeys. The assumption is that breathtaking advancements in AI and robotics, current trends in the privatization of space, and the evolution of current regulatory frameworks, in space law and beyond, will put the development of these new legal standards in the spotlight.
This chapter explores the intersection of Hindu philosophy and practice with the development of artificial intelligence (AI). The chapter first introduces aspects of technological growth in Hindu contexts, including the reception of ‘Western’ ideas about AI in Hindu communities, before describing key elements of the Hindu traditions. It then shows how AI technologies can be conceived of from a Hindu perspective and moves from there to the philosophical contributions Hinduism offers for global reflection on AI. Specifically, the chapter describes openings and contentions for AI in Hindu rituals. The focus is the use of robotics and/or AI in Hindu pūjā (worship of gods) and the key practice of darśan (mutual seeing) with the divine. Subsequently, the chapter investigates how Hindu philosophers have engaged with the distinctive qualities of human beings through their investigations into bodies, minds and consciousness/awareness. The chapter concludes by raising questions for future research.
This chapter introduces the construct of anthropomorphism and highlights its relevance for human–robot interaction (HRI) research. The chapter reviews existing definitions of anthropomorphism and distinguishes it from anthropomorphization. It further discusses established theoretical models of anthropomorphism and their respective empirical support (or lack thereof). Moreover, we address consequences of anthropomorphism, especially for HRI. We shed light on different ways to measure anthropomorphism in HRI, discussing the respective advantages and disadvantages of these measurement approaches. Finally, the present overview offers reflections on the added value of taking anthropomorphism and anthropomorphization into account in HRI research.
Online reviews have a significant impact on the purchasing decisions of potential consumers. Positive reviews often sway buyers, even when faced with higher prices. This phenomenon has given rise to a deceptive industry dedicated to crafting counterfeit reviews. Companies frequently procure fake reviews in bulk, employing them to tarnish their rivals’ reputations or artificially bolster their own credibility. These spurious reviews are produced by automated systems or compensated individuals. Detecting fake reviews is therefore becoming increasingly important, as their deceptive nature makes them extremely difficult for humans to identify. To address this issue, current work has focused on machine learning and deep learning techniques for identifying fake reviews. However, these approaches have several limitations, including a lack of sufficient training data, inconsistent accuracy across different datasets, concept drift, and an inability to keep pace with new methods of creating fake reviews that evolve over time. The objective of this review paper is to identify the gaps in existing research on fake review detection and to propose future directions. The paper provides a current, comprehensive overview and analysis of research efforts, covering the techniques employed so far, the distinguishing characteristics they exploit, and the existing datasets used.
In recent years, the embodiment of AI in the form of a robot has brought forth new challenges in privacy and transparency. A cognitive robot must be able to integrate multiple tasks in its performance, which requires collecting a great amount of data and using various AI techniques. As robotics technology continues to advance, socially assistive humanoid robots will play a more central role in interacting with humans. However, can we trust robots in social contexts? How can we design embedded AI robots so that they are more transparent and trustworthy? And what technical, legal, and ethical frameworks might we adopt to build a trust relationship with robots? This chapter discusses these fundamental questions concerning privacy and transparency in human–robot interaction. It then proposes possible ethical and regulatory responses to address them.
With technological advancements occurring at a rapid pace in the fields of computing, robotics, and artificial intelligence, major changes have taken place in the robotics industry. These changes have led to what some have termed the “robotics revolution,” which has had a major impact on social organizations, the economy, and, as discussed within this chapter, human rights for industry and service workers. The emergence of AI-enabled robotics has begun to change the world in major ways, challenging the law both within and among nation states. In that context, the use of intelligent service and industrial robots has broad applications for the large and small industries that use robots as a source of labor. For example, intelligent service robots are used in the fields of healthcare and medicine, transportation, and care of the elderly and children. Further, police and security services also use robots for crowd control and surveillance purposes. However, while developments in robotics have provided numerous benefits to society, they have also brought forth many issues that challenge social, moral, and professional norms within society. As a result, the ever-increasing growth and development of robotic technology in various industries is challenging current legal schemes in fundamental ways, one of which is human rights law. As an example, the use of industrial and service robots can lead to employment insecurity, threats to the health and safety of workers, and privacy concerns. Further, the use of robots in industry and for the delivery of services can be inconsistent with other human rights, such as the right to health and safety, the right to equality of opportunity, the right to employment and fair working conditions, the right to life, the right to association, and the prohibition against discrimination, all of which are supported in international and regional human rights documents.
Artificial intelligence (AI) is presented as a portal to more liberative realities, but its broad implications for society, and for certain groups in particular, require more critical examination. This chapter takes a specifically Black theological perspective to consider the scepticism within Black communities around narrow applications of AI as well as the more speculative ideas about these technologies, for example general AI. Black theology’s perpetual push towards Black liberation, combined with womanism’s invitation to participate in processes that reconstitute Black quality of life, have perfectly situated Black theological thought for discourse around artificial intelligence. Moreover, there are four particular categories where Black theologians and religious scholars have already broken ground and might be helpful to religious discourse concerning Blackness and AI. Those areas are: white supremacy; surveillance and policing; consciousness; and God. This chapter engages several scholars and perspectives within the field of Black theology and points to potential areas of concern and avenues for future theological exploration.
The use of robots and artificial intelligence is expanding and changing every day. These exciting developments, especially in areas such as engineering, industry, education, and health, have begun to influence the legal world and have become the grounds for many important discussions on the future of law and technology. One of these debates is the question of whether robot judges can take part in a trial, which is the subject of this chapter. Although this problem was previously described as a “distant dream,” there are important examples of this issue on the way to becoming a reality today. Considering developments in AI-enabled and humanoid robots, the following question is posed: “Can robot judges replace human judges?” As a current example, in the “Internet Courts” in China, the robot judge takes the form of a humanoid 3D image of a woman, modeled on human judges. For this reason, it is important to weigh the positive and negative consequences of robot judges, a development likely to become widespread, to ensure that the law does not fall behind technological developments.
Chapter 7 highlights key concepts in Decentralized Finance (DeFi) and compares it to traditional finance. It discusses major DeFi applications such as decentralized exchanges, lending/borrowing platforms, derivatives, prediction markets, and stablecoins. DeFi offers advantages, including open access, transparency, programmability, and composability. It enables peer-to-peer financial transactions without intermediaries, unlocking financial inclusion, efficiency gains, and innovation. However, risks such as smart contract vulnerabilities, price volatility, regulatory uncertainty, and lack of accountability persist. As DeFi matures, enhanced governance, security audits, regulation, and insurance will be vital to address these challenges. DeFi is poised to reshape finance if balanced with prudence. Important metrics for tracking growth include total value locked, trading volumes, active users, and loans outstanding. Research tools such as Dune Analytics, DeFi Llama, and DeFi Pulse provide data-driven insights. Overall, DeFi represents a profoundly transformative blockchain application, but responsible evolution is key.
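The headline growth metric mentioned above, total value locked (TVL), is simply the USD value of all assets deposited in a protocol's contracts. The sketch below computes it from toy data; the protocol names, tokens, and prices are invented for illustration and are not real market figures.

```python
# Invented snapshot of deposits across two hypothetical protocols.
positions = [
    {"protocol": "LendingApp", "token": "ETH",  "amount": 120.0},
    {"protocol": "LendingApp", "token": "USDC", "amount": 250_000.0},
    {"protocol": "DexApp",     "token": "ETH",  "amount": 80.0},
]
# Illustrative spot prices in USD, not real market data.
prices_usd = {"ETH": 2_000.0, "USDC": 1.0}

def total_value_locked(positions, prices):
    """Sum the USD value of all deposited assets across all protocols."""
    return sum(p["amount"] * prices[p["token"]] for p in positions)

def tvl_by_protocol(positions, prices):
    """Break TVL down per protocol, as dashboards like DeFi Llama do."""
    tvl = {}
    for p in positions:
        tvl[p["protocol"]] = tvl.get(p["protocol"], 0.0) + p["amount"] * prices[p["token"]]
    return tvl

print(total_value_locked(positions, prices_usd))  # → 650000.0
```

Real trackers aggregate the same calculation over on-chain balances and live price feeds; the per-protocol breakdown is what lets analysts compare, say, lending platforms against decentralized exchanges.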