Large-scale atmospheric circulation patterns, so-called weather regimes, modulate the occurrence of extreme events such as heatwaves or extreme precipitation. In their role as mediators between long-range teleconnections and local impacts, weather regimes have demonstrated potential in improving long-term climate projections as well as sub-seasonal to seasonal forecasts. However, existing methods for identifying weather regimes are not specifically designed to capture the relevant physical processes responsible for variations in the impact variable in question. This paper introduces a novel probabilistic machine learning method, RMM-VAE, for identifying weather regimes targeted to a local-scale impact variable. Based on a variational autoencoder architecture, the method combines non-linear dimensionality reduction with a prediction task and probabilistic clustering in one coherent architecture. The new method is applied to identify circulation patterns over the Mediterranean region targeted to precipitation over Morocco and compared to three existing approaches: two established linear methods and another machine learning approach. The RMM-VAE method identifies regimes that are more predictive of the target variable than the two linear methods, both in terms of terciles and extremes in precipitation, while also improving the reconstruction of the input space. Further, the regimes identified by the RMM-VAE method are also more robust and persistent than those identified by the alternative machine learning method. The results demonstrate the potential benefit of the new method for use in various climate applications such as sub-seasonal forecasting, and illustrate the trade-offs involved in targeted clustering.
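The tercile-based evaluation mentioned in the abstract can be illustrated with a minimal pure-Python sketch: precipitation values are binned into lower, middle, and upper terciles, producing the categories against which a regime classification can then be scored. The function name and thresholding scheme here are illustrative assumptions, not the paper's code.

```python
def tercile_category(series):
    """Assign each value to tercile 0 (dry), 1 (normal), or 2 (wet),
    with thresholds taken from the empirical distribution of the series.
    Illustrative helper, not the RMM-VAE paper's implementation."""
    ranked = sorted(series)
    n = len(ranked)
    lower = ranked[n // 3]        # boundary between lower and middle tercile
    upper = ranked[(2 * n) // 3]  # boundary between middle and upper tercile
    return [0 if v < lower else (2 if v >= upper else 1) for v in series]

# Toy daily precipitation series (mm); two values fall in each tercile.
precip = [0.0, 1.2, 5.4, 0.3, 9.8, 2.1]
print(tercile_category(precip))  # [0, 1, 2, 0, 2, 1]
```

A regime method is then "more predictive of the target" to the extent that knowing the regime on a given day narrows down which of these three categories the precipitation falls into.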
In the context of the ongoing biodiversity crisis, understanding forest ecosystems, their tree species composition, and especially the successional stages of their development is crucial. They collectively shape the biodiversity within forests and thereby influence the ecosystem services that forests provide, yet this information is not readily available on a large scale. Remote sensing techniques offer promising solutions for obtaining area-wide information on tree species composition and their successional stages. While optical data are often freely available in appropriate quality over large scales, obtaining light detection and ranging (LiDAR) data, which provide valuable information about forest structure, is more challenging. LiDAR data are mostly acquired by public authorities across several years and are therefore heterogeneous in quality. This study aims to assess whether heterogeneous LiDAR data can support area-wide modeling of forest successional stages at the tree species group level. Different combinations of spectral satellite data (Sentinel-2) and heterogeneous airborne LiDAR data, collected by the federal government of Rhineland-Palatinate, Germany, were utilized to model up to three different successional stages of seven tree species groups. When incorporating heterogeneous LiDAR data into random forest models with spatial variable selection and spatial cross-validation, significant accuracy improvements of up to 0.23 were observed. This study shows the potential of not dismissing heterogeneous LiDAR data that initially seem unusable for ecological studies. We advocate for a thorough examination to determine their usefulness for model enhancement. A practical application of this approach is demonstrated in the context of mapping successional stages of tree species groups at a regional level.
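The spatial cross-validation mentioned above can be sketched in a few lines: samples are assigned to folds by spatial block rather than at random, so that nearby, spatially autocorrelated samples do not end up in both the training and the test set. The blocking scheme and function name below are hypothetical illustrations, not the study's actual procedure.

```python
def spatial_folds(coords, block_size):
    """Map each (x, y) sample coordinate to a fold id based on the
    spatial grid block it falls into. Samples in the same block always
    share a fold, which is the core idea of spatial cross-validation."""
    blocks = {}          # grid cell -> fold id, assigned in order of appearance
    fold_of_sample = []
    for x, y in coords:
        block = (int(x // block_size), int(y // block_size))
        if block not in blocks:
            blocks[block] = len(blocks)
        fold_of_sample.append(blocks[block])
    return fold_of_sample

# Two pairs of nearby samples, ~10 units apart: each pair lands in its own fold.
coords = [(0.5, 0.5), (0.9, 0.1), (10.2, 0.3), (10.8, 0.7)]
print(spatial_folds(coords, block_size=10))  # [0, 0, 1, 1]
```

Holding out whole blocks at a time gives a more honest accuracy estimate for area-wide mapping than random splits, which leak spatial structure between training and test data.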
This chapter reflects on the emerging uses of Emotional Artificial Intelligence (EAI) Systems in cars and the future regulatory implications for human–machine interactions in vehicles. The car has sociocultural importance as an everyday context of technology deployment, and we begin by reflecting on the ways the automotive sector is changing to incorporate EAI more widely. This is primarily due to shifts in safety legislation, alongside advances in biometrics and vehicle automation. This change is creating hybrid environments of human/nonhuman interactions, where emotion sensing systems foster increasingly entangled relationships with humans and cars. Laws such as the proposed EU Artificial Intelligence Act and EU Vehicle Safety Regulation are driving the design of more trustworthy, privacy-preserving, and safe AI systems. We reflect on legal implications arising from sensing human in-cabin behaviors and expressions, alongside risks posed by AI systems. We conclude by reflecting on the challenges of designing for trustworthy, human-centered interactions.
In 2017, Microsoft founder Bill Gates recommended taxing robots to slow the pace of automation. It has been estimated that up to 47 percent of U.S. jobs are at risk from advancements in artificial intelligence that have increased the rate of automation. While employment changes due to automation are not new, advances in artificial intelligence embedded within robots threaten many more jobs much more quickly than historic automation did. The chapter discusses how accelerated automation presents a revenue problem for governments. The revenue problem exists because the tax system is designed to tax labor more heavily, as labor is less likely to be able to avoid taxation. Capital investment, on the other hand, is taxed more lightly because capital is mobile and can escape taxation. When capital becomes labor, as in robotic automation, the bottom falls out of the system. With this background in mind, the Tax Cuts and Jobs Act (TCJA), enacted in 2017, significantly cut the U.S. corporate tax rate, from 35 percent to 21 percent. In addition, TCJA increased tax benefits for purchasing equipment (which would include automation in the form of robots), significantly enhancing bonus depreciation. The 2017 tax legislation continued and deepened the existing tax bias toward automation. This chapter explores policy options for solving the revenue problem.
While we call programs that are new and exciting ‘artificial intelligence’ (AI), the ultimate goal – to produce an artificial general intelligence that can equal human intelligence – always seems to be in the future. AI can, thus, be viewed as a millenarian project. Groups predicting the second coming of Christ or some other form of salvation have flourished in times of societal stress, as they promise a solution to current problems that is delivered from outside. Today, we project both our hopes and our fears onto AI. Utopian visions range from the personally soteriological prospect of uploading our brains to a vision of a world in which AI has found solutions to our problems. Dystopian scenarios involve the creation of a superintelligent AI that slips from our control or is used as a weapon by malicious actors. Will AI save us or destroy us? Probably neither, but as we shape the trajectory of its future, we also shape our own.
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other and access resources.
Artificial intelligence (AI) as an object and term remains enmeshed in our imaginaries, narratives, institutions and aspirations. AI has that in common with the other object of discussion in this Cambridge Companion: religion. But beyond such similarities in form and reception, we can also speak to how entangled these two objects have been, and are yet still becoming, with each other. This introductory chapter explores the difficulty of definitions and the intricacies of the histories of these two domains and their entanglements. It initially explores this relationship through the religious narratives and tropes that have had a role to play in the formation of the field of AI, in its discursive modes. It examines the history of AI and religion through the language and perspectives of some of the AI technologists and philosophers who have employed the term ‘religion’ in their discussions of the technology itself. Further, this chapter helps to set the scene for the larger conversation on religion and AI of this volume by demonstrating some of the tensions and lacunae that the following chapters address in greater detail.
Humanlike robots, based on their behavior and physical appearance, are becoming an increasingly important part of society, often interacting with individuals in a wide variety of social contexts. One emerging class of robots that socialize with humans comprises robots that are capable of expressing emotions, humanoid in appearance, and anthropomorphized by users. For such robots I propose that how humans interact with them is represented as a four-way process depending on the following: (1) The context of the situation surrounding the interaction; (2) The effort that users make to comprehend the robotic technology, especially in a particular context; (3) The process of the adaptation of robotic technology (by users or groups of users) to incorporate robots into their lives, practices, and work routines; and (4) The transformation of the technology and its subsequent meaning to the user. One aspect of the above points is that humans receive significant benefits by interacting with robots in different contexts. For example, by applying (or appropriating) the robot’s abilities to perform various tasks, humans may increase their own physical and intellectual abilities. However, unlike human creativity and innovation, which are a product of human cognition, robot creativity is based on the use of algorithms and software, in which robots appropriate data to perform tasks. Further, a robot’s ability to express emotion and a personality may influence humans in a variety of ways; for example, the conditions under which they appropriate the robot’s labor or creative output for a particular reason. As I argue in this chapter, “robot appropriation” can lead to legal constraints and regulations between human and robot. On this point, I note that the continental legal order, to which the Russian legal system belongs, has developed a robust method for the conscious exercise of law.
Based on the Russian legal system and the approach taken by other jurisdictions, an AI-enabled robot can only receive what are described as independent rights that are different in substance from the rights granted to natural persons. Taking a broad scope in this chapter, I propose that regulations are required for various human–robot interactions, and I discuss several examples of this approach.
This chapter focuses on the growing inclusion of social robots in therapy from the perspective of unresolved legal and ethical issues that include risks to patient autonomy, human dignity and trust, the potentially life-threatening effects of inaccurate or malfunctioning technology, diminished privacy due to the reliance on enormous amounts of personal (sensitive health) data, new challenges to data security due to the cyber–physical nature of robots, and the problem of how to obtain informed consent to medical treatments that depend on opaque AI decision-making. From this broad spectrum, the chapter focuses on the protection of the health and safety of patients and care recipients under EU law. A more detailed analysis shows that neither the Medical Device Regulation nor the proposal for an Artificial Intelligence Act adequately address the risks to patient health and safety that arise from human–machine interaction. Against this backdrop, the chapter provides recommendations as to which aspects should be regulated in the future and argues for a public discussion about the extent to which we, as a society, should replace human therapists with AI-enabled robotic technology.
This chapter addresses some of the scientific, philosophical and theological arguments brought to bear on the debates surrounding human–robot relationships. Noting that we define robots through our relationships with them, it shows how factors such as emotion and agency can indicate things such as a theory of mind that condition users to expect reciprocal relationships that model a sense of partnership. These factors are important in ‘lovotics’, or a trend in social robotics to produce robots that people want to develop relationships with. Such relationships, however, at least given current capabilities in robotics, will always fall short of conditioned expectations because robots, rather than being full partners, are largely reducible to the self or user. The chapter introduces the notions of anthropomorphism and anthropocentrism to demonstrate these critiques, and then moves on to consider alternative figurations of relationships – drawing in particular on articulations of relationality – that may enable us to rethink how we image and imagine robots.
Chapter 11 envisions the future potential of Web3 technologies in reshaping the web. It covers key areas such as generative AI, DeFi, mobile apps, cloud infrastructure, and the Metaverse. In DeFi, the focus is on scalability, interoperability, regenerative finance, decentralized identity, and its integration with social networks. The convergence of generative AI and Web3 is examined through case studies and applications, while mobile apps are explored as nodes for consensus algorithms, providing decentralized and secure networks. The impact of Web3 on cloud infrastructure includes decentralized storage, blockchain-based authentication and authorization, decentralized computing resources, and token-based incentives. Lastly, the chapter delves into the Metaverse, discussing decentralized ownership, token economies, identity and privacy considerations, interoperability, and decentralized governance. Through these explorations, the chapter highlights the transformative potential of Web3 in fostering decentralization, inclusivity, and innovation in the digital era.
In this chapter we review the status of human–robot interaction (HRI) including current research directions within robotics that may impact issues of law, policy, and regulations. While the focus of this book is on HRI experienced in social contexts, to provide a broad review of the legal and policy issues impacted by HRI, we discuss different areas of robotics that require various levels of human interaction and supervisory control of robots. We note that robots have evolved from continuous human-controlled master–slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence (AI), which are under human supervisory control but becoming more autonomous. Further, we note that research on human interaction with robots is a rapidly evolving field and specialized robots under human teleoperation have proven successful in hazardous environments and for medical and other applications. There is also a noticeable trend for more humanoid-appearing and AI-enabled robots interacting with humans in social contexts, and for this class of robots we discuss emerging issues of law, regulations, and policy.
The global and historical entanglements between artificial intelligence (AI)/robotic technologies and Buddhism, as a lived religion and philosophical tradition, are significant. This chapter sets out three key sites of interaction between Buddhism and AI/robotics. First, Buddhism serves as an ontological model of mind (and body) that describes the conditions for what constitutes artificial life. Second, Buddhism defines the boundaries of moral personhood and thus the nature of interactions between human and non-human actors. And finally, Buddhism can be used as an ethical framework to regulate and direct the development of AI/robotics technologies. The chapter argues that Buddhism provides an approach to technology that is grounded in the interdependence of all things, and this gives rise to both compassion and an ethical commitment to alleviate suffering.
Could robots be recognized as legal persons? Should they? Much of the discussion of these topics is distorted by fictional representations of what form true artificial intelligence (AI) might take – in particular that it would be of human-level intellect and be embodied in humanoid form. Such robots are the focus of this volume, with the possibility that external appearance and its echoes in science fiction may shape debate over their “rights.” Most legal systems would be able to grant some form of personality, yet early considerations of whether they should conflate two discrete rationales. The first is instrumental, analogous to the economic reasons why corporations are granted personality. The second is inherent, linked to the manner in which human personality is recognized. Neither is sufficient to justify legal personality for robots today. A third reason, which may become more pressing in the medium term, is tied to the possibility of AI systems that far surpass humans in terms of ability. In the event that such entities are created, the question may shift from whether we recognize them under the law, to whether they recognize us.
Chapter 4 reviews frequently used machine learning evaluation procedures. In particular, it presents popular evaluation metrics for binary and multi-class classification (e.g., accuracy, precision/recall, ROC analysis), regression analysis (e.g., mean squared error, root mean squared error, R-squared), and clustering (e.g., the Davies–Bouldin Index). It then reviews popular resampling approaches (e.g., holdout, cross-validation) and statistical tests (e.g., the t-test and the sign test). It concludes with an explanation of why it is important to go beyond these well-known methods in order to achieve reliable evaluation results in all cases.
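A few of the metrics the chapter covers can be written out directly from their definitions. The sketch below gives pure-Python versions of accuracy, precision/recall, mean squared error, and R-squared; the function names are my own shorthand, not the book's notation.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall(y_true, y_pred, positive=1):
    """Precision (TP / (TP + FP)) and recall (TP / (TP + FN))
    for a binary classification task."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def mse(y_true, y_pred):
    """Mean squared error for regression."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(accuracy(y_true, y_pred))         # 4 of 6 correct -> 0.666...
print(precision_recall(y_true, y_pred)) # (0.75, 0.75)
```

Resampling procedures such as holdout and cross-validation then determine *which* pairs of true and predicted values these metrics are computed over.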
Modern law has developed based on the fundamental principle of respecting each person as an autonomous individual. However, the meaning of ‘autonomous’ is not necessarily unambiguous even in jurisprudence, which has resulted in its use in various contexts. In this chapter, I examine the differences between personal autonomy and machine autonomy. Then I attempt to reconstruct the concept by exploring legal issues concerning privacy and freedom of expression when people use robots. As this chapter discusses, personal autonomy and machine autonomy differ considerably from each other, even when using the same concept of autonomy. Given the state of human–robot interaction (HRI) technology, it is appropriate to understand machine autonomy as having instrumental value as a means of serving personal autonomy. Furthermore, it is argued that the development of HRI will likely call for the implementation of a system of checks and balances among multiple robots to ensure autonomy as collective self-regulation for groups of robots connected to each other through information communication networks, rather than ensuring the autonomy of individual robots.