The hierarchical refinement approach in the previous two chapters requires a priori domain knowledge of the methods, action models, and heuristics used by RAE and UPOM. This chapter addresses the use of machine learning techniques to synthesize planning heuristics and domain knowledge. It illustrates the "planning to learn" paradigm for learning domain-dependent heuristics to guide RAE and UPOM. Given methods and a sample function, UPOM generates near-optimal choices that are taken as targets by a deep Q-learning procedure. The chapter also shows how to synthesize methods for tasks using hierarchical reinforcement learning techniques.
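To make the learning setup concrete, the following sketch shows one way the described procedure could be organised: a small network scores state–method pairs and is regressed toward the near-optimal value estimates returned by UPOM rollouts, yielding a learned heuristic for RAE. This is a minimal illustration, not the chapter's implementation; the planner interface (upom_estimate) and the tensor encodings of states and methods are hypothetical placeholders.

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Scores an encoded (state, method) pair; serves as a learned heuristic."""
    def __init__(self, state_dim: int, method_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + method_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state_vec, method_vec):
        # Concatenate the two encodings and output a scalar score per pair.
        return self.net(torch.cat([state_vec, method_vec], dim=-1)).squeeze(-1)

def train_step(qnet, optimizer, states, methods, upom_estimate):
    """One supervised update: regress Q(s, m) toward UPOM's rollout estimate.

    `upom_estimate` is a hypothetical stand-in for running UPOM with a sample
    function and returning its near-optimal value for each (state, method) pair.
    """
    targets = upom_estimate(states, methods)
    loss = nn.functional.mse_loss(qnet(states, methods), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At acting time, RAE could rank the applicable methods for a task by the network's score instead of (or before) invoking UPOM, trading online planning time for a learned, domain-dependent heuristic.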
AI's next big challenge is to master the cognitive abilities needed by intelligent agents that perform actions. Such agents may be physical devices such as robots, or they may act in simulated or virtual environments through graphic animation or electronic web transactions. This book is about integrating and automating these essential cognitive abilities: planning what actions to undertake and under what conditions, acting (choosing what steps to execute, deciding how and when to execute them, monitoring their execution, and reacting to events), and learning about ways to act and plan. This comprehensive, coherent synthesis covers a range of state-of-the-art approaches and models – deterministic, probabilistic (including MDP and reinforcement learning), hierarchical, nondeterministic, temporal, spatial, and LLMs – and applications in robotics. The insights it provides into important techniques and research challenges will make it invaluable to researchers and practitioners in AI, robotics, cognitive science, and autonomous and interactive systems.
We suggest that foundation models are general purpose solutions similar to general purpose programmable microprocessors, where fine-tuning and prompt-engineering are analogous to coding for microprocessors. Evaluating general purpose solutions is not like hypothesis testing. We want to know how well the machine will perform on an unknown program with unknown inputs for unknown users with unknown budgets and unknown utility functions. This paper is based on an invited talk by John Mashey, “Lessons from SPEC,” at an ACL-2021 workshop on benchmarking. Mashey started by describing the Standard Performance Evaluation Corporation (SPEC), a benchmark that has had more impact than benchmarks in our field because SPEC addresses an important commercial question: which CPU should I buy? In addition, SPEC can be interpreted to show that CPUs are 50,000 times faster than they were 40 years ago. It is remarkable that we can make such statements without specifying the program, users, task, dataset, etc. It would be desirable to make quantitative statements about improvements of general purpose foundation models over years/decades without specifying tasks, datasets, use cases, etc.
This chapter introduces social scientific perspectives and methods applicable to observing the relationship between artificial intelligence (AI) and religion. It discusses the contributions that anthropological and sociological approaches can make to this entanglement of two modern social phenomena while also drawing attention to the inherent biases and perspectives that both fields bring with them due to their histories. Examples of research on religion and AI are highlighted, especially when they demonstrate agile and new methodologies for engaging with AI in its many applications, including but not limited to online worlds, multimedia formats, games, social media and the new spaces made by technological innovations such as the platforms underpinning the gig economy. All these AI-enabled spaces can be entangled with religious and spiritual conceptions of the world. This chapter also aims to expand upon the relationship between AI and religion as it is perceived as a general concept or object within human society and civilisation. It explains how both anthropology and sociology can provide frameworks for conceptualising that relationship and give us ways to account for our narratives of secularisation – informed by AI development – that see religion as a remnant of a prior, less rational stage of human civilisation.
This chapter explores the intersection of Hindu philosophy and practice with the development of artificial intelligence (AI). The chapter first introduces aspects of technological growth in Hindu contexts, including the reception of ‘Western’ ideas about AI in Hindu communities, before describing key elements of the Hindu traditions. It then shows how AI technologies can be conceived of from a Hindu perspective and moves from there to the philosophical contributions Hinduism offers for global reflection on AI. Specifically, the chapter describes openings and contentions for AI in Hindu rituals. The focus is the use of robotics and/or AI in Hindu pūjā (worship of gods) and the key practice of darśan (mutual seeing) with the divine. Subsequently, the chapter investigates how Hindu philosophers have engaged with the distinctive qualities of human beings through their inquiries into body, mind and consciousness/awareness. The chapter concludes by raising questions for future research.
Artificial intelligence (AI) is presented as a portal to more liberative realities, but its broad implications for society, and for certain groups in particular, require more critical examination. This chapter takes a specifically Black theological perspective to consider the scepticism within Black communities around narrow applications of AI as well as the more speculative ideas about these technologies, for example general AI. Black theology’s perpetual push towards Black liberation, combined with womanism’s invitation to participate in processes that reconstitute Black quality of life, have perfectly situated Black theological thought for discourse around artificial intelligence. Moreover, there are four particular categories where Black theologians and religious scholars have already broken ground and might be helpful to religious discourse concerning Blackness and AI. Those areas are: white supremacy, surveillance and policing, consciousness, and God. This chapter engages several scholars and perspectives within the field of Black theology and points to potential avenues for future theological areas of concern and exploration.
While we call programs that are new and exciting ‘artificial intelligence’ (AI), the ultimate goal – to produce an artificial general intelligence that can equal human intelligence – always seems to be in the future. AI can, thus, be viewed as a millenarian project. Groups predicting the second coming of Christ or some other form of salvation have flourished in times of societal stress, as they promise a solution to current problems that is delivered from outside. Today, we project both our hopes and our fears onto AI. Utopian visions range from the personally soteriological prospect of uploading our brains to a vision of a world in which AI has found solutions to our problems. Dystopian scenarios involve the creation of a superintelligent AI that slips from our control or is used as a weapon by malicious actors. Will AI save us or destroy us? Probably neither, but as we shape the trajectory of its future, we also shape our own.
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other and access resources.
Artificial intelligence (AI) as an object and term remains enmeshed in our imaginaries, narratives, institutions and aspirations. AI has that in common with the other object of discussion in this Cambridge Companion: religion. But beyond such similarities in form and reception, we can also speak to how entangled these two objects have been, and are yet still becoming, with each other. This introductory chapter explores the difficulty of definitions and the intricacies of the histories of these two domains and their entanglements. It initially explores this relationship through the religious narratives and tropes that have had a role to play in the formation of the field of AI, in its discursive modes. It examines the history of AI and religion through the language and perspectives of some of the AI technologists and philosophers who have employed the term ‘religion’ in their discussions of the technology itself. Further, this chapter helps to set the scene for the larger conversation on religion and AI of this volume by demonstrating some of the tensions and lacunae that the following chapters address in greater detail.
This chapter addresses some of the scientific, philosophical and theological arguments brought to bear on the debates surrounding human–robot relationships. Noting that we define robots through our relationships with them, it shows how factors such as emotion and agency can indicate a theory of mind, conditioning users to expect reciprocal relationships that model a sense of partnership. These factors are important in ‘lovotics’, a trend in social robotics to produce robots that people want to develop relationships with. Such relationships, however, at least given current capabilities in robotics, will always fall short of conditioned expectations because robots, rather than being full partners, are largely reducible to the self or user. The chapter introduces the notions of anthropomorphism and anthropocentrism to demonstrate these critiques, and then moves on to consider alternative figurations of relationships – drawing in particular on articulations of relationality – that may enable us to rethink how we image and imagine robots.
The global and historical entanglements between artificial intelligence (AI)/robotic technologies and Buddhism, as a lived religion and philosophical tradition, are significant. This chapter sets out three key sites of interaction between Buddhism and AI/robotics. First, Buddhism offers an ontological model of mind (and body) that describes the conditions for what constitutes artificial life. Second, Buddhism defines the boundaries of moral personhood and thus the nature of interactions between human and non-human actors. Finally, Buddhism can be used as an ethical framework to regulate and direct the development of AI/robotics technologies. The chapter argues that Buddhism provides an approach to technology that is grounded in the interdependence of all things, and this gives rise to both compassion and an ethical commitment to alleviate suffering.
One of the ways in which artificial intelligence can be a useful tool in the scientific study of religion is in developing a computational model of how the human mind is deployed in spiritual practices. It is a helpful first step to develop a precise cognitive model using a well-specified cognitive architecture. So far, the most promising architecture for this purpose is the Interacting Cognitive Subsystems of Philip Barnard, which distinguishes between two modes of central cognition: intuitive and conceptual. Cognitive modelling of practices such as mindfulness and the Jesus Prayer involves a shift in central cognition from the latter to the former, though that is achieved in slightly different ways in different spiritual practices. The strategy here is to develop modelling at a purely cognitive level before attempting full computational implementation. There are also neuropsychological models of spiritual practices which could be developed into computational models.
This chapter lays out the ways that artificial intelligence (AI) might interact with Jewish sources as their relationship develops in the years to come. It divides the scope of the relationship into three parts. First, it engages with questions of moral agency and their potential interactions with Jewish law, and suggests that this path, while enticing, may not be particularly fruitful. Second, it suggests that Jewish historical sources generally distinguish human value from human uniqueness, and that there is therefore quite a bit of room to think of an AI as a person, if we so choose, without damaging the value of human beings. Finally, it considers how Jewish thought might respond to AI as a new height of human innovation, and how the human–AI relationship shares many characteristics with the God–human relationship as imagined in Jewish sources.
Technology has been an integral part of biological life since the inception of terrestrial life. Evolution is the process by which biological life seeks to transcend itself in pursuit of more robust life. This chapter examines transhumanism as the use of technological means to enhance human biological function. Transhumanists see human nature as a work in progress and suggest that by responsible use of science, technology and other rational means, we shall become beings with vastly greater capacities and unlimited potential. Transhumanism has religious implications.