Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both good performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also covers neuromorphic algorithm design and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
The integration of AI into information systems will affect the way users interface with these systems. This exploration of the interaction and collaboration between humans and AI reveals its potential and challenges, covering issues such as data privacy, credibility of results, misinformation, and search interactions. Later chapters delve into application domains such as healthcare and scientific discovery. In addition to providing new perspectives on and methods for developing AI technology and designing more humane and efficient artificial intelligence systems, the book also reveals the shortcomings of artificial intelligence technologies through case studies and puts forward corresponding countermeasures and suggestions. This book is ideal for researchers, students, and industry practitioners interested in enhancing human-centered AI systems and insights for future research.
Knowledge-infused learning directly confronts the opacity of current 'black-box' AI models by combining data-driven machine learning techniques with the structured insights of symbolic AI. This guidebook introduces the pioneering techniques of neurosymbolic AI, which blends statistical models with symbolic knowledge to make AI safer and user-explainable. This is critical in high-stakes AI applications in healthcare, law, finance, and crisis management. The book brings readers up to speed on advancements in statistical AI, including transformer models such as BERT and GPT, and provides a comprehensive overview of weakly supervised, distantly supervised, and unsupervised learning methods alongside their knowledge-enhanced variants. Other topics include active learning, zero-shot learning, and model fusion. Beyond theory, the book presents practical considerations and applications of neurosymbolic AI in conversational systems, mental health, crisis management systems, and social and behavioral sciences, making it a pragmatic reference for AI system designers in academia and industry.
This groundbreaking volume is designed to meet the burgeoning needs of the research community and industry. This book delves into the critical aspects of AI's self-assessment and decision-making processes, addressing the imperative for safe and reliable AI systems in high-stakes domains such as autonomous driving, aerospace, manufacturing, and military applications. Featuring contributions from leading experts, the book provides comprehensive insights into the integration of metacognition within AI architectures, bridging symbolic reasoning with neural networks, and evaluating learning agents' competency. Key chapters explore assured machine learning, handling AI failures through metacognitive strategies, and practical applications across various sectors. Covering theoretical foundations and numerous practical examples, this volume serves as an invaluable resource for researchers, educators, and industry professionals interested in fostering transparency and enhancing reliability of AI systems.
The last decade has seen an exponential increase in the development and adoption of language technologies, from personal assistants such as Siri and Alexa, through automatic translation, to chatbots like ChatGPT. Yet questions remain about what we stand to lose or gain when we rely on them in our everyday lives. As a non-native English speaker living in an English-speaking country, Vered Shwartz has experienced both amusing and frustrating moments using language technologies: from relying on inaccurate automatic translation, to failing to activate personal assistants with her foreign accent. English is the world's foremost go-to language for communication, and mastering it past the point of literal translation requires acquiring not only vocabulary and grammar rules, but also figurative language, cultural references, and nonverbal communication. Will language technologies aid us in the quest to master foreign languages and better understand one another, or will they make language learning obsolete?
AI's next big challenge is to master the cognitive abilities needed by intelligent agents that perform actions. Such agents may be physical devices such as robots, or they may act in simulated or virtual environments through graphic animation or electronic web transactions. This book is about integrating and automating these essential cognitive abilities: planning what actions to undertake and under what conditions, acting (choosing what steps to execute, deciding how and when to execute them, monitoring their execution, and reacting to events), and learning about ways to act and plan. This comprehensive, coherent synthesis covers a range of state-of-the-art approaches and models (deterministic, probabilistic, including MDP and reinforcement learning, hierarchical, nondeterministic, temporal, spatial, and LLMs) and applications in robotics. The insights it provides into important techniques and research challenges will make it invaluable to researchers and practitioners in AI, robotics, cognitive science, and autonomous and interactive systems.
Religion and artificial intelligence are now deeply enmeshed in humanity's collective imagination, narratives, institutions, and aspirations. Their growing entanglement also runs counter to several dominant narratives that engage with long-standing historical discussions regarding the relationship between the 'sacred' and the 'secular', technology and science. This Cambridge Companion explores the fields of Religion and AI comprehensively and provides an authoritative guide to their symbiotic relationship. It examines established topics, such as transhumanism, together with new and emerging fields, notably computer simulations of religion. Specific chapters are devoted to Judaism, Christianity, Islam, Hinduism, and Buddhism, while others demonstrate that entanglements between religion and AI are not always encapsulated by such a paradigm. Collectively, the volume addresses issues that AI raises for religions, and contributions that AI has made to religious studies, especially the conceptual and philosophical issues inherent in the concept of an intelligent machine, and social-cultural work on attitudes to AI and its impact on contemporary life. The diverse perspectives in this Companion demonstrate how all religions are now interacting with artificial intelligence.
Is Artificial Intelligence a more significant invention than electricity? Will it result in explosive economic growth and unimaginable wealth for all, or will it cause the extinction of all humans? Artificial Intelligence: Economic Perspectives and Models provides a sober analysis of these questions from an economics perspective. It argues that to better understand the impact of AI on economic outcomes, we must fundamentally change the way we think about AI in relation to models of economic growth. It describes the progress that has been made so far and offers two ways in which current modelling can be improved: firstly, to incorporate the nature of AI as providing abilities that complement and/or substitute for labour, and secondly, to consider demand-side constraints. Outlining the decision-theory basis of both AI and economics, this book shows how this, and the incorporation of AI into economic models, can provide useful tools for safe, human-centered AI.
This book is designed to provide in-depth knowledge of how search plays a fundamental role in problem solving. Meant for undergraduate and graduate students pursuing courses in computer science and artificial intelligence, it covers a wide spectrum of search methods. Readers will be able to begin with simple approaches and gradually progress to more complex algorithms applied to a variety of problems. It demonstrates that search is all-pervasive in artificial intelligence and equips the reader with the relevant skills. The text starts with an introduction to intelligent agents and search spaces. Basic search algorithms like depth-first search and breadth-first search are the starting points. Then, it proceeds to discuss heuristic search algorithms, stochastic local search, algorithm A*, and problem decomposition. It also examines how search is used in playing board games, deduction in logic, and automated planning. The book concludes with coverage of constraint satisfaction.
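To give a flavor of the basic algorithms the text starts from, here is a minimal sketch of breadth-first search over an explicit graph. The graph, function name, and node labels are illustrative only, not taken from the book:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explores states level by level and
    returns a path with the fewest edges from start to goal."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

# Toy search space: a small directed graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Swapping the FIFO queue for a stack yields depth-first search, and replacing it with a priority queue ordered by cost plus a heuristic yields A*, the progression the book follows.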
AI appears to disrupt key private law doctrines and threatens to undermine some of the principal rights protected by private law. The social changes prompted by AI may also generate significant new challenges for private law. It is thus likely that AI will lead to new developments in private law. This Cambridge Handbook is the first dedicated treatment of the interface between AI and private law, and the challenges that AI poses for private law. It brings together a global team of private law experts and computer scientists to examine issues such as whether existing private law can address the challenges of AI, and whether and how private law needs to be reformed to reduce the risks of AI while retaining its benefits.
Deep Learning is becoming increasingly important in a technology-dominated world. However, the building of computational models that accurately represent linguistic structures is complex, as it involves an in-depth knowledge of neural networks, and the understanding of advanced mathematical concepts such as calculus and statistics. This book makes these complexities accessible to those from a humanities and social sciences background, by providing a clear introduction to deep learning for natural language processing. It covers both theoretical and practical aspects, and assumes minimal knowledge of machine learning, explaining the theory behind natural language in an easy-to-read way. It includes pseudo code for the simpler algorithms discussed, and actual Python code for the more complicated architectures, using modern deep learning libraries such as PyTorch and Hugging Face. Providing the necessary theoretical foundation and practical tools, this book will enable readers to immediately begin building real-world, practical natural language processing systems.
Privacy-preserving computing aims to protect the personal information of users while capitalizing on the possibilities unlocked by big data. This practical introduction for students, researchers, and industry practitioners is the first cohesive and systematic presentation of the field's advances over four decades. The book shows how to use privacy-preserving computing in real-world problems in data analytics and AI, and includes applications in statistics, database queries, and machine learning. The book begins by introducing cryptographic techniques such as secret sharing, homomorphic encryption, and oblivious transfer, and then broadens its focus to more widely applicable techniques such as differential privacy, trusted execution environment, and federated learning. The book ends with privacy-preserving computing in practice in areas like finance, online advertising, and healthcare, and finally offers a vision for the future of the field.
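Among the cryptographic techniques the book introduces, additive secret sharing is the simplest to illustrate: a secret is split into random-looking shares that individually reveal nothing but jointly reconstruct the value. The sketch below is a minimal illustration of that general idea; the function names and the choice of modulus are assumptions, not the book's implementation:

```python
import random

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(secret, n, p=P):
    """Split `secret` into n additive shares modulo p.
    Any n-1 shares are uniformly random and reveal nothing."""
    shares = [random.randrange(p) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % p)  # force shares to sum to secret
    return shares

def reconstruct(shares, p=P):
    """Recover the secret by summing all shares modulo p."""
    return sum(shares) % p

shares = share(42, 3)
print(reconstruct(shares))  # 42
```

Because the shares sum component-wise, parties can add shared values without ever seeing them, which is the seed of the secure-computation protocols the book builds up to.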
Distributional semantics develops theories and methods to represent the meaning of natural language expressions, with vectors encoding their statistical distribution in linguistic contexts. It is at once a theoretical model to express meaning, a practical methodology to construct semantic representations, a computational framework for acquiring meaning from language data, and a cognitive hypothesis about the role of language usage in shaping meaning. This book aims to build a common understanding of the theoretical and methodological foundations of distributional semantics. Beginning with its historical origins, the text exemplifies how the distributional approach is implemented in distributional semantic models. The main types of computational models, including modern deep learning ones, are described and evaluated, demonstrating how various types of semantic issues are addressed by those models. Open problems and challenges are also analyzed. Students and researchers in natural language processing, artificial intelligence, and cognitive science will appreciate this book.
Digital health translation is an important application of machine translation and multilingual technologies, and there is a growing need for accessibility in digital health translation design for disadvantaged communities. This book addresses that need by highlighting state-of-the-art research on the design and evaluation of assistive translation tools, along with systems to facilitate cross-cultural and cross-lingual communications in health and medical settings. Using case studies as examples, the principles of designing assistive health communication tools are illustrated. These are (1) detectability of errors to boost user confidence by health professionals; (2) customizability for health and medical domains; (3) inclusivity of translation modalities to serve people with disabilities; and (4) equality of accessibility standards for localised multilingual websites of health content. This book will appeal to readers from natural language processing, computer science, linguistics, translation studies, public health, media, and communication studies. This title is available as open access on Cambridge Core.
Fully revised and updated, this third edition includes three new chapters, covering neural networks and deep learning (including generative AI), causality, and the social, ethical, and regulatory impacts of artificial intelligence. All parts have been updated with the methods that have been proven to work. The book's novel agent design space provides a coherent framework for learning, reasoning, and decision making. Numerous realistic applications and examples facilitate student understanding. Every concept or algorithm is presented in pseudocode and open source AIPython code, enabling students to experiment with and build on the implementations. Five larger case studies are developed throughout the book and connect the design approaches to the applications. Each chapter now has a social impact section, enabling students to understand the impact of the various techniques as they learn them. An invaluable teaching package for undergraduate and graduate AI courses, this comprehensive textbook is accompanied by lecture slides, solutions, and code.
Space and time representation in language is important in linguistics and cognitive science research, as well as artificial intelligence applications like conversational robots and navigation systems. This book is the first for linguists and computer scientists that shows how to do model-theoretic semantics for temporal or spatial information in natural language, based on annotation structures. The book covers the entire cycle of developing a specification for annotation and the implementation of the model over the appropriate corpus for linguistic annotation. Its representation language is a type-theoretic, first-order logic in shallow semantics. Each interpretation model is delimited by a set of definitions of logical predicates used in semantic representations (e.g., past) or measuring expressions (e.g., counts or k). The counting function is then defined as a set and its cardinality, involving a universal quantification in a model. This definition then delineates a set of admissible models for interpretation.
The Handbook of Augmented Reality Training Design Principles is for anyone interested in using augmented reality and other forms of simulation to design better training. It includes eleven design principles aimed at training recognition skills for combat medics, emergency department physicians, military helicopter pilots, and others who must rapidly assess a situation to determine actions. Chapters on engagement, creating scenario-based training, fidelity and realism, building mental models, and scaffolding and reflection use real-world examples and theoretical links to present approaches for incorporating augmented reality training in effective ways. The Learn, Experience, Reflect framework is offered as a guide to applying these principles to training design. This handbook is a useful resource for innovative design training that leverages the strengths of augmented reality to create an engaging and productive learning experience.
On social media, new forms of communication arise rapidly, many of which are intense and dispersed, creating new communities at a global scale. Such communities can act as distinct information bubbles with their own perspective on the world, and it is difficult for people to find and monitor all these perspectives and relate the different claims made. Within this digital jungle of perspectives on truth, it is difficult to make informed decisions on important things like vaccinations, democracy, and climate change. Understanding and modeling this phenomenon in its full complexity requires an interdisciplinary approach, utilizing the ample data provided by digital communication to offer new insights and opportunities. This interdisciplinary book gives a comprehensive view of social media communication, the different forms it takes, its impact, and the technology used to mine it, and defines the roadmap to a more transparent Web.
Every day we interact with machine learning systems offering individualized predictions for our entertainment, social connections, purchases, or health. These involve several modalities of data, from sequences of clicks to text, images, and social interactions. This book introduces common principles and methods that underpin the design of personalized predictive models for a variety of settings and modalities. The book begins by revisiting 'traditional' machine learning models, focusing on adapting them to settings involving user data, then presents techniques based on advanced principles such as matrix factorization, deep learning, and generative modeling, and concludes with a detailed study of the consequences and risks of deploying personalized predictive systems. A series of case studies in domains ranging from e-commerce to health, plus hands-on projects and code examples, will give readers understanding and experience with large-scale real-world datasets and the ability to design models and systems for a wide range of applications.