Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both good performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also examines neuromorphic algorithm design and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
The integration of AI into information systems will affect the way users interface with these systems. This exploration of the interaction and collaboration between humans and AI reveals its potential and challenges, covering issues such as data privacy, credibility of results, misinformation, and search interactions. Later chapters delve into application domains such as healthcare and scientific discovery. In addition to providing new perspectives on and methods for developing AI technology and designing more humane and efficient artificial intelligence systems, the book also uses case studies to reveal the shortcomings of current AI technologies and puts forward corresponding countermeasures and suggestions. This book is ideal for researchers, students, and industry practitioners interested in human-centered AI systems and seeking insights for future research.
Knowledge-infused learning directly confronts the opacity of current 'black-box' AI models by combining data-driven machine learning techniques with the structured insights of symbolic AI. This guidebook introduces the pioneering techniques of neurosymbolic AI, which blends statistical models with symbolic knowledge to make AI safer and user-explainable. This is critical in high-stakes AI applications in healthcare, law, finance, and crisis management. The book brings readers up to speed on advancements in statistical AI, including transformer models such as BERT and GPT, and provides a comprehensive overview of weakly supervised, distantly supervised, and unsupervised learning methods alongside their knowledge-enhanced variants. Other topics include active learning, zero-shot learning, and model fusion. Beyond theory, the book presents practical considerations and applications of neurosymbolic AI in conversational systems, mental health, crisis management systems, and social and behavioral sciences, making it a pragmatic reference for AI system designers in academia and industry.
This groundbreaking volume is designed to meet the burgeoning needs of the research community and industry. This book delves into the critical aspects of AI's self-assessment and decision-making processes, addressing the imperative for safe and reliable AI systems in high-stakes domains such as autonomous driving, aerospace, manufacturing, and military applications. Featuring contributions from leading experts, the book provides comprehensive insights into the integration of metacognition within AI architectures, bridging symbolic reasoning with neural networks, and evaluating learning agents' competency. Key chapters explore assured machine learning, handling AI failures through metacognitive strategies, and practical applications across various sectors. Covering theoretical foundations and numerous practical examples, this volume serves as an invaluable resource for researchers, educators, and industry professionals interested in fostering transparency and enhancing the reliability of AI systems.
The last decade has seen an exponential increase in the development and adoption of language technologies, from personal assistants such as Siri and Alexa, through automatic translation, to chatbots like ChatGPT. Yet questions remain about what we stand to lose or gain when we rely on them in our everyday lives. As a non-native English speaker living in an English-speaking country, Vered Shwartz has experienced both amusing and frustrating moments using language technologies: from relying on inaccurate automatic translation, to failing to activate personal assistants with her foreign accent. English is the world's foremost go-to language for communication, and mastering it past the point of literal translation requires acquiring not only vocabulary and grammar rules, but also figurative language, cultural references, and nonverbal communication. Will language technologies aid us in the quest to master foreign languages and better understand one another, or will they make language learning obsolete?
Models for TAMP problems are complex and challenging to develop. The high-dimensional sensory-motor space and the required integration of metric and symbolic state variables augment the challenges. Machine learning addresses these challenges at both the acting level and the planning level. But ML in robotics faces specific problems: massive datasets are lacking; the experiments needed for RL are scarce, very expensive, and difficult to reproduce; realistic sensory-motor simulators remain computationally costly; and expert human input for RL, e.g., for specifying or shaping reward functions or giving advice, is scarce and costly. The functions learned tend to be narrow: transfer of learned behaviors and models across environments and tasks is challenging. This chapter presents approaches for learning reactive sensory-motor skills using deep RL algorithms, and methods for learning heuristics that guide a TAMP planner by avoiding computation on movements unlikely to be feasible.
Acting with robots and sensory-motor devices demands the combined capabilities of reasoning both on abstract actions and on concrete motion and manipulation steps. In the robotics literature, this is referred to as "task-aware planning," i.e., planning beyond motion and manipulation. In the AI literature, it is referred to as "combined task and motion planning" (TAMP). This class of TAMP problems, which includes task, motion, and manipulation planning, is the topic of this part. The challenge in TAMP is the integration of symbolic models for task planning with metric models for motion and manipulation. This part introduces the representations and techniques for achieving and controlling motion, navigation, and manipulation actions in robotics. It discusses motion and manipulation planning algorithms, and their integration with task planning in TAMP problems. It also covers learning for combined task, motion, and manipulation problems.
Acting, planning, and learning are critical cognitive functions for an autonomous actor. Other functions, such as perceiving, monitoring, and goal reasoning, are also needed and can be essential in many applications. This chapter briefly surveys a few such functions and their links to acting, planning, and learning. Section 24.1 discusses perceiving and information gathering: how to model and control perception actions in order to recognize the state of the world and detect objects, events, and activities relevant to the actor while performing its tasks. It discusses semantic mapping and anchoring sensor data to symbols. Section 24.2 is about monitoring, that is, detecting and interpreting discrepancies between predictions and observations, anticipating what needs to be monitored, and controlling monitoring actions. Goal reasoning, in Section 24.3, is about assessing the relevance of current goals, in light of observed evolutions, failures, and opportunities, with respect to a higher-level assigned mission.
AI acting systems, or actors – which may be embodied in physical devices such as robots or in abstract procedures such as web-based service agents – require several cognitive functions, three of which are acting, planning, and learning. Acting is more than just the sensory-motor execution of low-level commands: there is a need to decide how to perform a task, given the context and changes in the environment. Planning involves choosing and organizing actions that can achieve a goal and is done using abstract actions that the agent will need to decide how to perform. Learning is important for acquiring knowledge about expected effects, which actions to perform and how to perform them, and how to plan; and acting and planning can be used to aid learning. This chapter introduces the scientific and technical challenges of developing these three cognitive functions and the ethical challenge of doing such development responsibly.
In this chapter, we present different approaches to planning with nondeterministic models. We describe three techniques for planning with nondeterministic state transition systems: And/Or graph search (Section 12.1), planning based on determinization techniques (Section 12.2), and planning via symbolic model checking (Section 12.3). We then present techniques for planning by synthesis of input/output automata (Section 12.4). Finally, we briefly discuss techniques for behavior tree generation (Section 12.5).
This chapter is about planning with hierarchical refinement methods. A plan guides the acting engine RAE with informed choices about the best methods for the task and context at hand. We consider an optimizing planner that finds methods maximizing a utility function. In principle, the planner may rely on an exact dynamic programming optimization procedure; an approximation approach, however, is better suited to the online guidance of an actor. We describe a Monte Carlo tree search planner, called UPOM, parameterized by rollout depth and number of rollouts. It relies on a heuristic function to estimate the remainder of a rollout when the depth bound is reached. UPOM is an anytime planner used in a receding-horizon manner. This chapter relies on Chapters 8, 9, and 14. It first presents refinement planning domains and outlines the approach. Section 15.2 proposes utility functions and an optimization procedure. The planner is developed in Section 15.3.
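As a concrete illustration (not taken from the book), the following sketch shows the generic pattern behind UPOM-style planners: simulate choices down to a depth bound, fall back on a heuristic at the frontier, and average several rollouts per candidate choice. All names here (`simulate`, `choices`, `h`, the toy domain in the usage note) are our own illustrative assumptions, not the book's notation.

```python
# Generic depth-bounded Monte Carlo rollout with a heuristic fallback,
# of the kind a UPOM-style refinement planner uses. Illustrative only.
import random

def rollout(state, depth, simulate, choices, h, rng=random):
    """Estimate the utility of acting from `state`.

    simulate(state, choice) -> (next_state, reward)
    choices(state)          -> list of applicable choices (empty = terminal)
    h(state)                -> heuristic estimate beyond the depth bound
    """
    total = 0.0
    for _ in range(depth):
        opts = choices(state)
        if not opts:               # terminal: nothing left to refine
            return total
        state, reward = simulate(state, rng.choice(opts))
        total += reward
    return total + h(state)        # depth bound reached: use the heuristic

def best_choice(state, n_rollouts, depth, simulate, choices, h):
    """Return the choice with the highest average rollout utility."""
    def value(c):
        s1, r = simulate(state, c)
        return r + sum(rollout(s1, depth - 1, simulate, choices, h)
                       for _ in range(n_rollouts)) / n_rollouts
    return max(choices(state), key=value)
```

In an anytime, receding-horizon use, `best_choice` would be re-invoked after each executed step, with `n_rollouts` and `depth` tuned to the time available.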
This chapter is about representing state-transition systems and using them in acting. The first section gives formal definitions of state-transition systems and planning problems, and a simple acting algorithm. The second section describes state-variable representations of state-transition systems, and the third section describes several acting procedures that use this representation. The fourth section describes classical representation, an alternative to state-variable representation that is often used in the AI planning literature.
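To make the first section's objects concrete, here is a minimal sketch (our own, not the book's notation or domain) of a deterministic state-transition system together with a trivially simple acting algorithm that executes a plan step by step.

```python
# Minimal deterministic state-transition system: states and actions are
# strings, and gamma maps (state, action) -> successor state.
# The toy navigation domain below is purely illustrative.
gamma = {
    ("at_loc1", "move12"): "at_loc2",
    ("at_loc2", "move23"): "at_loc3",
}

def act(s0, plan, gamma):
    """Apply a plan (a sequence of actions) from state s0.

    Returns the final state; raises KeyError if an action is
    inapplicable in the current state.
    """
    s = s0
    for a in plan:
        s = gamma[(s, a)]
    return s

print(act("at_loc1", ["move12", "move23"], gamma))  # at_loc3
```

A state-variable representation would replace the enumerated `gamma` table with variables (e.g., a robot's location) and parameterized actions whose preconditions and effects are stated over those variables, which scales far better than an explicit transition table.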
The chapters in Part I are about acting, planning, and learning using deterministic state-transition (or "classical planning") models. The relative ease of constructing and using such models can make them desirable even though most real-world environments do not satisfy all of their underlying assumptions. The chapters in this part also introduce several concepts that will be used throughout the book, such as state-variable representation.
This part of the book is about planning, acting, and learning approaches in which time is explicit. It describes several algorithms and methods for handling durative and concurrent activities with respect to predicted dynamics. Acting with temporal models raises dispatching and temporal controllability issues that rely heavily on planning concepts.
Nondeterministic models, like probabilistic models (see Part III), drop the assumption that an action applied in a state leads to only one state. The main difference from probabilistic models is that nondeterministic models carry no information about the probability distribution of transitions. In spite of this, the main motivation for acting, planning, and learning with nondeterministic models is the same as for probabilistic approaches, namely, the need to model uncertainty: the future is seldom entirely predictable. Nondeterministic models might be thought to be a special case of probabilistic models with a uniform probability distribution. This is not the case. In nondeterministic models we do not know that the distribution is uniform; we simply have no information about the distribution at all.
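The distinction can be made concrete in a few lines (our own illustrative sketch, not the book's notation): a nondeterministic model can only enumerate the set of possible successors, while a probabilistic model additionally attaches likelihoods to them.

```python
# Contrast between nondeterministic and probabilistic transition models.
# The toy "door" domain and all names are illustrative assumptions.
import random

# Nondeterministic: gamma(s, a) is a *set* of possible successor states,
# with no information about their relative likelihood.
gamma_nd = {
    ("door_closed", "push"): {"door_open", "door_jammed"},
}

# Probabilistic: the same transition carries a full distribution.
gamma_pr = {
    ("door_closed", "push"): {"door_open": 0.9, "door_jammed": 0.1},
}

def successors(s, a):
    """Nondeterministic model: all we can do is enumerate outcomes."""
    return gamma_nd[(s, a)]

def sample(s, a, rng=random):
    """Probabilistic model: outcomes can also be sampled by likelihood."""
    dist = gamma_pr[(s, a)]
    return rng.choices(list(dist), weights=list(dist.values()))[0]
```

A planner for the nondeterministic model must therefore hedge against every element of `successors(s, a)` (e.g., seek strong or strong-cyclic solutions), whereas a probabilistic planner can optimize expected utility using `gamma_pr`'s weights.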