HTN planning algorithms require a set of HTN methods that provide knowledge about potential problem-solving strategies. Typically these methods are written by a domain expert, but this chapter is about some ways to learn HTN methods from examples. It describes how to learn HTN methods in learning-by-demonstration situations in which a learner is given examples of plans for various tasks, and also in situations where the learner is given only the plans and must infer what tasks the plans accomplish. The chapter also speculates briefly about prospects for a “planning-to-learn” approach in which a learner generates its own examples using a classical planner.
Learning for nondeterministic models can take advantage of most of the techniques developed for probabilistic models (Chapter 10). Indeed, reinforcement learning (RL) does not require the probabilities of action transitions, so RL techniques, such as Q-learning, parametric Q-learning, and deep Q-learning, can also be applied to nondeterministic models. However, these algorithms do not produce explicit descriptive models of actions. In this chapter, we therefore discuss some intuitions about, and challenges of, extending the techniques for learning deterministic action specifications to nondeterministic models. Note, however, that learning lifted action schemas in nondeterministic models is still an open problem.
Temporal models are quite rich, allowing concurrency and temporal constraints to be handled. However, developing temporal models is a bottleneck that machine learning techniques can help ease. In this chapter, we first briefly address the problem of learning heuristics for temporal planning (Section 19.1). We then consider the issue of learning durative action schemas and temporal methods (Section 19.2). The chapter outlines the proposed approaches, based on techniques seen earlier in the book, without going into detailed descriptions of the corresponding procedures.
This chapter addresses the issues of acting with temporal models. It presents methods for handling dynamic controllability (Section 18.1), dispatching (Section 18.2), and the execution and refinement of a temporal plan (Section 18.3). It proposes methods for acting with a reactive temporal refinement engine (Section 18.4), planning with Monte Carlo rollouts (Section 18.5), and integrating planning and acting (Section 18.6).
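A building block behind dispatching and controllability checks is the consistency of a simple temporal network (STN). The sketch below, a simplification under the usual distance-graph encoding (it checks plain consistency, not dynamic controllability, and the two-event example is invented), tests whether a set of bounds on time-point differences admits any schedule by detecting negative cycles with Floyd-Warshall.

```python
# STN consistency via the distance graph: an edge (i, j, ub) encodes the
# constraint t_j - t_i <= ub. The network is consistent iff the graph has
# no negative cycle, which Floyd-Warshall detects on the diagonal.
INF = float("inf")

def stn_consistent(n, constraints):
    """n time points; constraints: list of (i, j, ub) meaning t_j - t_i <= ub."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, ub in constraints:
        d[i][j] = min(d[i][j], ub)
    for k in range(n):                       # all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# Two events, start (0) and end (1), with a duration between 2 and 5 units:
ok = stn_consistent(2, [(0, 1, 5), (1, 0, -2)])    # 2 <= t1 - t0 <= 5
bad = stn_consistent(2, [(0, 1, 5), (1, 0, -6)])   # 6 <= t1 - t0 <= 5: empty
```

A dispatcher typically works on the minimal network that such a propagation produces, tightening execution windows as time points are executed.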
In this chapter we introduce different representations and techniques for acting with nondeterministic models: nondeterministic state transition systems (Section 11.1), automata (Section 11.2), behavior trees (Section 11.3), and Petri nets (Section 11.4).
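As a taste of the behavior-tree representation, here is a minimal sketch. It is a simplification (the usual Running status and tick loop are omitted, keeping only Success/Failure), and the door-opening tasks are invented for illustration: a Sequence node succeeds only if all children succeed, while a Fallback node succeeds on its first succeeding child.

```python
# Minimal behavior-tree sketch: Success/Failure only, no Running status.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:          # fail as soon as one child fails
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:          # succeed on the first success
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

# Hypothetical door behavior: if the door is not already open,
# unlock it and then push it open.
def door_open(s): return SUCCESS if s["open"] else FAILURE
def unlock(s): s["locked"] = False; return SUCCESS
def push(s):
    if not s["locked"]:
        s["open"] = True
        return SUCCESS
    return FAILURE

tree = Fallback(Action(door_open), Sequence(Action(unlock), Action(push)))
state = {"open": False, "locked": True}
status = tree.tick(state)
```

The tree reads as a reactive policy: each tick re-evaluates conditions from the root, which is what makes behavior trees robust to nondeterministic action outcomes.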
In the past, techniques for natural language translation were not very relevant for acting and planning systems. However, with the recent advent of large language models and their various multimodal extensions into foundation models, this is no longer the case. This last part introduces large language models and their potential benefits in acting, planning, and learning. It discusses the perceiving, monitoring, and goal reasoning functions for deliberation.
Learning to act with probabilistic models is the area of reinforcement learning (RL), the topic of this chapter. RL in some ways parallels the adaptation mechanisms of natural beings to their environment, relying on feedback mechanisms and extending homeostasis regulations to complex behaviors. With continual learning, an actor can cope with a continually changing environment. This chapter first introduces the main principles of reinforcement learning. It presents a simple Q-learning RL algorithm and shows how to generalize a learned relation with a parametric representation. It then introduces neural network methods, which play a major role in learning and are needed for deep RL (Section 10.5) and policy-based RL (Section 10.6). The issues of aided reinforcement learning with shaped rewards, imitation learning, and inverse reinforcement learning are addressed next. Section 10.8 is about probabilistic planning and RL.
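To fix ideas, here is tabular Q-learning on a toy domain of our own invention (a four-state chain with a goal at one end; all names and parameter values are illustrative, not from the chapter). The agent never sees transition probabilities; it only samples transitions and applies the temporal-difference update to its Q-table.

```python
import random

# Toy chain MDP: states 0..3, goal state 3; actions 0 = left, 1 = right.
# Reward 1 only on reaching the goal; episodes start in state 0.
def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(4) for a in range(2)}
random.seed(0)

for _ in range(500):                  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection (ties broken toward 'right')
        a = random.randrange(2) if random.random() < eps \
            else max((1, 0), key=lambda a: Q[s, a])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2, 0], Q[s2, 1]))
        Q[s, a] += alpha * (target - Q[s, a])   # temporal-difference update
        s = s2

policy = {s: max((1, 0), key=lambda a: Q[s, a]) for s in range(3)}
```

After training, the greedy policy moves right from every non-goal state, and the learned Q-values approximate the discounted distance to the goal, all without an explicit transition model.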
This part of the book is devoted to acting, planning, and learning with operational models of actions expressed with a hierarchical task-oriented representation. Operational models are valuable for acting. They allow for detailed descriptions of complex actions handling dynamic environments with exogenous events. The representation relies on hierarchical refinement methods that describe alternative ways to handle tasks and react to events. A method can be any complex algorithm, decomposing a task into subtasks and primitive actions. Subtasks are refined recursively. Actions trigger the execution of sensory-motor procedures in closed loops that query and change the world stochastically.
Task and motion planning (TAMP) problems combine abstract causal relations from preconditions to effects with computational geometry, kinematics, and dynamics. This chapter is about the integration of planning for motion/manipulation with planning for abstract actions. It introduces the main sampling-based algorithms for motion planning. Manipulation planning is presented next. A few approaches specific to TAMP are then discussed.
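To illustrate the sampling-based idea, here is a bare-bones RRT sketch in a 2D unit-square workspace with one circular obstacle. The workspace, obstacle, step size, and goal bias are all invented for illustration, and the sketch takes shortcuts a real planner would not (point-only collision checks, no edge checking, Euclidean nearest-neighbor by linear scan).

```python
import math, random

# RRT sketch: grow a tree from the start by steering nearest nodes toward
# random samples (with a small bias toward the goal), keeping collision-free
# points only. Simplification: we check the new point, not the whole edge.
random.seed(1)
OBST, R = (0.5, 0.5), 0.2                   # circular obstacle: center, radius

def free(p):
    return math.dist(p, OBST) > R

def steer(p, q, step=0.05):
    d = math.dist(p, q)
    if d <= step:
        return q
    return (p[0] + step * (q[0] - p[0]) / d,
            p[1] + step * (q[1] - p[1]) / d)

def rrt(start, goal, iters=5000):
    parent = {start: None}
    for _ in range(iters):
        rnd = goal if random.random() < 0.1 \
            else (random.random(), random.random())
        near = min(parent, key=lambda v: math.dist(v, rnd))  # nearest node
        new = steer(near, rnd)
        if free(new):
            parent[new] = near
            if math.dist(new, goal) < 0.05:   # close enough: extract path
                path, v = [], new
                while v is not None:
                    path.append(v)
                    v = parent[v]
                return path[::-1]
    return None

path = rrt((0.05, 0.05), (0.95, 0.95))
```

TAMP approaches interleave calls to planners like this with search over the abstract (symbolic) action level, so that geometric feasibility informs the causal plan.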
This chapter is about planning approaches with explicit time in the descriptive and operational models of actions, as well as in the models of the expected evolution of the world not caused by the actor. It describes a planning algorithm that handles durative and concurrent activities with respect to a predicted dynamics. Section 17.1 presents a knowledge representation for modeling actions and tasks with temporal variables using temporal refinement methods. Temporal plans and planning problems are defined as chronicles, i.e., collections of assertions and tasks with explicit temporal constraints. A planning algorithm with temporal refinement methods is developed in Section 17.2. The basic techniques for managing temporal and domain constraints are then presented in Section 17.3.
This chapter is about two key aspects of learning with deterministic models: learning heuristics to speed up the search for a solution plan and the automated synthesis of the model itself. We discuss how to learn heuristics for exploring parts of the search space that are more likely to lead to solutions. We then address the problem of how to learn a deterministic model, with a focus on learning action schemas.
The motivations for acting and planning with probabilistic models are about handling uncertainty in a quantitative way, with optimal or near-optimal decisions. The future is never entirely and precisely predictable. Uncertainty can arise from exogenous events in the environment, from nature and other actors, from noisy sensing and information-gathering actions, and from possible failures and outcomes of imprecise or intrinsically nondeterministic actions. Models are necessarily incomplete. Knowledge about open environments is partial. Part of what may happen can only be modeled with uncertainty. Even in closed predictable environments, complete deterministic models may be too complex to develop. The three chapters in Part III tackle acting, planning, and learning in a probabilistic setting.
This chapter is about planning techniques for solving MDP problems. It presents algorithms that seek optimal or near-optimal solution policies for a domain. Most of the chapter focuses on indefinite-horizon goal-reachability domains that have positive costs and a safe solution; they may have dead ends, but those are avoidable. The chapter presents dynamic programming algorithms, heuristic search methods and their heuristics, linear programming methods, and online and Monte Carlo tree search techniques.
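The core of the dynamic programming approach can be sketched in a few lines. Below is value iteration on a tiny invented goal-reachability domain (two non-goal states, unit action costs, an absorbing goal `g`); the Bellman backup minimizes expected cost-to-goal, and the greedy policy is read off the converged values.

```python
# Tiny SSP-style MDP (hypothetical): T[s][a] = list of (probability, next_state),
# every action costs 1, and 'g' is an absorbing goal with value 0.
T = {
    "s0": {"a": [(0.8, "s1"), (0.2, "s0")]},
    "s1": {"b": [(1.0, "g")], "c": [(0.5, "g"), (0.5, "s0")]},
}

def value_iteration(T, goal="g", eps=1e-6):
    V = {s: 0.0 for s in T}
    V[goal] = 0.0
    while True:
        delta = 0.0
        for s in T:
            # Bellman backup: minimum over actions of expected cost-to-goal
            q = [1.0 + sum(p * V[s2] for p, s2 in outs)
                 for outs in T[s].values()]
            newv = min(q)
            delta = max(delta, abs(newv - V[s]))
            V[s] = newv
        if delta < eps:
            return V

V = value_iteration(T)
policy = {s: min(T[s], key=lambda a: 1.0 + sum(p * V[s2] for p, s2 in T[s][a]))
          for s in T}
```

Here the deterministic action `b` is preferred in `s1` even though `c` might reach the goal equally fast, because `c` risks falling back to `s0` and paying its cost-to-goal again; heuristic search methods refine this scheme by backing up only states reachable from the initial state.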
In probabilistic models, an action can have several possible outcomes that are not equally likely; their distribution can be estimated relying on statistics of past observations. The purpose is to act optimally with respect to an optimization criterion that accounts for the estimated likelihood of action effects and their cost. The usual formal probabilistic models are Markov decision processes (MDPs). An MDP is a nondeterministic state-transition system with a probability distribution and a cost distribution. The probability distribution defines how likely it is to get to a state 𝑠′ when an action 𝑎 is performed in a state 𝑠. The chapter presents MDPs in flat and then in structured state-space representations. Section 8.3 covers modeling issues of a probabilistic domain with MDPs and variants such as the stochastic shortest path model (SSP) or the constrained MDP (C-MDP) model. Section 8.4 focuses on acting with MDPs. Partially observable MDPs and other extended models are discussed in Section 8.5.
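A flat MDP is easy to make concrete. The sketch below (a toy example with invented states and numbers) stores the distribution P(s′ | s, a) and a cost table explicitly, and shows the acting side of the model: sampling one outcome per execution of an action, so that observed frequencies approach the modeled probabilities.

```python
import random

# Flat MDP fragment (hypothetical): explicit probability and cost tables.
P = {("s", "a"): [("s1", 0.7), ("s2", 0.3)]}   # P(s' | s, a)
cost = {("s", "a"): 2.0}

def sample_outcome(s, a, rng=random):
    """Draw one successor state according to P(s' | s, a)."""
    r, acc = rng.random(), 0.0
    for s2, p in P[(s, a)]:
        acc += p
        if r < acc:
            return s2
    return P[(s, a)][-1][0]    # guard against floating-point round-off

random.seed(0)
counts = {"s1": 0, "s2": 0}
for _ in range(10_000):
    counts[sample_outcome("s", "a")] += 1
```

Over many executions the empirical frequencies match the distribution (about 70% `s1` here), which is exactly the statistics-of-past-observations view under which such models are estimated in the first place.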