Up until now it has been assumed everywhere that the current state is known. What if the state is not known? In this case, information regarding the state is obtained from sensors during the execution of a plan. This situation arises in most applications that involve interaction with the physical world. For example, in robotics it is virtually impossible for a robot to precisely know its state, except in some limited cases. What should be done if there is limited information regarding the state? A classical approach is to take all of the information available and try to estimate the state. In robotics, the state may include both the map of its environment and the robot configuration. If the estimates are sufficiently reliable, then we may safely pretend that there is no uncertainty in state information. This enables many of the planning methods introduced so far to be applied with little or no adaptation.
The more interesting case occurs when state estimation is altogether avoided. It may be surprising, but many important tasks can be defined and solved without ever requiring that specific states are sensed, even though a state space is defined for the planning problem. To achieve this, the planning problem will be expressed in terms of an information space. Information spaces serve the same purpose for sensing problems as the configuration spaces of Chapter 4 did for problems that involve geometric transformations. Each information space represents the place where a problem that involves sensing uncertainty naturally lives.
As in Part II, it also seems appropriate to give two names to Part III. It is officially called decision-theoretic planning, but it can also be considered as planning under uncertainty. All of the concepts in Parts I and II avoided models of uncertainty. Chapter 8 considered plans that can overcome some uncertainties, but there was no explicit modeling of uncertainty.
In this part, uncertainties generally interfere with two aspects of planning:
Predictability: Due to uncertainties, it is not known what will happen in the future when certain actions are applied. This means that future states are not necessarily predictable.
Sensing: Due to uncertainties, the current state is not necessarily known. Information regarding the state is obtained from initial conditions, sensors, and the memory of previously applied actions.
These two kinds of uncertainty are independent in many ways. Each has a different effect on the planning problem.
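As a minimal illustration (the one-dimensional dynamics, noise ranges, and function names below are hypothetical, not from the text), the first kind of uncertainty can be modeled as a nature action that disturbs the state transition, while the second appears as a sensor mapping that returns an observation instead of the state itself:

```python
import random

def f(x, u, theta):
    """State transition disturbed by a nature action theta (predictability)."""
    return x + u + theta   # hypothetical 1-D dynamics

def h(x, psi):
    """Sensor mapping: an observation, not the state, is available (sensing)."""
    return x + psi         # hypothetical additive sensing noise

x, u = 0.0, 1.0
theta = random.uniform(-0.1, 0.1)    # nature's interference is unknown in advance
psi = random.uniform(-0.05, 0.05)
x_next = f(x, u, theta)              # the future state is not exactly predictable
y = h(x_next, psi)                   # only y is observed, never x_next directly
print(x_next, y)
```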
Making a single decision
Chapter 9 provides an introduction to Part III by presenting ways to represent uncertainty in the process of making a single decision. The view taken in this chapter is that uncertainty can be modeled as interference from another decision maker. A special decision maker called nature will be introduced. The task is to make good decisions, in spite of actions applied by nature. Either worst-case or probabilistic models can be used to characterize nature's decision-making process. Some planning problems might involve multiple rational decision makers.
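As a rough sketch of these two criteria (the cost table and probabilities below are invented purely for illustration), a worst-case model selects the action whose maximum cost over nature's choices is smallest, while a probabilistic model selects the action with the smallest expected cost:

```python
# Hypothetical cost table L[u][theta]: the decision maker picks u, nature picks theta.
L = {
    'u1': {'t1': 1.0, 't2': 4.0},
    'u2': {'t1': 2.0, 't2': 2.5},
}
P = {'t1': 0.7, 't2': 0.3}   # assumed probabilistic model of nature

# Worst-case (nondeterministic) model: minimize the maximum possible cost.
u_worst_case = min(L, key=lambda u: max(L[u].values()))

# Probabilistic model: minimize the expected cost.
u_expected = min(L, key=lambda u: sum(P[t] * c for t, c in L[u].items()))

print(u_worst_case)  # 'u2': its worst case (2.5) beats u1's worst case (4.0)
print(u_expected)    # 'u1': expected cost 1.9 beats u2's 2.15
```

Note that the two criteria recommend different actions for the same cost table, which is part of why the choice of nature model matters.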
This chapter provides a continuous-time counterpart to the state transition equation, x_{k+1} = f(x_k, u_k), which was crucial in Chapter 2. On a continuous state space, X (assumed to be a smooth manifold), it will be defined as ẋ = f(x, u), which intentionally looks similar to the discrete version. It will still be referred to as a state transition equation. It will also be called a system (short for control system), which is a term used in control theory. There are no obstacle regions in this chapter. Obstacles will appear again when planning algorithms are covered in Chapter 14. In continuous time, the state transition function f(x, u) yields a velocity as opposed to the next state. Since the transitions are no longer discrete, it does not make sense to talk about a “next” state. Future states that satisfy the differential constraints are obtained by integration of the velocity. Therefore, it is natural to specify only velocities. This relies on the notions of tangent spaces and vector fields, as covered in Section 8.3.
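For example (the explicit Euler integrator and the double-integrator system below are illustrative choices, not prescribed by the chapter), future states can be approximated from the velocity-level description by numerically integrating ẋ = f(x, u):

```python
import numpy as np

def f(x, u):
    """Hypothetical system: a 1-D double integrator.
    The state is x = (position, velocity) and the action u is an acceleration."""
    return np.array([x[1], u])

def integrate_euler(x0, u_trajectory, dt):
    """Approximate a future state by integrating the velocity f(x, u)."""
    x = np.array(x0, dtype=float)
    for u in u_trajectory:
        x = x + dt * f(x, u)   # x(t + dt) ≈ x(t) + dt * ẋ
    return x

# Apply a constant unit acceleration for one second in 100 small steps.
print(integrate_euler([0.0, 0.0], [1.0] * 100, 0.01))
```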
This chapter presents many example models that can be used in the planning algorithms of Chapter 14. Section 13.1 develops differential constraints for the case in which X is the C-space of one or more bodies. These constraints commonly occur for wheeled vehicles (e.g., a car cannot move sideways). To represent dynamics, constraints on acceleration are needed.
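As a preview (the function and parameter names below are ours), a standard kinematic car model of the kind developed in Section 13.1 ties the translational velocity to the heading, which is exactly the constraint that prevents sideways motion:

```python
import math

def simple_car(x, u, L=1.0):
    """Kinematic car: state x = (x, y, theta), action u = (speed, steering angle).
    The translational velocity always points along the heading theta,
    so the car cannot slide sideways."""
    _, _, theta = x                       # only the heading affects the velocity direction
    speed, phi = u
    return (speed * math.cos(theta),      # x-velocity
            speed * math.sin(theta),      # y-velocity
            (speed / L) * math.tan(phi))  # rate of change of heading

# Driving straight while heading along the x-axis: no lateral velocity component.
print(simple_car((0.0, 0.0, 0.0), (1.0, 0.0)))   # (1.0, 0.0, 0.0)
```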