This article describes the scientific contributions of Milton Sobel. It motivates his research by considering his family background, his war experiences, and his mentors and fellow students at Columbia University. His research in sequential analysis, selection, ranking, group testing, and probabilistic combinatorics is highlighted.
Recently, Pellerey, Shaked, and Zinn [6] introduced a discrete-time analogue of the nonhomogeneous Poisson process. The purpose of this article is to provide some results for stochastic comparisons of the epoch times and the interepoch times of those processes. Also, we show the relationships between these processes and discrete record values and we provide several results for discrete weak record values.
In this article, we consider an insurance risk model where the claim and premium processes follow some time series models. We first consider the model proposed in Gerber [2,3]; we then consider a model in which the dependence between the premium and claim processes is modeled using Granger's causal model. Using martingale arguments, Lundberg-type upper bounds for the ruin probabilities under both models are obtained. Some special cases are discussed.
In this article, we obtain error bounds for exponential approximations to the classes of weighted residual and equilibrium lifetime distributions with monotone weight functions. These bounds are obtained for the class of distributions with increasing (decreasing) hazard rate and mean residual life functions.
We investigate some new properties of mean inactivity time (MIT) order and increasing MIT (IMIT) class of life distributions. The preservation property of MIT order under increasing and concave transformations, reversed preservation properties of MIT order, and IMIT class of life distributions under the taking of maximum are developed. Based on the residual life at a random time and the excess lifetime in a renewal process, stochastic comparisons of both IMIT and decreasing mean residual life distributions are conducted as well.
The concept of generalized order statistics was introduced as a unified approach to a variety of models of ordered random variables. The purpose of this article is to investigate conditions on the distributions and the parameters on which the generalized order statistics are based to establish the likelihood ratio ordering of general p-spacings and the hazard rate and dispersive orderings of (normalized) simple spacings from two samples. We thus strengthen and complement some results in Franco, Ruiz, and Ruiz [7] and Belzunce, Mercader, and Ruiz [5]. This article is a continuation of Hu and Zhuang [10].
We consider funding an interest-bearing warranty reserve with contributions after each sale. The problem for the manufacturer is to determine the initial level of the reserve fund and the amount to be put in after each sale, so as to ensure that the reserve fund covers all of the warranty liabilities with a prespecified probability over a fixed period of time. We assume a nonhomogeneous Poisson sales process, random warranty periods, and a constant failure rate for items under warranty. We derive the mean and variance of the reserve level as a function of time and provide a robust heuristic to aid the manufacturer in its decision.
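The model described above lends itself to a quick Monte Carlo check. The sketch below is only illustrative and is not the authors' derivation: the sales intensity, warranty-length distribution, failure rate, claim cost, contribution per sale, and interest rate are all assumed values, and claims during a warranty are treated as a Poisson stream (constant failure rate with minimal repair). It estimates the mean and variance of the reserve level at a horizon T.

    import math, random

    random.seed(0)

    # Assumed parameters for illustration only.
    T = 5.0          # planning horizon (years)
    r = 0.04         # continuously compounded interest rate on the reserve
    R0 = 100.0       # initial reserve level
    a = 5.0          # contribution to the reserve after each sale
    c = 40.0         # cost of settling one warranty claim
    mu = 0.2         # constant failure rate of an item under warranty
    lam_max = 60.0   # bound on the sales intensity, used for thinning

    def sales_rate(t):
        """Nonhomogeneous sales intensity (assumed shape)."""
        return 40.0 + 20.0 * math.sin(2.0 * math.pi * t)

    def warranty_length():
        """Random warranty period (assumed: 1 or 2 years, equally likely)."""
        return random.choice([1.0, 2.0])

    def simulate_reserve_at_T():
        reserve = R0 * math.exp(r * T)                 # initial fund accrues interest
        t = 0.0
        while True:
            t += random.expovariate(lam_max)           # thinning of the NHPP of sales
            if t > T:
                break
            if random.random() > sales_rate(t) / lam_max:
                continue                               # rejected candidate sale
            reserve += a * math.exp(r * (T - t))       # contribution accrues interest
            w_end = min(t + warranty_length(), T)
            s = t
            while True:                                # claims: Poisson(mu) on [t, w_end]
                s += random.expovariate(mu)
                if s > w_end:
                    break
                reserve -= c * math.exp(r * (T - s))   # each claim is paid from the fund
        return reserve

    samples = [simulate_reserve_at_T() for _ in range(2000)]
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)
    print(f"estimated mean reserve at T: {mean:.1f}, variance: {var:.1f}")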
This article is concerned with a loading-dependent model of cascading failure proposed recently by Dobson, Carreras, and Newman [6]. The central problem is to determine the distribution of the total number of components that will ultimately have failed. A new approach based on a close connection with epidemic modeling is developed. This allows us to consider a more general failure model in which the additional loads caused by successive failures are arbitrarily fixed (instead of being constant as in [6]). The key mathematical tool is provided by the partial joint distributions of order statistics for a sample of independent uniform (0,1) random variables.
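As a rough illustration of this type of loading-dependent cascade (not the authors' analysis), the following sketch simulates a CASCADE-style model in which all numerical values are assumed: components start with uniform loads, an initial disturbance is applied, each failure passes an arbitrarily fixed extra load to the surviving components, and the output is the empirical distribution of the total number of failures.

    import random
    from collections import Counter

    random.seed(0)

    n = 20                            # number of components (assumed)
    d = 0.10                          # initial disturbance added to every load (assumed)
    increments = [0.05, 0.02, 0.08]   # load added by the 1st, 2nd, 3rd, ... failure
                                      # (cycled if more failures occur); assumed values

    def one_cascade():
        loads = [random.random() + d for _ in range(n)]   # Uniform(0,1) loads + disturbance
        failed = [L >= 1.0 for L in loads]                # a component fails at load 1
        n_failed = sum(failed)
        applied = 0                   # number of failures whose extra load has been spread
        while applied < n_failed:
            inc = increments[applied % len(increments)]
            for i in range(n):
                if not failed[i]:
                    loads[i] += inc
                    if loads[i] >= 1.0:
                        failed[i] = True
                        n_failed += 1
            applied += 1
        return n_failed

    counts = Counter(one_cascade() for _ in range(10000))
    for k in sorted(counts):
        print(f"P(total failures = {k:2d}) ≈ {counts[k] / 10000:.3f}")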
We consider the sojourn time V in the M/D/1 processor sharing (PS) queue and show that P(V > x) is of the form Ce^{−γx} as x becomes large. The proof involves a geometric random sum representation of V and a connection with Yule processes, which also enables us to simplify Ott's [21] derivation of the Laplace transform of V. Numerical experiments show that the approximation P(V > x) ≈ Ce^{−γx} is excellent even for moderate values of x.
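A small simulation makes the exponential-tail claim easy to check empirically. The event-driven sketch below is only an illustration, with an assumed arrival rate and service requirement: it simulates an M/D/1 processor-sharing queue, records sojourn times, prints the empirical tail, and forms a crude estimate of the decay rate γ from two tail probabilities.

    import math, random

    random.seed(1)
    lam, D = 0.7, 1.0          # assumed arrival rate and deterministic service time
    T = 2.0e5                  # simulation horizon

    t, next_arrival = 0.0, random.expovariate(lam)
    jobs = []                  # [remaining_work, arrival_time] for each job in the system
    sojourns = []

    while t < T:
        n = len(jobs)
        next_departure = t + min(j[0] for j in jobs) * n if n else float("inf")
        if next_arrival <= next_departure:            # next event is an arrival
            dt, t = next_arrival - t, next_arrival
            for j in jobs:
                j[0] -= dt / n                        # each job is served at rate 1/n
            jobs.append([D, t])
            next_arrival = t + random.expovariate(lam)
        else:                                         # next event is a departure
            dt, t = next_departure - t, next_departure
            for j in jobs:
                j[0] -= dt / n
            sojourns += [t - j[1] for j in jobs if j[0] <= 1e-12]
            jobs = [j for j in jobs if j[0] > 1e-12]

    m = len(sojourns)
    for x in (2.0, 4.0, 6.0, 8.0):
        p = sum(v > x for v in sojourns) / m
        print(f"P(V > {x}) ≈ {p:.4f}")
    p4 = sum(v > 4.0 for v in sojourns) / m
    p8 = sum(v > 8.0 for v in sojourns) / m
    print("crude gamma estimate:", -math.log(p8 / p4) / 4.0)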
The main purpose of Chapter 11 was to introduce information space (I-space) concepts and to provide illustrative examples that aid in understanding. This chapter addresses planning under sensing uncertainty, which amounts to planning in an I-space. Section 12.1 covers general-purpose algorithms, for which it will quickly be discovered that only problems with very few states can be solved because of the explosive growth of the I-space. In Chapter 6, it was seen that general-purpose motion planning algorithms apply only to simple problems. Ways to avoid this were either to develop sampling-based techniques or to focus on a narrower class of problems. It is intriguing to apply sampling-based planning ideas to I-spaces, but as of yet this idea remains largely unexplored. Therefore, the majority of this chapter focuses on planning algorithms designed for narrower classes of problems. In each case, interesting algorithms have been developed that can solve problems that are much more complicated than what could be solved by the general-purpose algorithms. This is because they exploit some structure that is specific to the problem.
An important philosophy when dealing with an I-space is to develop an I-map that reduces its size and complexity as much as possible by obtaining a simpler derived I-space. Following this, it may be possible to design a special-purpose algorithm that efficiently solves the new problem by relying on the fact that the derived I-space does not have the full generality of the original. This idea will appear repeatedly throughout the chapter.
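As a deliberately tiny illustration of this idea, the sketch below collapses an action-observation history into a derived I-state, here the set of states that remain consistent with the history; the one-dimensional grid, transition model, and parity sensor are assumptions made only for the example.

    def derived_istate(X0, history, f, h):
        """Map a history I-state (prior set X0 plus an action-observation
        sequence) to a smaller derived I-state: the set of states that are
        still consistent with everything observed so far."""
        X = set(X0)
        for u, y in history:
            X = {xp for x in X for xp in f(x, u)}      # predict through the action
            X = {x for x in X if y in h(x)}            # keep states consistent with y
        return X

    # Assumed toy example: a robot on positions 0..9 with a parity sensor.
    states = range(10)
    f = lambda x, u: {min(max(x + u, 0), 9)}           # deterministic move by u
    h = lambda x: {x % 2}                              # sensor reports position parity

    history = [(+1, 0), (+1, 1), (+2, 1)]              # (action, observation) pairs
    print(derived_istate(states, history, f, h))       # the set of possible positions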
Chapter 3 only covered how to model and transform a collection of bodies; however, for the purposes of planning it is important to define the state space. The state space for motion planning is a set of possible transformations that could be applied to the robot. This will be referred to as the configuration space, based on Lagrangian mechanics and the seminal work of Lozano-Pérez [659, 663, 660], who extensively utilized this notion in the context of planning (the idea was also used in early collision avoidance work by Udupa [947]). The motion planning literature was further unified around this concept by Latombe's book [591]. Once the configuration space is clearly understood, many motion planning problems that appear different in terms of geometry and kinematics can be solved by the same planning algorithms. This level of abstraction is therefore very important.
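For a planar rigid body, for example, the configuration space is R^2 × S^1, and a configuration is exactly the transformation applied to the body. Here is a minimal sketch; the triangle and the particular configuration are just assumed example data.

    import math

    def place_body(body, q):
        """Apply the configuration q = (xt, yt, theta), a point of R^2 x S^1,
        to the vertices of a planar rigid body described in its own frame."""
        xt, yt, theta = q
        c, s = math.cos(theta), math.sin(theta)
        return [(c * x - s * y + xt, s * x + c * y + yt) for x, y in body]

    triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.5)]
    print(place_body(triangle, (2.0, 1.0, math.pi / 2)))   # rotate 90°, then translate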
This chapter provides important foundational material that will be very useful in Chapters 5 to 8 and other places where planning over continuous state spaces occurs. Many of the concepts introduced in this chapter come directly from mathematics, particularly from topology. Therefore, Section 4.1 gives a basic overview of topological concepts. Section 4.2 uses the concepts from Chapter 3 to define the configuration space. After reading this, you should be able to precisely characterize the configuration space of a robot and understand its structure.
So far in Part II it has been assumed that a continuous path sufficiently solves a motion planning problem. In many applications, such as computer-generated animation and virtual prototyping, there is no need to challenge this assumption because models in a virtual environment usually behave as designed. In applications that involve interaction with the physical world, future configurations may not be predictable. A traditional way to account for this in robotics is to use the refinement scheme that was shown in Figure 1.19 to design a feedback control law that attempts to follow the computed path as closely as possible. Sometimes this is satisfactory, but it is important to recognize that this approach is highly decoupled. Feedback and dynamics are neglected in the construction of the original path; the computed path may therefore not even be usable.
Section 8.1 motivates the consideration of feedback in the context of motion planning. Section 8.2 presents the main concepts of this chapter, but only for the case of a discrete state space. This requires fewer mathematical concepts than the continuous case, making it easier to present feedback concepts. Section 8.3 then provides the mathematical background needed to extend the feedback concepts to continuous state spaces (which includes C-spaces). Feedback motion planning methods are divided into complete methods, covered in Section 8.4, and sampling-based methods, covered in Section 8.5.
Motivation
For most problems involving the physical world, some form of feedback is needed.
This chapter provides introductory concepts that serve as an entry point into other parts of the book. The planning problems considered here are the simplest to describe because the state space will be finite in most cases. When it is not finite, it will at least be countably infinite (i.e., a unique integer may be assigned to every state). Therefore, no geometric models or differential equations will be needed to characterize the discrete planning problems. Furthermore, no forms of uncertainty will be considered, which avoids complications such as probability theory. All models are completely known and predictable.
There are three main parts to this chapter. Sections 2.1 and 2.2 define and present search methods for feasible planning, in which the only concern is to reach a goal state. The search methods will be used throughout the book in numerous other contexts, including motion planning in continuous state spaces. Following feasible planning, Section 2.3 addresses the problem of optimal planning. The principle of optimality, or the dynamic programming principle [86], provides a key insight that greatly reduces the computational effort in many planning algorithms. The value-iteration method of dynamic programming is the main focus of Section 2.3. The relationship between Dijkstra's algorithm and value iteration is also discussed. Finally, Sections 2.4 and 2.5 describe logic-based representations of planning and methods that exploit these representations to make the problem easier to solve; material from these sections is not needed in later chapters.
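To make the optimal-planning part concrete, here is a minimal backward value-iteration sketch on an assumed four-state example (the graph, costs, and goal are invented for illustration). Iterating the dynamic-programming update yields the optimal cost-to-go from every state, which is the same quantity Dijkstra's algorithm computes, just in a more clever order.

    import math

    # Assumed example: states with actions labeled by the successor they lead to.
    succ = {
        'a': {'b': 2.0, 'c': 2.0},   # from 'a', going to 'b' costs 2, to 'c' costs 2
        'b': {'d': 5.0},
        'c': {'d': 1.0},
        'd': {},                     # goal state
    }
    goal = 'd'

    # Backward value iteration: repeatedly apply the dynamic-programming update
    # G(x) = min_u [ cost(x, u) + G(f(x, u)) ], starting from G(goal) = 0.
    G = {x: (0.0 if x == goal else math.inf) for x in succ}
    for _ in range(len(succ)):
        for x, edges in succ.items():
            if x != goal and edges:
                G[x] = min(cost + G[y] for y, cost in edges.items())

    print(G)   # {'a': 3.0, 'b': 5.0, 'c': 1.0, 'd': 0.0}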
After Chapter 13, it seems that differential constraints arise nearly everywhere. For example, they may arise when wheels roll, aircraft fly, and when the dynamics of virtually any mechanical system is considered. This makes the basic model used for motion planning in Part II invalid for many applications because differential constraints were neglected. Formulation 4.1, for example, was concerned only with obstacles in the C-space.
This chapter incorporates the differential models of Chapter 13 into sampling-based motion planning. The detailed modeling (e.g., Lagrangian mechanics) of Chapter 13 is not important here. This chapter works directly with a given system, expressed as ẋ = f(x, u). The focus is limited to sampling-based approaches because very little can be done with combinatorial methods if differential constraints exist. However, if there are no obstacles, then powerful analytical techniques may apply. This subject is complementary to motion planning with obstacles and is the focus of Chapter 15.
Section 14.1 provides basic definitions and concepts for motion planning under differential constraints. It is particularly important to explain the distinctions made in the literature between nonholonomic planning, kinodynamic planning, and trajectory planning, all of which are cases of planning under differential constraints. Another important point is that obstacles may be somewhat more complicated in phase spaces, which were introduced in Section 13.2. Section 14.2 introduces sampling over the space of action trajectories, which is an essential part of later planning algorithms.
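To give a flavor of what such sampling looks like in practice, the sketch below samples a piecewise-constant action trajectory and Euler-integrates a given system ẋ = f(x, u); the particular car-like f (unit wheelbase), step sizes, and action ranges are assumptions for the example rather than a prescribed model.

    import math, random

    def f(x, u):
        """Assumed car-like system with unit wheelbase:
        state x = (px, py, theta), action u = (speed, steering angle)."""
        speed, phi = u
        return (speed * math.cos(x[2]), speed * math.sin(x[2]), speed * math.tan(phi))

    def integrate(x0, actions, dt=0.05, steps_per_action=20):
        """Euler-integrate xdot = f(x, u) along a piecewise-constant action
        trajectory, the kind of sample a sampling-based planner would try."""
        x, path = x0, [x0]
        for u in actions:
            for _ in range(steps_per_action):
                dx = f(x, u)
                x = tuple(xi + dt * dxi for xi, dxi in zip(x, dx))
                path.append(x)
        return path

    random.seed(0)
    actions = [(1.0, random.uniform(-0.5, 0.5)) for _ in range(5)]   # sampled trajectory
    print(integrate((0.0, 0.0, 0.0), actions)[-1])                   # resulting final state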
This chapter serves as a building block for modeling and solving planning problems that involve more than one decision maker. The focus is on making a single decision in the presence of other decision makers that may interfere with the outcome. The planning problems in Chapters 10 to 12 will be viewed as a sequence of decision-making problems. The ideas presented in this chapter can be viewed as making a one-stage plan. With respect to Chapter 2, the present chapter reduces the number of stages down to one and then introduces more sophisticated ways to model a single stage. Upon returning to multiple stages in Chapter 10, it will quickly be seen that many algorithms from Chapter 2 extend nicely to incorporate the decision-theoretic concepts of this chapter.
Since there is no information to carry across stages, there will be no need for a state space. Instead of designing a plan for a robot, in this chapter we will refer to designing a strategy for a decision maker (DM). The planning problem reduces to a decision-making problem. In later chapters, which describe sequential decision making, planning terminology will once again be used. It does not seem appropriate yet in this chapter because making a single decision appears too degenerate to be referred to as planning.
A consistent theme throughout Part III will be the interaction of multiple DMs.
This chapter provides important background material that will be needed for Part II. Formulating and solving motion planning problems require defining and manipulating complicated geometric models of a system of bodies in space. Section 3.1 introduces geometric modeling, which focuses mainly on semi-algebraic modeling because it is an important part of Chapter 6. If your interest is mainly in Chapter 5, then understanding semi-algebraic models is not critical. Sections 3.2 and 3.3 describe how to transform a single body and a chain of bodies, respectively. This will enable the robot to “move.” These sections are essential for understanding all of Part II and many sections beyond. It is expected that many readers will already have some or all of this background (especially Section 3.2, but it is included for completeness). Section 3.4 extends the framework for transforming chains of bodies to transforming trees of bodies, which allows modeling of complicated systems, such as humanoid robots and flexible organic molecules. Finally, Section 3.5 briefly covers transformations that do not assume each body is rigid.
Geometric modeling
A wide variety of approaches and techniques for geometric modeling exist, and the particular choice usually depends on the application and the difficulty of the problem. In most cases, there are two alternatives: 1) a boundary representation, and 2) a solid representation. Suppose we would like to define a model of a planet. Using a boundary representation, we might write the equation of a sphere that roughly coincides with the planet's surface.
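The distinction is easy to state with the sphere itself. In the sketch below (the radius and test points are assumed example values), the boundary representation keeps only the surface x² + y² + z² = r², while the solid, semi-algebraic representation keeps every point satisfying x² + y² + z² − r² ≤ 0.

    def on_boundary(p, r=1.0, tol=1e-9):
        """Boundary representation: the set of points with x^2 + y^2 + z^2 = r^2."""
        x, y, z = p
        return abs(x * x + y * y + z * z - r * r) <= tol

    def in_solid(p, r=1.0):
        """Solid (semi-algebraic) representation: x^2 + y^2 + z^2 - r^2 <= 0."""
        x, y, z = p
        return x * x + y * y + z * z - r * r <= 0.0

    print(on_boundary((1.0, 0.0, 0.0)))   # True: lies on the surface
    print(in_solid((0.2, 0.3, 0.1)))      # True: lies inside the ball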
Planning is a term that means different things to different groups of people. Robotics addresses the automation of mechanical systems that have sensing, actuation, and computation capabilities (similar terms, such as autonomous systems, are also used). A fundamental need in robotics is to have algorithms that convert high-level specifications of tasks from humans into low-level descriptions of how to move. The terms motion planning and trajectory planning are often used for these kinds of problems. A classical version of motion planning is sometimes referred to as the Piano Mover's Problem. Imagine giving a precise computer-aided design (CAD) model of a house and a piano as input to an algorithm. The algorithm must determine how to move the piano from one room to another in the house without hitting anything. Most of us have encountered similar problems when moving a sofa or mattress up a set of stairs. Robot motion planning usually ignores dynamics and other differential constraints and focuses primarily on the translations and rotations required to move the piano. Recent work, however, does consider other aspects, such as uncertainties, differential constraints, modeling errors, and optimality. Trajectory planning usually refers to the problem of taking the solution from a robot motion planning algorithm and determining how to move along the solution in a way that respects the mechanical limitations of the robot.
Control theory has historically been concerned with designing inputs to physical systems described by differential equations.
Part II makes the transition from discrete to continuous state spaces. Two alternative titles are appropriate for this part: 1) motion planning, or 2) planning in continuous state spaces. Chapters 3–8 are based on research from the field of motion planning, which has been building since the 1970s; therefore, the name motion planning is widely known to refer to the collection of models and algorithms that will be covered. On the other hand, it is convenient to also think of Part II as planning in continuous spaces because this is the primary distinction with respect to most other forms of planning.
In addition, motion planning will frequently refer to motions of a robot in a 2D or 3D world that contains obstacles. The robot could model an actual robot, or any other collection of moving bodies, such as humans or flexible molecules. A motion plan involves determining what motions are appropriate for the robot so that it reaches a goal state without colliding into obstacles. Recall the examples from Section 1.2.
Many issues that arose in Chapter 2 appear once again in motion planning. Two themes that may help to see the connection are as follows.
Implicit representations
A familiar theme from Chapter 2 is that planning algorithms must deal with implicit representations of the state space. In motion planning, this will become even more important because the state space is uncountably infinite.
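In practice, the free part of the configuration space is never written down explicitly; all a planner can do is query whether a given configuration is collision-free. Here is a minimal sketch of such an implicit membership test, for an assumed point robot among circular obstacles.

    import math

    # Assumed toy world: a point robot in the plane and circular obstacles (center, radius).
    obstacles = [((2.0, 2.0), 1.0), ((4.0, 1.0), 0.5)]

    def in_cfree(q):
        """Implicit representation of the free space: membership can be tested
        for any single configuration q, but C_free is never enumerated."""
        return all(math.hypot(q[0] - cx, q[1] - cy) > r for (cx, cy), r in obstacles)

    print(in_cfree((0.0, 0.0)))   # True: far from both obstacles
    print(in_cfree((2.2, 2.1)))   # False: inside the first obstacle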