23 results
Excavation of a Neolithic House at Yarnbury, near Grassington, North Yorkshire
- Alex Gibson, Wolfgang Neubauer, Sebastian Flöry, Roland Filzwieser, Erich Nau, Petra Schneidhofer, Guglielmo Strapazzon, Cathy Batt, David Greenwood, Philippa Bradley, Dana Challinor
-
- Journal:
- Proceedings of the Prehistoric Society / Volume 83 / December 2017
- Published online by Cambridge University Press:
- 06 March 2017, pp. 189-212
- Print publication:
- December 2017
-
- Article
-
Landscape geophysical survey around the small upland ‘henge’ at Yarnbury, Grassington, North Yorkshire revealed few anthropogenic features around the enclosure but did identify a small rectangular structure in the same field. Sample trenching of this feature, together with radiocarbon and archaeomagnetic dating, proved it to be an earlier Neolithic post and wattle structure of a type that is being increasingly recognised in Ireland and the west of Britain. It is the first to be recognised in the Yorkshire Dales, and it is argued that the Dales may have been pivotal in the Neolithic for east–west trade as well as pastoral upland agriculture.
List of Algorithms
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp ix-x
-
- Chapter
Appendix B - Strongly Connected Components of a Graph
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 318-320
-
- Chapter
-
Summary
Let G = (V,E) be a directed graph. A strongly connected component of G is a subset C of V such that every vertex of C is reachable from every other vertex of C. The relation ∼ on vertices can be defined as follows: υ ∼ υ' iff either υ = υ' or υ is reachable from υ' and υ' is reachable from υ. It is an equivalence relation on V. It partitions V into equivalence classes, each being a strongly connected component of G. Furthermore, the set of strongly connected components of G is a directed acyclic graph that has an edge from C to C' when there is a vertex in C' reachable from a vertex in C.
The Tarjan algorithm [560] finds the strongly connected components of G in a single depth-first traversal. Each vertex is visited just once. Hence the traversal organizes G as a spanning forest. Some subtrees of this forest are the strongly connected components of G. During the traversal, the algorithm associates two integers to each new vertex υ it meets:
• index(υ): the order in which υ is met in the traversal, and
• low(υ) = min{index(υ') | υ' reachable from υ}
It is shown that index(υ) = low(υ) iff υ and all its successors in a traversal subtree are a strongly connected component of G.
This is implemented in Algorithm B.1 as a recursive procedure with a stack mechanism. At the end of a recursion on a vertex υ, if the condition index(υ) = low(υ) holds, then υ and all the vertices above υ in the stack (i.e., those below υ in the depth-first traversal tree) constitute a strongly connected component of G.
With the appropriate initialization (i ← 0, stack ← ∅, and index undefined everywhere), Tarjan(υ) is called once for every υ ∈ V such that index(υ) is undefined. The algorithm runs in O(|V| + |E|). It finds all the strongly connected components of G in the reverse order of the topological sort of the DAG formed by the components; that is, if (C, C') is an edge of this DAG, then C' will be found before C.
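The procedure just described can be sketched in Python (a recursive rendering in the spirit of Algorithm B.1; the adjacency-dict representation and the function names are ours, not the book's):

```python
def tarjan_scc(graph):
    """Strongly connected components of a directed graph.

    graph: dict mapping each vertex to a list of its successors.
    Returns the components in reverse topological order of the
    component DAG: if (C, C') is an edge, C' is emitted before C.
    """
    index = {}        # order in which each vertex is first met
    low = {}          # smallest index reachable within the subtree
    stack = []        # vertices whose component is not yet emitted
    on_stack = set()
    components = []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:          # tree edge: recurse
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:         # back/cross edge into the stack
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            components.append(comp)

    for v in graph:                     # one call per unvisited vertex
        if v not in index:
            visit(v)
    return components
```

For example, on the graph a → b, b → a, b → c, the component {c} is found before {a, b}, in accordance with the reverse topological order noted above.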
7 - Other Deliberation Functions
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 276-309
-
- Chapter
-
Summary
As discussed in Section 1.3, there is more to deliberation than planning and acting. This point is particularly clear in robotics, as shown in the survey by Ingrand and Ghallab [294]. Here, we briefly cover a few deliberation functions, other than planning and acting, that may be needed by an actor. We discuss in Section 7.1 deliberation on sensing tasks: how to model them and control them to recognize the state of the world and detect objects, events, and activities in the environment that are relevant to the actor, for and while performing its own actions. Section 7.2 is about monitoring and goal reasoning, that is, detecting and interpreting discrepancies between predictions and observations, anticipating what needs to be monitored, controlling monitoring actions, and assessing the relevance of commitments and goals from observed evolutions, failures, and opportunities. Learning and model acquisition techniques in planning and acting are surveyed in Section 7.3; we cover in particular reinforcement learning and learning-from-demonstration approaches.
This chapter also surveys approaches for handling hybrid models that have continuous and discrete components (Section 7.4), which are needed in domains where part of the dynamics is naturally expressed with continuous differential equations. We finally devote Section 7.5 to representations for expressing ontologies, which can be essential for modeling a domain; we discuss their use in planning and acting.
PERCEIVING
Deliberation is mostly needed for an actor facing a diversity of situations in an open environment. Such an actor generally has partial knowledge about the initial state of the world and its possible evolution. It needs to be able to perceive what is relevant for its activity and to deliberate about its perception, while acting and perceiving.
Reasoning on perception raises several problems, among them the following:
• Reliability: how reliable are sensing and information-gathering actions? What verification and confirmation steps are needed to confirm that the sensed value of a state variable is correct? How to assess the distribution of values if uncertainty is explicitly modeled?
• Observability: how to acquire information about nonobservable state variables from the observable ones? How to balance costly observations with approximate estimates?
• Persistence: how long can one assume that a state variable keeps its previous value as long as new observations do not contradict it?
Index
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 351-354
-
- Chapter
2 - Deliberation with Deterministic Models
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 19-73
-
- Chapter
-
Summary
Having considered the components of an actor and their relation to the actor's environment, we now need to develop some representational and algorithmic tools for performing the actor's deliberation functions. In this chapter, we develop a simple kind of descriptive model for use in planning, describe some planning algorithms that can use this kind of model, and discuss some ways for actors to use those algorithms.
This chapter is organized as follows. Section 2.1 develops state-variable representations of planning domains. Sections 2.2 and 2.3 describe forward-search planning algorithms, and heuristics to guide them. Sections 2.4 and 2.5 describe backward-search and plan-space planning algorithms. Section 2.6 describes some ways for an actor to use online planning. Sections 2.7 and 2.8 contain the discussion and historical remarks, and the student exercises.
STATE-VARIABLE REPRESENTATION
The descriptive models used by planning systems are often called planning domains. However, it is important to keep in mind that a planning domain is not an a priori definition of the actor and its environment. Rather, it is necessarily an imperfect approximation that must incorporate trade-offs among several competing criteria: accuracy, computational performance, and understandability to users.
State-Transition Systems
In this chapter, we use a simple planning-domain formalism that is similar to a finite-state automaton:
Definition 2.1. A state-transition system (also called a classical planning domain) is a triple Σ = (S, A, γ) or 4-tuple Σ = (S, A, γ, cost), where
• S is a finite set of states in which the system may be.
• A is a finite set of actions that the actor may perform.
• γ : S × A → S is a partial function called the prediction function or state-transition function. If (s, a) is in γ's domain (i.e., γ(s, a) is defined), then a is applicable in s, with γ(s, a) being the predicted outcome. Otherwise a is inapplicable in s.
• cost : S × A → [0, ∞) is a partial function having the same domain as γ. Although we call it the cost function, its meaning is arbitrary: it may represent monetary cost, time, or something else that one might want to minimize. If the cost function isn't given explicitly (i.e., if Σ = (S, A, γ)), then cost(s, a) = 1 whenever γ(s, a) is defined.
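For concreteness, a minimal Σ = (S, A, γ, cost) can be encoded directly; the following Python sketch uses a toy robot-at-a-dock domain of our own invention, with γ stored as a dictionary to make its partiality explicit:

```python
# Toy classical planning domain: a robot that is at dock d1 or d2.
S = {'at-d1', 'at-d2'}                   # finite set of states
A = {'move-to-d1', 'move-to-d2'}         # finite set of actions

# Partial prediction function gamma: defined only where applicable.
gamma = {
    ('at-d1', 'move-to-d2'): 'at-d2',
    ('at-d2', 'move-to-d1'): 'at-d1',
}

def apply_action(s, a, cost=None):
    """Return (predicted state, cost) if a is applicable in s, else None.

    cost, if given, is a dict with the same domain as gamma;
    otherwise every applicable action has unit cost, as in the text.
    """
    if (s, a) not in gamma:
        return None                      # a is inapplicable in s
    c = cost[(s, a)] if cost else 1      # default: cost(s, a) = 1
    return gamma[(s, a)], c
```

Note that moving to d1 while already at d1 is simply outside γ's domain: partiality, not an error case, models inapplicability.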
Bibliography
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 321-350
-
- Chapter
8 - Concluding Remarks
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 310-312
-
- Chapter
-
Summary
In this book, we have studied computational reasoning principles and mechanisms to support choosing and performing actions. Here are some observations about the current status of work on those topics.
Extensive work has been done on automated planning, ranging from classical planning techniques to extended approaches dealing with temporal, hierarchical, nondeterministic, and probabilistic models. The field has progressed tremendously, and a strong community of scientists is continually producing new results, technology, and tools.
Issues related to acting have also attracted much attention, and the state of the art is broad and rich, but it is quite fragmented. The relationships among different approaches have not yet been studied in depth, and a unifying and formal account of acting is not available in the same way as it is in the field of automated planning.
Furthermore, the problems of how to generate plans and how to perform synthesized actions have been mainly studied separately, and a better understanding is needed of the relationships between planning and acting. One of the usual assumptions in research on planning is that actions are directly executable, and this assumption is used even in the work on interleaving online planning and execution. In most cases, however, acting cannot be reduced to the direct execution of atomic commands that have been chosen by a planner. Significant deliberation is needed for an actor to perform what is planned.
In this book, we have addressed the state of the art from a unifying perspective. We have presented techniques for doing planning with deterministic, hierarchical, temporal, nondeterministic, and probabilistic models, and have discussed approaches for reacting to events and refining actions into executable commands. In doing this, we have distinguished between two kinds of models:
• Descriptive models of actions specify the actor's “know what.” They describe which state or set of possible states may result from performing an action. They are used by the actor to reason about which actions may achieve its objectives.
• Operational models of actions specify the actor's “know how.” They describe how to perform an action, that is, what commands to execute in the current context and how to organize them to achieve the action's intended effects. The actor relies on operational models to perform the actions that it has decided to perform.
3 - Deliberation with Refinement Methods
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 74-113
-
- Chapter
-
Summary
Chapter 2 concentrated mostly on planning with descriptive action models. Although it described some ways for an actor to receive guidance from such a planner, it did not describe the operational models that an actor might need to perform the planned actions. In the current chapter, we present a formalism for operational models and describe how to use these models for deliberative acting.
Section 3.1 describes a formalism for operational models based on refinement methods. A method specifies how to accomplish a task (an abstract activity of some kind) by refining it into other activities that are less abstract. These activities may include other tasks that will need further refinement and commands that can be sent to the execution platform. Section 3.2 describes an acting procedure, RAE, that uses a collection of refinement methods to generate and traverse a refinement tree similar to the one in Figure 1.2. It recursively refines abstract activities into less abstract activities, ultimately producing commands to the execution platform.
If we modify the refinement methods by replacing the commands with descriptive models, the modified methods can also be used for planning. The basic idea is to augment the acting procedure with predictive lookahead of the possible outcome of commands that can be chosen. Section 3.3 describes a planner, SeRPE, that does this. Section 3.4 describes how to integrate such a planner into acting procedures.
Although the formalism in this chapter removes many of the simplifying assumptions that we made in Chapter 2, it still incorporates some assumptions that do not always hold in practical applications. Section 3.5 discusses these and also includes historical remarks.
OPERATIONAL MODELS
In this section, we present a formalism for operational models of actions and describe how to use these models for deliberative acting. This formalism weakens or removes several of the simplifying assumptions that we made in Section 2.1.1:
• Dynamic environment. The environment is not necessarily static. Our operational models deal with exogenous events, that is, events due to causes other than the actor's actions.
• Imperfect information. In Section 2.1.1, we assumed that the actor had perfect information about its environment. In reality, it is rare for an actor to be able to know the current value of every state variable and to maintain this knowledge while the world evolves.
Preface
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp xv-xviii
-
- Chapter
-
Summary
This book is about methods and techniques that a computational agent can use for deliberative planning and acting, that is, for deciding both which actions to perform and how to perform them, to achieve some objective. The study of deliberation has several scientific and engineering motivations.
Understanding deliberation is an objective for most cognitive sciences. In artificial intelligence research, this is done by modeling deliberation through computational approaches, both to enable it and to allow it to be explained. Furthermore, the investigated capabilities are better understood by mapping concepts and theories into designed systems and experiments in order to test empirically, measure, and qualify the proposed models.
The engineering motivation for studying deliberation is to build systems that exhibit deliberation capabilities and develop technologies that address socially useful needs. A technological system needs deliberation capabilities if it must autonomously perform a set of tasks that are too diverse – or must be done in environments that are too diverse – to engineer those tasks into innate behaviors. Autonomy and diversity of tasks and environments are critical features in many applications, including robotics (e.g., service and personal robots; rescue and exploration robots; autonomous space stations, satellites, or vehicles), complex simulation systems (e.g., tutoring, training, or entertainment), and complex infrastructure management (e.g., industrial or energy plants, transportation networks, urban facilities).
MOTIVATION AND COVERAGE
The coverage of this book derives from the view we advocated in our previous work [231], which we now briefly summarize.
Automated planning is a rich technical field, which benefits from the work of an active and growing research community. Some areas in this field are extensively explored and correspond to a number of already mature techniques. However, there are other areas in which further investigation is critically needed if automated planning is to have a wider impact on a broader set of applications. One of the most important such areas, in our view, is the integration of planning and acting. This book covers several different kinds of models and approaches – deterministic, hierarchical, temporal, nondeterministic, and probabilistic – and for each of them, we discuss not only the techniques themselves but also how to use them in the integration of planning and acting.
6 - Deliberation with Probabilistic Models
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 217-275
-
- Chapter
-
Summary
In this chapter, we explore various approaches for using probabilistic models to handle the uncertainty and nondeterminism in planning and acting problems. These approaches are mostly based on dynamic programming optimization methods for Markov decision processes. We explain the basic principles and develop heuristic search algorithms for stochastic shortest-path problems. We also propose several sampling algorithms for online probabilistic planning and discuss how to augment refinement methods for acting with probabilistic models. The chapter also discusses the critical issue of specifying a domain with probabilistic models.
Our motivations for using probabilistic models, and our main assumptions, are briefly introduced next. Section 6.2 defines stochastic shortest-path problems and basic approaches for solving them. Different heuristic search algorithms for these problems are presented and analyzed in Section 6.3. Online probabilistic planning approaches are covered in Section 6.4. Refinement methods for acting with probabilistic models are presented in Section 6.5. Sections 6.6 and 6.7 are devoted to factored representations and domain modeling issues with probabilistic models, respectively. The main references are given in the discussion and historical remarks of Section 6.8. The chapter ends with exercises.
INTRODUCTION
Some of the motivations for deliberation with probabilistic models are similar to those introduced in Chapter 5 for addressing nondeterminism: the future is never entirely predictable, models are necessarily incomplete, and, even in predictable environments, complete deterministic models are often too complex and costly to develop. In addition, probabilistic planning considers that the possible outcomes of an action are not equally likely. Sometimes, one is able to estimate the likelihood of each outcome, relying for example on statistics of past observations. Probabilistic planning addresses those cases in which it is desirable to seek plans optimized with respect to the estimated likelihood of the effects of their actions.
The usual formal model of probabilistic planning is that of Markov decision processes (MDPs). An MDP is a nondeterministic state-transition system together with a probability distribution and a cost distribution. The probability distribution defines how likely it is to get to a state s' when an action a is performed in a state s.
A probabilistic state-transition system is said to be Markovian if the probability distribution of the next state depends only on the current state and not on the sequence of states that preceded it.
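The Markovian transition model can be sketched in a few lines of Python; the two-state fragment below is purely illustrative (state and action names are made up):

```python
import random

# Illustrative MDP fragment: Pr[s' | s, a] stored as
# {(s, a): {s': probability}}. Markovian: the distribution over the
# next state depends only on the current state s and action a.
P = {
    ('s0', 'a'): {'s1': 0.8, 's2': 0.2},   # a usually reaches s1
    ('s1', 'b'): {'s1': 1.0},              # b is deterministic here
}

def next_state(s, a, rng=random):
    """Sample a successor state of (s, a) from its distribution."""
    dist = P[(s, a)]
    r, acc = rng.random(), 0.0
    for sp, p in dist.items():
        acc += p
        if r < acc:
            return sp
    return sp        # guard against floating-point round-off

# Sanity check: each distribution sums to 1 over its successors.
assert all(abs(sum(d.values()) - 1.0) < 1e-9 for d in P.values())
```

A deterministic action is simply the special case of a distribution that puts probability 1 on a single successor, as for ('s1', 'b') above.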
Dedication
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp v-vi
-
- Chapter
4 - Deliberation with Temporal Models
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 114-154
-
- Chapter
-
Summary
This chapter is about planning and acting approaches in which time is explicit in the descriptive and operational models of actions, as well as in the models of the expected evolution of the world. It describes several algorithms and computation methods for handling durative and concurrent activities with respect to a predicted dynamics.
The first section addresses the need to make time explicit in the deliberation of an actor. A knowledge representation for modeling actions with temporal variables is presented in Section 4.2. It relies on an extension of the refinement methods introduced earlier, which are seen here as chronicles, that is, collections of assertions and tasks with explicit temporal constraints. A planning algorithm with temporal refinement methods is developed in Section 4.3. The basic techniques for managing temporal constraints and the controllability of temporal plans are then presented in Section 4.4. Acting problems with temporal domain models are discussed, considering different types of operational models, in Section 4.5. The chapter concludes with a discussion and historical remarks, followed by exercises.
INTRODUCTION
To perform an action, different kinds of resources may need to be borrowed (e.g., space, tools) or consumed (e.g., energy). Time is a resource required by every action, but it differs from other types of resources. It flows independently of whether any actions are being performed, and it can be shared ad infinitum by independent actors as long as their actions do not interfere with each other.
In previous chapters, we left time implicit in our models: an action produced an instantaneous transition from one state to the next. However, deliberative acting often requires explicit temporal models of actions. Rather than just specifying an action's preconditions and effects, temporal models must specify what things an action requires and what events it will cause at various points during the action's performance. For example, moving a robot r1 from a loading dock d1 to a loading dock d2 does not require d2's availability at the outset but does require it shortly before r1 reaches d2.
Actions may, and sometimes must, overlap, even if their conditions and effects are not independent. As one example, r1 may move from d1 to d2 while r2 is moving from d2 to d1. As another, opening a door that has a knob and a spring latch that controls the knob requires two tightly synchronized actions: (i) pushing and maintaining the latch while (ii) turning the knob.
Frontmatter
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp i-iv
-
- Chapter
Appendix A - Search Algorithms
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 313-317
-
- Chapter
-
Summary
This appendix provides background information about several of the search algorithms used in this book. These are nondeterministic state-space search (Section A.1) and And/Or search (Section A.2).
NONDETERMINISTIC STATE-SPACE SEARCH
Many of the planning algorithms in this book have been presented as nondeterministic search algorithms and can be described as instances of Algorithm A.1, Nondeterministic-Search. In most implementations of these algorithms, line (iii) corresponds to trying several members of R sequentially in a trial-and-error fashion. The “nondeterministically choose” command is an abstraction that lets us ignore the precise order in which those values are tried. This enables us to discuss properties that are shared by a wide variety of algorithms that search the same space of partial solutions, even though those algorithms may visit different nodes of that space in different orders.
There are several theoretical models of nondeterministic choice that are more-or-less equivalent mathematically [213, 464, 131]. The one that is most relevant for our purposes is the nondeterministic Turing machine model, which works roughly as follows.
Let ψ(P) be a process produced by calling Nondeterministic-Search on a search problem P. Whenever this process reaches line (iii), it replaces ψ(P) with |R| copies of ψ(P) running in parallel: one copy for each r ∈ R. Each process corresponds to a different execution trace of ψ(P), and each execution trace follows one of the paths in ψ(P)'s search tree (see Figure A.1). Each execution trace that terminates will either return failure or return a purported answer to P.
Two desirable properties for a search algorithm ψ are soundness and completeness, which are defined as follows:
• ψ is sound over a set of search problems P if for every P ∈ P and every execution trace of ψ(P), if the trace terminates and returns a value π ≠ failure, then π is a solution for P. This will happen if the solution test in line (i) is sound.
Algorithm A.1 Equivalent iterative and recursive versions of a generic nondeterministic search algorithm. The arguments include the search problem P and (in the recursive version) a partial solution π, the initial value of which should be the empty plan.
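A deterministic implementation typically realizes the "nondeterministically choose" of line (iii) by trying the members of R sequentially with backtracking. A minimal Python sketch of the recursive version follows; it is our own rendering, not Algorithm A.1 verbatim, and the search problem P is represented here by two caller-supplied functions:

```python
def nondeterministic_search(is_solution, refinements, pi=()):
    """Backtracking realization of Nondeterministic-Search.

    is_solution(pi) -> True if the partial solution pi solves P (line i).
    refinements(pi) -> the set R of ways to extend pi          (line ii).
    Line (iii)'s nondeterministic choice becomes a loop that tries
    each r in R until some execution trace returns a solution.
    """
    if is_solution(pi):
        return pi
    for r in refinements(pi):            # sequential trial-and-error
        result = nondeterministic_search(is_solution, refinements,
                                         pi + (r,))
        if result is not None:
            return result
    return None                          # failure: every trace failed
```

For instance, with refinements(pi) = [1, 2] for |pi| < 4 (and no refinements beyond length 4), searching for a tuple summing to 5 explores the traces depth-first and returns (1, 1, 1, 2). Each iteration of the loop corresponds to one of the parallel execution traces in the Turing-machine reading above, explored one after another.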
Table of Notation
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp xi-xii
-
- Chapter
5 - Deliberation with Nondeterministic Models
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 155-216
-
- Chapter
-
Summary
In this chapter we drop the unrealistic assumption of determinism, that is, the assumption that each action performed in one state leads deterministically to one state. This apparently simple extension introduces uncertainty in the model of the domain and requires new approaches to planning and acting. Deliberation must take into account that actions can lead to a set of states; plans are no longer sequences of actions, but conditional plans; solutions may have different strengths. Deliberative acting with nondeterministic models allows us to take into account uncertainty when actions are performed.
The main motivations for planning and acting with nondeterministic models are in Section 5.1. The planning problem is formalized in Section 5.2. In the subsequent three sections we present some different approaches to planning with nondeterministic models: And/Or graph search (Section 5.3), symbolic model checking (Section 5.4), and determinization techniques (Section 5.5). In Section 5.6, we present techniques that interleave planning and acting. In Section 5.7, we present planning techniques with refinement methods and nondeterministic models, and in Section 5.8 we show techniques for deliberative acting with input/output automata. Comparisons among different approaches and main references are given in the discussion and historical remarks in Section 5.9. The chapter ends with a few exercises.
INTRODUCTION AND MOTIVATION
Recall that in deterministic models, the prediction of the effects of an action is deterministic: only one state is predicted as the result of performing an action in a state (see Chapter 2, Section 2.1.1, assumption in item 3). Nondeterministic models predict alternative options: an action, when applied in a state, may result in one among several possible states. Formally, γ(s, a) returns a set of states rather than a single state. The extension allowed by nondeterministic models is important because it allows for modeling the uncertainty of the real world.
In some cases, using a deterministic or a nondeterministic model is a design choice. For instance, in the real world, the execution of an action may either succeed or fail. Despite this, in many cases, it still makes sense to model just the so-called nominal case (in which failure does not occur), monitor execution, detect failure when it occurs, and recover, for example, by replanning or by re-acting with some failure-recovery mechanism. In these cases, deterministic models can still be a convenient choice.
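The contrast between the nominal-case model and the full nondeterministic one can be made concrete with a small sketch; the door example below, including its state and action names, is our own illustration:

```python
# Nondeterministic prediction function: gamma(s, a) is a *set* of
# possible successor states, rather than a single state.
gamma = {
    ('door-closed', 'open-door'): {'door-open', 'door-stuck'},
    ('door-open', 'go-through'): {'inside'},
}

def successors(s, a):
    """All states that may result from performing a in s."""
    return gamma.get((s, a), set())   # empty set: a inapplicable in s

# The nominal deterministic model would keep only 'door-open';
# the nondeterministic model also captures the failure outcome
# 'door-stuck', which a conditional plan must then handle.
```

A plan over such a model must cover every state in γ(s, a), which is why solutions become conditional plans rather than action sequences.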
1 - Introduction
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp 1-18
-
- Chapter
-
-
Summary
This chapter introduces informally the concepts and technical material developed in the rest of the book. It discusses in particular the notion of deliberation, which is at the core of the interaction between planning and acting. Section 1.1 motivates our study of deliberation from a computational viewpoint and delineates the scope of the book. We then introduce a conceptual view of an artificial entity, called an actor, capable of acting deliberately on its environment, and discuss our main assumptions. Deliberation models and functions are presented next. Section 1.4 describes two application domains that will be simplified into illustrative examples of the techniques covered in the rest of the book.
PURPOSE AND MOTIVATIONS
First Intuition
What is deliberative acting? That is the question we are studying in this book. We address it by investigating the computational reasoning principles and mechanisms supporting how to choose and perform actions.
We use the word action to refer to something that an agent does, such as exerting a force, a motion, a perception, or a communication, in order to make a change in its environment and its own state. An agent is any entity capable of interacting with its environment. An agent acting deliberately is motivated by some intended objective. It performs one or several actions that are justifiable by sound reasoning with respect to this objective.
Deliberation for acting consists of deciding which actions to undertake and how to perform them to achieve an objective. It refers to a reasoning process, both before and during acting, that addresses questions such as the following:
• If an agent performs an action, what will the result be?
• Which actions should an agent undertake, and how should the agent perform the chosen actions to produce a desired effect?
Such reasoning allows the agent to predict, to decide what to do and how to do it, and to combine several actions that contribute jointly to the objective. The reasoning consists in using predictive models of the agent's environment and capabilities to simulate what will happen if the agent performs an action. Let us illustrate these abstract notions intuitively.
Contents
- Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris, Dana Nau, University of Maryland, College Park, Paolo Traverso
-
- Book:
- Automated Planning and Acting
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016, pp vii-viii
-
- Chapter
Automated Planning and Acting
- Malik Ghallab, Dana Nau, Paolo Traverso
-
- Published online:
- 05 August 2016
- Print publication:
- 09 August 2016
-
Autonomous AI systems need complex computational techniques for planning and performing actions. Planning and acting require significant deliberation because an intelligent system must coordinate and integrate these activities in order to act effectively in the real world. This book presents a comprehensive paradigm of planning and acting using the most recent and advanced automated-planning techniques. It explains the computational deliberation capabilities that allow an actor, whether physical or virtual, to reason about its actions, choose them, organize them purposefully, and act deliberately to achieve an objective. Useful for students, practitioners, and researchers, this book covers state-of-the-art planning techniques, acting techniques, and their integration which will allow readers to design intelligent systems that are able to act effectively in the real world.