This part of the book is devoted to acting, planning, and learning with operational models of actions expressed in a hierarchical task-oriented representation. Operational models are valuable for acting: they allow detailed descriptions of complex actions that handle dynamic environments with exogenous events. The representation relies on hierarchical refinement methods that describe alternative ways to handle tasks and react to events. A method can be any complex algorithm that decomposes a task into subtasks and primitive actions. Subtasks are refined recursively. Actions trigger the execution of sensory-motor procedures in closed loops that query and change the world stochastically.
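The recursive refinement of tasks into subtasks and primitive actions can be sketched in a few lines. The following is a minimal illustration, not the book's algorithm; the domain (a "fetch" task), the method table, and all names are made up for the example, and real methods would of course be arbitrary programs rather than fixed subtask lists.

```python
# Hypothetical refinement domain: each entry maps a task to the ordered
# subtasks of its (single) applicable method; anything without an entry
# is treated as a primitive action.
methods = {
    "fetch(cup)": ["goto(kitchen)", "pickup(cup)"],
    "pickup(cup)": ["open_gripper", "grasp(cup)"],
}

def refine(task, methods):
    """Recursively refine a task into a flat sequence of primitive actions."""
    if task not in methods:          # primitive action: no method applies
        return [task]
    plan = []
    for subtask in methods[task]:    # refine each subtask in order
        plan.extend(refine(subtask, methods))
    return plan

plan = refine("fetch(cup)", methods)
# plan == ['goto(kitchen)', 'open_gripper', 'grasp(cup)']
```

The sketch keeps only the hierarchical structure; it omits preconditions, state, alternative methods, and backtracking, which are what make refinement acting and planning nontrivial.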
Task and motion planning (TAMP) problems combine abstract causal relations, from preconditions to effects, with computational geometry, kinematics, and dynamics. This chapter is about integrating planning for motion and manipulation with planning for abstract actions. It introduces the main sampling-based algorithms for motion planning, then manipulation planning, and finally a few approaches specific to TAMP.
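A representative sampling-based motion planner is the rapidly exploring random tree (RRT). The toy 2-D version below is only a sketch under simplifying assumptions: the workspace bounds, goal bias, and step size are arbitrary choices, and collision checking, the heart of a real planner, is omitted.

```python
import math
import random

def rrt(start, goal, steps=500, eps=0.5, seed=0):
    """Toy 2-D RRT: grow a tree from start toward random samples.
    Obstacle checks are omitted for brevity; a real planner would
    reject extensions that collide with obstacles."""
    random.seed(seed)
    tree = {start: None}                 # node -> parent
    for _ in range(steps):
        # with 10% probability sample the goal itself (goal bias)
        q = goal if random.random() < 0.1 else (random.uniform(0, 10),
                                                random.uniform(0, 10))
        near = min(tree, key=lambda n: math.dist(n, q))
        d = math.dist(near, q)
        if d == 0:
            continue
        step = min(eps, d)               # extend at most eps toward the sample
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        tree[new] = near
        if math.dist(new, goal) < eps:   # close enough: extract the path
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

Manipulation and TAMP planners build on such primitives, interleaving this geometric search with the abstract causal level.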
This chapter is about planning approaches with explicit time in the descriptive and operational models of actions, as well as in the models of the expected evolution of the world not caused by the actor. It describes a planning algorithm that handles durative and concurrent activities with respect to the predicted dynamics of the world. Section 17.1 presents a knowledge representation for modeling actions and tasks with temporal variables using temporal refinement methods. Temporal plans and planning problems are defined as chronicles, i.e., collections of assertions and tasks with explicit temporal constraints. A planning algorithm with temporal refinement methods is developed in Section 17.2. The basic techniques for managing temporal and domain constraints are then presented in Section 17.3.
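A standard building block for managing explicit temporal constraints is the simple temporal network: constraints of the form t_j − t_i ≤ ub are checked for consistency by looking for negative cycles in the distance graph. The sketch below uses Floyd–Warshall for this; the timepoints and bounds in the example are illustrative, not taken from the chapter.

```python
def stn_consistent(n, constraints):
    """Simple temporal network consistency check.
    constraints: list of (i, j, ub) meaning t_j - t_i <= ub over
    timepoints 0..n-1. Returns False iff the distance graph built
    from the constraints contains a negative cycle."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, ub in constraints:
        d[i][j] = min(d[i][j], ub)       # keep the tightest bound
    for k in range(n):                   # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# Illustrative: an activity from t0 to t1 lasting at least 5 time units
# (t0 - t1 <= -5), at most 10, with a deadline of 8 -> consistent.
ok = stn_consistent(2, [(0, 1, 10), (1, 0, -5), (0, 1, 8)])
# A deadline of 4 with the same minimum duration -> inconsistent.
bad = stn_consistent(2, [(0, 1, 4), (1, 0, -5)])
```

Chronicle-based planners maintain such a network incrementally as assertions and tasks are added, rather than rechecking from scratch.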
This chapter is about two key aspects of learning with deterministic models: learning heuristics to speed up the search for a solution plan and the automated synthesis of the model itself. We discuss how to learn heuristics for exploring parts of the search space that are more likely to lead to solutions. We then address the problem of how to learn a deterministic model, with a focus on learning action schemas.
Chapter 6 ultimately analyzes the Quantified Self (QS) as a gateway to the notions of difference that continue to shape the tech sector and therefore the devices that derive from it. As it considers the structural inequality that still constrains technological innovation, this chapter also analyzes QS as a site more specifically connected to the forms of privilege that impact how entrepreneurial extracurricular labor becomes converted into business advantage. It emphasizes that the modalities of participation that have rendered QS a community of tech acolytes unevenly regulate who can benefit from the group’s role as an instrument of professional transfiguration, connection, and access.
The camera slowly scans Chris Dancy’s face, first focusing on a profile of his bespectacled eyes, then quickly switching to a frontal shot to examine his contemplative expression at close range. Seconds later, the angle shifts again, the panorama now filmed as though from behind Dancy’s shoulder. The foreground looks blurry to start with. But once the lens adjusts, the viewer clearly sees the nearby cityscape at which Dancy longingly gazes.
The motivation for acting and planning with probabilistic models is to handle uncertainty quantitatively, with optimal or near-optimal decisions. The future is never entirely and precisely predictable. Uncertainty can be due to exogenous events in the environment, from nature and other actors; to noisy sensing and information-gathering actions; and to possible failures and outcomes of imprecise or intrinsically nondeterministic actions. Models are necessarily incomplete. Knowledge about open environments is partial. Part of what may happen can only be modeled with uncertainty. Even in closed, predictable environments, complete deterministic models may be too complex to develop. The three chapters in Part III tackle acting, planning, and learning in a probabilistic setting.
This chapter is about planning techniques for solving MDP problems. It presents algorithms that seek optimal or near-optimal solution policies for a domain. Most of the chapter focuses on indefinite-horizon goal-reachability domains that have positive costs and a safe solution; they may have dead ends, but those are avoidable. The chapter presents dynamic programming algorithms, heuristic search methods and their heuristics, linear programming methods, and online and Monte Carlo tree search techniques.
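The simplest of the dynamic programming algorithms mentioned here is value iteration, which repeatedly applies the Bellman update until the value function stabilizes. The following is a minimal sketch for a goal-reachability problem with positive action costs; the two-state domain and all names are invented for the illustration.

```python
def value_iteration(states, goal, actions, eps=1e-6):
    """Bellman updates for expected cost-to-goal.
    actions[s] -> list of (cost, [(prob, next_state), ...]) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:                # cost-to-goal at the goal is 0
                continue
            best = min(c + sum(p * V[s2] for p, s2 in outs)
                       for c, outs in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:                  # converged within tolerance
            return V

# Illustrative domain: a noisy move from s0 that succeeds half the time,
# then a deterministic step from s1 to the goal g; each action costs 1.
states = ["s0", "s1", "g"]
actions = {
    "s0": [(1.0, [(0.5, "s1"), (0.5, "s0")])],
    "s1": [(1.0, [(1.0, "g")])],
}
V = value_iteration(states, "g", actions)
# V["s1"] -> 1.0, and V["s0"] solves V = 1 + 0.5*1 + 0.5*V, i.e. 3.0
```

Heuristic search methods such as LAO* or LRTDP improve on this by updating only states reachable under the current best policy, guided by an admissible heuristic.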
Chapter 5 investigates “seamlessly” networked self-tracking tools as symbols of idealized professional mobility and looks to the Quantified Self (QS) as a forum that responds to and registers these business challenges and ambitions. Technologists tend to fetishize frictionless digital mobility. Conversations with digital professionals who participate in forums such as QS, however, indicate that the attractiveness of well-networked devices resonates less with the emerging realities of wearable technology or consumer “needs and wants” (a concern thematized in Chapter 3) than the ideals of lasting and sustainable tech sector careers that are otherwise punctuated by instability and breakdown. These are the additional entrepreneurial desires that motivate the making of self-tracking technology and become embedded in its design. QS also acts as a practical source of mutual aid that facilitates the desired connectivity and agility of working bodies. This chapter thus investigates QS as an interface that reconciles the technological fantasy and its repetitious recital with the difficulties tech executives face in their personal lives and professional work.
Chapter 3 details how business executives have interacted with the Quantified Self (QS) as a site that materializes a particular consumer “segment” and consumer “demand” in ways that accord with the binary and voyeuristic principles of consumer-centric design. QS offers visibility into ways technologists produce the distance they seek to see between themselves and their customers. However, the manner in which they interact with the forum also testifies to the involved role digital professionals frequently play in formulating consumer desire.
The participation of tech executives in collectives such as the Quantified Self (QS) belies their desire to occupy the position of a professional participant observer who is simply looking in on an emerging social scene as though from afar. Chapter 4 looks at the way technologists have leveraged QS to cultivate a professional identity of a digital devotee. In particular, it analyzes how the popular staging of QS as a space for private explorations of self-tracking makes it possible for technologists to recoup their business-driven engagements with the forum as hallmarks of personal – as well as of more general – passion for self-quantification, a display of which has become increasingly necessary for success in the tech sector. Innovation is often framed as a product of masculinized heroics and individual acts of daring. Examining QS as an instrument of professional development refocuses attention on the feminized modes of free and affective labor that continue to move the tech industry forward. As these chapters explore the forum both as a mechanism and as a mirror of these professional imperatives, they highlight the knottier role desire plays in the digital economy.
In the conclusion, I consider how the Quantified Self (QS) has evolved since I completed my research in 2017. The composition and social function of this collective have been partially reshaped by its original organizers who have continued to focus group activities on citizen science and academic research. Groups such as QS have also become affected by the COVID-19 pandemic, which has altered the nature and function of in-person and post-work socializing in the commercial sphere more broadly. Nevertheless, the industry practices, challenges, and promises refracted through the QS interface in this book remain germane as they speak to some of the central dynamics that continue to impact the self-tracking market, if now in a different guise.
In probabilistic models, an action can have several possible outcomes that are not equally likely; their distribution can be estimated relying on statistics of past observations. The purpose is to act optimally with respect to an optimization criterion over the estimated likelihood of action effects and their cost. The usual formal probabilistic models are Markov decision processes (MDPs). An MDP is a nondeterministic state-transition system with a probability distribution and a cost distribution. The probability distribution defines how likely it is to get to a state 𝑠′ when an action 𝑎 is performed in a state 𝑠. The chapter presents MDPs first in a flat and then in a structured state-space representation. Section 8.3 covers modeling issues of a probabilistic domain with MDPs and variants such as the stochastic shortest path model (SSP) or the constrained MDP (C-MDP) model. Section 8.4 focuses on acting with MDPs. Partially observable MDPs and other extended models are discussed in Section 8.5.
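The remark that outcome distributions can be estimated from statistics of past observations amounts, in the simplest case, to a maximum-likelihood estimate of P(𝑠′ | 𝑠, 𝑎) from transition counts. A minimal sketch, with an invented transition log:

```python
from collections import Counter, defaultdict

def estimate_transitions(log):
    """Maximum-likelihood estimate of P(s' | s, a) from observed
    (state, action, next_state) triples: normalized counts."""
    counts = defaultdict(Counter)
    for s, a, s2 in log:
        counts[(s, a)][s2] += 1
    return {
        sa: {s2: n / sum(c.values()) for s2, n in c.items()}
        for sa, c in counts.items()
    }

# Hypothetical log: "move" from s0 reached s1 twice and stayed once.
log = [("s0", "move", "s1"), ("s0", "move", "s1"),
       ("s0", "move", "s0"), ("s1", "move", "g")]
P = estimate_transitions(log)
# P[("s0", "move")] -> {"s1": 2/3, "s0": 1/3}
```

With few observations, such raw frequencies are unreliable; smoothed or Bayesian estimates are the usual remedy.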
This chapter introduces stochastic gradient MCMC (SG-MCMC) algorithms, designed to scale Bayesian inference to large datasets. Beginning with the unadjusted Langevin algorithm (ULA), it extends to more sophisticated methods such as stochastic gradient Langevin dynamics (SGLD). The chapter emphasises controlling the stochasticity in gradient estimators and explores the role of control variates in reducing variance. Convergence properties of SG-MCMC methods are analysed, with experiments demonstrating their performance in logistic regression and Bayesian neural networks. It concludes by outlining a general framework for SG-MCMC and offering practical guidance for efficient, scalable Bayesian learning.
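At the core of SGLD is a Langevin update: a half-step of (estimated) gradient ascent on the log posterior plus injected Gaussian noise scaled to the step size. The sketch below runs the update on a 1-D standard-normal target; the target, step size, and chain length are illustrative, and for simplicity it uses the full gradient where SGLD proper would substitute a noisy minibatch estimate.

```python
import math
import random

def sgld(grad_log_post, theta0, step, iters, seed=0):
    """One SGLD chain: theta <- theta + (step/2)*grad + sqrt(step)*noise.
    grad_log_post should be a (possibly stochastic minibatch) estimate
    of the gradient of the log posterior; here we pass the exact one."""
    random.seed(seed)
    theta, samples = theta0, []
    for _ in range(iters):
        noise = random.gauss(0.0, 1.0)
        theta = (theta
                 + 0.5 * step * grad_log_post(theta)
                 + math.sqrt(step) * noise)
        samples.append(theta)
    return samples

# Standard normal target: grad log p(theta) = -theta.
samples = sgld(lambda t: -t, theta0=5.0, step=0.1, iters=5000)
burned = samples[1000:]                  # discard burn-in
mean = sum(burned) / len(burned)         # should be near the target mean 0
```

With a constant step size the chain is biased (the Metropolis correction of MALA is dropped), which is exactly the stochasticity-versus-accuracy trade-off the chapter's convergence analysis addresses.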