Although many mobile robot systems are experimental in nature, systems devoted to specific practical applications are being developed and deployed. This chapter examines some of the tasks for which mobile robotic systems are beginning to appear and describes several experimental and production systems that have been developed for them.
For many tasks, a mobile robot needs to know “where it is” either on an ongoing basis or when specific events occur. A robot may need to know its location in order to be able to plan appropriate paths or to know if the current location is the appropriate place at which to perform some operation. Knowing “where the robot is” has many different connotations. In the strongest sense, “knowing where the robot is” involves estimating the location of the robot (in either qualitative or quantitative terms) with respect to some global representation of space: we refer to this as strong localization.
In this chapter we discuss local convergence, which describes the intuitive notion that a finite graph, seen from the perspective of a typical vertex, looks like a certain limiting graph. Local convergence plays a profound role in random graph theory. We give general definitions of local convergence in several probabilistic senses. We then show that local convergence in its various forms is equivalent to the appropriate convergence of subgraph counts. We continue by discussing several implications of local convergence, concerning local neighborhoods, clustering, assortativity, and PageRank. We further investigate the relation between local convergence and the size of the giant, making the statement that the giant is “almost local” precise.
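As a rough illustration of one of the probabilistic senses mentioned above (the notation here is a standard one for local convergence and not necessarily the chapter's), a sequence of finite graphs G_n on vertex set [n] converges locally in probability to a random rooted graph (G, o) when the empirical frequency of every finite rooted neighborhood converges:
\[
\frac{1}{n} \sum_{v \in [n]} \mathbb{1}\{ B_r^{G_n}(v) \simeq (H, o') \} \;\xrightarrow{\ \mathbb{P}\ }\; \mathbb{P}\big( B_r^{G}(o) \simeq (H, o') \big)
\]
for every radius r ≥ 1 and every finite rooted graph (H, o'), where B_r denotes the rooted r-neighborhood and ≃ denotes rooted-graph isomorphism.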
The use of machine learning in robotics is a vast and growing area of research. In this chapter we consider a few key directions: the use of deep neural networks, the application of reinforcement learning and especially deep reinforcement learning, and the rapidly emerging potential of large language models.
In this chapter we investigate the local limit of the configuration model, identify when it has a giant component, and find its size and degree structure. We give two proofs, one based on a “the giant is almost local” argument, and another based on a continuous-time exploration of the connected components in the configuration model. Further results include its connectivity transition.
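As a hedged sketch of when the giant component exists (this is the classical Molloy–Reed criterion under regularity conditions; the notation is mine and may differ from the chapter's), let D denote the limiting degree distribution of the configuration model. A giant component emerges when
\[
\nu = \frac{\mathbb{E}[D(D-1)]}{\mathbb{E}[D]} > 1,
\]
and the proportion of vertices in the giant then converges to 1 - G_D(\xi), where G_D(s) = \mathbb{E}[s^D] and \xi is the extinction probability of the branching process whose offspring distribution is the size-biased degree minus one.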
So far we have explored classifiers with decision boundaries that are linear, or, in the case of the multiclass logistic regression, a combination of linear segments. In this chapter, we will expand what we have learned so far to classifiers that are capable of learning nonlinear decision boundaries. The classifiers that we will discuss here are called feed-forward neural networks, and are a generalization of both logistic regression and the perceptron. Despite the more complicated structures presented, we show that the key building blocks remain the same: the network is trained by minimizing a cost function. This minimization is implemented with backpropagation, which adapts the gradient descent algorithm introduced in the previous chapter to multilayer neural networks.
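As a minimal sketch of these building blocks (the toy data and variable names are hypothetical, not taken from the book), the following trains a one-hidden-layer feed-forward network on a nonlinearly separable problem by minimizing a cross-entropy cost with gradient descent, with the backward pass applying backpropagation explicitly:

# A minimal sketch (not the book's code) of backpropagation for a one-hidden-layer
# network on a toy task, written with NumPy so every gradient step is explicit.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(float).reshape(-1, 1)  # nonlinear labels

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)          # hidden layer with a nonlinearity
    p = sigmoid(h @ W2 + b2)          # predicted probability of the positive class
    # backward pass: propagate the error from the output layer to the hidden layer
    d_out = (p - y) / len(X)                      # gradient of the cross-entropy cost at the output pre-activation
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hidden = (d_out @ W2.T) * (1.0 - h ** 2)    # chain rule through tanh
    dW1, db1 = X.T @ d_hidden, d_hidden.sum(axis=0)
    # gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

The hidden nonlinearity (tanh here) is what lets the model represent a nonlinear decision boundary; without it, the two linear layers collapse into a single linear classifier.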
In this chapter we introduce the general setting of inhomogeneous random graphs, which generalize the Erdős–Rényi and generalized random graphs. In inhomogeneous random graphs, the statuses of edges are independent, with unequal edge-occupation probabilities. While these edge probabilities are moderated by vertex weights in generalized random graphs, in the general setting they are described in terms of a kernel. The main results in this chapter concern the degree structure, the multi-type branching process local limits, and the phase transition in these inhomogeneous random graphs. We also discuss various examples and indicate that they can have rather different structure.
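As a hedged illustration of the kernel formulation (the notation is mine, and this is only one of several asymptotically equivalent choices), vertices carry types x_1, ..., x_n in a type space S, and the edge between vertices u and v is occupied independently with probability
\[
p_{uv} = \min\!\Big( \frac{\kappa(x_u, x_v)}{n},\, 1 \Big)
\]
for a kernel \kappa \colon S \times S \to [0, \infty). Taking \kappa constant roughly recovers the Erdős–Rényi graph, while a product-form kernel built from vertex weights roughly recovers the generalized random graph.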
This chapter covers the perceptron, the simplest neural network architecture. In general, neural networks are machine learning architectures loosely inspired by the structure of biological brains. The perceptron is the simplest example of such architectures: it contains a single artificial neuron. The perceptron will form the building block for the more complicated architectures discussed later in the book. However, rather than starting directly with the discussion of this algorithm, we will start with something simpler: a children’s book and some fundamental observations about machine learning. From these, we will formalize our first machine learning algorithm, the perceptron.
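As a minimal sketch (the toy data and names are hypothetical, not the book's), the perceptron's single artificial neuron can be trained with the classical error-driven update rule:

# A minimal sketch (not the book's code) of the perceptron learning rule for a
# single artificial neuron on a toy linearly separable problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # hypothetical labels in {-1, +1}

w = np.zeros(2)
b = 0.0

for epoch in range(10):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:   # misclassified (or on the boundary)
            w += yi * xi             # nudge the weights toward the correct side
            b += yi

The weights change only when an example is misclassified, which is what makes the rule error-driven.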
In this chapter, we provide an implementation of the multilayer neural network described in Chapter 5, along with several of the best practices discussed in Chapter 6. Still keeping things fairly simple, our network will consist of two fully connected layers: a hidden layer and an output layer. Between these layers, we will include dropout and a nonlinearity. Further, we make use of two PyTorch classes: a Dataset and a DataLoader. The advantage of using these classes is that they make several things easy, including data shuffling and batching. Last, since the classifier’s architecture has become more complex, for optimization we transition from stochastic gradient descent to the Adam optimizer to take advantage of its additional features such as momentum and L2 regularization.
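A minimal sketch along these lines (the dataset, dimensions, and hyperparameters are hypothetical and not taken from the book's implementation):

# A minimal sketch: two fully connected layers with dropout and a nonlinearity,
# a Dataset/DataLoader pair for shuffling and batching, and the Adam optimizer.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    def __init__(self, n=1000):
        self.X = torch.randn(n, 20)
        self.y = (self.X.sum(dim=1) > 0).long()
    def __len__(self):
        return len(self.y)
    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

model = nn.Sequential(
    nn.Linear(20, 64),   # hidden layer
    nn.ReLU(),           # nonlinearity
    nn.Dropout(p=0.5),   # dropout between the layers
    nn.Linear(64, 2),    # output layer
)

loader = DataLoader(ToyDataset(), batch_size=32, shuffle=True)  # shuffling and batching
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

model.train()   # enables dropout during training
for epoch in range(5):
    for X_batch, y_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(X_batch), y_batch)
        loss.backward()
        optimizer.step()

Passing the Dataset to a DataLoader handles shuffling and batching, and the weight_decay argument is how L2 regularization is switched on for Adam in PyTorch.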
Although the vast majority of mobile robotic systems involve a single robot operating alone in its environment, a growing number of researchers are considering the challenges and potential advantages of having a group of robots cooperate in order to complete some required task. For some specific robotic tasks, such as exploring an unknown planet [374], search and rescue [812], pushing objects [608], [513], [687], [821], or cleaning up toxic waste [609], it has been suggested that rather than send one very complex robot to perform the task, it would be more effective to send a number of smaller, simpler robots. Such a collection of robots is sometimes described as a swarm [81], a colony [255], or a collective [436], or the robots may be said to exhibit cooperative behavior [607].