This part contains two chapters on reducing the dimensionality of the feature space, which plays a vital role in improving both learning efficiency and prediction performance.
Chapter 3 covers the most prominent subspace projection approach, namely the classical principal component analysis (PCA), cf. Algorithm 3.1. Theorems 3.1 and 3.2 establish the optimality of PCA for both the minimum reconstruction error and maximum entropy criteria. The optimal error and entropy attainable by PCA are given in closed form. Algorithms 3.2, 3.3, and 3.4 describe the numerical procedures for the computation of PCA via the data matrix, scatter matrix, and kernel matrix, respectively.
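To make the computation concrete, the following is a minimal numpy sketch of PCA via the scatter matrix; it is not a reproduction of the book's Algorithms 3.2–3.4, and the toy data and the number of retained components are arbitrary.

```python
import numpy as np

# Minimal PCA sketch via the scatter matrix (illustrative only).
X = np.random.randn(100, 5)            # 100 samples, 5 features (toy data)
Xc = X - X.mean(axis=0)                # center the data
S = Xc.T @ Xc                          # scatter matrix
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # sort descending
W = eigvecs[:, order[:2]]              # top-2 principal directions
Z = Xc @ W                             # projected (reduced) features

# The reconstruction error of the optimal rank-2 projection equals the sum of
# the discarded eigenvalues -- the kind of closed-form optimum the chapter refers to.
recon_err = np.sum(np.sort(eigvals)[:-2])
print(Z.shape, recon_err)
```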
Given a finite training dataset, the PCA learning model meets the LSP condition, and thus the conventional PCA model can be kernelized. When a nonlinear kernel is adopted, it further extends to the kernel-PCA (KPCA) learning model. The KPCA algorithms can be presented in intrinsic space or in empirical space (see Algorithms 3.5 and 3.6). For several real-life datasets, visualization via KPCA shows clearer data separability than visualization via PCA. Moreover, KPCA is closely related to the kernel-induced spectral space, which proves instrumental for error analysis in unsupervised and supervised applications.
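A rough sketch of KPCA computed from the kernel matrix is given below; an RBF kernel and two retained components are assumed here, and the sketch illustrates only the empirical-space route, not the book's algorithms in detail.

```python
import numpy as np

# Kernel PCA sketch in empirical space; an RBF kernel is assumed.
def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

X = np.random.randn(60, 4)              # toy data
K = rbf_kernel(X)
N = K.shape[0]
J = np.eye(N) - np.ones((N, N)) / N     # centering matrix
Kc = J @ K @ J                          # center the kernel matrix in feature space
eigvals, eigvecs = np.linalg.eigh(Kc)
order = np.argsort(eigvals)[::-1][:2]   # top-2 components
alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
Z = Kc @ alphas                         # nonlinear KPCA coordinates of the training data
print(Z.shape)
```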
Chapter 4 explores various aspects of feature selection methods for supervised and unsupervised learning scenarios. It presents several filter-based and wrapper-based methods for feature selection, a popular approach to dimensionality reduction.
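As a rough illustration of the filter/wrapper distinction (not a method taken from the chapter), the sketch below ranks features by their correlation with the label (a filter) and by greedy forward selection scored with a cross-validated classifier (a wrapper); the dataset and classifier choices are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, n_informative=3, random_state=0)

# Filter method: rank features by absolute correlation with the label (no learner involved).
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
filter_top3 = np.argsort(scores)[::-1][:3]

# Wrapper method: greedy forward selection scored by a classifier's cross-validation accuracy.
selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):
    best_j = max(remaining, key=lambda j: cross_val_score(
        LogisticRegression(max_iter=1000), X[:, selected + [j]], y, cv=5).mean())
    selected.append(best_j)
    remaining.remove(best_j)

print("filter picks:", filter_top3, "wrapper picks:", selected)
```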
The worker/wrapper transformation is a general-purpose technique for refactoring recursive programs to improve their performance. The two previous approaches to formalising the technique were based upon different recursion operators and different correctness conditions. In this paper we show how these two approaches can be generalised in a uniform manner by combining their correctness conditions, extend the theory with new conditions that are both necessary and sufficient to ensure the correctness of the worker/wrapper technique, and explore the benefits that result. All the proofs have been mechanically verified using the Agda system.
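For readers unfamiliar with the technique, here is an illustrative Python rendering of the canonical worker/wrapper example (the paper itself works with recursion operators in a typed functional setting): a naive recursive reverse is refactored into a worker over an accumulator, with a thin wrapper converting between the two representations.

```python
# Naive recursive definition: quadratic, because each step appends to the end.
def rev(xs):
    return rev(xs[1:]) + [xs[0]] if xs else []

# Worker/wrapper refactoring: the worker computes over a changed representation
# (list plus accumulator); the wrapper converts the original call into a worker call.
def rev_worker(xs, acc):
    return rev_worker(xs[1:], [xs[0]] + acc) if xs else acc

def rev_wrapped(xs):          # wrapper: same interface, improved worker inside
    return rev_worker(xs, [])

assert rev([1, 2, 3]) == rev_wrapped([1, 2, 3]) == [3, 2, 1]
```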
In this paper, we investigate the phenomenon of speed-up in the context of theories of truth. We focus on axiomatic theories of truth extending Peano arithmetic. We are particularly interested in whether conservative extensions of PA exhibit speed-up and in how this relates to a deflationist account. We show that disquotational theories have no significant speed-up, in contrast to some compositional theories, and we briefly assess the philosophical implications of these results.
We consider a production–inventory control model with two reflecting boundaries, representing the finite storage capacity and the finite maximum backlog. Demands arrive at the inventory according to a Poisson process, and their i.i.d. sizes have a common phase-type distribution. The inventory is filled by a production process that alternates between two prespecified production rates ρ1 and ρ2: as long as the content level is positive, rate ρ1 is applied, while rate ρ2 is used during intervals of backlog (i.e., negative content). We derive in closed form the various cost functionals of this model for the discounted case as well as under the long-run-average criterion. The analysis is based on a martingale of the Kella–Whitt type and on results for fluid flow models due to Ahn and Ramaswami.
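The following Monte Carlo sketch simulates a simplified instance of this model (exponential demand sizes, the simplest phase-type case; all parameter values are arbitrary), estimating the long-run average content level by fine time-stepping rather than via the paper's closed-form martingale analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
rho1, rho2 = 1.0, 1.5        # production rates for positive content / backlog (placeholders)
lam, mu = 0.8, 1.0           # demand arrival rate, exponential demand-size rate (placeholders)
cap, backlog = 10.0, 5.0     # reflecting boundaries: storage capacity and maximum backlog
dt, T = 1e-2, 2_000.0

x, integral = 0.0, 0.0
for _ in range(int(T / dt)):
    rate = rho1 if x > 0 else rho2                       # rate depends on the sign of the content
    x = min(cap, x + rate * dt)                          # reflect at the storage capacity
    if rng.random() < lam * dt:                          # Poisson demand arrival in this step
        x = max(-backlog, x - rng.exponential(1.0 / mu)) # reflect at the maximum backlog
    integral += x * dt

print("estimated long-run average content level:", integral / T)
```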
In this paper, a new design of neural networks is introduced that is able to generate oscillatory patterns in its output. The oscillatory neural network is used in a biped robot to enable it to learn to walk. The fundamental building block of the proposed neural network is the O-neuron, which can generate oscillations in its transfer function. O-neurons are connected and coupled with each other to form a network, and their unknown parameters are found by a particle swarm optimization method. The main contribution of this paper is the learning algorithm, which combines natural policy gradient with particle swarm optimization methods. The oscillatory neural network has six outputs that determine set points for proportional-integral-derivative controllers in 6-DOF humanoid robots. Our experiment on the simulated humanoid robot demonstrates smooth and flexible walking.
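A toy sketch of the basic idea, not the paper's method: an "oscillatory neuron" whose output is a parameterized sinusoid, with a bare-bones particle swarm optimizer fitting its three parameters to a reference trajectory. The O-neuron form, the target signal, and all constants are assumptions, and the natural-policy-gradient coupling and PID set-point generation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)
target = 0.7 * np.sin(2 * t + 0.5)                 # reference oscillatory trajectory (toy)

def o_neuron(params, t):
    amp, freq, phase = params                      # assumed oscillatory transfer function
    return amp * np.sin(freq * t + phase)

def cost(params):
    return np.mean((o_neuron(params, t) - target) ** 2)

# Bare-bones particle swarm optimization over the three neuron parameters.
n_particles, dims = 30, 3
pos = rng.uniform(-1, 3, size=(n_particles, dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(200):
    r1, r2 = rng.random((n_particles, dims)), rng.random((n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("fitted (amplitude, frequency, phase):", gbest, "mse:", cost(gbest))
```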
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks, such as computing marginals given evidence and learning from (partial) interpretations, have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on the conversion of the program, the queries, and the evidence to a weighted Boolean formula. This allows us to reduce inference tasks to well-studied tasks, such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning-from-interpretations setting. The algorithm employs expectation-maximization and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state of the art in probabilistic logic programming, and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
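For intuition, here is a tiny enumeration-based weighted model counter; real systems rely on knowledge compilation rather than brute-force enumeration, and the burglary/earthquake/alarm program and its weights below are made up.

```python
from itertools import product

# Toy weighted model counting: each variable has a weight for True and for False.
weights = {"burglary": (0.1, 0.9), "earthquake": (0.2, 0.8)}   # (w_true, w_false), invented facts
variables = list(weights)

def wmc(constraint):
    """Sum of weights of all assignments satisfying `constraint` (a predicate on a dict)."""
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        a = dict(zip(variables, values))
        if constraint(a):
            w = 1.0
            for v in variables:
                w *= weights[v][0] if a[v] else weights[v][1]
            total += w
    return total

# alarm :- burglary.  alarm :- earthquake.   (alarm is determined by the two facts)
alarm = lambda a: a["burglary"] or a["earthquake"]

# Marginal given evidence: P(burglary | alarm) = WMC(alarm and burglary) / WMC(alarm)
posterior = wmc(lambda a: alarm(a) and a["burglary"]) / wmc(alarm)
print("P(burglary | alarm) =", posterior)
```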
This paper investigates worst-case analysis of a moving-obstacle avoidance algorithm for unmanned vehicles in a dynamic environment in the presence of uncertainties and variations. Automatic worst-case search algorithms are developed based on optimization techniques and are illustrated on a Pioneer robot with a moving-obstacle avoidance algorithm developed using the potential field method. The uncertainties in physical parameters, sensor measurements, and even the model structure of the robot are taken into account in the worst-case analysis. The minimum distance to a moving obstacle is used as the objective function in the automatic search process. It is demonstrated that a local nonlinear optimization method may not be adequate, and that global optimization techniques are necessary to provide reliable worst-case analysis. A Monte Carlo simulation is carried out to demonstrate that the proposed automatic search methods provide a significant advantage over random sampling approaches.
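A toy version of such a worst-case search, not the paper's setup: a point robot steered by a crude potential field avoids an obstacle whose velocity and the robot's speed gain are treated as uncertain, and a global optimizer (differential evolution here, standing in for the paper's search methods) looks for the combination in the uncertainty box that minimizes the closest approach.

```python
import numpy as np
from scipy.optimize import differential_evolution

def min_distance(params):
    obs_vx, obs_vy, gain = params                      # uncertain obstacle velocity and robot gain
    robot, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    obstacle = np.array([5.0, 2.0])
    closest = np.inf
    for _ in range(400):                               # simple fixed-step simulation
        attract = goal - robot
        diff = robot - obstacle
        d = np.linalg.norm(diff)
        closest = min(closest, d)
        repulse = diff / (d**3 + 1e-6)                 # crude repulsive potential-field term
        step = attract / (np.linalg.norm(attract) + 1e-9) + 3.0 * repulse
        robot = robot + gain * 0.05 * step
        obstacle = obstacle + 0.05 * np.array([obs_vx, obs_vy])
    return closest                                     # objective: minimum robot-obstacle distance

bounds = [(-1.0, 1.0), (-1.0, 1.0), (0.5, 1.5)]        # uncertainty box (illustrative)
worst = differential_evolution(min_distance, bounds, seed=0, maxiter=50)
print("worst-case parameters:", worst.x, "minimum clearance:", worst.fun)
```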
The safety analysis of human–robot collisions has recently drawn significant attention, as robots are increasingly used in human environments. In order to understand the potential injury a robot could cause in case of an impact, such incidents should be evaluated, based on biomechanical safety criteria, before a robot arm is designed. In the recent literature, such incidents have been investigated mostly by experimental crash-testing. However, experimental methods are expensive, and the design parameters of the robot arm are difficult to change instantly. To address this issue, we propose a novel robot-human collision model consisting of a 6-degree-of-freedom mass-spring-damper system for impact analysis. Since the proposed robot-human collision model includes a head, neck, chest, and torso, the relative motion among these body parts can be analyzed. In this study, collision analysis of impacts to the head, neck, and chest at various collision speeds is conducted using the proposed collision model. The degree of injury is then estimated using various biomechanical severity indices. The reliability of the proposed collision model is verified by comparing the simulation results with experimental results from the literature. Furthermore, the basic requirements for the design of safer robots are determined.
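A one-dimensional, two-mass sketch of an impact through a contact spring-damper follows; it is far simpler than the 6-degree-of-freedom head-neck-chest-torso model proposed in the paper, all parameter values are placeholders, and only the peak head acceleration (an input to severity indices such as HIC) is reported.

```python
import numpy as np

m_robot, m_head = 20.0, 4.5           # masses in kg (placeholder values)
k, c = 30_000.0, 150.0                # contact stiffness (N/m) and damping (N*s/m), placeholders
v0 = 2.0                              # robot approach speed in m/s

dt, steps = 1e-5, 50_000
x_r, v_r = 0.0, v0                    # robot position and velocity
x_h, v_h = 0.0, 0.0                   # head position and velocity
peak_acc = 0.0

for _ in range(steps):
    compression = x_r - x_h
    # Contact force acts only while the surfaces are compressed, and never pulls.
    f = k * compression + c * (v_r - v_h) if compression > 0 else 0.0
    f = max(f, 0.0)
    a_r, a_h = -f / m_robot, f / m_head
    v_r += a_r * dt; x_r += v_r * dt
    v_h += a_h * dt; x_h += v_h * dt
    peak_acc = max(peak_acc, abs(a_h))

print("peak head acceleration: %.1f m/s^2 (%.1f g)" % (peak_acc, peak_acc / 9.81))
```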
This paper presents a new biped mechanism with low-cost, easy-operation features. The mechanism is designed with functions for straight walking, changing direction, overcoming obstacles, and climbing stairs with only 7 DOFs (degrees of freedom). The dynamics of the biped mechanism are analyzed by means of simulations in the MSC.ADAMS environment. Simulation results in terms of motion torque, joint force, contact force, part displacement, velocity, and acceleration are reported and analyzed to show the feasibility and efficiency of the proposed solution. In addition, based on the simulation results, the dynamic motion of the biped mechanism is investigated and its operational performance is characterized.
We propose a distributed algorithm for estimating the 3D pose (position and orientation) of multiple robots with respect to a common frame of reference when the Global Positioning System is not available. This algorithm does not rely on the use of any maps or on the ability to recognize landmarks in the environment. Instead, we assume that noisy relative measurements between pairs of robots are intermittently available, which can be any one, or combination, of the following: relative pose, relative orientation, relative position, relative bearing, and relative distance. The additional information about each robot's pose provided by these measurements is used to improve over self-localization estimates. The proposed method is similar in spirit to a pose-graph optimization algorithm: pose estimates are obtained by solving an optimization problem on the underlying Riemannian manifold $(SO(3)\times\mathbb{R}^3)^{n(k)}$. The proposed algorithm is directly applicable to 3D pose estimation, can fuse heterogeneous measurement types, and can handle arbitrary time variation in the neighbor relationships among robots. Simulations show that the errors in the pose estimates obtained using this algorithm are significantly lower than what is achieved when robots estimate their pose without cooperation. Results from experiments with a pair of ground robots with vision-based sensors reinforce these findings. Further, simulations comparing the proposed algorithm with two state-of-the-art existing collaborative localization algorithms identify the circumstances under which the proposed algorithm performs better than the existing methods. In addition, the question of trade-offs between the cost (of obtaining a certain type of relative measurement) and the benefit (improvement in localization accuracy) for various types of relative measurements is considered.
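A stripped-down, position-only 2D illustration of the fusion idea follows (the SO(3) orientation component and the distributed aspect are omitted): each robot has a noisy self-localization prior, and one noisy relative-position measurement between the pair is fused by unweighted linear least squares. In practice the constraints would be weighted by their measurement covariances; the scenario and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
true_p1, true_p2 = np.array([0.0, 0.0]), np.array([4.0, 3.0])

# Noisy self-localization estimates and a (more precise) noisy relative measurement.
prior1 = true_p1 + rng.normal(0, 0.5, 2)
prior2 = true_p2 + rng.normal(0, 0.5, 2)
rel_12 = (true_p2 - true_p1) + rng.normal(0, 0.1, 2)      # measured p2 - p1

# Stack the constraints p1 = prior1, p2 = prior2, p2 - p1 = rel_12 and solve least squares.
I2 = np.eye(2)
A = np.block([[I2, np.zeros((2, 2))],
              [np.zeros((2, 2)), I2],
              [-I2, I2]])
b = np.concatenate([prior1, prior2, rel_12])
est, *_ = np.linalg.lstsq(A, b, rcond=None)
p1_est, p2_est = est[:2], est[2:]

print("error without fusion:", np.linalg.norm(prior2 - true_p2))
print("error with fusion:   ", np.linalg.norm(p2_est - true_p2))
```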
The spreading of transmissible infectious diseases is inevitably entangled with the dynamics of the human population. Humans are the carriers of the pathogen, and the large-scale travel and commuting patterns that govern the mobility of modern societies define how epidemics and pandemics travel across the world. For a long time, the development of quantitative, spatially explicit models able to shed light on the global dynamics of pandemics was limited by the lack of detailed data on human mobility. In the last 10 years, however, these limits have been lifted by the increasing availability of data generated by new information technologies, triggering the development of computational (microsimulation) models that work at the level of single individuals in spatially extended regions of the world. Microsimulations can provide information at very detailed spatial resolutions, down to the level of single individuals. In addition, computational implementations explicitly account for stochasticity, allowing the study of multiple realizations of epidemics with the same parameter distributions. While on the one hand these capabilities represent the richness of microsimulation methods, on the other hand they confront us with a huge amount of information that requires the use of specific data reduction methods and visual analytics.
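As a minimal illustration of what "multiple stochastic realizations at the level of single individuals" means, the sketch below runs an individual-based SIR model on two populations coupled by commuting; all parameters are invented and it bears no relation to the actual models discussed.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_epidemic(n_per_patch=500, commute=0.1, beta=0.3, gamma=0.1, days=200):
    # State per individual: 0 = susceptible, 1 = infectious, 2 = recovered.
    state = np.zeros(2 * n_per_patch, dtype=int)
    patch = np.repeat([0, 1], n_per_patch)                # home patch of each individual
    state[0] = 1                                          # seed one infection in patch 0
    history = []
    for _ in range(days):
        # Each day some individuals commute to the other patch.
        location = np.where(rng.random(state.size) < commute, 1 - patch, patch)
        for p in (0, 1):
            here = location == p
            n_here = here.sum()
            if n_here == 0:
                continue
            n_inf = np.sum(state[here] == 1)
            prob = 1 - np.exp(-beta * n_inf / n_here)      # daily infection probability in patch p
            sus = here & (state == 0)
            state[sus & (rng.random(state.size) < prob)] = 1
        state[(state == 1) & (rng.random(state.size) < gamma)] = 2   # recoveries
        history.append(np.sum(state == 1))
    return history

# Multiple stochastic realizations with identical parameters give different epidemic curves.
peaks = [max(run_epidemic()) for _ in range(5)]
print("peak infectious counts over 5 realizations:", peaks)
```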