It has been shown previously that the Rapidly-exploring Random Tree (RRT) algorithm is probabilistically and resolution complete, and that the probability of finding a particular solution path can be related to the number of nodes. However, little analysis exists on the rate at which the tree covers the configuration space. In this paper, we present a stochastic difference equation that models how the tree covers the configuration space as a function of the number of nodes in the tree. Using two simplifying assumptions, appropriate for holonomic, kinematic systems in expansive configuration spaces, we derive closed-form solutions for the expected value and variance of configuration space coverage, which depend only on two easily computable parameters. Using a grid-based coverage measurement, we present experimental evidence supporting this model across a range of dimensions, obstacle densities, and parameter choices. Collecting data from 1000 RRTs, we provide evidence that configuration space coverage concentrates tightly around the expected coverage predicted by the model, and the results of the Chi-squared test suggest that the distribution of coverage across these runs is close to Gaussian. Together these results enable one to predict the expected coverage, along with a confidence interval, after a certain number of nodes have been added to the tree. We also applied the model to an example with extremely narrow passages and to a system with non-holonomic kinematics. The expected value prediction remains qualitatively accurate, but the rate constant is reduced and the variance is higher. Overall, in addition to its theoretical value, the model may find future application as an online measure of search progress and problem difficulty, useful for adaptive variants of the basic RRT algorithm.
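To make the grid-based coverage measurement concrete, the following sketch grows a basic RRT in the unit square and records the fraction of grid cells visited after each node, comparing it against a simple saturating curve 1 − (1 − α)^k used only as an illustrative stand-in for the paper's closed-form expected-coverage model; the step size, grid resolution, and α are assumed values, not fitted parameters.

```python
# Illustrative sketch, not the paper's code: grid-based coverage of a basic RRT
# in the unit square, compared with a saturating curve 1 - (1 - alpha)**k used
# only as a stand-in for the closed-form expected-coverage model. The step
# size, grid resolution, and alpha are assumed values.
import math
import random

def rrt_coverage(num_nodes=500, step=0.05, grid=20, alpha=0.004, seed=0):
    random.seed(seed)
    nodes = [(0.5, 0.5)]                                   # tree rooted at the centre
    covered = {(int(0.5 * grid), int(0.5 * grid))}         # visited grid cells
    history = []
    for k in range(1, num_nodes + 1):
        q = (random.random(), random.random())             # uniform random sample
        near = min(nodes, key=lambda p: (p[0] - q[0])**2 + (p[1] - q[1])**2)
        d = math.hypot(q[0] - near[0], q[1] - near[1]) or 1e-9
        new = (near[0] + step * (q[0] - near[0]) / d,      # extend one step toward q
               near[1] + step * (q[1] - near[1]) / d)
        new = (min(max(new[0], 0.0), 0.999), min(max(new[1], 0.0), 0.999))
        nodes.append(new)
        covered.add((int(new[0] * grid), int(new[1] * grid)))
        measured = len(covered) / grid**2                  # fraction of cells visited
        predicted = 1.0 - (1.0 - alpha) ** k               # illustrative expected coverage
        history.append((k, measured, predicted))
    return history

if __name__ == "__main__":
    for k, measured, predicted in rrt_coverage()[99::100]:
        print(f"nodes={k:4d}  measured={measured:.3f}  model={predicted:.3f}")
```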
In this paper we study the maximum displacement for linear probing hashing. We use the standard probabilistic model together with the insertion policy known as First-Come-First-Served. The results are asymptotic in nature and focus on dense hash tables. That is, the number of occupied cells n and the size of the hash table m tend to infinity with ratio n/m → 1. We present distributions and moments for the size of the maximum displacement, as well as for the number of items with displacement larger than some critical value. This is done via process convergence of the (appropriately normalized) length of the largest block of consecutive occupied cells, as the total number of occupied cells n varies.
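As a concrete illustration of the quantity being analysed, the following sketch (not the paper's analysis) fills a hash table with linear probing under the First-Come-First-Served policy and reports the maximum displacement; the uniform hash values, table size, and load factor are assumptions chosen to mimic the dense regime n/m → 1.

```python
# Illustrative sketch: insert n keys into a table of size m with linear probing
# under the First-Come-First-Served policy (an arriving key probes forward
# until it finds an empty cell) and report the maximum displacement, i.e. the
# largest distance between a key's hash cell and the cell where it landed.
import random

def max_displacement(n, m, seed=0):
    random.seed(seed)
    table = [None] * m
    worst = 0
    for key in range(n):
        h = random.randrange(m)                 # standard model: uniform hash value
        d = 0
        while table[(h + d) % m] is not None:   # probe forward, wrapping around
            d += 1
        table[(h + d) % m] = key
        worst = max(worst, d)
    return worst

if __name__ == "__main__":
    # dense table: n/m close to 1, matching the asymptotic regime studied
    print(max_displacement(n=990, m=1000))
```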
We show that any n-vertex complete graph whose edges are coloured with three colours contains a set of at most four vertices such that the number of neighbours of these vertices in one of the colours is at least 2n/3. The previous best value, proved by Erdős, Faudree, Gould, Gyárfás, Rousseau and Schelp in 1989, is 22. It is conjectured that three vertices suffice.
This paper proposes a repetitive-control-type optimal gait generation framework based on learning control and parameter tuning. We propose a learning optimal control method for Hamiltonian systems that unifies iterative learning control (ILC) and iterative feedback tuning (IFT). It allows one to simultaneously obtain an optimal feedforward input and a tuning parameter for a plant system that minimize a given cost function. In the proposed method, a virtual constraint imposed by a potential energy prevents the biped robot from falling. The strength of the constraint is automatically mitigated by the IFT part of the proposed method according to the progress of trajectory learning by the ILC part.
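A minimal sketch of the ILC-plus-tuning idea on an assumed toy scalar plant rather than a biped model: the feedforward input is refined from trial to trial using the tracking error, while a feedback gain is nudged by a crude finite-difference (IFT-like) step on the trial cost. The plant, gains, and cost below are all hypothetical.

```python
# Toy sketch of combining ILC with gradient-style gain tuning on an assumed
# scalar plant x[t+1] = a*x[t] + b*(u[t] + kp*e[t]). Not the paper's method or
# model; purely an illustration of the learning structure.
import numpy as np

def run_trial(u, kp, a=0.9, b=0.5):
    ref = np.ones(len(u))                       # constant reference to track
    x, e = 0.0, np.zeros(len(u))
    for t in range(len(u)):
        e[t] = ref[t] - x
        x = a * x + b * (u[t] + kp * e[t])
    return e

def ilc_with_gain_tuning(T=50, trials=30, gamma=0.5, eps=1e-2, eta=1e-3):
    u, kp = np.zeros(T), 0.1
    for _ in range(trials):
        e = run_trial(u, kp)
        u[:-1] += gamma * e[1:]                 # ILC: update from time-shifted error
        cost = np.sum(run_trial(u, kp) ** 2)    # IFT-like finite-difference tuning
        cost_eps = np.sum(run_trial(u, kp + eps) ** 2)
        kp -= eta * (cost_eps - cost) / eps
    return kp, np.sum(run_trial(u, kp) ** 2)

if __name__ == "__main__":
    kp, cost = ilc_with_gain_tuning()
    print(f"tuned gain kp={kp:.3f}  final trial cost={cost:.4f}")
```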
Ontologies and rules are usually loosely coupled in knowledge representation formalisms. In fact, ontologies use open-world reasoning, while the leading semantics for rules use non-monotonic, closed-world reasoning. One exception is the tightly coupled framework of Minimal Knowledge and Negation as Failure (MKNF), which allows statements about individuals to be jointly derived via entailment from the ontology and inference from the rules. Nonetheless, the practical usefulness of MKNF has not always been clear, although recent work has formalized a general resolution-based method for querying MKNF when rules are taken to have the well-founded semantics and the ontology is modeled by a general oracle. That work leaves open what algorithms should be used to relate the entailments of the ontology and the inferences of rules. In this paper we provide such algorithms, and describe the implementation of a query-driven system, CDF-Rules, for hybrid knowledge bases combining both (non-monotonic) rules under the well-founded semantics and a (monotonic) ontology, represented by the Coherent Description Framework Type-1 ($\mathcal{ALCQ}$) theory.
We study monadic translations of the call-by-name (cbn) and call-by-value (cbv) fragments of the classical sequent calculus ${\overline{\lambda}\mu\tilde{\mu}}$ due to Curien and Herbelin, and give modular and syntactic proofs of strong normalisation. The target of the translations is a new meta-language for classical logic, named monadic λμ. This language is a monadic reworking of Parigot's λμ-calculus, where the monadic binding is confined to commands, thus integrating the monad with the classical features. Also, its μ-reduction rule is replaced by a rule expressing the interaction between monadic binding and μ-abstraction.
Our monadic translations produce very tight simulations of the respective fragments of ${\overline{\lambda}\mu\tilde{\mu}}$ within monadic λμ, with reduction steps of ${\overline{\lambda}\mu\tilde{\mu}}$ being translated in a 1–1 fashion, except for β steps, which require two steps. The monad of monadic λμ can be instantiated to the continuations monad so as to ensure strict simulation of monadic λμ within simply typed λ-calculus with β- and η-reduction. Through strict simulation, the strong normalisation of simply typed λ-calculus is inherited by monadic λμ, and then by cbn and cbv ${\overline{\lambda}\mu\tilde{\mu}}$, thus reproving strong normalisation in an elementary syntactical way for these fragments of ${\overline{\lambda}\mu\tilde{\mu}}$, and establishing it for our new calculus. These results extend to second-order logic, with polymorphic λ-calculus as the target, giving new strong normalisation results for classical second-order logic in sequent calculus style.
CPS translations of cbn and cbv ${\overline{\lambda}\mu\tilde{\mu}}$ with the strict simulation property are obtained by composing our monadic translations with the continuations-monad instantiation. In an appendix to the paper, we investigate several refinements of the continuations-monad instantiation in order to obtain in a modular way improvements of the CPS translations enjoying extra properties like simulation by cbv β-reduction or reduction of administrative redexes at compile time.
A small-scale experimental setup for autonomous target tracking of a surface vessel in the presence of obstacles is presented. The experiments are performed in simulated rough seas through wave, current, and wind generation in a small indoor pool. The absolute positions of the agent and the target, as well as the obstacle size and position, are provided by an overhead camera that detects color light-emitting diodes installed on all objects. Ordinary differential equations with stable limit-cycle solutions are used to define transitional trajectories around obstacles based on the camera data. A sliding mode control law is implemented for real-time tracking control, capable of rejecting large disturbances from the generated waves and wind. The sliding mode control signals are sent to wireless receivers on the autonomous vessel, where a proportional-integral speed controller maintains the commanded speed. A special scaling method is presented to show that the environmental forces are similar to those of moderate to high sea states. Several experiments are presented in which the autonomous vessel catches and follows a target boat moving along arbitrary trajectories, both in the presence and in the absence of obstacles.
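As an illustration of the kind of control law mentioned above, the following toy sketch applies a basic sliding-mode law to one-axis tracking of a moving target under a wave-like disturbance, using a boundary-layer saturation in place of the discontinuous sign function; the dynamics, gains, and disturbance model are assumptions, not the experimental setup.

```python
# Toy sketch, not the experimental controller: basic sliding-mode tracking of a
# moving target along one axis with an assumed double-integrator vessel model.
import math

def sat(x, width=0.1):
    """Boundary-layer approximation of sign(x), used to limit chattering."""
    return max(-1.0, min(1.0, x / width))

def track(duration=20.0, dt=0.01, lam=2.0, K=3.0):
    x, v = 0.0, 0.0                            # vessel position and velocity
    t, e = 0.0, 0.0
    while t < duration:
        xt = math.sin(0.3 * t)                 # target position (arbitrary motion)
        vt = 0.3 * math.cos(0.3 * t)           # target velocity
        e, edot = x - xt, v - vt
        s = edot + lam * e                     # sliding surface s = 0
        u = -lam * edot - K * sat(s)           # reach the surface, then slide
        d = 0.5 * math.sin(5.0 * t)            # wave/wind-like disturbance
        v += (u + d) * dt
        x += v * dt
        t += dt
    return e                                   # final tracking error

if __name__ == "__main__":
    print(f"final tracking error: {track():+.4f}")
```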
The outcome set of a belief change operator is the set of outcomes that can be obtained with it. Axiomatic characterizations are reported for the outcome sets of the standard AGM contraction operators and eight types of base-generated contraction. These results throw new light on the properties of some of these operators.
Knowledge representation (KR) is an important area in artificial intelligence (AI) and is often related to specific domains. The representation of knowledge in domain-specific contexts makes it desirable to capture semantics as domain experts would. This motivates the development of semantics-preserving standards for KR within the given domain. In addition to the storage and analysis of information using such standards, the effect of globalization today necessitates the publishing of information on the Web. Thus, when developing KR standards, it is advisable to use formats that make the information easily publishable and accessible. In this article, we propose such a standard called Quenching Markup Language (QuenchML). It follows the syntax of the eXtensible Markup Language and captures the semantics of the quenching domain within the heat treating of materials. We describe the development of QuenchML, a multidisciplinary effort spanning the realms of AI, database management, and materials science, considering various aspects such as ontology, data modeling, and domain-specific constraints. We also explain the usefulness of QuenchML in semantics-preserving information retrieval and in text mining guided by domain knowledge. Furthermore, we outline the significance of this work for software tools within the field of AI.
The competitive market forces companies to offer tailored products to meet specific customer needs. To avoid wasting time, design efforts generally address the configuration of existing solutions, without producing substantial design modifications. Configuration tools are used to achieve customized products starting from a common platform. Many approaches to configuring products have been successfully proposed in the literature. However, in the mechanical field they need further investigation in order to be efficiently linked to computer-aided design technologies. Research is focused on tools and methods to automatically produce geometrical models and improve the flexibility of the continuous product-updating process. In this context, this paper aims to combine product configuration approaches with design automation techniques in order to support the design of products that fulfill specific requirements. The approach is based on entities called configurable virtual prototypes. Three different domains are managed and connected via configurable virtual prototypes: product specifications, geometrical data, and product knowledge. In particular, geometry recognition rules are used to identify the parameterization of parts and the assembly mating constraints. The approach is exemplified through an industrial case study in which a tool has been developed on the basis of the described method. Advantages of the system are shown in terms of achieved product configuration efficiency.
This paper reports on a computational model developed to study the effects of various modes of social learning on task coordination in teams through the mapping of distributed team competence, a significant aspect of efficient teamwork. The computational model emphasizes and operationalizes distinct modes of social learning, differentiated in terms of socialization opportunities. Simulation results demonstrate that computational models based on fundamental principles of social learning provide a robust approach to study task coordination in teams and can be used to explore ways to organize opportunities for social learning depending upon member retention, team structure, and the complexity of the design task.
This study examines the ability of a semantic space model to represent the meaning of noun compounds such as ‘information gathering’ or ‘heart disease.’ For a semantic space model to compute the meaning and the attributional similarity (or semantic relatedness) of unfamiliar noun compounds that do not occur in a corpus, the vector for a noun compound must be computed from the vectors of its constituent words using vector composition algorithms. Six composition algorithms (i.e., centroid, multiplication, circular convolution, predication, comparison, and dilation) are compared in terms of the quality of the computed attributional similarity for English and Japanese noun compounds. To evaluate the performance of the similarity computation, this study uses three tasks (i.e., related word ranking, similarity correlation, and semantic classification) and two types of semantic spaces (i.e., latent semantic analysis-based and positive pointwise mutual information-based spaces). The results of these tasks show that the dilation algorithm is generally most effective in computing the similarity of noun compounds, while the multiplication algorithm is best suited specifically to the positive pointwise mutual information-based space. In addition, the comparison algorithm works better for unfamiliar noun compounds that do not occur in the corpus. These findings indicate that, in general, a semantic space model, and in particular the dilation, multiplication, and comparison algorithms, has sufficient ability to compute the attributional similarity for noun compounds.
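For readers unfamiliar with the composition algorithms compared above, the following sketch shows three of them (centroid, element-wise multiplication, and dilation in the form commonly attributed to Mitchell and Lapata) applied to assumed stand-in word vectors, followed by a cosine similarity; it is purely illustrative and not the study's code or semantic spaces.

```python
# Illustrative sketch with assumed stand-in vectors, not the study's spaces:
# compose a noun compound from u = modifier and v = head, then compare to a
# third vector by cosine similarity.
import numpy as np

def centroid(u, v):
    return (u + v) / 2.0

def multiplication(u, v):
    return u * v                                    # element-wise product

def dilation(u, v, lam=2.0):
    # stretch v along the direction of u: (u.u) v + (lambda - 1)(u.v) u
    return (u @ u) * v + (lam - 1.0) * (u @ v) * u

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    information, gathering, search = rng.random((3, 50))   # stand-in word vectors
    for name, compose in [("centroid", centroid),
                          ("multiplication", multiplication),
                          ("dilation", dilation)]:
        sim = cosine(compose(information, gathering), search)
        print(f"{name:14s} similarity to 'search': {sim:.3f}")
```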
Qualitative reasoning can generate ambiguous behaviors because of the lack of quantitative information. Despite many research results focusing on ambiguity reduction, it is fundamentally impossible to remove all ambiguities with qualitative methods alone and to guarantee the consistency of results. This prevents the wide use of qualitative reasoning techniques in practical situations, particularly in conceptual design, where qualitative reasoning is considered intrinsically useful. To improve this situation, this paper first investigates the origin of ambiguities in qualitative reasoning. It then proposes a method based on intelligent interventions by the user, who is able to detect ambiguities, prioritize interventions on these ambiguities, and reduce them based on the least-commitment strategy. This interaction method breaks through the limits of qualitative reasoning in practical applications to conceptual design. The method was implemented as a new feature in a software tool called the Knowledge Intensive Engineering Framework in order to be tested and used for a printer design.
Configuration of large-scale applications in an engineering context requires a modeling environment that allows the design engineer to draft the configuration problem in a natural way, together with efficient methods that can process the modeled setting and scale with the number of components. Existing configuration methods in artificial intelligence typically perform quite well in certain subareas but are hard to use for general-purpose modeling without a mathematical or logical background (the so-called knowledge acquisition bottleneck) and/or have scalability issues. As a remedy to this important issue, both in theory and in practical applications, we use a standard modeling environment, the Unified Modeling Language, which has been proposed by the configuration community as a suitable object-oriented formalism for configuration problems. We provide a translation of key concepts of class diagrams to inequalities, identify relevant configuration aspects, and show how they are treated as an integer linear program. An integer linear program can be solved efficiently, and integer linear programming scales well to large configurations consisting of several thousand components and interactions. We conduct an empirical study in the context of package management for operating systems and the Linux kernel configuration. We evaluate our methodology on a benchmark and obtain convincing results in support of using integer linear programming for configuration applications of realistic size and complexity.
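A hypothetical toy instance of the translation idea: a UML-style multiplicity "each rack hosts between 1 and 4 servers" becomes the linear inequalities racks ≤ servers ≤ 4·racks, and a requirement constraint plus a cost objective completes the integer program. For brevity the sketch enumerates small integer solutions instead of calling an ILP solver; the component names, bounds, and costs are made up for illustration.

```python
# Hypothetical toy example, not the paper's encoding: translate a UML-style
# multiplicity into linear inequalities over integer counts and find the
# cheapest feasible configuration by enumeration (an ILP solver would be used
# at realistic scale).
from itertools import product

def configure(min_servers=10, rack_cost=50, server_cost=20, max_racks=10):
    best = None
    for racks, servers in product(range(max_racks + 1), range(4 * max_racks + 1)):
        if not (racks <= servers <= 4 * racks):      # translated multiplicity 1..4
            continue
        if servers < min_servers:                    # requirement constraint
            continue
        cost = rack_cost * racks + server_cost * servers
        if best is None or cost < best[0]:
            best = (cost, racks, servers)
    return best

if __name__ == "__main__":
    cost, racks, servers = configure()
    print(f"cheapest configuration: {racks} racks, {servers} servers, cost {cost}")
```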
This paper presents a new approach to matching occupancy grid maps by finding correspondences between sets of sparse features detected in the maps. The problem is stated here as a special instance of generic image registration. To cope with the uncertainty and ambiguity that arise from matching grid maps, we introduce a modified RANSAC algorithm that searches for a dynamic number of internally consistent subsets of feature pairings from which to compute hypotheses about the translation and rotation between the maps. By providing a (possibly multi-modal) probability distribution of the relative pose of the maps, our method can be seamlessly integrated into large-scale mapping frameworks for mobile robots. This paper provides a benchmarking of different detectors and descriptors, along with extensive experimental results that illustrate the robustness of the algorithm, with a 97% success ratio in loop-closure detection for ~1700 matchings between local maps obtained from four publicly available datasets.
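A simplified sketch of the estimation step (plain RANSAC rather than the paper's modified variant): two feature pairings determine a candidate 2D rigid transform between the maps, which is then scored by its inlier count; the inlier threshold and the synthetic correspondences are assumptions.

```python
# Simplified sketch, not the paper's modified RANSAC: estimate rotation and
# translation between two maps from putative feature pairings.
import math
import random

def rigid_from_two(p1, p2, q1, q2):
    """Rigid transform mapping points p onto points q, from two correspondences."""
    theta = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
             - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(theta), math.sin(theta)
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return theta, tx, ty

def ransac_align(pairs, iters=200, thresh=0.2, seed=0):
    random.seed(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (p1, q1), (p2, q2) = random.sample(pairs, 2)
        theta, tx, ty = rigid_from_two(p1, p2, q1, q2)
        c, s = math.cos(theta), math.sin(theta)
        inliers = sum(
            1 for (px, py), (qx, qy) in pairs
            if math.hypot(c * px - s * py + tx - qx, s * px + c * py + ty - qy) < thresh
        )
        if inliers > best_inliers:
            best, best_inliers = (theta, tx, ty), inliers
    return best, best_inliers

if __name__ == "__main__":
    random.seed(1)
    c, s = math.cos(0.3), math.sin(0.3)
    pts = [(random.random() * 10, random.random() * 10) for _ in range(30)]
    pairs = [((x, y), (c * x - s * y + 1.0, s * x + c * y - 0.5)) for x, y in pts]
    pairs += [((random.random() * 10, random.random() * 10),
               (random.random() * 10, random.random() * 10)) for _ in range(10)]  # outliers
    (theta, tx, ty), n = ransac_align(pairs)
    print(f"theta={theta:.3f}  t=({tx:.3f}, {ty:.3f})  inliers={n}/{len(pairs)}")
```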