This paper addresses the problem of implementing a Simultaneous Localization and Mapping (SLAM) algorithm combined with a non-reactive controller (such as a trajectory-following or path-following controller). A general study showing the advantages of using predictors to avoid mapping inconsistencies in autonomous SLAM architectures is presented. In addition, this paper presents a priority-based method for constructing an uncertainty map of the environment while a mobile robot executes a SLAM algorithm. The SLAM algorithm is implemented with an extended Kalman filter (EKF) and extracts corners (convex and concave) and lines (associated with walls) from the surrounding environment. A navigation approach directs the robot's motion toward the regions of the environment with the highest uncertainty and the highest priority. The uncertainty of a region is specified by a probability characterization computed at its representative points. These points are obtained by a Monte Carlo experiment, and their probability is estimated by the sum-of-Gaussians method, avoiding the time-consuming map-gridding procedure. The priority is determined by the frame in which the uncertainty region was detected (either local or global to the vehicle's pose). The mobile robot is equipped with a non-reactive trajectory-following controller that drives the vehicle to the uncertainty points. Real-time SLAM experiments in a real environment, navigation examples, and uncertainty map constructions, along with algorithm strategies and architectures, are also included in this work.
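As a rough illustration of the grid-free sum-of-Gaussians estimate mentioned above, the sketch below evaluates a probability density at a query point directly from Monte Carlo samples; the kernel width, sample set, and query points are all hypothetical stand-ins, not values from the paper:

```python
import math

def sum_of_gaussians(point, samples, sigma=0.25):
    """Estimate the probability density at `point` by summing isotropic
    Gaussian kernels centred on Monte Carlo samples (no grid map needed)."""
    norm = 1.0 / (2.0 * math.pi * sigma**2 * len(samples))
    total = 0.0
    for (sx, sy) in samples:
        d2 = (point[0] - sx)**2 + (point[1] - sy)**2
        total += math.exp(-d2 / (2.0 * sigma**2))
    return norm * total

# Hypothetical Monte Carlo samples of an uncertainty region around (1, 2).
samples = [(1.0 + 0.1 * i, 2.0 - 0.05 * i) for i in range(-5, 6)]
near = sum_of_gaussians((1.0, 2.0), samples)  # density near the region
far = sum_of_gaussians((5.0, 5.0), samples)   # density far away
```

The point of the technique is that the density can be queried only at the representative points of interest, which is what makes the full map-gridding step avoidable.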
Engineering risk methods and tools account for and make decisions about risk using an expected-value approach. Psychological research has shown that stakeholders and decision makers hold domain-specific risk attitudes that often vary between individuals and between enterprises. Moreover, certain companies and industries (e.g., the nuclear power industry and aerospace corporations) are very risk-averse, whereas other organizations and industrial sectors (e.g., IDEO, in the innovation and design sector) are risk-tolerant and actually thrive by making risky decisions. Engineering risk methods such as failure modes and effects analysis, fault tree analysis, and others are not equipped to help stakeholders make decisions under risk-tolerant or risk-averse decision-making conditions. This article presents a novel method for translating engineering risk data from the expected-value domain into a risk-appetite-corrected domain using utility functions derived from the results of the psychometric Engineering Domain-Specific Risk-Taking test, under a single-criterion decision-based design approach. The method is aspirational rather than predictive in nature, as it uses a psychometric test rather than lottery methods to generate utility functions. Using this method, decisions can be made based upon risk-appetite-corrected risk data. We discuss the development and application of the method based upon a simplified space mission design in a collaborative design-center environment. The method is shown to change risk-based decisions in certain situations where a risk-averse or risk-tolerant decision maker would likely choose differently than the expected-value approach dictates.
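The way a utility correction can flip an expected-value decision can be sketched with a toy example. The convex disutility curve below is an assumed exponential form standing in for a utility function derived from a psychometric test, and the two cost options are invented:

```python
import math

def expected(outcomes):
    """Expected cost of a list of (cost, probability) pairs."""
    return sum(p * c for c, p in outcomes)

def expected_disutility(outcomes, R=5.0):
    """Risk-averse expected disutility: convex in cost, so large losses
    are penalized disproportionately (assumed exponential form)."""
    return sum(p * (math.exp(c / R) - 1.0) for c, p in outcomes)

# Hypothetical risk data: option A is a certain cost, option B a gamble.
option_a = [(10.0, 1.0)]              # certain cost of 10
option_b = [(0.0, 0.5), (18.0, 0.5)]  # expected cost 9

ev_choice = 'A' if expected(option_a) < expected(option_b) else 'B'
ra_choice = 'A' if expected_disutility(option_a) < expected_disutility(option_b) else 'B'
```

The expected-value rule prefers the gamble (expected cost 9 < 10), while the risk-averse disutility prefers the certain cost, illustrating how a risk-appetite correction can change the decision.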
The balancing of robotic systems is an important issue because it allows a significant reduction of actuator torques. However, the literature shows that balancing of robotic systems is typically performed without considering the traveling trajectory. Although static balancing removes the gravity effects on the actuators, and complete balancing eliminates the Coriolis, centripetal, gravitational, and cross-inertia terms, this does not mean that the torque required to move the manipulator from one point to another is minimal. In this paper, ‘optimal spring balancing’ is presented for open-chain robotic systems based on the indirect solution of an open-loop optimal control problem. Indeed, optimal spring balancing is an optimal trajectory planning problem in which the states, controls, and all the unknown parameters associated with the springs must be determined simultaneously to minimize the given performance index for a predefined point-to-point task. For this purpose, on the basis of the fundamental theorem of the calculus of variations, the necessary conditions for optimality are derived, leading to the optimality conditions associated with Pontryagin's minimum principle and an additional condition associated with the constant parameters. The obtained optimality conditions are developed in detail for a two-link manipulator. Finally, the efficiency of the suggested approach is illustrated by simulation for a two-link manipulator and a PUMA-like robot. The results show that the proposed method clearly outperforms previous methods such as static balancing and complete balancing.
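The idea of choosing spring parameters jointly with a specific task can be sketched in a deliberately simplified quasi-static form: a one-link arm with an assumed torsional spring at the joint, where the stiffness that minimizes the squared actuator torque over a given sweep has a closed form. The paper itself treats the full dynamic optimal-control formulation; everything below is a toy model:

```python
import math

def actuator_torque(theta, k, m=1.0, g=9.81, l=0.5):
    """Quasi-static motor torque for a one-link arm: gravity torque minus
    an assumed torsional spring of stiffness k at the joint."""
    return m * g * l * math.sin(theta) - k * theta

def rms_torque(k, thetas):
    """Root-mean-square actuator torque over a sampled trajectory."""
    return math.sqrt(sum(actuator_torque(t, k)**2 for t in thetas) / len(thetas))

# Predefined point-to-point task: sweep from 0 to pi/2.
thetas = [i * (math.pi / 2) / 100 for i in range(101)]

# Least-squares optimal stiffness for this trajectory (closed form:
# minimize sum over t of (gravity_torque(t) - k*t)^2).
num = sum(t * actuator_torque(t, 0.0) for t in thetas)
den = sum(t * t for t in thetas)
k_opt = num / den
```

Even in this toy setting, the trajectory-aware stiffness reduces the RMS torque relative to the unbalanced arm, which is the qualitative effect the paper obtains with the full optimal-control machinery.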
Decision making is a significant activity within industry, and although much attention has been paid to the manner in which goals impact how decision making is executed, there has been less focus on the impact that decision-making resources can have. This article describes an experiment that sought to provide greater insight into the impact that resources can have on how decision making is executed. The investigated variables included the experience levels of the decision makers and the quality and availability of information resources. The experiment provided insights into the variety of impacts that resources can have upon decision making, manifested through the evolution of the approaches, methods, and processes used within it. The findings illustrated that there could be an impact on the decision-making process but not on the method or approach; on the method and process but not the approach; or on the approach, method, and process. In addition, resources were observed to have multiple impacts, which can emerge on different timescales. Given these findings, research is suggested into the development of resource-impact models that would describe the relationships between the decision-making activity and resources, together with techniques for reasoning with these models. This would support the development of systems offering improved decision support through managing the impact of resources on decision making.
Although type reconstruction for dependently typed languages is common in practical systems, it is still ill-understood. Detailed descriptions of the issues around it are hard to find, and formal descriptions together with correctness proofs are non-existent. In this paper, we discuss one-pass type reconstruction for objects in the logical framework LF, formally describe the type reconstruction process using the framework of contextual modal types, and prove the correctness of type reconstruction. Since type reconstruction finds most general types and may leave free variables, we additionally describe abstraction, which returns a closed object in which all free variables are bound on the outside. We have also implemented our algorithms as part of the Beluga language, and the performance of our type reconstruction algorithm is comparable to that of type reconstruction in existing systems such as the logical framework Twelf.
This study demonstrates how subtle signals taken from the early stages of a construction process can be used to diagnose potential problems within that process. For this study, the construction process is modeled as a quasi-Markov chain. A set of six different scenarios representing various common problems (e.g., small budget, complex project) is created and simulated by suitably defining the transition probabilities between nodes in the Markov chain. A Monte Carlo approach is used to parameterize a Bayesian estimator. By observing the time taken to pass the review gateway (measured as the number of hops between activity nodes), the system is able to determine with good accuracy which problem scenario the construction process is suffering from.
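A toy version of the diagnosis step might look as follows. The scenario names, the Poisson likelihood, and the per-scenario mean hop counts are all stand-ins for the Monte-Carlo-parameterized estimator in the study:

```python
import math

# Hypothetical per-scenario mean hop counts, as would be estimated by
# Monte Carlo runs of the quasi-Markov chain model.
scenario_rates = {
    'nominal': 4.0,
    'small_budget': 9.0,
    'complex_project': 14.0,
}

def poisson_pmf(k, lam):
    """P(observe k hops) under a Poisson model with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def posterior(observed_hops, rates):
    """Bayes update with a uniform prior over scenarios."""
    likes = {s: poisson_pmf(observed_hops, lam) for s, lam in rates.items()}
    z = sum(likes.values())
    return {s: l / z for s, l in likes.items()}

# Observing 13 hops to pass the review gateway points at the
# 'complex_project' scenario.
post = posterior(13, scenario_rates)
best = max(post, key=post.get)
```

The real estimator would use hop-count distributions simulated from the chain rather than an assumed Poisson form, but the update step is the same.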
We prove that for each positive integer k in the range 2 ≤ k ≤ 10, and for each positive integer k ≡ 79 (mod 120), there is a k-step Fibonacci-like sequence of composite numbers, and we give some examples of such sequences. This is a natural extension of a result of Graham for the Fibonacci-like sequence.
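The k-step recurrence itself is easy to state in code. The sketch below extends a sequence in which each term is the sum of the previous k terms and checks compositeness by trial division; the even initial values make a trivially composite toy example and are not the nontrivial constructions of the paper:

```python
def k_step_sequence(initial, n):
    """Extend a k-step Fibonacci-like sequence to n terms: each new term
    is the sum of the previous k terms, where k = len(initial)."""
    k = len(initial)
    seq = list(initial)
    while len(seq) < n:
        seq.append(sum(seq[-k:]))
    return seq

def is_composite(m):
    """True iff m has a nontrivial divisor (trial division)."""
    if m < 4:
        return False
    return any(m % d == 0 for d in range(2, int(m**0.5) + 1))

# A 2-step (Fibonacci-like) sequence from toy initial values 4, 4:
# every term is even and at least 4, hence composite.
seq = k_step_sequence([4, 4], 10)
```

The interest of the theorem is precisely that composite-only sequences exist even under the natural coprimality restrictions that rule out such trivial examples.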
Human fingers possess mechanical characteristics that enable them to manipulate objects. In robotics, soft fingertip materials for manipulation have been studied for some time; however, almost all previous research has been carried out on hemispherical shapes, whereas this study concentrates on hemicylindrical shapes, which were found to be more resistant to elastic deformation for the same materials. The purpose of this work is to develop a modified nonlinear contact-mechanics theory for modeling soft fingertips, proposed as a power-law equation: the contact area of a hemicylindrical soft fingertip is proportional to the normal force raised to the power γ_cy, which ranges from 0 to 1/2. Subsuming the Timoshenko and Goodier (S. P. Timoshenko and J. N. Goodier, Theory of Elasticity, 3rd ed. (McGraw-Hill, New York, 1970) pp. 414–420) linear contact theory for cylinders confirms the proposed power equation. We applied weighted least-squares curve fitting to analyze the experimental data for different types of silicone (RTV 23, RTV 1701, and RTV 240). Our experimental results support the theoretical prediction. Results for human fingers and hemispherical soft fingertips were also compared.
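The weighted least-squares fit of a power law A = c·N^γ can be sketched as a linear regression in log-log space. The synthetic data below, with an assumed exponent of 1/3 and prefactor 2, are for illustration only and are not the paper's measurements:

```python
import math

def fit_power_law(forces, areas, weights=None):
    """Weighted least squares of log(area) = log(c) + gamma*log(force),
    recovering the prefactor c and power-law exponent gamma."""
    xs = [math.log(f) for f in forces]
    ys = [math.log(a) for a in areas]
    ws = weights or [1.0] * len(xs)
    W = sum(ws)
    xb = sum(w * x for w, x in zip(ws, xs)) / W   # weighted mean of x
    yb = sum(w * y for w, y in zip(ws, ys)) / W   # weighted mean of y
    gamma = (sum(w * (x - xb) * (y - yb) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xb)**2 for w, x in zip(ws, xs)))
    c = math.exp(yb - gamma * xb)
    return c, gamma

# Synthetic contact-area data following A = 2 * N^(1/3) (assumed values).
forces = [1.0, 2.0, 4.0, 8.0]
areas = [2.0 * f ** (1.0 / 3.0) for f in forces]
c, gamma = fit_power_law(forces, areas)
```

With real, noisy measurements the weights would downweight the less reliable points; here the fit recovers the generating parameters exactly.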
We investigate the Sandpile Model and the Chip Firing Game, and an extension of these models, on cycle graphs. The extended model allows a negative number of chips at each vertex. We characterize the reachable configurations and the fixed points of each model. Finally, we give an explicit formula for the number of their fixed points.
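A single chip-firing move on a cycle graph, with negative chip counts permitted as in the extended model, can be sketched as follows (the example configuration is arbitrary):

```python
def fire(config, v):
    """Fire vertex v on a cycle graph: v loses deg(v) = 2 chips and each
    of its two neighbours gains one. Negative chip counts are allowed,
    as in the extended model."""
    n = len(config)
    out = list(config)
    out[v] -= 2
    out[(v - 1) % n] += 1
    out[(v + 1) % n] += 1
    return out

# An arbitrary configuration on a 4-cycle, including a negative entry.
config = [0, 3, -1, 2]
after = fire(config, 1)
```

Note that firing conserves the total number of chips, which is why the analysis of reachable configurations and fixed points can proceed chip-count by chip-count.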
Information imprecision and uncertainty exist in many real-world applications, and such information needs to be retrieved, processed, shared, reused, and aligned in as automated a way as possible. As a popular family of formally well-founded and decidable knowledge representation languages, fuzzy Description Logics (fuzzy DLs), which extend DLs with fuzzy logic, are well suited for representing and reasoning with imprecision and uncertainty. The need for them arises naturally in many practical applications of knowledge-based systems, in particular the Semantic Web, because DLs are the logical foundation of the Semantic Web. Many papers on fuzzy extensions of DLs based on Zadeh's fuzzy logic theory have now been published, so it is timely to survey fuzzy DLs and, more importantly, identify directions for their study. In this paper, we aim to provide a comprehensive literature overview of fuzzy DLs, focusing on fuzzy extensions of DLs based on fuzzy set theory; other relevant formalisms based on approaches such as probability theory or non-monotonic logics are covered elsewhere. In detail, we first introduce the existing fuzzy DLs (including their syntax, semantics, knowledge bases, and reasoning algorithms) from their origin through their development (from weaker to stronger expressive power), together with some special techniques. Then, other important issues concerning fuzzy DLs, such as reasoning, querying, applications, and directions for future research, are discussed in detail. We also provide a comparison and analysis.
Non-confluent and non-terminating constructor-based term rewriting systems are useful for specification and programming. In particular, existing functional logic languages use such rewrite systems to define possibly non-strict non-deterministic functions. The semantics adopted for non-determinism is call-time choice, whose combination with non-strictness is a non-trivial issue, addressed years ago from a semantic point of view with the Constructor-based Rewriting Logic (CRWL), a well-known semantic framework commonly accepted as a suitable semantic basis for modern functional logic languages. A drawback of CRWL is that it does not come with a proper notion of one-step reduction, which would be very useful for understanding and reasoning about how computations proceed. In this paper, we thoroughly develop the theory of the first-order version of let-rewriting, a simple reduction notion close to that of classical term rewriting, but extended with a let-binding construction to adequately express the combination of call-time choice with non-strict semantics. Let-rewriting can be seen as a particular textual presentation of term graph rewriting. We investigate the properties of let-rewriting, most notably its equivalence with a conservative extension of the CRWL semantics coping with let-bindings, and we show through case studies that having two interchangeable formal views (reduction/semantics) of the same language is a powerful reasoning tool. After that, we provide a notion of let-narrowing, which is adequate for call-time choice, as proved by soundness and completeness results of let-narrowing with respect to let-rewriting. Moreover, we relate let-rewriting and let-narrowing (and hence CRWL) to ordinary term rewriting and narrowing, providing in particular soundness and completeness of let-rewriting with respect to term rewriting for a class of programs that are deterministic in a semantic sense.
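The difference between call-time choice and run-time choice can be illustrated outside the rewriting formalism by modeling a non-deterministic function as the set of its possible results. This is a toy encoding for intuition, not the paper's let-rewriting calculus:

```python
def coin():
    """A non-deterministic 'function', modeled as its set of results."""
    return {0, 1}

def double_call_time():
    # Call-time choice: a value for the argument is chosen once and then
    # shared by both occurrences (as a let-binding would enforce).
    return {x + x for x in coin()}

def double_run_time():
    # Run-time choice: each occurrence of the non-deterministic argument
    # chooses independently.
    return {a + b for a in coin() for b in coin()}
```

Call-time choice yields only the even results {0, 2}, while run-time choice also admits 1; the let-binding construction in let-rewriting exists precisely to pin down the shared, call-time reading.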
The problem of the commutativity of algebraic (categorical) diagrams has attracted the attention of researchers for a long time. For example, the related notion of coherence was discussed in Mac Lane's homology book (Mac Lane 1963); see also his AMS presidential address (Mac Lane 1976). Researchers in category theory view this problem from a specific angle, and for them it is not just a question of convenient notation, though it is worth mentioning the important role that notation plays in the development of science (take, for example, the progress made after the introduction of symbolic notation in logic or matrix notation in algebra). In 1976, Peter Freyd published the paper ‘Properties Invariant within Equivalence Types of Categories’ (Freyd 1976), in which the central role is played by the notion of a ‘diagrammatic property’. We may also recall the process of ‘diagram chasing’ and its applications in topology and algebra. But before we can use diagrams (and the principal property of a diagram is its commutativity), it is vital to be able to check whether a diagram is commutative.
We consider the problem of sums of dilates in groups of prime order. It is well known that sets with small density and small sumset in ℤ_p behave like sets of integers. Thus, given A ⊂ ℤ_p of sufficiently small density, it is straightforward to show that the lower bound on |A + λ·A| from the integer setting carries over.
On the other hand, the behaviour for sets of large density turns out to be rather surprising. Indeed, for any ε > 0, we construct subsets of density 1/2 − ε such that |A + λ·A| ≤ (1 − δ)p, showing that there is a very different behaviour for subsets of large density.
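The small-density phenomenon is easy to check by brute force. The sketch below computes |A + λ·A| in ℤ_p for an interval, which attains the integer-setting size (λ + 1)|A| − λ; the modulus, dilate, and set are chosen for illustration:

```python
def dilate_sumset(A, lam, p):
    """Compute the sum of dilates A + lam*A inside Z_p."""
    return {(a + lam * b) % p for a in A for b in A}

# A low-density interval in Z_101 behaves exactly like an integer interval:
# for lam = 2, |A + 2*A| = 3|A| - 2 with no wrap-around.
p, lam = 101, 2
A = set(range(10))                  # density ~ 0.1
small = dilate_sumset(A, lam, p)
```

For the high-density constructions of the paper this brute-force count would instead stay bounded away from p, which is the surprising behaviour the abstract describes.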
We show how logic programs with “delays” can be transformed to programs without delays in a way that preserves information concerning floundering (also known as deadlock). This allows a declarative (model-theoretic), bottom-up or goal-independent approach to be used for analysis and debugging of properties related to floundering. We rely on some previously introduced restrictions on delay primitives and a key observation which allows properties such as groundness to be analysed by approximating the (ground) success set.
We propose an alternative approach, based on diagram rewriting, for computations in bialgebras. We illustrate this graphical syntax by proving some properties of co-operations, including coassociativity and cocommutativity. This amounts to checking the confluence of some rewriting systems.