When waiting times cannot be observed directly, Little's law can be applied to estimate the average waiting time as the average number in system divided by the average arrival rate, but this simple indirect estimator tends to be significantly biased when the arrival rate is time-varying and the service times are relatively long. Here it is shown that the bias in that indirect estimator can be estimated and reduced by applying the time-varying Little's law (TVLL). With appropriate time-varying staffing, the waiting time distribution need not be time-varying even though the arrival rate is. Given a fixed waiting time distribution with unknown mean, there is a unique mean consistent with the TVLL for each time t. Thus, under that condition, the TVLL provides an estimator for the unknown mean wait, given estimates of the average number in system over a subinterval and the arrival rate function. Useful variants of the TVLL estimator are obtained by fitting a linear or quadratic function to arrival data. When the arrival rate function is approximately linear (quadratic), the mean waiting time satisfies a quadratic (cubic) equation. The new estimator based on the TVLL is a positive real root of that equation. The new methods are shown to be effective in estimating and reducing the bias of the indirect estimator, using simulations of multi-server queues and data from a call center.
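As an illustration of the quadratic case, consider the TVLL written in one common form, with a linear arrival-rate fit $\lambda(t)=a+bt$ and, purely for concreteness, exponentially distributed waits (this is a reading of the claim above, not the paper's exact estimator):
\[
L(t) \;=\; \int_0^\infty \lambda(t-x)\,P(W>x)\,dx
\;=\; \lambda(t)\,E[W] \;-\; b\,\frac{E[W^2]}{2}
\;=\; \lambda(t)\,w \;-\; b\,w^2 \quad\text{when } W\sim \mathrm{Exp},\ E[W]=w.
\]
The unknown mean wait $w$ is then a positive real root of the quadratic $b\,w^{2}-\lambda(t)\,w+L(t)=0$, given estimates of $L(t)$ and of the fitted parameters $a,b$.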
Adapting techniques from database theory in order to optimize Answer Set Programming (ASP) systems, and in particular the grounding components of ASP systems, is an important topic in ASP. In recent years, the Magic Set method has received some interest in this setting, and a variant of it, called Dynamic Magic Set, has been proposed for ASP. However, this technique comes with a caveat: it is not correct (in the sense of being query-equivalent) for all ASP programs. In a recent work, a large fragment of ASP programs, referred to as super-coherent programs, has been identified, for which Dynamic Magic Set is correct. The fragment contains all programs which possess at least one answer set, no matter which set of facts is added to them. Two open questions remained: How complex is it to determine whether a given program is super-coherent? Does the restriction to super-coherent programs limit the problems that can be solved? The first question in particular turned out to be quite difficult to answer precisely. In this paper, we formally prove that deciding whether a propositional program is super-coherent is $\Pi^P_3$-complete in the disjunctive case, while it is $\Pi^P_2$-complete for normal programs. The hardness proofs are the difficult part in this endeavor: we proceed by characterizing the reductions by the models and reduct models which the ASP programs should have, and then provide instantiations that meet the given specifications. Concerning the second question, we show that all relevant ASP reasoning tasks can be transformed into tasks over super-coherent programs, although this transformation is more of theoretical than practical interest.
The supplemental online material accompanying this article was not the final version. The final version is now published online. The publisher regrets the error.
In the analysis of logic programs, abstract domains for detecting sharing properties are widely used. Recently, the new domain ${\mathtt{ShLin}^{\omega}}$ has been introduced to generalize both sharing and linearity information. This domain is endowed with an optimal abstract operator for single-binding unification, and it has been claimed that the repeated application of this operator is also optimal for multibinding unification. This paper proves that claim.
The absolute accuracy of a small industrial robot is improved using a 30-parameter calibration model. The error model takes into account a full kinematic calibration and five compliance parameters related to the stiffness in joints 2, 3, 4, 5, and 6. The Jacobian is linearized to iteratively find the modeled error parameters. Two coordinate measurement systems are used independently: a laser tracker and an optical CMM. An optimized end-effector is developed specifically for each measurement system. The robot is calibrated using fewer than 50 configurations, and the calibration efficiency is validated in 1000 configurations using either the laser tracker or the optical CMM. A telescopic ballbar is also used for validation. The results show that the optical CMM yields slightly better results, even when used with the simple triangular plate end-effector that was developed mainly for the laser tracker.
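The iterative identification of modeled error parameters from a linearized model can be sketched as follows. This is a toy illustration, not the paper's 30-parameter model: it identifies two link-length errors of a planar 2R arm from measured tool positions by numerically linearizing the forward kinematics with respect to the parameters and solving repeated least-squares problems (Gauss–Newton). All names and numerical values are hypothetical.

```python
import numpy as np

def fk(params, q):
    """Forward kinematics of a planar 2R arm; params = (l1, l2)."""
    l1, l2 = params
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def calibrate(params0, joints, measured, iters=10, eps=1e-6):
    """Gauss-Newton identification of kinematic parameters from pose measurements."""
    params = np.asarray(params0, dtype=float)
    for _ in range(iters):
        residuals, jac_rows = [], []
        for q, p_meas in zip(joints, measured):
            r = p_meas - fk(params, q)          # position error in this configuration
            J = np.zeros((2, len(params)))      # numerical Jacobian d(fk)/d(params)
            for j in range(len(params)):
                dp = np.zeros_like(params)
                dp[j] = eps
                J[:, j] = (fk(params + dp, q) - fk(params, q)) / eps
            residuals.append(r)
            jac_rows.append(J)
        J_all = np.vstack(jac_rows)
        r_all = np.concatenate(residuals)
        delta, *_ = np.linalg.lstsq(J_all, r_all, rcond=None)
        params += delta                          # iterative update of the error parameters
        if np.linalg.norm(delta) < 1e-10:
            break
    return params

# Simulated experiment: the "real" robot has small link-length errors.
true_params = np.array([0.4021, 0.2987])         # actual link lengths (m)
nominal     = np.array([0.4000, 0.3000])         # nominal CAD values
rng = np.random.default_rng(0)
joints   = rng.uniform(-np.pi, np.pi, size=(50, 2))   # calibration configurations
measured = np.array([fk(true_params, q) + rng.normal(0, 1e-5, 2) for q in joints])

print("identified parameters:", calibrate(nominal, joints, measured))
```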
We describe an opinion mining system which classifies the polarity of Spanish texts. We propose an NLP approach that undertakes pre-processing, tokenisation and POS tagging of texts to then obtain the syntactic structure of sentences by means of a dependency parser. This structure is then used to address three of the most significant linguistic constructions for the purpose in question: intensification, subordinate adversative clauses and negation. We also propose a semi-automatic domain adaptation method to improve the accuracy of our system in specific application domains, by enriching semantic dictionaries using machine learning methods in order to adapt the semantic orientation of their words to a particular field. Experimental results are promising in both general and specific domains.
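A minimal sketch of how negation and intensification might be handled over a dependency structure is given below. The tree encoding, lexicon entries and scores are invented for illustration only; they are not the dictionaries or parser output of the system described above, and adversative clauses are omitted.

```python
# Toy polarity computation over a dependency tree: negators flip the score of the
# word they modify, intensifiers scale it. All lexicon entries are illustrative.
POLARITY = {"bueno": 1.0, "malo": -1.0, "excelente": 2.0}   # prior semantic orientations
INTENSIFIERS = {"muy": 1.5, "poco": 0.5}                    # scaling factors
NEGATORS = {"no", "nunca"}

class Node:
    def __init__(self, form, children=None):
        self.form = form
        self.children = children or []

def polarity(node):
    """Score a subtree: content children contribute additively; intensifier and
    negator children rescale or flip the accumulated score of their head."""
    score = POLARITY.get(node.form, 0.0)
    score += sum(polarity(c) for c in node.children
                 if c.form not in NEGATORS and c.form not in INTENSIFIERS)
    for c in node.children:
        if c.form in INTENSIFIERS:
            score *= INTENSIFIERS[c.form]
        if c.form in NEGATORS:
            score = -score
    return score

# "no es muy bueno": bueno(+1.0) intensified by muy (x1.5), then negated by no.
tree = Node("es", [Node("bueno", [Node("muy"), Node("no")])])
print(polarity(tree))   # -1.5
```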
This paper proposes a robot navigation scheme using wireless visual sensors deployed in an environment. Unlike conventional autonomous robot approaches, the scheme offloads the massive on-board information processing required by a robot onto its environment, so that a robot or a vehicle with less intelligence can exhibit sophisticated mobility. A three-state snake mechanism is developed for coordinating a series of sensors to form a reference path. Wireless visual sensors exchange internal forces with each other along the reference snake for dynamic adjustment, react to repulsive forces from obstacles, and trigger a state change in the snake body from a flexible state to a rigid or even a broken state in response to kinematic or environmental constraints. A control snake is further proposed as a tracker of the reference path, taking into account the robot's non-holonomic constraint and limited steering power. A predictive control algorithm is developed to generate an optimal velocity profile under the robot's dynamic constraints for tracking the snake. Together, these components form a unified solution in which distributed sensors navigate the robot, handle its kinematic and dynamic constraints, and react to dynamic changes in advance. Simulations and experiments demonstrate the capability of a wireless sensor network to carry out low-level control activities for a vehicle.
Given a (multi)digraph H, a digraph D is H-linked if every injective function ι:V(H) → V(D) can be extended to an H-subdivision. In this paper, we give sharp degree conditions that ensure a sufficiently large digraph D is H-linked for arbitrary H. The notion of an H-linked digraph extends the classes of m-linked, m-ordered and strongly m-connected digraphs.
First, we give sharp minimum semi-degree conditions for H-linkedness, extending results of Kühn and Osthus on m-linked and m-ordered digraphs. It is known that the minimum degree threshold for an undirected graph to be H-linked depends on a partition of the (undirected) graph H into three parts. Here, we show that the corresponding semi-degree threshold for H-linked digraphs depends on a partition of H into as many as nine parts.
We also determine sharp Ore–Woodall-type degree-sum conditions ensuring that a digraph D is H-linked for general H. As a corollary, we obtain (previously undetermined) sharp degree-sum conditions for m-linked and m-ordered digraphs.
Let Δ ≥ 2 be a fixed integer. We show that the random graph ${\mathcal{G}_{n,p}}$ with $p\gg (\log n/n)^{1/\Delta}$ is robust with respect to the containment of almost spanning bipartite graphs H with maximum degree Δ and sublinear bandwidth in the following sense: asymptotically almost surely, if an adversary deletes arbitrary edges from ${\mathcal{G}_{n,p}}$ in such a way that each vertex loses less than half of its neighbours, then the resulting graph still contains a copy of all such H.
Push recovery is one of the most challenging problems for current humanoid robots, and its importance can be readily observed in real environments. The critical issue for a humanoid is to maintain and recover its balance against any disturbance. In this research, a new stereovision approach is proposed to estimate the robot's deviation angle, from which the movement of the robot's center of mass is calculated. Two novel strategies, called the “knee strategy” and the “knee-hip strategy,” are then devised to recover the balance of the humanoid. A mathematical model validating the efficiency of the proposed strategies is also presented. Experiments conducted on a humanoid robot demonstrate that the deviation angle predicted by the stereovision technique converges to the actual deviation angle. The stable regions of the proposed strategies show that the humanoid can recover its stability in a robust manner. The vision-based estimate also shows a higher correlation with the actual deviation angle and lower fluctuation than the output of the acceleration sensor.
Does the human brain have a central connective core, and, if so, how costly is it?
Noninvasive imaging data allow the construction of network maps of the human brain, recording its structural and functional connectivity. A number of studies have reported on various characteristic network attributes, such as a tendency toward local clustering, high global efficiency, the prevalence of specific network motifs, and a pronounced community structure with several anatomically and functionally defined modules and interconnecting hub regions (Bullmore & Sporns, 2009; van den Heuvel & Hulshoff Pol, 2010; Sporns, 2011). Hubs are of particular interest in studies of the brain since they may play crucial roles in integrative processes and global brain communication, thought to be essential for many aspects of higher brain function. Indeed, hubs have been shown to correspond to brain regions that exhibit complex physiological responses and maintain widespread and diverse connection profiles with other parts of the brain. We asked if, in addition to being highly connected, brain hubs would also exhibit a strong tendency to be mutually interconnected, forming what has been called a “rich club” (Colizza et al., 2006). Rich club organization is present in a network if sets of high-degree nodes exhibit denser mutual connections than predicted on the basis of the degree sequence alone. We investigated rich club organization in the human brain in datasets that recorded weighted projections among different anatomical regions of the cerebral cortex, recorded from several cohorts of healthy human volunteers (van den Heuvel & Sporns, 2011; van den Heuvel et al., 2012).
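The rich-club measure referred to above can be sketched as follows. This is a generic unweighted version of the coefficient, compared against a degree-preserving rewired baseline on a stand-in scale-free graph, not the weighted variant applied to the brain datasets; networkx also ships nx.rich_club_coefficient for the same computation.

```python
import networkx as nx

def rich_club(G, k):
    """Unweighted rich-club coefficient phi(k): edge density among nodes of degree > k."""
    rich = [n for n, d in G.degree() if d > k]
    n = len(rich)
    if n < 2:
        return float("nan")
    e = G.subgraph(rich).number_of_edges()
    return 2.0 * e / (n * (n - 1))

def normalized_rich_club(G, k, seed=1):
    """phi(k) divided by phi(k) of a degree-preserving rewired graph; values above 1
    indicate denser hub-hub connectivity than the degree sequence alone predicts."""
    R = G.copy()
    m = R.number_of_edges()
    nx.double_edge_swap(R, nswap=10 * m, max_tries=1000 * m, seed=seed)
    baseline = rich_club(R, k)
    return rich_club(G, k) / baseline if baseline else float("nan")

G = nx.barabasi_albert_graph(200, 4, seed=0)   # stand-in hub-rich network, not brain data
print(rich_club(G, 12), normalized_rich_club(G, 12))
```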
This paper presents a graph-based knowledge representation and reasoning language. The language benefits from an important syntactic operation, called graph homomorphism, which is sound and complete with respect to logical deduction. Hence, it is possible to do logical reasoning without using the language of logic, relying only on graphical, and thus visual, notions. The paper presents the main knowledge constructs of this language, elementary graph-based reasoning mechanisms, and the graph homomorphism, which encompasses all these elementary transformations in one global step. We put our work in context by presenting a concrete semantic annotation application example.
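The syntactic operation in question can be sketched as a backtracking search for label- and edge-preserving mappings. The simplified directed labeled-graph encoding below is an illustration only; the paper's conceptual-graph formalism uses a richer bipartite structure and allows type subsumption rather than label equality.

```python
# Backtracking search for homomorphisms between two labeled digraphs.
# A graph is (labels, edges): labels maps node -> label, edges is a set of
# (source, relation, target) triples.

def homomorphisms(query, target):
    q_labels, q_edges = query
    t_labels, t_edges = target
    nodes = list(q_labels)

    def extend(mapping, remaining):
        if not remaining:
            yield dict(mapping)
            return
        u = remaining[0]
        for v, lab in t_labels.items():
            if lab != q_labels[u]:
                continue                      # labels must be preserved (equality here)
            mapping[u] = v
            ok = all((mapping[a], r, mapping[b]) in t_edges
                     for (a, r, b) in q_edges
                     if a in mapping and b in mapping)
            if ok:                            # edges mapped so far must exist in the target
                yield from extend(mapping, remaining[1:])
            del mapping[u]

    yield from extend({}, nodes)

# Query: some Person works for some Company.
query  = ({"x": "Person", "y": "Company"}, {("x", "worksFor", "y")})
target = ({"alice": "Person", "acme": "Company", "bob": "Person"},
          {("alice", "worksFor", "acme"), ("bob", "worksFor", "acme")})
print(list(homomorphisms(query, target)))    # two answers: x -> alice and x -> bob
```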
A 1-ary sentential context is aggregative (according to a consequence relation) if the result of putting the conjunction of two formulas into the context is a consequence (by that relation) of the results of putting first the one formula and then the other into that context. All 1-ary contexts are aggregative according to the consequence relation of classical propositional logic (though not, for example, according to the consequence relation of intuitionistic propositional logic). Here we explore the extent of this phenomenon, generalized to having arbitrary connectives playing the role of conjunction; among intermediate logics, LC shows itself to occupy a crucial position in this regard, and to suggest a characterization, applicable to a broader range of consequence relations, in terms of a variant of the notion of idempotence we shall call componentiality. This is an analogue, for the consequence relations of propositional logic, of the notion of a conservative operation in universal algebra.
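In symbols, the defining condition for a 1-ary context $C(\cdot)$ reads (a paraphrase of the informal definition above, with $\#$ standing for the connective generalizing conjunction):
\[
C(A),\; C(B) \;\vdash\; C(A \wedge B),
\qquad\text{and, in the generalized form,}\qquad
C(A),\; C(B) \;\vdash\; C(A \mathbin{\#} B).
\]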
I believe that, for reasons elaborated elsewhere (Beall, 2009; Priest, 2006a, 2006b), the logic LP (Asenjo, 1966; Asenjo & Tamburino, 1975; Priest, 1979) is roughly right as far as logic goes. But logic cannot go everywhere; we need to provide nonlogical axioms to specify our (axiomatic) theories. This is uncontroversial, but it has also been the source of discomfort for LP-based theorists, particularly with respect to true mathematical theories which we take to be consistent. My example, throughout, is arithmetic; but the more general case is also considered.
Logic programs under the stable model semantics, or answer-set programs, provide an expressive rule-based knowledge representation framework, featuring a formal, declarative and well-understood semantics. However, handling the evolution of rule bases is still a largely open problem. The Alchourrón, Gärdenfors and Makinson (AGM) framework for belief change was shown to give inappropriate results when directly applied to logic programs under a non-monotonic semantics such as the stable models. The approaches developed so far to address this issue propose update semantics based on manipulating the syntactic structure of programs and rules.
More recently, AGM revision has been successfully applied to a significantly more expressive semantic characterisation of logic programs based on SE-models. This is an important step, as it changes the focus from the evolution of a syntactic representation of a rule base to the evolution of its semantic content.
In this paper, we borrow results from the area of belief update to tackle the problem of updating (instead of revising) answer-set programs. We prove a representation theorem which makes it possible to constructively define any operator satisfying a set of postulates derived from Katsuno and Mendelzon's postulates for belief update. We define a specific operator based on this theorem, examine its computational complexity and compare the behaviour of this operator with syntactic rule update semantics from the literature. Perhaps surprisingly, we uncover a serious drawback of all rule update operators based on Katsuno and Mendelzon's approach to update and on SE-models.
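For reference, the flavour of Katsuno and Mendelzon's update postulates, from which the postulates used here are derived, is captured by conditions such as the following, stated for propositional belief update with update operator $\diamond$ (background material, not the paper's reformulation):
\[
\begin{aligned}
(\mathrm{U1})\;& \psi \diamond \mu \models \mu,\\
(\mathrm{U2})\;& \text{if } \psi \models \mu, \text{ then } \psi \diamond \mu \equiv \psi,\\
(\mathrm{U8})\;& (\psi_1 \vee \psi_2) \diamond \mu \;\equiv\; (\psi_1 \diamond \mu) \vee (\psi_2 \diamond \mu).
\end{aligned}
\]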
A singularity-free workspace is a very important criterion for the design of manipulators, especially parallel manipulators, which are well known for their limited workspace and complex singularities. This paper studies geometric parameters and dexterity measures that affect the size of the singularity-free joint space and proposes methods for developing 6-DOF Stewart–Gough parallel manipulators with a larger singularity-free joint space. With a local dexterity measure as the objective function, a systematic method is employed to search for the design with a maximal singularity-free joint space. The related workspaces are also investigated. It is shown that the workspace is not proportional to the size of the joint space and that manipulators with a larger singularity-free workspace usually have relatively poor dexterity.
This paper proposes a simple fuzzy sliding mode control scheme to achieve accurate trajectory tracking for a robot manipulator. At the core of the proposed method, the known dynamics of the robot manipulator are cancelled by feedback linearization; then, in order to overcome the remaining uncertainties, a classic sliding mode controller is designed. Afterward, by applying the Takagi–Sugeno (TS) fuzzy model, the classic sliding mode controller is converted into a fuzzy sliding mode controller with a very simple rule base. The mathematical analysis shows that, with the proposed controller, trajectory tracking of the robot manipulator is globally asymptotically stable in the presence of uncertainties. Finally, to show the performance of the proposed method, the controller is simulated on a two-degree-of-freedom robot manipulator as the case study of the research. Simulation results demonstrate the superiority of the proposed control scheme in the presence of structured and unstructured uncertainties.
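The control structure described above, feedback linearization of the known dynamics plus a switching term for the residual uncertainty, can be illustrated on a single joint. The plant, gains and disturbance below are invented for the sketch, the fuzzy rule base is omitted, and a boundary-layer saturation replaces the discontinuous sign function.

```python
import numpy as np

# Single-link arm with uncertain friction and an external disturbance:
#   I*qdd + b*qd + m*g*l*cos(q) = u + d(t)
# The nominal model (I, b_hat, m, g, l) is cancelled by feedback linearization;
# the sliding-mode term handles the unmodelled part. All values are illustrative.
I, m, g, l = 0.5, 1.0, 9.81, 0.3
b_true, b_hat = 0.8, 0.5                       # true vs. assumed viscous friction
lam, K, phi = 5.0, 6.0, 0.05                   # surface slope, switching gain, boundary layer

def desired(t):                                # reference trajectory and its derivatives
    return np.sin(t), np.cos(t), -np.sin(t)

def controller(t, q, qd):
    q_d, qd_d, qdd_d = desired(t)
    e, ed = q - q_d, qd - qd_d
    s = ed + lam * e                           # sliding surface s = de + lam*e
    v = qdd_d - lam * ed - K * np.clip(s / phi, -1.0, 1.0)   # saturated switching term
    return I * v + b_hat * qd + m * g * l * np.cos(q)        # feedback linearization

dt, T = 1e-3, 10.0
q, qd = 0.5, 0.0                               # initial state away from the reference
for k in range(int(T / dt)):
    t = k * dt
    u = controller(t, q, qd)
    d = 0.4 * np.sin(3 * t)                    # bounded unknown disturbance
    qdd = (u + d - b_true * qd - m * g * l * np.cos(q)) / I
    q, qd = q + dt * qd, qd + dt * qdd         # Euler integration of the plant
print("final tracking error:", q - desired(T)[0])
```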
Virtual decomposition control (VDC) is an efficient tool for dealing with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control used by VDC to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. Therefore, in this paper, we focus on reorganizing the equations of VDC for a serial-chain manipulator using the adaptive function approximation technique (FAT), which does not require such knowledge. The dynamic matrices in the dynamic equation of every subsystem (e.g. link and joint) are approximated by orthogonal basis functions, chosen because they yield minimal approximation errors. The control law, the virtual stability of every subsystem and the stability of the entire robotic system are proved in this work. The computational complexity of the FAT is then compared with that of the regressor-based approach. Despite the apparent advantage of the FAT in avoiding the regressor matrix, its computational complexity can make implementation difficult, because the dynamic matrices of the link subsystem are represented by two large sparse matrices. In effect, the FAT-based adaptive VDC requires further work to improve the representation of the dynamic matrices of the target subsystem. Two case studies are simulated in Matlab/Simulink for verification purposes: a 2-R manipulator and a 6-DOF planar biped robot.
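The idea behind the function approximation technique can be summarized in one line: every unknown time-varying term in a subsystem's dynamics is expanded on a finite orthogonal basis (e.g. Chebyshev polynomials), so that only constant weights need to be adapted. A schematic statement, with notation chosen for this summary rather than taken from the paper:
\[
f(t) \;=\; \boldsymbol{W}^{\mathsf T} \boldsymbol{Z}(t) + \varepsilon(t),
\qquad
\hat f(t) \;=\; \hat{\boldsymbol{W}}^{\mathsf T} \boldsymbol{Z}(t),
\]
where $\boldsymbol{Z}(t)$ collects the basis functions, $\boldsymbol{W}$ is a constant weight matrix, $\varepsilon$ is the bounded residual error, and $\hat{\boldsymbol{W}}$ is updated by a Lyapunov-based adaptive law. Representing matrix-valued terms in this way leads to the large sparse matrices whose cost is discussed above.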