To resolve the redundancy of a wheeled mobile redundant manipulator comprising a two-wheel-drive mobile platform and a 6-degree-of-freedom manipulator, a physical-limits-constrained (PLC) minimum-velocity-norm (MVN) coordinating scheme (termed the PLC-MVN-C scheme) is proposed and investigated. The scheme not only coordinates the mobile platform and the manipulator to fulfill the end-effector task while achieving the desired optimization index (i.e., minimizing the norm of the rotational velocities of the wheels and the joint velocities of the manipulator) but also respects the physical limits of the robot (i.e., the joint-angle and joint-velocity limits of the manipulator as well as the rotational-velocity limits of the wheels). The scheme is then reformulated as a quadratic program (QP) subject to equality and bound constraints, and is solved by a discrete QP solver, i.e., a numerical algorithm based on piecewise-linear projection equations (PLPE). Simulation results substantiate the efficacy and accuracy of the PLC-MVN-C scheme and the corresponding discrete PLPE-based QP solver.
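To make the QP formulation concrete, here is a minimal Python sketch of an MVN-style program; the Jacobian, task velocity, and limits are placeholders, and SciPy's general-purpose SLSQP solver stands in for the paper's discrete PLPE-based solver.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the MVN scheme as a QP (hypothetical 2-wheel + 6-joint robot):
#   minimize  0.5 * ||x||^2          (x = [wheel velocities; joint velocities])
#   subject to  J x = r_dot          (end-effector velocity task)
#               x_lo <= x <= x_hi    (velocity limits; joint-angle limits would be
#                                     folded into these bounds near the limits)
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 8))      # placeholder Jacobian; a real one comes from kinematics
r_dot = 0.1 * rng.standard_normal(6) # small desired end-effector velocity (keeps QP feasible)
x_lo, x_hi = -np.ones(8), np.ones(8)

res = minimize(
    fun=lambda x: 0.5 * x @ x,
    x0=np.zeros(8),
    jac=lambda x: x,
    bounds=list(zip(x_lo, x_hi)),
    constraints={"type": "eq", "fun": lambda x: J @ x - r_dot,
                 "jac": lambda x: J},
    method="SLSQP",
)
print(res.x, np.linalg.norm(J @ res.x - r_dot))   # minimum-norm velocities, task residual
```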
Human rhythmic movement is generated by central pattern generators (CPGs), and their application to robot control has attracted the interest of many scientists. However, the coupling relationship between the central nervous system and a CPG network with external inputs has yet to be unveiled. According to biological experiments, the CPG network is controlled by the neural system; in other words, the interaction between the central nervous system and the CPG network can control human movement effectively. This paper offers a complex human locomotion model that illustrates the coupling relationship between the central nervous system and the CPG network with proprioception. Based on Matsuoka's CPG model (K. Matsuoka, Biol. Cybern. 52(6), 367–376 (1985)), the stability and robustness of the CPG network with external inputs are analyzed. To simulate the coupling relationship, a Radial Basis Function (RBF) neural network is used to model the cerebral cortex, and the Credit-Assignment Cerebellar Model Articulation Controller algorithm is employed to realize locomotion-mode conversion. A seven-link biped robot is chosen to simulate the walking gait. The main findings are: (1) the output of the new CPG network, which is stable and robust, can be treated as proprioception, providing the central nervous system with information about all joint angles; and (2) analysis of the new locomotion model reveals that the cerebral cortex can modulate the CPG parameters, leading to adjustments in the walking gait.
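For readers unfamiliar with Matsuoka's model, the following minimal sketch simulates a two-neuron, mutually inhibiting Matsuoka oscillator with Euler integration; the parameter values are illustrative, not those analyzed in the paper.

```python
import numpy as np

def matsuoka_step(u, v, s, dt, tau=0.05, tau_p=0.6, beta=2.5, w=2.5):
    """One Euler step of a two-neuron Matsuoka oscillator with mutual inhibition.
    u: membrane states (2,), v: adaptation states (2,), s: tonic input."""
    y = np.maximum(u, 0.0)                        # firing rates (half-wave rectified)
    du = (-u - beta * v - w * y[::-1] + s) / tau  # y[::-1]: each neuron inhibits the other
    dv = (-v + y) / tau_p                         # slow self-adaptation
    return u + dt * du, v + dt * dv

u, v = np.array([0.1, 0.0]), np.zeros(2)
out = []
for _ in range(5000):
    u, v = matsuoka_step(u, v, s=1.0, dt=0.001)
    out.append(np.maximum(u, 0.0))
# The difference out[t][0] - out[t][1] oscillates steadily and can serve as a
# CPG output driving a joint angle.
```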
This paper presents a path-planning algorithm for autonomous navigation of non-holonomic mobile robots in complex environments. The irregular contours of obstacles are represented by segments. The goal of the robot is to move towards a known target while avoiding obstacles. Velocity constraints, the robot kinematic model, and the non-holonomic constraint are taken into account. The optimal path-planning problem is formulated as a constrained receding-horizon planning problem, and the trajectory is obtained by solving an optimal control problem with constraints. Local minima are avoided by choosing intermediate objectives based on the real-time environment.
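A skeleton of such a receding-horizon loop for a unicycle-type robot might look as follows; the cost, horizon, and limits are illustrative, and the paper's obstacle segments and intermediate objectives are omitted.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10                       # step size and horizon length
V_MAX, W_MAX = 1.0, 1.5               # velocity constraints

def rollout(state, controls):
    """Integrate the unicycle kinematics x' = v cos(th), y' = v sin(th), th' = w."""
    x, y, th = state
    traj = []
    for v, w in controls.reshape(H, 2):
        x += DT * v * np.cos(th)
        y += DT * v * np.sin(th)
        th += DT * w
        traj.append((x, y, th))
    return np.array(traj)

def cost(controls, state, target):
    traj = rollout(state, controls)   # track the target, penalize control effort
    return np.sum((traj[:, :2] - target) ** 2) + 1e-2 * np.sum(controls ** 2)

state, target = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0])
for _ in range(50):                   # receding-horizon loop
    res = minimize(cost, np.zeros(2 * H), args=(state, target),
                   bounds=[(-V_MAX, V_MAX), (-W_MAX, W_MAX)] * H)
    v, w = res.x[:2]                  # apply only the first control, then replan
    state = state + DT * np.array([v * np.cos(state[2]), v * np.sin(state[2]), w])
print(state)                          # pose approaches the target
```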
Odometric error modelling for mobile robots is the basis of pose tracking. The accumulated odometric error grows without bound and degrades localisation precision over long-range movement, and it often cannot be compensated for in real time. Therefore, an efficient approach to odometric error modelling is proposed for mobile robots with different drive types. The method rests on the hypothesis that the motion path between samples approximates a circular arc. Approximate functional expressions relating the odometric control input to the non-systematic and systematic errors are derived from the odometric error propagation law. Furthermore, an efficient pose-tracking algorithm is proposed for mobile robots that compensates for the non-systematic and systematic errors in real time. Experiments show that the odometric error model reduces the accumulated odometric error efficiently and improves localisation significantly during autonomous navigation.
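As a concrete illustration of the circular-arc hypothesis, here is a standard differential-drive odometry update (not the paper's error model itself), in which a small systematic wheel mismatch visibly accumulates into pose drift.

```python
import numpy as np

def odom_arc_update(pose, ds_l, ds_r, b):
    """Differential-drive odometry update under the circular-arc assumption.
    pose = (x, y, theta); ds_l, ds_r = wheel displacements; b = wheel base."""
    x, y, th = pose
    ds = 0.5 * (ds_r + ds_l)          # distance travelled by the wheel-axle midpoint
    dth = (ds_r - ds_l) / b           # heading change
    if abs(dth) < 1e-9:               # straight-line limit of the arc
        return x + ds * np.cos(th), y + ds * np.sin(th), th
    R = ds / dth                      # arc radius
    return (x + R * (np.sin(th + dth) - np.sin(th)),
            y - R * (np.cos(th + dth) - np.cos(th)),
            th + dth)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                  # 1% right-wheel mismatch -> systematic drift
    pose = odom_arc_update(pose, ds_l=0.010, ds_r=0.0101, b=0.3)
print(pose)
```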
The problem of finding a nontrivial factor of a polynomial $f(x)$ over a finite field ${\mathbb{F}}_q$ has many known efficient, but randomized, algorithms. The deterministic complexity of this problem is a famous open question even assuming the generalized Riemann hypothesis (GRH). In this work we improve the state of the art by focusing on prime degree polynomials; let $n$ be the degree. If $(n-1)$ has a ‘large’ $r$-smooth divisor $s$, then we find a nontrivial factor of $f(x)$ in deterministic $\mbox{poly}(n^r,\log q)$ time, assuming GRH and that $s=\Omega (\sqrt{n/2^r})$. Thus, for $r=O(1)$ our algorithm is polynomial time. Further, for $r=\Omega (\log \log n)$ there are infinitely many prime degrees $n$ for which our algorithm is applicable and better than the best known, assuming GRH. Our methods build on the algebraic-combinatorial framework of $m$-schemes initiated by Ivanyos, Karpinski and Saxena (ISSAC 2009). We show that the $m$-scheme on $n$ points, implicitly appearing in our factoring algorithm, has an exceptional structure, leading us to the improved time complexity. Our structure theorem proves the existence of small intersection numbers in any association scheme that has many relations, and roughly equal valencies and indistinguishing numbers.
We prove the existence of a function $f :\mathbb{N} \to \mathbb{N}$ such that the vertices of every planar graph with maximum degree $\Delta$ can be 3-coloured in such a way that each monochromatic component has at most $f(\Delta)$ vertices. This is best possible (the number of colours cannot be reduced and the dependence on the maximum degree cannot be avoided) and answers a question raised by Kleinberg, Motwani, Raghavan and Venkatasubramanian in 1997. Our result extends to graphs of bounded genus.
Let $G(q)$ be a finite Chevalley group, where $q$ is a power of a good prime $p$, and let $U(q)$ be a Sylow $p$-subgroup of $G(q)$. Then a generalized version of a conjecture of Higman asserts that the number $k(U(q))$ of conjugacy classes in $U(q)$ is given by a polynomial in $q$ with integer coefficients. In [S. M. Goodwin and G. Röhrle, J. Algebra 321 (2009) 3321–3334], the first and the third authors of the present paper developed an algorithm to calculate the values of $k(U(q))$. By implementing it in a computer program using $\mathsf{GAP}$, they were able to calculate $k(U(q))$ for $G$ of rank at most five, thereby proving that for these cases $k(U(q))$ is given by a polynomial in $q$. In this paper we present some refinements and improvements of the algorithm that allow us to calculate the values of $k(U(q))$ for finite Chevalley groups of rank six and seven, except $E_7$. We observe that $k(U(q))$ is a polynomial in $q$, so that the generalized Higman conjecture holds for these groups. Moreover, if we write $k(U(q))$ as a polynomial in $q-1$, then the coefficients are non-negative.
Under the assumption that $k(U(q))$ is a polynomial in $q-1$, we also give an explicit formula for the coefficients of $k(U(q))$ of degrees zero, one and two.
In this paper, a rigid–flexible planar parallel manipulator (PPM) actuated by three linear ultrasonic motors for high-accuracy positioning is proposed. Based on the extended Hamilton's principle, a rigid–flexible dynamic model of the proposed PPM is developed utilizing exact boundary conditions. To derive an appropriate low-order dynamic model for controller design, the assumed-modes method is employed to discretize the elastic motion. To address the interaction between the rigid and elastic motions, a proportional–derivative feedback controller combined with a feed-forward computed-torque controller is developed to achieve motion tracking while attenuating the residual vibration. The controller is then extended with an input shaper to further suppress the residual vibration of the flexible linkages. Computer simulations and experimental results are presented to verify the proposed dynamic model and controller. The input-shaping method proves effective in attenuating residual vibration in this highly coupled rigid–flexible PPM. The procedure employed for dynamic modeling and control analysis provides a valuable contribution to the vibration suppression of such PPMs.
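The abstract does not specify which shaper is used; as one common choice, a zero-vibration (ZV) shaper in the Singer–Seering style can be sketched as follows, where the mode frequency and damping ratio are placeholders.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Impulse sequence of a zero-vibration (ZV) input shaper for a mode with
    natural frequency wn [rad/s] and damping ratio zeta (Singer-Seering)."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    t2 = np.pi / (wn * np.sqrt(1.0 - zeta**2))    # half the damped period
    amps = np.array([1.0, K]) / (1.0 + K)         # amplitudes sum to 1
    shaper = np.zeros(int(round(t2 / dt)) + 1)
    shaper[0], shaper[-1] = amps
    return shaper

dt = 0.001
cmd = np.ones(2000)                               # raw step command
shaped = np.convolve(cmd, zv_shaper(wn=2 * np.pi * 5, zeta=0.02, dt=dt))
# 'shaped' is a two-stage step whose second stage cancels the residual
# vibration excited by the first, for the modelled mode.
```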
The zeros of certain different sequences of orthogonal polynomials interlace in a well-defined way. The study of this phenomenon, and of the conditions under which it holds, leads to a set of points that can serve as bounds for the extreme zeros of the polynomials. We consider different sequences of the discrete orthogonal Meixner and Kravchuk polynomials and use mixed three-term recurrence relations, satisfied by the polynomials under consideration, to identify bounds for the extreme zeros of Meixner and Kravchuk polynomials.
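As a numerical companion (using an assumed standard normalisation, $K_n(x;p,N)={}_2F_1(-n,-x;-N;1/p)$, rather than the paper's notation), the following sketch builds Kravchuk polynomials from the terminating hypergeometric series and checks the classical interlacing of the zeros of consecutive polynomials of the same sequence.

```python
import numpy as np

def kravchuk_poly(n, p, N):
    """Kravchuk polynomial K_n(x; p, N) via its 2F1 series (assumed normalisation
    K_n(x) = 2F1(-n, -x; -N; 1/p)), built as a numpy polynomial in x."""
    poly = np.poly1d([0.0])
    coef = 1.0                                     # (-n)_k / ((-N)_k k! p^k)
    pochh = np.poly1d([1.0])                       # (-x)_k as a polynomial in x
    for k in range(n + 1):
        poly = poly + coef * pochh
        pochh = pochh * np.poly1d([-1.0, float(k)])        # multiply by (-x + k)
        coef *= (-n + k) / ((-N + k) * (k + 1) * p)
    return poly

p, N = 0.4, 12
z6 = np.sort(kravchuk_poly(6, p, N).roots.real)    # zeros are known to be real
z7 = np.sort(kravchuk_poly(7, p, N).roots.real)
print(np.all(z7[:-1] < z6) and np.all(z6 < z7[1:]))        # interlacing holds
```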
This paper examines whether and to what extent data-driven learning (DDL) activities can improve the lexico-grammatical use of abstract nouns in L2 writing. A topic-based corpus was compiled to develop concordance learning activities, and 40 Chinese students majoring in English were randomly assigned to a control group or an experimental group. At the prewriting stage, both groups were given a list of five abstract nouns: the experimental group was provided with paper-based concordance lines to study the collocations of the words, while the control group was allowed to consult dictionaries for the usage of the words. The written texts of the pre-test, immediate post-test, and delayed post-test were analysed and compared between and within groups. The results showed that the written output by the experimental group, as compared with the control group, contained a higher variety of collocational and colligational patterns and had fewer linguistic errors in using the target abstract nouns. The post-experiment learning journals and questionnaires administered to the experimental group further confirmed that concordance activities encouraged usage-based learning, helped students notice the lexical collocations and prepositional colligations of the target words, and thus improved accuracy and complexity in their productive language. Despite these positive findings, potential problems of using concordance activities for independent learning were also reflected in the students’ written output and reported in the learning journals.
Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important subject of study in programming languages. Building on this work, recent advances in self-adjusting computation have developed techniques that enable programs to respond automatically and efficiently to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a reasonably broad range of problems, but the approach still requires an explicit programming style, where the programmer must use specific monadic types and primitives to identify, create, and operate on data that can change over time. We describe techniques for automatically translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer need only annotate the (top-level) input types of the programs to be translated. Type inference finds all other types, and a type-directed translation rewrites the source program into an explicitly self-adjusting target program. The type system is related to information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting programs and preserves the source program's input–output behavior, guaranteeing that translated programs respond correctly to all changes to their data. Using a cost semantics, we also prove that the translation preserves the asymptotic complexity of the source program.
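To give a feel for the explicit style that the translation automates, here is a toy Python illustration of modifiable cells with change propagation; real self-adjusting systems use dynamic dependence graphs and memoisation to avoid redundant re-execution, and all names here are hypothetical.

```python
# Toy 'modifiable reference': computations register themselves as readers of
# the cells they depend on, and a write re-runs the dependent computations.
class Mod:
    def __init__(self, value):
        self.value, self.readers = value, []

    def read(self, reader):
        self.readers.append(reader)        # record the dependency
        return self.value

    def write(self, value):                # change propagation: re-run readers
        if value != self.value:
            self.value = value
            for reader in list(self.readers):
                reader()

xs = [Mod(v) for v in (1, 2, 3)]
total = Mod(0)

def compute_sum():
    total.value = sum(x.read(compute_sum) for x in xs)

compute_sum()
xs[1].write(10)                            # an input change propagates automatically
print(total.value)                         # 14
```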
Biology has contradictory relationships with randomness. First, it is a complex issue for an empirical science to ensure that apparently random events are truly random, this being further complicated by the loose definitions of unpredictability used in the discipline. Second, biology is made up of many different fields, which have different traditions and procedures for considering random events. Randomness is in many ways an inherent feature of evolutionary biology and genetics. Indeed, chance/Darwinian selection principles, as well as the combinatorial genetic lottery leading to gametes and fertilisation, rely, at least partially, on probabilistic laws that refer to random events. On the other hand, molecular biology has long been based on deterministic premises that have led to a focus on the precision of molecular interactions to explain phenotypes, and, consequently, to the relegation of randomness to the marginal status of ‘noise’. However, recent experimental results, as well as new theoretical frameworks, have challenged this view and may provide unifying explanations by acknowledging the intrinsic stochastic dimension of intracellular pathways as a biological parameter, rather than just as background noise. This should lead to a significant reappraisal of the status of randomness in the life sciences, and have important consequences on research strategies for theoretical and applied biology.
Under a variety of names, and in a more or less explicit form, the concept that we now call ‘probability’ must have taken shape in the minds of human beings since the dawn of thought, as a nuance added to the idea of chance (randomness) or unpredictability, though chance may not be exactly the right word. Some time later, the concepts of what we now describe as ‘statistics’ and ‘statistically stable’ moved away from the idea of ‘chance’ and came closer to something else, which was called ‘probability’ and has been fuzzily conceived as being, in some sense, abstract and ‘ideal’. Throughout history it has been felt that unpredictability can have degrees, and that it can be measured using probabilities.
We examine the construction of joint probabilities for non-commuting observables. We show that there are indications in standard quantum mechanics that imply the existence of conditional expectation values, which in turn implies the existence of a joint distribution. We also argue that the uncertainty principle has no bearing on the existence of joint distributions but only constrains the marginal distributions. In addition, we show that within classical probability theory there are mathematical quantities that are similar to quantum mechanical wave functions. This is shown by generalising a theorem of Khinchin on the necessary and sufficient conditions for a function to be a characteristic function.
We discuss some recent results related to the deduction of a suitable probabilistic model for the description of the statistical features of a given deterministic dynamics. More precisely, we motivate and investigate the computability of invariant measures and some related concepts. We also present some experiments investigating the limits of naive simulations in dynamics.
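A classic, easily reproduced example of the limits of naive simulation: under binary floating point, every orbit of the chaotic doubling map collapses to the fixed point 0, because doubling merely shifts a finite binary expansion.

```python
# The doubling map x -> 2x mod 1 is chaotic, yet every IEEE double orbit
# reaches 0 after at most ~53 iterations (plus a few for the exponent):
# both the doubling and the 'mod 1' are exact in binary floating point,
# so each step consumes one bit of the finite mantissa.
x = 0.123456789
for i in range(60):
    x = (2 * x) % 1.0
print(x)   # 0.0 -- the simulated orbit is eventually constant, unlike the true dynamics
```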
In this paper, we discuss the crucial but little-known fact that, as Kolmogorov himself claimed, the mathematical theory of probabilities cannot be applied to factual probabilistic situations. This is because it is nowhere specified how, for any given particular random phenomenon, we should construct, effectively and without circularity, the specific and stable distribution law that gives the individual numerical probabilities for the set of possible outcomes. Furthermore, we do not even know what significance we should attach to the simple assertion that such a distribution law “exists”. We call this problem Kolmogorov's aporia.
We provide a solution to this aporia in this paper. To do this, we first propose a general interpretation of the concept of probability on the basis of an example, and then develop it into a non-circular and effective general algorithm of semantic integration for the factual probability law involved in a specific factual probabilistic situation. The development of the algorithm starts from the fact that the concept of probability, unlike a statistic, does not apply to naturally pre-existing situations but is a conceptual artefact that ensures, locally in space and time, a predictability that is more stable and definite than that permitted by primary statistical data.
The algorithm, which is constructed within a method of relativised conceptualisation, leads to a probability distribution expressed in rational numbers and involving a sort of quantification of the factual concept of probability. Furthermore, it also provides a definite meaning to the simple assertion that a factual probability law exists. We also show that the semantic integration algorithm is compatible with the weak law of large numbers.
The results we give provide a complete solution to Kolmogorov's aporia. They also define a concept of probability that is explicitly organised into a semantic, epistemological and syntactic whole. In a broader context, our results can be regarded as a strong, pragmatic and operational specification of Karl Popper's propensity interpretation of probabilities.
In this paper, I discuss the extent to which Kolmogorov drew upon von Mises' work in addressing the problem of why probability is applicable to events in the real world, which I refer to as the problem of the applicability of probability, or the applicability problem for short. In particular, I highlight the role of randomness in Kolmogorov's account, and I argue that this role differs significantly from the role that randomness plays in von Mises' account.
Quantum computation and quantum computational logics give rise to some non-standard probability spaces that are interesting from a formal point of view. In this framework, events represent quantum pieces of information (qubits, quregisters, mixtures of quregisters), while operations on events are identified with quantum logic gates (which correspond to dynamic reversible quantum processes). We investigate the notion of Shi–Aharonov quantum computational algebra. This structure plays the role for quantum computation that is played by σ-complete Boolean algebras in classical probability theory.
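As a minimal illustration (in standard linear-algebra terms rather than the paper's algebraic formalism), a qubit is a unit vector in $\mathbb{C}^2$, quantum logic gates are unitaries, and the probability assigned to an event is a squared amplitude:

```python
import numpy as np

# A qubit as a unit vector in C^2, gates as unitaries, and the probability
# of the event 'outcome is 1' as a squared amplitude.
ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)            # quantum NOT gate
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                    # a genuinely quantum piece of information
p_true = abs(psi[1]) ** 2         # probability of reading |1>
print(p_true)                     # 0.5
```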
Statistical entropy was introduced by Shannon as a basic concept in information theory measuring the average missing information in a random source. Extended into an entropy rate, it gives bounds in coding and compression theorems. In this paper, I describe how statistical entropy and entropy rate relate to other notions of entropy that are relevant to probability theory (entropy of a discrete probability distribution measuring its unevenness), computer sciences (algorithmic complexity), the ergodic theory of dynamical systems (Kolmogorov–Sinai or metric entropy) and statistical physics (Boltzmann entropy). Their mathematical foundations and correlates (the entropy concentration, Sanov, Shannon–McMillan–Breiman, Lempel–Ziv and Pesin theorems) clarify their interpretation and offer a rigorous basis for maximum entropy principles. Although often ignored, these mathematical perspectives give a central position to entropy and relative entropy in statistical laws describing generic collective behaviours, and provide insights into the notions of randomness, typicality and disorder. The relevance of entropy beyond the realm of physics, in particular for living systems and ecosystems, is yet to be demonstrated.
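For concreteness, the Shannon entropy of a discrete distribution is straightforward to compute; the sketch below shows that a uniform distribution maximises it while a skewed one lowers it (for an i.i.d. source, the entropy rate equals this per-symbol entropy).

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i of a discrete distribution, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                           # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))         # 1.0 bit: uniform, maximal entropy
print(shannon_entropy([0.9, 0.1]))         # ~0.47 bits: skewed, lower entropy
```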
In this paper we propose a quantum random number generator (QRNG) that uses an entangled photon pair in a Bell singlet state and is certified explicitly by value indefiniteness. While ‘true randomness’ is a mathematical impossibility, the certification by value indefiniteness ensures that the quantum random bits are incomputable in the strongest sense. This is the first QRNG setup in which a physical principle (Kochen–Specker value indefiniteness) guarantees that no single quantum bit that is produced can be classically computed (reproduced and validated), which is the mathematical form of bitwise physical unpredictability.
We discuss the effects of various experimental imperfections in detail: in particular, those related to detector efficiencies, context alignment and temporal correlations between bits. The analysis is very relevant for the construction of any QRNG based on beam-splitters. By measuring the two entangled photons in maximally misaligned contexts and using the fact that two bitstrings, rather than just one, are obtained, more efficient and robust unbiasing techniques can be applied. We propose a robust and efficient procedure based on XORing the bitstrings together – essentially using one as a one-time-pad for the other – to extract random bits in the presence of experimental imperfections, as well as a more efficient modification of the von Neumann procedure for the same task. We also discuss some open problems.
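A small sketch of the two unbiasing steps mentioned above, assuming independent, identically biased raw bits (a simplification of the experimental setting):

```python
import numpy as np

# XORing two independent bitstrings: if the individual biases from 1/2 are
# e1 and e2, the XORed string has bias 2*e1*e2 (the piling-up lemma).
# Von Neumann's procedure maps bit pairs 01 -> 0, 10 -> 1 and discards 00/11,
# removing bias entirely for independent identically biased bits, at the
# cost of discarding bits.
def xor_extract(a, b):
    return a ^ b

def von_neumann(bits):
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

rng = np.random.default_rng(1)
a = (rng.random(100_000) < 0.55).astype(int)   # two biased 'raw' bitstrings
b = (rng.random(100_000) < 0.55).astype(int)
print(xor_extract(a, b).mean(), von_neumann(a).mean())   # both close to 0.5
```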