This paper investigates the active vertical motion of biped systems and its significance for the balance of biped robots, an effect commonly neglected through the use of the well-known Linear Inverted Pendulum Model. The feasible step location is estimated theoretically by considering active vertical movement on a simple point-mass model. Based on this estimate, we present two new strategies, namely the flexion strategy and the extension strategy, which enable biped robots to restore balance through active upward and downward motions. The analytical results demonstrate that the robot can recover from much larger disturbances using our proposed methods. Simulations of the simple point-mass model validate our analysis. In addition, prototype controllers incorporating the proposed strategies have been implemented on a simulated humanoid robot. Numerical simulations on both the simple point-mass model and the realistic humanoid model confirm the effectiveness of the proposed strategies.
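The step-location analysis builds on the point-mass pendulum abstraction. As a minimal illustration of that abstraction (the standard capture-point quantity of the Linear Inverted Pendulum Model, not the paper's flexion/extension strategies), the sketch below estimates where the foot must be placed to absorb a push; the CoM height `z0` and the push velocity are hypothetical values:

```python
import math

def capture_point(x, x_dot, z0=0.8, g=9.81):
    """Instantaneous capture point of the Linear Inverted Pendulum Model.

    x, x_dot : horizontal CoM position (m) and velocity (m/s)
    z0       : constant CoM height assumed by the LIPM (m)
    g        : gravitational acceleration (m/s^2)
    """
    omega = math.sqrt(g / z0)   # natural frequency of the LIPM
    return x + x_dot / omega    # stepping here brings the CoM to rest

# A forward push producing 0.5 m/s of CoM velocity from upright stance:
step = capture_point(0.0, 0.5)
```

Active vertical motion changes the effective pendulum height, which is one way to see why the strategies in the paper enlarge the set of recoverable disturbances.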
Autonomic computing is emerging as a significant new approach to the design of computer services. Its goal is the development of services that are able to manage themselves with minimal direct human intervention, and, in particular, are able to sense their environment and to tune themselves to meet end-user needs. However, the impact on performance of the interaction between multiple uncoordinated self-optimizing services is not yet well understood. We present some recent results on a non-cooperative load-balancing game which help to better understand the result of this interaction. In this game, users generate jobs of different services, and the jobs have to be processed on one of the servers of a computing platform. Each service has its own dispatcher which probabilistically routes jobs to servers so as to minimize the mean processing cost of its own jobs. We first investigate the impact of heterogeneity in the amount of incoming traffic routed by dispatchers and present a result stating that, for a fixed amount of total incoming traffic, the worst-case overall performance occurs when each dispatcher routes the same amount of traffic. Using this result we then study the so-called Price of Anarchy (PoA), an oft-used worst-case measure of the inefficiency of non-cooperative decentralized architectures. We give explicit bounds on the PoA for cost functions representing the mean delay of jobs when the service discipline is PS or SRPT. These bounds indicate that significant performance degradations can result from the selfish behavior of self-optimizing services. In practice, though, the worst-case scenario may occur rarely, if at all. Some recent results suggest that for the game under consideration the PoA is an overly pessimistic measure that does not reflect the performance obtained in most instances of the problem.
We study G-networks with positive and negative customers and signals. We consider two types of signals: a signal can make a subnetwork of queues either operational or down. Since signals are sent by queues after a customer service completion, one can model the availability of a subnetwork of queues controlled by another network of queues. We prove that, under the classical assumptions for G-networks together with assumptions on the rerouting probabilities when a subnetwork is not operational, the steady-state distribution, if it exists, has product form. Some examples are given.
Mobile networks are universally used for personal communications, but are also increasingly used in the Internet of Things and machine-to-machine applications to access and control critical services. However, they are particularly vulnerable to signaling storms, triggered by malfunctioning applications, malware, or malicious behavior, which can disrupt access to the infrastructure. Such storms differ from conventional denial of service attacks, since they overload the control plane rather than the data plane, rendering traditional detection techniques ineffective. Thus, in this paper we describe the manner in which storms happen and their causes, and propose a detection framework that utilizes traffic measurements and key performance indicators to identify misbehaving mobile devices in real time. The detection algorithm is based on the random neural network, a probabilistic computational model with efficient learning algorithms. Simulation results are provided to illustrate the effectiveness of the proposed scheme.
The cyclic coordinate descent (CCD) method is a popular loop closure method in protein structure modeling. It is a robotics algorithm originally developed for inverse kinematics applications. We demonstrate an effective method of building the backbone of protein structure models using the principle of CCD and a guiding trace. For medium-resolution 3-dimensional (3D) images derived using cryo-electron microscopy (cryo-EM), it is possible to obtain guiding traces of secondary structures and their skeleton connections. Our new method, constrained cyclic coordinate descent (CCCD), builds α-helices, β-strands, and loops quickly and fairly accurately along predefined traces. We show that it is possible to build the entire backbone of a protein fairly accurately when the guiding traces are accurate. In a test of 10 proteins, the models constructed using CCCD show an average backbone root mean square deviation (RMSD) of 3.91 Å. When the CCCD method is incorporated into a simulated annealing framework to sample possible shift, translation, and rotation freedom, the models built with the true topology were ranked high on the list, with an average backbone RMSD100 of 3.76 Å. CCCD is an effective method for modeling atomic structures after secondary structure traces and skeletons are extracted from 3D cryo-EM images.
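The core CCD move, rotating one joint at a time so that the chain tip swings toward a target, can be sketched for a planar kinematic chain as follows. This is the generic inverse-kinematics form of the algorithm, not the constrained CCCD variant with guiding traces; the chain lengths and target are hypothetical:

```python
import math

def fk(angles, lengths):
    """Forward kinematics of a planar chain with relative joint angles:
    returns the positions of every joint plus the chain tip."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for ang, l in zip(angles, lengths):
        a += ang
        x += l * math.cos(a)
        y += l * math.sin(a)
        pts.append((x, y))
    return pts

def ccd(angles, lengths, target, iters=200, tol=1e-6):
    """Cyclic coordinate descent: sweep from the distal joint inward,
    rotating each joint so the joint-to-tip direction aligns with the
    joint-to-target direction, until the tip is close enough."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            jx, jy = pts[i]
            tx, ty = pts[-1]
            a_tip = math.atan2(ty - jy, tx - jx)
            a_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += a_tgt - a_tip
        tip = fk(angles, lengths)[-1]
        if math.hypot(tip[0] - target[0], tip[1] - target[1]) < tol:
            break
    return angles
```

CCCD additionally constrains each placed atom to stay near the guiding trace; the sketch above shows only the unconstrained closure step.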
Motivated by applications in areas such as cloud computing and information technology services, we consider GI/GI/1 queueing systems under workloads (arrival and service processes) that vary according to one discrete time scale and under controls (server capacity) that vary according to another discrete time scale. We take a stochastic optimal control approach and formulate the corresponding optimal dynamic control problem as a stochastic dynamic program. Under general assumptions for the queueing system, we derive structural properties for the optimal dynamic control policy, establishing that the optimal policy can be obtained through a sequence of convex programs. We also derive fluid and diffusion approximations for the problem and propose analytical and computational approaches in these settings. Computational experiments demonstrate the benefits of our theoretical results over standard heuristics.
This paper gives a general and logical analysis of the expert position in design research, by which methods for innovative design are derived from expert design practices. It first gives a framework for characterising accounts of design by the way in which they define and relate general, descriptive, and prescribed types of design practices. Second, it uses this framework to analyse the expert position's conservatism in prescribing existing expert design practices to non-expert designers. Third, it argues that the expert status of expert designers does not by itself justify prescribing expert design practices to non-expert designers; it is shown that this justification needs support from empirical testing. Fourth, it discusses the validation of design methods in order to present an approach to this testing. One consequence of the need to empirically test the expert position is that its prescription has to be formulated in more detail. Another consequence is that it undermines the expert position, since expert design practices are no longer certain sources from which to derive design methods. Yet it also opens the expert position to other sources for developing design methods for innovation, such as the practices of contemporary designers and the insights of design researchers.
Population growth has brought a large increase in the number and variety of vehicles, creating a need for efficient vehicle classification systems for use in traffic surveillance and intelligent transportation systems. In this study, a multi-type vehicle classification system based on Random Neural Networks (RNNs) and Bag-Of-Visual-Words (BOVW) features is developed. A 10-fold cross-validation technique is used, with a large dataset, to assess the proposed approach. Moreover, the BOVW–RNN's classification performance is compared with that of LIVCS, a vehicle classification system based on RNNs. The results reveal that the BOVW–RNN system produces more reliable and accurate classifications than LIVCS. The main contribution of this paper is that the developed system can serve as a framework for many vehicle classification systems.
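The bag-of-visual-words step can be sketched as follows: local image descriptors are quantized against a codebook (here a tiny hypothetical two-word codebook; real systems learn it, typically with k-means over SIFT-like descriptors) and the normalized histogram of codeword counts becomes the feature vector fed to the classifier:

```python
def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words encoding: assign each local descriptor to its
    nearest codeword (squared Euclidean distance) and return the
    normalized histogram of codeword counts."""
    hist = [0] * len(codebook)
    for d in descriptors:
        best = min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(d, codebook[k])))
        hist[best] += 1
    n = sum(hist) or 1
    return [h / n for h in hist]

# Three toy 2-D descriptors against a two-word codebook:
h = bovw_histogram([(1, 0), (9, 10), (0, 1)], [(0, 0), (10, 10)])
```

In the paper's pipeline this histogram, rather than raw pixels, is the input presented to the RNN classifier.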
The purpose of this paper is to analyze the so-called back-off technique of the IEEE 802.11 protocol in broadcast mode with waiting queues. In contrast to existing models, packets arriving while a station (or node) is in back-off state are not discarded, but are stored in a buffer of infinite capacity. As in previous studies, the key point of our analysis hinges on the assumption that time on the channel is viewed as a random succession of transmission slots (whose duration corresponds to the length of a packet) and mini-slots during which the back-off counter of the station is decremented. These events occur independently, with given probabilities. The state of a node is represented by a two-dimensional discrete-time Markov chain, formed by the back-off counter and the number of packets at the station. Two models are proposed, both of which are shown to capture reasonably well the physical principles of the protocol. The stability (ergodicity) conditions are obtained and interpreted in terms of maximum throughput. Several approximations related to these models are also discussed.
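As a toy illustration of the slot/mini-slot abstraction (not the paper's exact Markov chain or its stability analysis), the following simulation tracks the two components of the state, back-off counter and queue length, with a buffer that stores rather than discards arrivals; the arrival probability, decrement probability, and contention window below are hypothetical:

```python
import random

def simulate_backoff(p_arrival, p_minislot, cw=16, slots=100_000, seed=1):
    """Discrete-time toy model of a single station: in each slot an
    arrival joins the infinite buffer with prob. p_arrival; if the
    station holds packets, a zero back-off counter is redrawn uniformly
    from 1..cw, and otherwise decrements with prob. p_minislot.
    Reaching zero transmits one packet. Returns the time-average
    queue length."""
    random.seed(seed)
    queue, backoff, total = 0, 0, 0
    for _ in range(slots):
        if random.random() < p_arrival:
            queue += 1
        if queue > 0:
            if backoff == 0:
                backoff = random.randint(1, cw)  # fresh back-off draw
            elif random.random() < p_minislot:
                backoff -= 1
                if backoff == 0:
                    queue -= 1                   # packet transmitted
        total += queue
    return total / slots
```

As expected for a stable queue, the time-average backlog grows with the arrival rate, which is the qualitative behavior the paper's ergodicity conditions delimit.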
Motivated by applications to multi-antenna wireless networks, we propose a distributed and asynchronous algorithm for stochastic semidefinite programming. This algorithm is a stochastic approximation of a continuous-time matrix exponential scheme which is further regularized by the addition of an entropy-like term to the problem's objective function. We show that the resulting algorithm converges almost surely to an ɛ-approximation of the optimal solution requiring only an unbiased estimate of the gradient of the problem's stochastic objective. When applied to throughput maximization in wireless systems, the proposed algorithm retains its convergence properties under a wide array of mobility impediments such as user update asynchronicities, random delays and/or ergodically changing channels. Our theoretical analysis is complemented by extensive numerical simulations, which illustrate the robustness and scalability of the proposed method in realistic network conditions.
We consider a random permutation drawn from the set of 132-avoiding permutations of length n and show that the number of occurrences of another pattern σ has a limit distribution, after scaling by n^(λ(σ)/2), where λ(σ) is the length of σ plus the number of descents. The limit is not normal, and can be expressed as a functional of a Brownian excursion. Moments can be found by recursion.
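For small n the objects in question are easy to enumerate by brute force. The sketch below counts pattern occurrences directly and checks the classical fact that 132-avoiding permutations of length n are counted by the Catalan numbers (used here only as a sanity check, not as part of the paper's limit argument):

```python
from itertools import combinations, permutations

def occurrences(perm, sigma):
    """Number of occurrences of the pattern sigma in perm: index subsets
    whose values appear in the same relative order as sigma."""
    k = len(sigma)
    order = sorted(range(k), key=lambda i: sigma[i])
    count = 0
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        if sorted(range(k), key=lambda i: vals[i]) == order:
            count += 1
    return count

def avoids_132(perm):
    """A permutation is 132-avoiding iff the pattern 132 never occurs."""
    return occurrences(perm, (1, 3, 2)) == 0

# 132-avoiding permutations of length 5: the Catalan number C_5 = 42.
n = 5
av = [p for p in permutations(range(1, n + 1)) if avoids_132(p)]
```

Sampling uniformly from such a list and histogramming `occurrences(p, sigma)` for a fixed σ gives a finite-n picture of the scaled limit distribution studied in the paper.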
The random neural network is a biologically inspired neural model in which neurons interact by probabilistically exchanging positive and negative unit-amplitude signals, and it has superior learning capabilities compared with other artificial neural networks. This paper considers non-negative least-squares supervised learning in this context and develops an approach that achieves fast execution and excellent learning capacity. The speedup is the result of significant enhancements in the solution of the non-negative least-squares problem, namely (a) the development of analytical expressions for evaluating the gradient and objective functions and (b) a novel limited-memory quasi-Newton solution algorithm. Simulation results on optimizing the performance of a disaster management problem using supervised learning verify the efficiency of the approach, achieving a two-orders-of-magnitude execution speedup and improved solution quality compared with state-of-the-art algorithms.
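For orientation, the optimization problem being solved is min_x ||Ax − b||² subject to x ≥ 0. A minimal pure-Python projected-gradient sketch is shown below; the paper's limited-memory quasi-Newton solver with analytical gradients converges far faster, and the matrix and right-hand side here are hypothetical:

```python
def nnls_pg(A, b, steps=5000):
    """Projected gradient descent for min ||Ax - b||^2 subject to x >= 0.
    A is a list of rows. The step size 1/(2*||A||_F^2) is conservative
    but guarantees convergence on this convex problem."""
    m, n = len(A), len(A[0])
    frob2 = sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    lr = 1.0 / (2.0 * frob2)
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by projection onto the non-negative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Unconstrained least squares would give x = (1, -1); the non-negativity
# constraint moves the solution to (0.5, 0).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -1.0, 0.0]
x = nnls_pg(A, b)
```

The gap between this first-order baseline and a quasi-Newton method on the same problem is exactly where the reported execution speedups come from.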
We study layered queueing systems comprising two interlacing finite M/M/•-type queues, in which the users of each layer are the servers of the other layer. Examples can be found in file-sharing programs, the SETI@home project, etc. Let L_i denote the number of users in layer i, i = 1, 2. We consider the following operating modes: (i) all users present in layer i join forces to form a single server for the users in layer j (j ≠ i), with overall service rate μ_j L_i (which changes dynamically as a function of the state of layer i); (ii) each user present in layer i individually acts as a server for the users in layer j, with service rate μ_j.
These operating modes lead to three different models, which we analyze by formulating them as finite level-dependent quasi-birth-and-death processes. We develop a procedure based on matrix-analytic methods to compute the steady-state probabilities of the two-dimensional system state. Numerical examples, including mean queue sizes, mean waiting times, covariances, and loss probabilities, are presented. The models are compared and their differences are discussed.
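The full models are two-dimensional QBDs and require the matrix-analytic machinery above, but the flavor of level-dependent balance is easy to convey in one dimension: for a finite birth-death chain, detailed balance yields the steady-state probabilities by a simple product recursion. The rates below are hypothetical:

```python
def birth_death_steady_state(lam, mu):
    """Steady state of a finite birth-death chain with level-dependent
    rates: lam[n] is the birth rate out of state n (n = 0..K-1) and
    mu[n] is the death rate out of state n+1. Detailed balance gives
    pi[n+1] = pi[n] * lam[n] / mu[n]; normalize at the end."""
    pi = [1.0]
    for l, m in zip(lam, mu):
        pi.append(pi[-1] * l / m)
    s = sum(pi)
    return [p / s for p in pi]

# Mode (i) of the paper scales the service rate with the other layer's
# population; here, a toy chain whose death rate grows with the level:
pi = birth_death_steady_state(lam=[1.0, 1.0, 1.0], mu=[1.0, 2.0, 3.0])
```

In the actual layered models the death rate of one layer depends on the current level of the other, which is what forces the level-dependent QBD formulation rather than a scalar recursion.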
This paper introduces a special collection of research contributions written to honour Professor Erol Gelenbe on the occasion of his 70th birthday, which was celebrated on September 20–25, 2015 at Imperial College London, where he holds the Dennis Gabor Professorship and is Head of the Intelligent Systems and Networks Group in the Department of Electrical and Electronic Engineering (see http://san.ee.ic.ac.uk/Gelenbe2015). All but three of the fourteen papers presented here were written by his students and academic descendants. These papers are centered on probability models related to computer systems and networks, an area in which Erol plays a pioneering role and remains very active, with many innovations and new directions introduced over the last ten or twelve years. We first briefly review Erol's work in these areas, and then discuss each of the contributions that appear here, together with their links to Erol's own published research.
Markov chains (MCs) are widely used to model systems which evolve by visiting the states in their state spaces following the available transitions. When such systems are composed of interacting subsystems, they can be mapped to a multi-dimensional MC in which each subsystem normally corresponds to a different dimension. Usually the reachable state space of the multi-dimensional MC is a proper subset of its product state space, that is, the Cartesian product of its subsystem state spaces. Compact storage of the matrix underlying such an MC and efficient implementation of analysis methods using Kronecker operations require the set of reachable states to be represented as a union of Cartesian products of subsets of subsystem state spaces. The problem of partitioning the reachable state space of a three- or higher-dimensional system into a minimum number of Cartesian products of subsets of subsystem state spaces is shown to be NP-complete. Two algorithms, one merge-based and the other refinement-based, that may yield non-optimal partitionings are presented. Results of experiments on a set of problems from the literature and on randomly generated problems indicate that, although it may consume more time and memory, the refinement-based algorithm almost always computes partitionings with a smaller number of partitions than the merge-based algorithm. The refinement-based algorithm is insensitive to the order in which the states in the reachable state space are processed, and in many cases it computes partitionings that are optimal.
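In two dimensions the task is straightforward: grouping first-coordinate values by the exact set of second-coordinate values they reach yields a valid Cartesian-product partitioning of the reachable set; the hardness result above applies from three dimensions up. A sketch of this two-dimensional grouping (not the paper's merge- or refinement-based algorithms):

```python
from collections import defaultdict

def cartesian_partition_2d(states):
    """Partition a set of reachable 2-D states into Cartesian products
    X_k x Y_k by grouping first coordinates that reach exactly the same
    set of second coordinates."""
    rows = defaultdict(set)
    for x, y in states:
        rows[x].add(y)                 # reachable column set of each row
    groups = defaultdict(set)
    for x, ys in rows.items():
        groups[frozenset(ys)].add(x)   # rows with identical column sets merge
    return [(sorted(xs), sorted(ys)) for ys, xs in groups.items()]

# A reachable set that is not itself a Cartesian product:
parts = cartesian_partition_2d({(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)})
```

The compact representation is what lets the transition matrix be stored and multiplied as sums of Kronecker products over the parts.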
The introduction by Gelenbe of the class of queueing networks called G-networks has been a breakthrough in the field of stochastic modeling, since it has largely expanded the class of models which are analytically or numerically tractable. From a theoretical point of view, the introduction of G-networks has led to two very important observations: first, a product-form queueing network may have non-linear traffic equations; second, a product-form equilibrium distribution may exist even if the customer routing is defined in such a way that more than two queues can change their states at the same time epoch. In this work, we review some of the classes of product-forms introduced for the analysis of G-networks, with special attention to these two aspects. We propose a methodology that, coherently with the product-form result, allows for a modular analysis of the G-queues to derive the equilibrium distribution of the network.
Previous studies on emergency management of large-scale urban networks have commonly concentrated on off-loading intensive computations to remote cloud servers or on improving communication quality during a disaster, while ignoring the energy consumption of vehicles, which can play a vital role in large-scale evacuation owing to disruptions in the energy supply. Hence, in this paper we propose a cloud-enabled navigation system that directs vehicles to safe areas in the aftermath of a disaster in an energy- and time-efficient fashion. A G-network model is employed to mimic the behaviors of and interactions between individual vehicles and the navigation system, and to analyze the effect of re-routing decisions on the vehicles. A gradient descent optimization algorithm gradually reduces the evacuation time and fuel consumption of vehicles by optimizing the probabilistic choice of linked road segments at each intersection. Re-routing decisions arrive at the intersections periodically and expire after a short period: when a vehicle reaches an intersection, it follows the latest re-routing decision if that decision has not expired; otherwise, it sticks to the shortest path to its destination. The experimental results indicate that the proposed algorithm reduces the evacuation time and the overall fuel consumption, especially when the number of evacuated vehicles is large.
This article presents how the compiler from the OCaml-Java project generates Java bytecode from OCaml sources. Targeting the Java Virtual Machine (JVM) is a technological challenge, but gives access to a platform where OCaml can leverage multiple cores and access numerous libraries. We present the main design choices regarding the runtime and the various optimizations performed by the compiler that are crucial to get decent performance on a JVM. The challenge is indeed not only to generate bytecode but to generate efficient bytecode, and to provide a runtime library whose memory footprint does not impede the efficiency of the garbage collector. We focus on the strategies that differ from the original OCaml compiler, as the constraints are quite different on the JVM when compared to native code. The level of performance reached by the OCaml-Java compiler is assessed through benchmarks, comparing with both the original OCaml implementation and the Scala language.
This paper focuses on fast terminal sliding mode control (FTSMC) of robot manipulators using wavelet neural networks (WNNs) with guaranteed H∞ tracking performance. The FTSMC for trajectory tracking is employed to drive the tracking error of the system to an equilibrium point in finite time: the tracking error reaches the sliding surface in finite time and then converges to zero in finite time along the sliding surface. To deal with uncertain and unknown robot dynamics, a WNN is proposed to fully compensate the robot dynamics. The online tuning algorithms for the WNN parameters are derived using a Lyapunov approach. To attenuate the effect of approximation errors to a prescribed level, H∞ tracking performance is introduced. It is shown that the proposed WNN is able to learn the system dynamics with guaranteed H∞ tracking performance and finite-time convergence for trajectory tracking. Finally, simulations on a 3D-Microbot manipulator demonstrate the effectiveness of the controller.
Since the 1980s, metaphor has been recognized as a pervasive phenomenon in communication, by no means restricted to rhetorical and linguistic phenomena, and involving structured concepts, relations, and matching 'rules'. Metaphor resolution, that is, metaphor understanding, as well as metaphor creation, has become an issue in the automated processing and understanding of natural language and of mixed visual communication. It can be shown to be a process of structure finding and mapping between the conceptual denotation–connotation structures necessary for interpretation. Creative abduction is then shown to be the pattern of inference required to work out structure mappings between corresponding nodes as they appear in metaphors. In this paper, we review some key issues (definitions, typologies, theoretical problems) involving the concept of 'metaphor' and survey some definitions and concepts emerging in the contemporary debate on abductive inference. Finally, we argue that the metaphor understanding process can be treated as a tractable fusion problem, allowing the exploitation of frameworks and algorithms from that domain.