We prove that no stochastic domination holds between the effective resistance of a spherically symmetric random tree and that of a branching-process-in-varying-environment tree when the two grow according to the same law.
It is shown that, under fairly weak conditions, the current solutions of the graph-based ant system, a metaheuristic following the ant colony optimization paradigm, converge with a probability that can be made arbitrarily close to unity to an element of the set of optimal solutions. The result generalizes a previous one by removing the very restrictive condition that both the optimal solution and its encoding are unique (a generalization that makes the proof distinctly more difficult) and by allowing a wide class of implementation variants in the first phase of the algorithm. In this way, the range of application of the convergence result is considerably extended.
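To make the setting concrete, the following is a minimal, generic sketch of an ant-system-style iteration: solutions are built by pheromone-biased random walks on a construction graph, pheromone evaporates, and the iteration-best walk is reinforced. This is an illustrative simplification, not the paper's exact graph-based ant system; the parameter names (`rho`, `ants`) and the toy encoding are assumptions.

```python
import random

def ant_system_step(pheromone, candidates, evaluate, rho=0.1, ants=10, rng=random):
    """One iteration of a generic ant-system-style update.

    pheromone: dict mapping solution components to weights.
    candidates(partial): list of feasible next components (empty = done).
    evaluate(solution): quality of a complete solution (higher is better).
    """
    solutions = []
    for _ in range(ants):
        sol = []
        while True:
            opts = candidates(sol)
            if not opts:
                break
            weights = [pheromone.get(c, 1.0) for c in opts]
            sol.append(rng.choices(opts, weights=weights)[0])
        solutions.append(sol)
    best = max(solutions, key=evaluate)
    for c in list(pheromone):          # evaporation on all known components
        pheromone[c] *= 1.0 - rho
    for c in best:                     # reinforce the iteration-best walk
        pheromone[c] = pheromone.get(c, 1.0) + rho * evaluate(best)
    return best
```

Repeated over many iterations, the pheromone concentrates on components of good solutions, which is the mechanism whose convergence the abstract's result concerns.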
We consider a single-server queue with exponential service times and two types of arrivals: positive and negative. Positive customers are regular customers who join the queue, whereas a negative arrival removes one positive customer from the system. In many applications it is more appropriate to allow dependence between the positive and negative arrival processes. To capture this dependence, we assume that the positive and negative arrivals are governed by a finite-state Markov chain with two absorbing states, say 0 and 0′. Absorption into state 0 or 0′ corresponds to the arrival of a positive or a negative customer, respectively. The Markov chain is then instantly restarted in a transient state, where the selection of the new state may depend on the state from which absorption occurred.
The Laplace–Stieltjes transforms (LSTs) of the sojourn time distribution of a customer, jointly with the probability that the customer completes service without being removed, are derived under combinations of the service disciplines FCFS and LCFS and the removal strategies RCE (removal of the customer at the end of the queue) and RCH (removal of the customer at the head). Service distributions of phase type are also considered.
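The arrival mechanism described above can be sketched in code. The following is a discrete-time simplification (the abstract's model is continuous-time): a chain moves among transient states until it is absorbed in `'0'` (positive arrival) or `"0'"` (negative arrival), then restarts in a transient state chosen according to where absorption occurred. The data-structure layout is an assumption made for illustration.

```python
import random

def simulate_arrivals(P, restart, start, n, seed=1):
    """Simulate n arrival types from the absorbing-chain model.

    P[i]: dict of transition probabilities from transient state i over
          transient states plus the absorbing states '0' and "0'".
    restart[(i, kind)]: transient restart state after absorbing from
          state i with an arrival of the given kind ('+' or '-').
    """
    rng = random.Random(seed)
    arrivals, s = [], start
    while len(arrivals) < n:
        states = list(P[s])
        nxt = rng.choices(states, weights=[P[s][t] for t in states])[0]
        if nxt in ('0', "0'"):
            kind = '+' if nxt == '0' else '-'
            arrivals.append(kind)
            s = restart[(s, kind)]     # restart may depend on where absorption occurred
        else:
            s = nxt
    return arrivals
```

Because the restart state depends on the pre-absorption state, runs of positive and negative arrivals become correlated, which is exactly the dependence the model is designed to capture.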
We consider the problem of selecting a stopping time τ which determines when to exit an investment project when the project's cumulative profit up to time t is Xt, where {Xt : t ≥ 0} is a Brownian motion with drift μ and variance σ². The profit rate μ never changes over time, but μ is not directly observable. Specifically, μ takes the value μH > 0 when in the high state and μL < 0 when in the low state, and the initial probability p0 that the project is in the high state is known. The decision-maker seeks to maximize the expected discounted profit up to time τ. Using the theory of stochastic differential equations, we show that it is optimal to exit only when the posterior probability Pt of being in the high state falls below a critical number p*, and we produce a simple, closed form for p*. Our most surprising comparative-statics result is that the expected discounted profit increases with |μL|, provided |μL| is large.
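For a two-point prior on the drift of a Brownian motion, the posterior probability of the high state has a standard closed form in terms of the current observation alone, since Xt is a sufficient statistic. The sketch below implements that textbook filtering formula; it illustrates the posterior Pt the abstract refers to, but is not claimed to reproduce the paper's own derivation.

```python
import math

def posterior_high(x_t, t, p0, mu_h, mu_l, sigma):
    """Posterior probability that the drift is mu_h given X_t = x_t.

    Likelihood ratio for Brownian motion with two candidate drifts:
    exp(((mu_h - mu_l) / sigma**2) * (x_t - (mu_h + mu_l) * t / 2)).
    """
    llr = ((mu_h - mu_l) / sigma**2) * (x_t - (mu_h + mu_l) * t / 2)
    lam = math.exp(llr)
    return p0 * lam / (p0 * lam + 1 - p0)
```

The exit rule in the abstract then reads: stop as soon as `posterior_high(...)` falls below the critical threshold p*.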
In the present article, we develop some efficient bounds for the distribution function of a two-dimensional scan statistic defined on a (double) sequence of independent and identically distributed (i.i.d.) binary trials. The methodology employed here takes advantage of the connection between the scan statistic problem and an equivalent reliability structure and exploits appropriate techniques of reliability theory to establish tractable bounds for the distribution of the statistic of interest. An asymptotic result is established and a numerical study is carried out to investigate the efficiency of the suggested bounds.
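For orientation, a two-dimensional scan statistic on a binary array can be evaluated by brute force as the maximum number of successes in any window of fixed size; the bounds developed in the article approximate the distribution of this quantity. The window-shape convention below is an assumption for illustration.

```python
def scan_statistic_2d(grid, m1, m2):
    """Maximum number of 1s in any m1-by-m2 window of the binary
    array `grid` (a list of equal-length rows of 0s and 1s)."""
    n1, n2 = len(grid), len(grid[0])
    best = 0
    for i in range(n1 - m1 + 1):
        for j in range(n2 - m2 + 1):
            count = sum(grid[i + a][j + b]
                        for a in range(m1) for b in range(m2))
            best = max(best, count)
    return best
```

The distribution function studied in the article is P(scan statistic ≤ k) over i.i.d. Bernoulli entries, which brute force can only estimate by simulation; hence the interest in tractable bounds.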
Biological Robotics covers the interface between robotics and behavioural biology. The first international conference in this area took place in Paris in 1990. This meeting led to the development of what is now called the Society for Adaptive Behaviour (SAB), which holds an international meeting every two years.
Two key abilities of most living organisms are detecting and exploiting natural sources of energy within their environment. We are currently developing a robotic system that will attempt to sustain itself by hunting and catching slugs on agricultural land. A microbial fuel cell will be used to convert the slug biomass to electricity, thus providing the robot's energy supply. This paper outlines the requirements for such a predator and describes the entire robot. Data are also presented from trials of the robot hunting and catching slugs in a situation similar to that found in agricultural fields.
The first biologically inspired robots, the famous electromechanical tortoises, were designed and built in 1949 by W. Grey Walter. This paper reviews their origins in Walter's theories of the brain and the nature of life, and uses contemporary unpublished notes and photographs to assess their significance then and now.
The desert ant Cataglyphis can explore its desert habitat for hundreds of meters while foraging and then return to its nest precisely and in a straight line. With a body less than 10 mm long and a brain of less than one cubic millimeter, these ants pose a challenge for hi-tech engineers. In this article, we give an overview of our attempts to model parts of their navigation behavior using robots and computer simulations. Inspired by the insect's navigation system, we have developed mechanisms for path integration and visual piloting that were successfully employed on the mobile robot Sahabot 2.
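The core of path integration is simple vector summation: each step's heading and distance are accumulated into a running displacement, whose negation is the homing vector. The sketch below shows that idea in its most basic form; the real insect (and the Sahabot implementation) works with noisy compass and odometry cues, which this idealized version omits.

```python
import math

def path_integrate(steps):
    """Accumulate a home vector from (heading_radians, distance) steps.

    Returns (home_heading, home_distance): the direction and distance
    an agent must travel to return to its starting point, i.e. the
    negative of the summed displacement vector.
    """
    x = sum(d * math.cos(h) for h, d in steps)
    y = sum(d * math.sin(h) for h, d in steps)
    home_dist = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)   # points back toward the start
    return home_heading, home_dist
```

Visual piloting then corrects the residual error that such dead reckoning inevitably accumulates.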
This work introduces the concept of a brush-based tractor for rescue in collapsed buildings or tunnels. The paper presents the bristle theory and traction experiments relating to a robot that uses a sensor system for detecting the shapes of pipes or tunnel-like voids within rubble. Laboratory traction experiments were used to investigate the characteristics of bristles and the performance of brush units of different shapes. The experimental results are discussed and related to a single-bristle theory, with a view to guiding the design of a future brush-based rescue robot for use in debris.
This paper discusses robots that are operational within a human-inhabited environment. Specifically, we identify different roles that such robots can adopt, reflecting different human-robot relationships. We discuss three different roles of robots in a project where we develop a robot as a therapeutic tool for children with autism: the robot as a therapeutic playmate, the robot as a social mediator, and the robot as a model social agent. Implications of these roles that go beyond this particular project are discussed.
This paper demonstrates a new method of sampling environmental variables and mapping underwater currents using a flock of robots dropped into the ocean. The project involves developing a set of robots as the tools for data collection, and a system that can interpret the data collected. As the robots descend, they record the desired sensory information, such as salinity, temperature, pressure and the presence of specific chemicals, along with other variables required for the system to function. Once at a predefined depth, the robots start ascending, continuing to collect data until they reach the surface. Back on the surface, they communicate the collected data to a base computer on a ship or on land, via satellite or other means. The base computer collects the data from every recovered member of the flock and reconstructs the path each took underwater. From these paths it estimates the vectors of the oceanic currents. The environmental variables can be displayed in 3D for the sampled area.
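The final estimation step can be sketched as follows: if a probe has no horizontal propulsion, its horizontal velocity between track samples approximates the current at the corresponding depth. This finite-difference version is an illustrative simplification of the reconstruction described above, and the sample format is an assumption.

```python
def current_vectors(track):
    """Estimate horizontal current vectors along a reconstructed track.

    track: list of (t, x, y, depth) samples in consistent units.
    Returns (mean_depth, u, v) triples, where (u, v) is the horizontal
    velocity between consecutive samples, taken as the local current.
    """
    out = []
    for (t0, x0, y0, d0), (t1, x1, y1, d1) in zip(track, track[1:]):
        dt = t1 - t0
        out.append(((d0 + d1) / 2, (x1 - x0) / dt, (y1 - y0) / dt))
    return out
```

Binning these triples by depth across the whole flock would then yield the 3D current map the paper aims for.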
In this paper we describe a prototype underwater robot capable of altering its density by heating oil. It is designed to gather information in the ocean, as a member of a flock of such robots. We also describe the forces acting on the robot, and suggest a preliminary control model.
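The physics behind density-based depth control is Archimedes' principle: heating the oil expands it, increasing displaced volume and hence buoyancy. The sketch below uses a generic linear thermal-expansion coefficient typical of mineral oil; the numbers are illustrative assumptions, not specifications of the prototype.

```python
def oil_volume(v0, temp, t_ref=20.0, beta=7e-4):
    """Oil bladder volume after heating, via a linear volumetric
    expansion model (beta ~7e-4 per kelvin is typical of mineral oil;
    an assumed figure, not a parameter of the actual robot)."""
    return v0 * (1 + beta * (temp - t_ref))

def net_buoyant_force(mass, hull_volume, oil_v0, temp,
                      rho_water=1025.0, g=9.81):
    """Net upward force on the submerged robot (newtons, SI units);
    positive values mean it ascends, negative that it sinks."""
    displaced = hull_volume + oil_volume(oil_v0, temp)
    return rho_water * displaced * g - mass * g
```

A controller would modulate the heater to hold this force near zero at the target depth.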
This paper presents an investigation of odor localization by groups of autonomous mobile robots using principles of Swarm Intelligence. First, we describe a distributed algorithm by which groups of agents can solve the full odor localization task more efficiently than a single agent. Next, we demonstrate that a group of real robots under fully distributed control can successfully traverse a real odor plume, and that an embodied simulator can faithfully reproduce these real-robot experiments. Finally, we use the embodied simulator combined with a reinforcement learning algorithm to optimize performance across group size, showing that it can be useful not only for improving real-world odor localization, but also for quantitatively characterizing the influence of group size on task performance.
Pure Type Systems (PTS) come in two flavours: domain-free systems with untyped $\lambda$-abstractions (i.e. of the form $\lambda x.\,M$), and domain-full systems with typed $\lambda$-abstractions (i.e. of the form $\lambda x{:}A.\,M$). The two flavours are related by an erasure function $|\cdot|$ that removes the domains from $\lambda$-abstractions. Preservation of Equational Theory, which states that the equational theories of both systems coincide through the erasure function, is a property of functional and normalizing PTSs. In this paper we establish that Preservation of Equational Theory fails for some non-normalizing PTSs, including the PTS with $\ast:\ast$. The gist of our argument is to exhibit a typable expression $Y_H$ whose erasure $|Y_H|$ is a fixpoint combinator, although $Y_H$ itself is not.
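For concreteness, erasure acts homomorphically and removes only abstraction domains (the specific term $Y_H$ is constructed in the paper and not reproduced here):
$$|x| = x, \qquad |M\,N| = |M|\,|N|, \qquad |\lambda x{:}A.\,M| = \lambda x.\,|M|.$$
Recall that a fixpoint combinator is a term $Y$ satisfying $Y\,f =_{\beta} f\,(Y\,f)$ for every $f$; the classic untyped example is $\mathsf{Y} = \lambda f.\,(\lambda x.\,f\,(x\,x))\,(\lambda x.\,f\,(x\,x))$. The paper's point is that a typed term can erase to such a combinator without itself satisfying this equation.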
An oligonucleotide probe is a short piece of single-stranded DNA complementary to the target gene whose expression is measured on the microarray by that probe. In most microarray applications, oligonucleotide probes are between 20 and 60 bases long. The probes are either spotted onto the array or synthesised in situ, depending on the microarray platform (Chapter 1).
Usually, oligonucleotide probes for microarrays are designed within several hundred bases of the 3′ end of the target gene sequence. So for a fixed oligonucleotide length, there are several hundred potential oligonucleotides, one for each possible starting base. Some of these oligonucleotides work better than others as probes on a microarray. This chapter describes methods for the computer selection of good oligonucleotide probes.
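Enumerating the candidates described above is a simple sliding window over the 3′ end of the target sequence. The sketch below is illustrative; the default window and region lengths are the typical figures quoted in the text, not prescribed values.

```python
def candidate_probes(gene_seq, probe_len=50, region=300):
    """Enumerate candidate oligonucleotide probes near the 3' end of a
    target gene sequence: one candidate per possible starting base
    within the last `region` bases."""
    tail = gene_seq[-region:]
    return [tail[i:i + probe_len] for i in range(len(tail) - probe_len + 1)]
```

Probe selection then reduces to scoring each candidate for sensitivity, specificity and melting behaviour, as described next.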
What Makes a Good Oligonucleotide Probe?
Good oligonucleotide probes have three properties: they are sensitive, specific and isothermal.
A sensitive probe is one that returns a strong signal when the complementary target is present in the sample. There are two factors that determine the sensitivity of a probe:
▪ The probe does not have internal secondary structure or bind to other identical probes on the array.
▪ The probe is able to access its complementary sequence in the target, which could potentially be unavailable as a result of secondary structure in the target.
A specific probe is one that returns a weak signal when the complementary target is absent from the sample; i.e., it does not cross-hybridise. There are two factors that determine the specificity of a probe:
▪ Cross-hybridisation to other targets as a result of Watson–Crick base-pairing
▪ Non-specific binding to the probe; e.g., as a result of G-quartets
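Some of these properties can be screened with simple computable heuristics. The functions below are illustrative rules of thumb only (the Wallace rule is valid only for short oligos, and the hairpin check is a crude self-complementarity proxy); they are not the selection criteria developed in this chapter.

```python
def gc_content(probe):
    """Fraction of G/C bases in the probe."""
    return sum(b in 'GC' for b in probe) / len(probe)

def wallace_tm(probe):
    """Wallace-rule melting temperature, Tm = 2(A+T) + 4(G+C) deg C;
    a rough heuristic, reasonable only for short oligonucleotides."""
    at = sum(b in 'AT' for b in probe)
    gc = sum(b in 'GC' for b in probe)
    return 2 * at + 4 * gc

def reverse_complement(seq):
    """Watson-Crick reverse complement of a DNA sequence."""
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(comp[b] for b in reversed(seq))

def has_hairpin(probe, stem=6):
    """Crude internal-secondary-structure check: does any stem-length
    substring occur reverse-complemented within the probe?"""
    for i in range(len(probe) - stem + 1):
        if reverse_complement(probe[i:i + stem]) in probe:
            return True
    return False
```

Isothermal design, the third property, amounts to choosing probes whose estimated melting temperatures fall in a narrow common range.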
DNA array technology is almost fifteen years old, and still rapidly evolving. It is one of very few platforms capable of matching the scale of sequence data produced by genome sequencing. Applications range from analysing single-base changes (SNPs) to detecting deletion or amplification of large segments of the genome (CGH). At present, its most widespread use is in the analysis of gene expression levels. When carried out globally on all the genes of an organism, this analysis exposes its molecular anatomy with unprecedented clarity. In basic research, it reveals gene activities associated with biological processes and groups genes into networks of interconnected activities. There have been practical outcomes, too. Most notably, large-scale expression analysis has revealed genes associated with disease states, such as cancer, informed the design of new methods of diagnosis, and provided molecular targets for drug development.
At face value, the method is appealingly simple. An array is no more than a set of DNA reagents for measuring the amounts of their sequence counterparts, among them RNAs, in a sample. However, the quality of the result is affected by several factors, including the quality of the array and the sample, the uniformity of the hybridisation process, and the method of reading signals. Errors, inevitable at each stage, must be taken into account in the design of the experiment and in the interpretation of results. It is here that the scientist needs the help of advanced statistical tools.
Dr. Stekel is a mathematician with several years of experience in the microarray field. He has used his expertise in a company setting, developing advanced methods for probe design and for the analysis of large, complex data sets.