There are two radically different approaches to robot navigation: the first uses a map of the robot's environment; the second uses a set of action reflexes that enable the robot to react rapidly to local sensory information. Hybrid approaches combining features of both also exist. This book is the first to propose a method for evaluating the different approaches and deciding which is the most appropriate for a given situation. It begins by describing a complete implementation of a mobile robot, including sensor modelling, map-building (a feature-based map and a grid-based free-space map), localisation, and path-planning. Exploration strategies are then tested experimentally in a range of environments and starting positions. The author shows that the most promising results come from hybrid exploration strategies, which combine the robustness of reactive navigation with the directive power of map-based strategies.
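As an illustration of the grid-based free-space map mentioned above, here is a minimal sketch (not the book's implementation) of a Bayesian occupancy-grid update in log-odds form; the grid size, the inverse sensor-model probabilities and the function names are assumptions made for the example.

```python
# Minimal occupancy-grid sketch (illustrative only, not the book's implementation).
# Each cell stores the log-odds that it is occupied; a range measurement makes
# the cells the beam passes through more likely free and the cell it hits more
# likely occupied. The sensor-model probabilities below are assumed values.
import numpy as np

P_HIT, P_MISS = 0.7, 0.4                     # assumed inverse sensor model
L_HIT = np.log(P_HIT / (1 - P_HIT))
L_MISS = np.log(P_MISS / (1 - P_MISS))

grid = np.zeros((100, 100))                  # log-odds; 0 means unknown (p = 0.5)

def update_cell(grid, cell, occupied):
    """Bayesian log-odds update of a single cell from one measurement."""
    i, j = cell
    grid[i, j] += L_HIT if occupied else L_MISS

def occupancy_probabilities(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-grid))

# Example: a beam traverses three free cells and ends at an occupied one.
for c in [(50, 50), (50, 51), (50, 52)]:
    update_cell(grid, c, occupied=False)
update_cell(grid, (50, 53), occupied=True)
print(occupancy_probabilities(grid)[50, 50:55])
```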
Machine learning is an interdisciplinary field of science and engineering that studies mathematical theories and practical applications of systems that learn. This book introduces the theory, methods and applications of density ratio estimation, a newly emerging paradigm in the machine learning community. Various machine learning problems, such as non-stationarity adaptation, outlier detection, dimensionality reduction, independent component analysis, clustering, classification and conditional density estimation, can be systematically solved via the estimation of probability density ratios. The authors offer a comprehensive introduction to various density ratio estimators, including methods based on density estimation, moment matching, probabilistic classification, density fitting and density ratio fitting, and describe how these can be applied to machine learning. The book also provides mathematical theory for density ratio estimation, including parametric and non-parametric convergence analysis and numerical stability analysis, making it the first comprehensive treatment of the entire framework of density ratio estimation in machine learning.
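As a concrete illustration of one of the estimator families mentioned above, the following minimal sketch (not taken from the book) estimates a density ratio via probabilistic classification; the Gaussian toy data, the sample sizes and the use of scikit-learn's logistic regression are assumptions made for the example.

```python
# Sketch: density ratio estimation via probabilistic classification.
# Train a classifier to separate samples drawn from p (numerator) and q
# (denominator); Bayes' rule then gives p(x)/q(x) up to the class-prior ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_p = rng.normal(0.0, 1.0, size=(1000, 1))   # samples from the numerator density p
x_q = rng.normal(0.5, 1.2, size=(1500, 1))   # samples from the denominator density q

X = np.vstack([x_p, x_q])
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])  # 1 = "from p", 0 = "from q"

clf = LogisticRegression().fit(X, y)

def density_ratio(x):
    """Estimate r(x) = p(x) / q(x) from the classifier's posteriors."""
    post = clf.predict_proba(np.asarray(x).reshape(-1, 1))  # columns ordered as classes_ = [0, 1]
    prior_correction = len(x_q) / len(x_p)                   # corrects for unequal sample sizes
    return prior_correction * post[:, 1] / post[:, 0]

print(density_ratio([0.0, 1.0]))
```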
This edited volume provides an overview of the latest advancements in adaptive training technology. Intelligent tutoring has been deployed for well-defined and relatively static educational domains such as algebra and geometry. However, this adaptive approach to computer-based training has yet to come into wider usage for domains that are less well defined or where student-system interactions are less structured, such as during scenario-based simulation and immersive serious games. In order to address how to expand the reach of adaptive training technology to these domains, leading experts in the field present their work in areas such as student modeling, pedagogical strategy, knowledge assessment, natural language processing and virtual human agents. Several approaches to designing adaptive technology are discussed for both traditional educational settings and professional training domains. This book will appeal to anyone concerned with educational and training technology at a professional level, including researchers, training systems developers and designers.
The European Molecular Biology Open Software Suite (EMBOSS) is a high-quality package of open source software tools for molecular biology. It includes over 200 applications integrated with a range of popular third-party software packages under a consistent and powerful command line interface. The tools are available from a wide range of graphical interfaces, including easy-to-use web interfaces and powerful workflow software. The EMBOSS Administrator's Guide is the official, definitive and comprehensive guide to EMBOSS installation and maintenance. It provides all the information needed to configure, install and maintain EMBOSS, including recent additions for version 6.2; step-by-step instructions with real-world examples that save readers time and help them avoid the pitfalls on all the common platforms; an in-depth reference to database configuration, explaining how to set up and use databases under EMBOSS; and the EMBOSS Frequently Asked Questions (FAQ) with answers, so that solutions to common problems can be found quickly.
A normalization procedure is given for classical natural deduction with the standard rule of indirect proof applied to arbitrary formulas. For normal derivability and the subformula property, it is sufficient to permute down instances of indirect proof whenever they have been used for concluding a major premiss of an elimination rule. The result applies even to natural deduction for classical modal logic.
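For reference, the standard rule of indirect proof referred to here is the classical rule that concludes a formula by discharging its negation once absurdity has been derived (shown below in the usual natural-deduction notation, not necessarily the paper's own):

\[
\dfrac{\begin{matrix}[\neg A]\\ \vdots\\ \bot\end{matrix}}{A}
\]

The normalization result permutes instances of this rule downwards whenever their conclusion serves as the major premiss of an elimination rule.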
This book is an up-to-date introduction to simple theories and hyperimaginaries, with special attention to Lascar strong types and the problem of elimination of hyperimaginaries. Assuming only a knowledge of general model theory, the book presents the foundations of forking, stability and simplicity in full detail. The treatment of the topics is as general as possible, working with stable formulas and types and assuming stability or simplicity of the theory only when necessary. The author offers an introduction to independence relations as well as a full account of canonical bases of types in stable and simple theories. In the last chapters the notions of internality and analyzability are discussed and used to provide a self-contained proof of elimination of hyperimaginaries in supersimple theories.
Bayesian probability theory has emerged not only as a powerful tool for building computational theories of vision, but also as a general paradigm for studying human visual perception. This 1996 book provides an introduction to and critical analysis of the Bayesian paradigm. Leading researchers in computer vision and experimental vision science describe general theoretical frameworks for modelling vision, detailed applications to specific problems and implications for experimental studies of human perception. The book provides a dialogue between different perspectives both within chapters, which draw on insights from experimental and computational work, and between chapters, through commentaries written by the contributors on each other's work. Students and researchers in cognitive and visual science will find much to interest them in this thought-provoking collection.
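In its generic form (stated here as background rather than as any particular chapter's model), the paradigm casts perception as inference of a scene description S from image data I via Bayes' rule:

\[
P(S \mid I) \;=\; \frac{P(I \mid S)\,P(S)}{P(I)} \;\propto\; P(I \mid S)\,P(S),
\]

where the likelihood P(I | S) encodes image formation and the prior P(S) encodes assumptions about the structure of the world.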
Taking the view that infinite plays are draws, we study Conway non-terminating games and non-losing strategies. These admit a sharp coalgebraic presentation, where non-terminating games are seen as a final coalgebra and game constructors, such as disjunctive sum, as final morphisms. We have shown, in a previous paper, that Conway's theory of terminating games can be rephrased naturally in terms of game (pre)congruences. Namely, various conceptually independent notions of equivalence can be defined and shown to coincide on Conway's terminating games. These are the equivalence induced by the ordering on surreal numbers, the contextual equivalence determined by observing which player has a winning strategy, Joyal's categorical equivalence, and, for impartial games, the denotational equivalence induced by Grundy semantics. In this paper, we discuss generalizations of such equivalences to non-terminating games and non-losing strategies. The scenario is even richer and more intriguing in this case. In particular, we investigate efficient characterizations of the contextual equivalence, and we introduce a category of fair strategies and a category of fair pairs of strategies, both generalizing Joyal's category of Conway games and winning strategies. Interestingly, the category of fair pairs captures the equivalence defined by Berlekamp, Conway and Guy on loopy games.
Let H be a graph on n vertices and let the blow-up graph G[H] be defined as follows. We replace each vertex vi of H by a cluster Ai and connect some pairs of vertices of Ai and Aj if (vi,vj) is an edge of the graph H. As usual, we define the edge density between Ai and Aj as d(Ai,Aj) = e(Ai,Aj)/(|Ai||Aj|), where e(Ai,Aj) denotes the number of edges between Ai and Aj. We study the following problem. Given densities γij for each edge (vi,vj) ∈ E(H), one has to decide whether there exists a blow-up graph G[H], with edge densities at least γij, such that one cannot choose a vertex from each cluster so that the obtained graph is isomorphic to H, i.e., no H appears as a transversal in G[H]. We call dcrit(H) the maximal value for which there exists a blow-up graph G[H] with edge densities d(Ai,Aj) = dcrit(H) ((vi,vj) ∈ E(H)) not containing H in the above sense. Our main goal is to determine this critical edge density and to characterize the extremal graphs.
First, in the case of a tree T, we give an efficient algorithm to decide whether a given set of edge densities ensures the existence of a transversal copy of T in the blow-up graph. Then we give general bounds on dcrit(H) in terms of the maximal degree. In connection with the extremal structure, the so-called star decomposition is proved to give the best construction for H-transversal-free blow-up graphs for several graph classes. Our approach applies algebraic graph-theoretic, combinatorial and probabilistic tools.
We present a novel compiled approach to Normalisation by Evaluation (NBE) for ML-like languages. It supports efficient normalisation of open λ-terms with respect to β-reduction and rewrite rules. We have implemented NBE and show both a detailed formal model of our implementation and its verification in Isabelle. Finally we discuss how NBE is turned into a proof rule in Isabelle.
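To make the idea concrete, here is a minimal sketch of untyped normalisation by evaluation for β-reduction; it illustrates only the technique, not the paper's compiled ML/Isabelle implementation, and it ignores rewrite rules. All names below are invented for the example, and the generated fresh bound variables are assumed not to clash with source variable names.

```python
# Illustrative sketch of normalisation by evaluation (NBE) for the untyped
# lambda calculus: evaluate terms into a semantic domain in which functions are
# host-language closures, then reify values back into beta-normal terms.
from dataclasses import dataclass
from typing import Callable, Union

# --- Syntax -----------------------------------------------------------------
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: "Term"

@dataclass
class App:
    fun: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

# --- Semantic values --------------------------------------------------------
@dataclass
class VFun:                       # a lambda, represented by a Python closure
    f: Callable[["Value"], "Value"]

@dataclass
class VNe:                        # a "neutral" value: free variable plus arguments
    head: str
    args: tuple

Value = Union[VFun, VNe]

def apply_value(f: Value, a: Value) -> Value:
    # Beta-reduce via the host closure, or extend a stuck application.
    return f.f(a) if isinstance(f, VFun) else VNe(f.head, f.args + (a,))

def evaluate(t: Term, env: dict) -> Value:
    if isinstance(t, Var):
        return env.get(t.name, VNe(t.name, ()))
    if isinstance(t, Lam):
        return VFun(lambda v: evaluate(t.body, {**env, t.param: v}))
    return apply_value(evaluate(t.fun, env), evaluate(t.arg, env))

def reify(v: Value, k: int = 0) -> Term:
    """Read a value back into a term; k generates fresh bound-variable names."""
    if isinstance(v, VFun):
        x = f"_x{k}"              # assumed not to occur in the source term
        return Lam(x, reify(v.f(VNe(x, ())), k + 1))
    t: Term = Var(v.head)
    for a in v.args:
        t = App(t, reify(a, k))
    return t

def normalise(t: Term) -> Term:
    return reify(evaluate(t, {}))

# (\x. x) y  normalises to the free variable y
print(normalise(App(Lam("x", Var("x")), Var("y"))))
```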
This special issue of Mathematical Structures in Computer Science contains a selection of papers presented at three satellite events of CONCUR'09, which was held between 31 August and 5 September 2009 in Bologna (Italy). Specifically, it contains three papers from the 16th International Workshop on Expressiveness in Concurrency (EXPRESS'09), one paper from the 2nd Interaction and Concurrency Experience (ICE'09) and two papers from the 6th Workshop on Structural Operational Semantics (SOS'09).
Van Glabbeek and Goltz (and later Fecher) have investigated the relationships between various equivalences on stable configuration structures, including interleaving bisimulation (IB), step bisimulation (SB), pomset bisimulation and hereditary history-preserving (H-H) bisimulation. Since H-H bisimulation may be characterised by the use of reverse as well as forward transitions, it is of interest to investigate these and other forms of bisimulations where both forward and reverse transitions are allowed. Bednarczyk asked whether SB with reverse steps is as strong as H-H bisimulation. We answer this question negatively. We give various characterisations of SB with reverse steps, showing that forward steps do not add power. We strengthen Bednarczyk's result that, in the absence of auto-concurrency, reverse IB is as strong as H-H bisimulation, by showing that we need only exclude auto-concurrent events at the same depth in the configuration.
We consider several other forms of observations of reversible behaviour and define a wide range of bisimulations by mixing the forward and reverse observations. We investigate the power of these bisimulations and represent the relationships between them as a hierarchy with IB at the bottom and H-H at the top.
We introduce a new fragment of linear temporal logic (LTL) called LIO and a new class of Büchi automata (BA) called almost linear Büchi automata (ALBA). We provide effective translations between LIO and ALBA showing that the two formalisms are expressively equivalent. As we expect there to be applications of our results in model checking, we use two standard sources of specification formulae, namely Spec Patterns and BEEM, to study the practical relevance of the LIO fragment, and to compare our translation of LIO to ALBA with two standard translations of LTL to BA using alternating automata. Finally, we demonstrate that the LIO to ALBA translation can be much faster than the standard translation, and the resulting automata can be substantially smaller.
This paper presents a bisimulation-based method for establishing the soundness of equations between terms constructed using operations whose semantics are specified by rules in the GSOS format of Bloom, Istrail and Meyer. The method is inspired by de Simone's FH-bisimilarity and uses transition rules as schematic transitions in a bisimulation-like relation between open terms. The soundness of the method is proved and examples showing its applicability are provided. The proposed bisimulation-based proof method is incomplete, but we do offer some completeness results for restricted classes of GSOS specifications. An extension of the proof method to the setting of GSOS languages with predicates is also offered.
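For orientation, a rule in the GSOS format for an n-ary operation f has, roughly, the following shape (paraphrased here rather than quoted from the paper):

\[
\frac{\{\, x_i \xrightarrow{a_{ij}} y_{ij} \;\mid\; 1 \le i \le n,\ 1 \le j \le m_i \,\}
      \;\cup\;
      \{\, x_i \stackrel{b_{ik}}{\nrightarrow} \;\mid\; 1 \le i \le n,\ 1 \le k \le l_i \,\}}
     {f(x_1,\ldots,x_n) \xrightarrow{c} t}
\]

where the variables x_i and y_ij are pairwise distinct and the target t is a term built over (a subset of) these variables.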
The concurrency theory literature offers a wealth of examples of characteristic-formula constructions for various behavioural relations over finite labelled transition systems and Kripke structures that are defined in terms of fixed points of suitable functions. Such constructions and their proofs of correctness have been developed independently, but have a common underlying structure. This paper provides a general view of characteristic formulae that are expressed in terms of logics that have a facility for the recursive definition of formulae. We show how several examples of characteristic-formula constructions in the literature can be recovered as instances of the proposed general framework, and how the framework can be used to yield novel constructions. The paper also offers general results pertaining to the definition of co-characteristic formulae and of characteristic formulae expressed in terms of infinitary modal logics.
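A standard instance of such a construction, recalled here only as background (it is the classic characteristic formula for strong bisimilarity over finite-state, finitely branching processes, not necessarily one of the paper's examples), associates with each state p a formula defined by the recursive equation

\[
X_p \;=\; \bigwedge_{a,\ p \xrightarrow{a} p'} \langle a \rangle X_{p'}
\;\wedge\;
\bigwedge_{a \in \mathit{Act}} [a] \bigvee_{p \xrightarrow{a} p'} X_{p'},
\]

interpreted as a greatest fixed point; a state q then satisfies X_p exactly when q is strongly bisimilar to p.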
We prove a compactness theorem in the context of Hennessy–Milner logic and use it to derive a sufficient condition on modal characterisations for the approximation induction principle to be sound modulo the corresponding process equivalence. We show that this condition is necessary when the equivalence in question is compositional with respect to the projection operators. Furthermore, we derive different upper bounds for the constructive version of the approximation induction principle with respect to simulation and decorated trace semantics.
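For readers unfamiliar with it, the approximation induction principle, stated here in its usual form with π_n denoting the n-th projection operator, asserts that processes agreeing on all finite projections are equivalent:

\[
\bigl(\forall n \in \mathbb{N}.\ \pi_n(p) = \pi_n(q)\bigr) \;\Rightarrow\; p = q .
\]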
We define session types as projections of the behaviour of processes with respect to the operations processes perform on channels. This calls for a parallel composition operator over session types denoting the simultaneous access to a channel by two or more processes. The proposed approach allows us to define a semantically grounded theory of session types that does not require the linear usage of channels. However, type preservation and progress can only be guaranteed for processes that never receive channels they already own. A number of examples show that the resulting framework validates existing session-type theories and unifies them to some extent.
A detachment of a hypergraph is formed by splitting each vertex into one or more subvertices and sharing the incident edges arbitrarily among the subvertices. For a given edge-coloured hypergraph, we prove that there exists a detachment such that the degree of each vertex and the multiplicity of each edge in the original hypergraph (and in each of its colour classes) are shared fairly among the corresponding subvertices of the detachment (and of each of its colour classes, respectively).
Let the hypergraph in question have vertex partition {V1, . . ., Vn} with |Vi| = pi for 1 ≤ i ≤ n, and suppose that there are λi edges of size hi incident with every hi vertices, at most one from each part, for 1 ≤ i ≤ m (so no edge is incident with more than one vertex of a part). We use our detachment theorem to show that the obvious necessary conditions for this hypergraph to be expressible as the union of k edge-disjoint factors, where the ith factor is ri-regular for 1 ≤ i ≤ k, are also sufficient. Baranyai solved the case of h1 = ··· = hm, λ1 = ··· = λm = 1, p1 = ··· = pm, r1 = ··· = rk. Berge and Johnson (and later Brouwer and Tijdeman, respectively) considered (and solved, respectively) the case of hi = i for 1 ≤ i ≤ m, p1 = ··· = pm = λ1 = ··· = λm = r1 = ··· = rk = 1. We also extend our result to the case where each factor is almost regular.