This modern treatment of computer vision focuses on learning and inference in probabilistic models as a unifying theme. It shows how to use training data to learn the relationships between the observed image data and the aspects of the world that we wish to estimate, such as the 3D structure or the object class, and how to exploit these relationships to make new inferences about the world from new image data. With minimal prerequisites, the book starts from the basics of probability and model fitting and works up to real examples that the reader can implement and modify to build useful vision systems. Primarily meant for advanced undergraduate and graduate students, the detailed methodological presentation will also be useful for practitioners of computer vision.
- Covers cutting-edge techniques, including graph cuts, machine learning and multiple view geometry
- A unified approach shows the common basis for solutions of important computer vision problems, such as camera calibration, face recognition and object tracking
- More than 70 algorithms are described in sufficient detail to implement
- More than 350 full-color illustrations amplify the text
- The treatment is self-contained, including all of the background mathematics
- Additional resources at www.computervisionmodels.com
Formal systems that describe computations over syntactic structures occur frequently in computer science. Logic programming provides a natural framework for encoding and animating such systems. However, these systems often embody variable binding, a notion that must be treated carefully at a computational level. This book aims to show that a programming language based on a simply typed version of higher-order logic provides an elegant, declarative means for providing such a treatment. Three broad topics are covered in pursuit of this goal. First, a proof-theoretic framework that supports a general view of logic programming is identified. Second, an actual language called λProlog is developed by applying this view to higher-order logic. Finally, a methodology for programming with specifications is presented by showing how several computations over formal objects such as logical formulas, functional programs, λ-terms and π-calculus expressions can be encoded in λProlog.
Induction is a pervasive tool in computer science and mathematics for defining objects and reasoning about them. Coinduction is the dual of induction and as such it brings in quite different tools. Today, it is widely used in computer science, but also in other fields, including artificial intelligence, cognitive science, mathematics, modal logics, philosophy and physics. The best known instance of coinduction is bisimulation, mainly employed to define and prove equalities among potentially infinite objects: processes, streams, non-well-founded sets, etc. This book presents bisimulation and coinduction: the fundamental concepts and techniques and the duality with induction. Each chapter contains exercises and selected solutions, enabling students to connect theory with practice. A special emphasis is placed on bisimulation as a behavioural equivalence for processes. Thus the book serves as an introduction to models for expressing processes (such as process calculi) and to the associated techniques of operational and algebraic analysis.
Do you need to know how to write systems, services, and applications using the TinyOS operating system? Learn how to write nesC code and efficient applications with this indispensable guide to TinyOS programming. Detailed examples show you how to write TinyOS code in full, from basic applications right up to new low-level systems and high performance applications. Two leading figures in the development of TinyOS also explain the reasons behind many of the design decisions made and, for the first time, how nesC relates to and differs from other C dialects. Handy features such as a library of software design patterns, programming hints and tips, end-of-chapter exercises, and an appendix summarizing the basic application-level TinyOS APIs make this the ultimate guide to TinyOS for embedded systems programmers, developers, designers, and graduate students.
Distributed systems are fast becoming the norm in computer science. Formal mathematical models and theories of distributed behaviour are needed in order to understand them. This book proposes a distributed pi-calculus called Dpi, for describing the behaviour of mobile agents in a distributed world. It is based on an existing formal language, the pi-calculus, to which it adds a network layer and a primitive migration construct. A mathematical theory of the behaviour of these distributed systems is developed, in which the presence of types plays a major role. It is also shown how in principle this theory can be used to develop verification techniques for guaranteeing the behaviour of distributed agents. The text is accessible to computer scientists with a minimal background in discrete mathematics. It contains an elementary account of the pi-calculus, and the associated theory of bisimulations. It also develops the type theory required by Dpi from first principles.
Do you need to improve wireless system performance? Learn how to maximise the efficient use of resources with this systematic and authoritative account of wireless resource management. Basic concepts, optimization tools and techniques, and application examples are thoroughly described and analysed, providing a unified framework for cross-layer optimization of wireless networks. State-of-the-art research topics and emerging applications, including dynamic resource allocation, cooperative networks, ad hoc/personal area networks, UWB, and antenna array processing, are examined in depth. If you are involved in the design and development of wireless networks, as a researcher, graduate student or professional engineer, this is a must-have guide to getting the best possible performance from your network.
Improve design efficiency and reduce costs with this practical guide to formal and simulation-based functional verification. Giving you a theoretical and practical understanding of the key issues involved, expert authors including Wayne Wolf and Dan Gajski explain both formal techniques (model checking, equivalence checking) and simulation-based techniques (coverage metrics, test generation). You get insights into practical issues including hardware verification languages (HVLs) and system-level debugging. The foundations of formal and simulation-based techniques are covered too, as are more recent research advances including transaction-level modeling and assertion-based verification, plus the theoretical underpinnings of verification, including the use of decision diagrams and Boolean satisfiability (SAT).
One of the most exciting and potentially rewarding areas of scientific research is the study of the principles and mechanisms underlying brain function. It is also of great promise to future generations of computers. A growing group of researchers, adapting knowledge and techniques from a wide range of scientific disciplines, have made substantial progress understanding memory, the learning process, and self-organization by studying the properties of models of neural networks - idealized systems containing very large numbers of connected neurons, whose interactions give rise to the special qualities of the brain. This book introduces and explains the techniques brought from physics to the study of neural networks and the insights they have stimulated. It is written at a level accessible to the wide range of researchers working on these problems - statistical physicists, biologists, computer scientists, computer technologists and cognitive psychologists. The author gives a coherent and clear nonmechanical presentation of all the basic ideas and results. More technical aspects are restricted, wherever possible, to special sections and appendices in each chapter. The book is suitable as a text for graduate courses in physics, electrical engineering, computer science and biology.
The world is increasingly populated with interactive agents distributed in space, real or abstract. These agents can be artificial, as in computing systems that manage and monitor traffic or health; or they can be natural, e.g. communicating humans, or biological cells. It is important to be able to model networks of agents in order to understand and optimise their behaviour. Robin Milner describes in this book just such a model, by presenting a unified and rigorous structural theory, based on bigraphs, for systems of interacting agents. This theory is a bridge between the existing theories of concurrent processes and the aspirations for ubiquitous systems, whose enormous size challenges our understanding. The book is reasonably self-contained mathematically, and is designed to be learned from: examples and exercises abound, and solutions for the latter are provided. Like Milner's other work, this is destined to have far-reaching and profound significance.
Let A and B be two affinely generating sets of ℤ₂ⁿ. As usual, we denote their Minkowski sum by A+B. How small can A+B be, given the cardinalities of A and B? We give a tight answer to this question. Our bound is attained when both A and B are unions of cosets of a certain subgroup of ℤ₂ⁿ. These cosets are arranged as Hamming balls, the smaller of which has radius 1.
By similar methods, we re-prove the Freiman–Ruzsa theorem in ℤ₂ⁿ, with an optimal upper bound. Denote by F(K) the maximal spanning constant |〈A〉|/|A| over all subsets A ⊆ ℤ₂ⁿ with doubling constant |A+A|/|A| ≤ K. We explicitly calculate F(K), and in particular show that 4^K/(4K) ≤ F(K)·(1+o(1)) ≤ 4^K/(2K). This improves the estimate F(K) = poly(K)·4^K, found recently by Green and Tao [17] and by Konyagin [23].
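As a small illustration of the objects in the abstract above (our own sketch, not part of the paper), the Minkowski sum in ℤ₂ⁿ is the set of coordinatewise XORs, and Hamming-ball sets can be summed by brute force for small n:

```python
from itertools import product

def minkowski_sum(A, B):
    """Minkowski sum A+B in Z_2^n: all coordinatewise XORs a+b."""
    return {tuple(x ^ y for x, y in zip(a, b)) for a in A for b in B}

def hamming_ball(center, radius):
    """All vectors of Z_2^n within Hamming distance `radius` of `center`."""
    n = len(center)
    return {v for v in product((0, 1), repeat=n)
            if sum(c != x for c, x in zip(center, v)) <= radius}

n = 4
A = hamming_ball((0,) * n, 1)   # 1 + 4 = 5 elements
B = hamming_ball((0,) * n, 2)   # 1 + 4 + 6 = 11 elements
S = minkowski_sum(A, B)         # a ball of radius 3: 15 elements
print(len(A), len(B), len(S))   # 5 11 15
```

Summing two Hamming balls centred at the origin gives the ball whose radius is the sum of the radii, which is the shape of the extremal configurations the abstract describes.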
The Parikh finite word automaton (PA) was introduced and studied in 2003 by Klaedtke and Rueß. Natural variants of the PA arise from viewing a PA equivalently as an automaton that keeps a count of its transitions and semilinearly constrains their numbers. Here we adopt this view and define the affine PA, which extends the PA by having each transition induce an affine transformation on the PA registers, and the PA on letters, which restricts the PA by forcing any two transitions on the same letter to affect the registers equally. Then we report on the expressiveness, closure, and decidability properties of such PA variants. We note that deterministic PA are strictly weaker than deterministic reversal-bounded counter machines.
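To fix intuition (a hypothetical toy example, not taken from the paper), a Parikh automaton counts its transitions and accepts when the counts satisfy a semilinear constraint. A one-state sketch recognizing {w ∈ {a,b}* : #a(w) = #b(w)} looks like this; note that its updates depend only on the letter read, so it trivially satisfies the "PA on letters" restriction:

```python
def parikh_accepts(word, alphabet=("a", "b")):
    """One-state Parikh-automaton-style check: each transition bumps
    the counter of the letter it reads, and acceptance is a
    (semilinear) constraint on the final counts."""
    counts = {c: 0 for c in alphabet}
    for ch in word:
        if ch not in counts:       # no transition on unknown letters
            return False
        counts[ch] += 1            # transition update
    return counts["a"] == counts["b"]   # the semilinear constraint

print(parikh_accepts("abba"))  # True
print(parikh_accepts("aab"))   # False
```

In the affine variant described above, each transition would instead apply an affine map to the register vector rather than a simple increment.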
We investigate the possibility of extending Chrobak normal form to the probabilistic case. While in the nondeterministic case a unary automaton can be simulated by an automaton in Chrobak normal form without increasing the number of the states in the cycles, we show that in the probabilistic case the simulation is not possible while keeping the same number of ergodic states. This negative result is proved by considering the natural extension to the probabilistic case of Chrobak normal form, obtained by replacing nondeterministic choices with probabilistic choices. We then propose a different kind of normal form, namely cyclic normal form, which does not suffer from the same problem: we prove that each unary probabilistic automaton can be simulated by a probabilistic automaton in cyclic normal form, with at most the same number of ergodic states. In the nondeterministic case there are trivial simulations between Chrobak normal form and cyclic normal form, preserving the total number of states in the automata and in their cycles.
Let L_{ϕ,λ} = {ω ∈ Σ* | ϕ(ω) > λ} be the language recognized by a formal series ϕ: Σ* → ℝ with isolated cut point λ. We provide new conditions that guarantee the regularity of the language L_{ϕ,λ} in the case that ϕ is rational or ϕ is a Hadamard quotient of rational series. Moreover, the decidability property of such conditions is investigated.
We introduce and investigate string assembling systems, a computational model that generates strings from copies of units drawn from a finite set of assembly units. The underlying mechanism is based on piecewise assembly of a double-stranded sequence of symbols, where the upper and lower strands have to match. The generation is additionally controlled by the requirement that the first symbol of a unit has to be the same as the last symbol of the strand generated so far, as well as by the distinction between assembly units that may appear at the beginning, during, and at the end of the assembling process. We begin to explore the generative capacity of string assembling systems. In particular, we prove that any such system can be simulated by some nondeterministic one-way two-head finite automaton, while the stateless version of the two-head finite automaton marks to some extent a lower bound for the generative capacity. Moreover, we obtain several incomparability and undecidability results as well as (non-)closure properties, and present questions for further investigation.
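A single-strand simplification of the overlap rule above (our own sketch; the actual model is double-stranded, and the unit sets used here are hypothetical) can be simulated by a search over prefixes of a target string:

```python
def can_assemble(target, starts, internals, ends):
    """Single-strand sketch of string assembling: a unit extends the
    strand only if its first symbol equals the strand's last symbol
    (the shared symbol is written once). Separate unit sets are used
    at the start, middle, and end; the derivation must finish with an
    end unit (targets equal to a bare start unit are not handled)."""
    def extend(strand, unit):
        return strand + unit[1:] if strand[-1] == unit[0] else None

    frontier = {s for s in starts if target.startswith(s)}
    seen = set()
    while frontier:
        strand = frontier.pop()
        if strand in seen:
            continue
        seen.add(strand)
        for unit in internals | ends:
            longer = extend(strand, unit)
            if longer is None or not target.startswith(longer):
                continue                      # overlap fails or wrong prefix
            if longer == target and unit in ends:
                return True                   # finished with an end unit
            frontier.add(longer)
    return False

print(can_assemble("abcba", {"ab"}, {"bc"}, {"cba"}))  # True
print(can_assemble("abcba", {"ab"}, {"bc"}, {"ba"}))   # False
```

The pruning by `target.startswith` keeps only strands that remain prefixes of the target, mimicking how a two-head automaton could verify an assembly against its input.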
Given a prime q and a negative discriminant D, the CM method constructs an elliptic curve E/F_q by obtaining a root of the Hilbert class polynomial H_D(X) modulo q. We consider an approach based on a decomposition of the ring class field defined by H_D, which we adapt to a CRT setting. This yields two algorithms, each of which obtains a root of H_D mod q without necessarily computing any of its coefficients. Heuristically, our approach uses asymptotically less time and space than the standard CM method for almost all D. Under the GRH, and reasonable assumptions about the size of log q relative to |D|, we achieve a space complexity of O((m+n) log q) bits, where mn = h(D), which may be as small as O(|D|^{1/4} log q). The practical efficiency of the algorithms is demonstrated using |D| > 10^16 and q ≈ 2^256, and also |D| > 10^15 and q ≈ 2^33220. These examples are both an order of magnitude larger than the best previous results obtained with the CM method.
We describe an effective algorithm to compute a set of representatives for the conjugacy classes of Hall subgroups of a finite permutation or matrix group. Our algorithm uses the general approach of the so-called ‘trivial Fitting model’.
The problem of acoustic noise is becoming increasingly serious with the growing use of industrial and medical equipment, appliances, and consumer electronics. Active noise control (ANC), based on the principle of superposition, was developed in the early 20th century to help reduce noise. However, ANC is still not widely used, owing to the limited effectiveness of control algorithms and to the physical and economic constraints of practical applications. In this paper, we briefly introduce some fundamental ANC algorithms and theoretical analyses, and focus on recent advances in signal processing algorithms, implementation techniques, challenges for innovative applications, and open issues for further research and development of ANC systems.
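As a hedged illustration of the superposition principle behind ANC (a toy single-channel LMS simulation of our own; the path model, step size, and signals are invented for the example and it is not one of the algorithms surveyed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_taps, mu, n_samples = 8, 0.01, 20000
w_true = rng.normal(size=n_taps)   # unknown primary noise path (invented)
w = np.zeros(n_taps)               # adaptive controller weights
x = rng.normal(size=n_samples)     # reference noise picked up by a sensor

errors = []
for t in range(n_taps, n_samples):
    x_buf = x[t - n_taps:t][::-1]  # most recent reference samples first
    d = w_true @ x_buf             # noise arriving at the error microphone
    y = w @ x_buf                  # controller output; anti-noise -y is emitted
    e = d - y                      # residual heard after superposition
    w += mu * e * x_buf            # LMS weight update
    errors.append(e * e)

# Residual power drops by orders of magnitude as the controller adapts.
print(np.mean(errors[:100]), "->", np.mean(errors[-100:]))
```

Practical systems must additionally model the secondary path from loudspeaker to microphone (as in filtered-x LMS), which is one of the implementation issues the paper discusses.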