An association between Vi and its associated proof system (for i ≥ 1) is shown in Chapter VII by the fact that each bounded theorem of the theory Vi translates into a family of tautologies that have polynomial-size proofs in that system. Our theories and their associated proof systems are connected more deeply than the propositional translation theorems alone show. In this chapter we present some further connections between the proof systems, their associated theories and the underlying complexity classes.
In general, for each proof system F we study the principle asserting that the system is sound, i.e., that formulas that have F-proofs are valid. This is known as the Reflection Principle (RFN) for F. We will show in this chapter that the theories Vi and TVi prove the RFN for their associated proof systems when the principles are stated for a suitable class of quantified propositional formulas. Together with the Propositional Translation Theorems, this shows that those associated systems are the strongest systems (for proving such formulas) whose RFN is provable in the theories Vi and TVi, respectively.
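Schematically, writing Prf_F(π, φ) for 'π is an F-proof of φ' and Taut(φ) for 'φ is valid' (assumed formalizations, not defined in this excerpt), the Reflection Principle for F takes the form

RFN(F): ∀π ∀φ (Prf_F(π, φ) → Taut(φ)),

so proving RFN(F) in a theory amounts to proving, within that theory, that every F-provable formula is valid.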
A connection between a propositional proof system F and the complexity class C definable in the theory T associated with F is exhibited by the fact that the Witnessing Problem for F is complete for C. Recall Theorem VII.4.13, which shows that the Witnessing Problem for eFrege (and for any p-equivalent system) is solvable by a polytime algorithm.
Let G = (G, +) be an additive group. The sumset theory of Plünnecke and Ruzsa gives several relations between the size of sumsets A + B of finite sets A, B, and related objects such as iterated sumsets kA and difference sets A − B, while the inverse sumset theory of Freiman, Ruzsa, and others characterizes those finite sets A for which A + A is small. In this paper we establish analogous results in which the finite set A ⊂ G is replaced by a discrete random variable X taking values in G, and the cardinality |A| is replaced by the Shannon entropy H(X). In particular, we classify those random variables X which have small doubling in the sense that H(X1 + X2) = H(X) + O(1) when X1, X2 are independent copies of X, by showing that they factorize as X = U + Z, where U is uniformly distributed on a coset progression of bounded rank, and H(Z) = O(1).
When G is torsion-free, we also establish the sharp lower bound H(X1 + X2) ≥ H(X) + (1/2) log 2 − o(1), where o(1) goes to zero as H(X) → ∞.
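As a minimal numerical sketch (not from the paper), the following Python snippet contrasts H(X) with H(X1 + X2) for X uniform on an arithmetic progression in ℤ, a rank-1 instance of the coset progressions appearing in the classification; the observed gap is O(1), as expected in the small-doubling regime. The progression and its length are arbitrary choices for illustration.

# Entropy doubling for X uniform on an arithmetic progression.
from collections import Counter
from itertools import product
from math import log2

def entropy(dist):
    # Shannon entropy in bits of a mapping {value: probability}.
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def sum_of_independent_copies(dist):
    # Law of X1 + X2 for independent X1, X2 each distributed as `dist`.
    out = Counter()
    for (x, p), (y, q) in product(dist.items(), repeat=2):
        out[x + y] += p * q
    return out

n = 500
X = {5 * k: 1 / n for k in range(n)}   # uniform on {0, 5, 10, ..., 5(n-1)}
gap = entropy(sum_of_independent_copies(X)) - entropy(X)
print(gap)   # about 0.72 bits: H(X1 + X2) = H(X) + O(1)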
For random trees T generated by the binary search tree algorithm from uniformly distributed input we consider the subtree size profile, which maps k ∈ ℕ to the number of nodes in T that root a subtree of size k. Complementing earlier work by Devroye, by Feng, Mahmoud and Panholzer, and by Fuchs, we obtain results for the range of small k-values and the range of k-values proportional to the size n of T. In both cases emphasis is on the process view, i.e., the joint distributions for several k-values. We also show that the dynamics of the tree sequence lead to a qualitative difference between the asymptotic behaviour of the lower and the upper end of the profile.
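For concreteness, here is a small Python sketch (an illustration under our own naming, not the paper's machinery) that builds a binary search tree from a uniformly random permutation and tabulates its subtree size profile; for small k the expected counts are classically about n/3, n/6 and n/10 for k = 1, 2, 3.

# Build a BST from a uniformly random permutation and compute the map
# k -> number of nodes whose subtree contains exactly k nodes.
import random
from collections import Counter

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def profile_sizes(root, profile):
    # Returns the size of the subtree at `root`, recording it in `profile`.
    if root is None:
        return 0
    size = 1 + profile_sizes(root.left, profile) + profile_sizes(root.right, profile)
    profile[size] += 1
    return size

n = 1024
root = None
for key in random.sample(range(n), n):   # uniformly random permutation
    root = insert(root, key)
profile = Counter()
profile_sizes(root, profile)
print(profile[1], profile[2], profile[3])   # roughly n/3, n/6, n/10 on average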
This 1993 book shows how formal logic can be used to specify the behaviour of hardware designs and reason about their correctness. A primary theme of the book is the use of abstraction in hardware specification and verification. The author describes how certain fundamental abstraction mechanisms for hardware verification can be formalised in logic and used to express assertions about design correctness and the relative accuracy of models of hardware behaviour. His approach is pragmatic and driven by examples. He also includes an introduction to higher-order logic, which is a widely used formalism in this subject, and describes how that formalism is actually used for hardware verification. The book is based in part on the author's own research as well as on graduate teaching. Thus it can be used to accompany courses on hardware verification and as a resource for research workers.
Action Semantics is a novel approach to the formal description of programming languages. Its abstractness is at an intermediate level, between that of denotational and operational semantics. Action Semantics has considerable pragmatic advantages over all previous approaches, in its comprehensibility and accessibility, and especially in the usefulness of its semantic descriptions of realistic programming languages. In this volume, Dr Peter Mosses gives a thorough introduction to action semantics, and provides substantial illustrations of its use. Graduates of computer science or maths who have an interest in the semantics of programming languages will find Action Semantics a most helpful book.
This two-volume work bridges the gap between introductory expositions of logic or set theory on one hand, and the research literature on the other. It can be used as a text in an advanced undergraduate or beginning graduate course in mathematics, computer science, or philosophy. The volumes are written in a user-friendly conversational lecture style that makes them equally effective for self-study or class use. Volume 1 includes formal proof techniques, a section on applications of compactness (including nonstandard analysis), a generous dose of computability and its relation to the incompleteness phenomenon, and the first presentation of a complete proof of Gödel's 2nd incompleteness theorem since Hilbert and Bernays' Grundlagen.
Information retrieval (IR), the science of extracting information from any potential source, can be viewed in a number of ways: logical, probabilistic and vector space models are some of the most important. In this book, the author, one of the leading researchers in the area, shows how these views can be reforged in the same framework used to formulate the general principles of quantum mechanics. All the usual quantum-mechanical notions have their IR-theoretic analogues, and the standard results can be applied to address problems in IR, such as pseudo-relevance feedback, relevance feedback and ostensive retrieval. The relation with quantum computing is also examined. To keep the book self-contained, appendices with background material on physics and mathematics are included. Each chapter ends with bibliographic remarks that point to further reading. This is an important, ground-breaking book, with much new material, for all those working in IR, AI and natural language processing.
First published in 1993, this thesis is concerned with the design of efficient algorithms for listing combinatorial structures. The research described here gives some answers to the following questions: which families of combinatorial structures have fast computer algorithms for listing their members? What general methods are useful for listing combinatorial structures? How can these be applied to those families which are of interest to theoretical computer scientists and combinatorialists? Amongst those families considered are unlabelled graphs, first order one properties, Hamiltonian graphs, graphs with cliques of specified order, and k-colourable graphs. Some related work is also included, which compares the listing problem with the difficulty of solving the existence problem, the construction problem, the random sampling problem, and the counting problem. In particular, the difficulty of evaluating Pólya's cycle polynomial is demonstrated.
In this volume, which was originally published in 1996, noisy information is studied in the context of computational complexity; in other words, the text deals with the computational complexity of mathematical problems for which information is partial, noisy and priced. The author develops a general theory of computational complexity of continuous problems with noisy information and gives a number of applications; deterministic as well as stochastic noise is considered. He presents optimal algorithms, optimal information, and complexity bounds in different settings: worst case, average case, mixed worst-average and average-worst, and asymptotic. The book integrates the work of researchers in such areas as computational complexity, approximation theory and statistics, and includes many fresh results as well. About two hundred exercises are supplied with a view to increasing the reader's understanding of the subject. The text will be of interest to professional computer scientists, statisticians, applied mathematicians, engineers, control theorists, and economists.
This book, first published in 2004, describes the application of statistical physics and complex systems theory to the study of the evolution and structure of the Internet. Using a statistical physics approach the Internet is viewed as a growing system that evolves in time through the addition and removal of nodes and links. This perspective permits us to outline the dynamical theory required for a description of the macroscopic evolution of the Internet. The presence of such a theoretical framework appears to be a revolutionary and promising path towards our understanding of the Internet and the various processes taking place on this network, including, for example, the spread of computer viruses or resilience to random or intentional damage. This book will be of interest to graduate students and researchers in statistical physics, computer science and mathematics studying this subject.
Logic programming was originally based on first-order logic, but higher-order logics can also lead to theories of theorem-proving. This book introduces just such a theory, based on a lambda-calculus formulation of a clausal logic with equality, known as the Clausal Theory of Types. By restricting this logic to Horn clauses, a concise form of logic programming that incorporates functional programming is achieved. The book begins by reviewing the fundamental Skolem-Herbrand-Gödel Theorem and resolution, which are then extrapolated to a higher-order setting; this requires introducing higher-order equational unification, which builds in higher-order equational theories and uses higher-order rewriting. The logic programming language derived has the unique property of being sound and complete with respect to Henkin-Andrews general models, and consequently of treating equivalent terms as identical. First published in 1993, the book can be used for graduate courses in theorem-proving, but will be of interest to all working in declarative programming.
This two-volume work bridges the gap between introductory expositions of logic or set theory on one hand, and the research literature on the other. It can be used as a text in an advanced undergraduate or beginning graduate course in mathematics, computer science, or philosophy. The volumes are written in a user-friendly conversational lecture style that makes them equally effective for self-study or class use. Volume II, on formal (ZFC) set theory, incorporates a self-contained 'chapter 0' on proof techniques so that it is based on formal logic, in the style of Bourbaki. The emphasis on basic techniques provides the reader with a solid foundation in set theory and a context for the presentation of advanced topics such as absoluteness, relative consistency results, two expositions of Gödel's constructible universe, numerous ways of viewing recursion, and a chapter on Cohen forcing.
This is an introduction to process algebra, also known as the Algebra of Communicating Processes (ACP). It is a self-contained mathematical treatment of the theory that can be used for graduate courses, though it also has material of interest to researchers. It is a unique introduction to this model of concurrent programming and will be essential reading for all computer scientists interested in parallel processing and algebraic methods in computer science.
A central problem in the design of programming systems is to provide methods for verifying that computer code performs to specification. This book presents a rigorous foundation for defining Boolean categories, in which the relationship between specification and behaviour can be explored. Boolean categories provide a rich interface between program constructs and techniques familiar from algebra, for instance matrix- or ideal-theoretic methods. The book's distinction is that the approach relies on only a single program construct (the first-order theory of categories), the others being derived mathematically from four axioms. Development of these axioms (which are obeyed by an abundance of program paradigms) yields Boolean algebras of 'predicates', loop-free constructs, and a calculus of partial and total correctness which is shown to be the standard one of Hoare, Dijkstra, Pratt, and Kozen. The book is based in part on courses taught by the author, and will appeal to graduate students and researchers in theoretical computer science.
The author presents a theory of concurrent processes where three different semantic description methods that are usually studied in isolation are brought together. Petri nets describe processes as concurrent and interacting machines; algebraic process terms describe processes as abstract concurrent processes; and logical formulas specify the intended communication behaviour of processes. At the heart of this theory are two sets of transformation rules for the top-down design of concurrent processes. The first set can be used to transform stepwise logical formulas into process terms, whilst process terms can be transformed into Petri nets by the second set. These rules are based on novel techniques for the operational and denotational semantics of concurrent processes. Various results and relationships between nets, terms and formulas are established, starting with formulas and illustrated by examples. The use of transformations is demonstrated in a series of case studies, and the author also identifies directions for research.
The theorem of Fraenkel and Simpson states that the maximum number of distinct squares that a word w of length n can contain is less than 2n. This is based on the fact that no more than two squares can have their last occurrences starting at the same position. In this paper we show that the maximum number of the last occurrences of squares per position in a partial word containing one hole is 2k, where k is the size of the alphabet. Moreover, we prove that the number of distinct squares in a partial word with one hole and of length n is less than 4n, regardless of the size of the alphabet. For binary partial words, this upper bound can be reduced to 3n.
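As a concrete point of reference (an illustration of the bound, not the paper's technique), the following naive Python sketch counts the distinct squares occurring as factors of an ordinary, hole-free word; by the theorem of Fraenkel and Simpson the result is always less than 2n.

# Naive O(n^3) count of distinct squares uu occurring in w (hole-free words
# only; partial words with holes need a notion of compatibility in place of
# strict equality of factors).
def distinct_squares(w):
    n = len(w)
    squares = set()
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if w[i:i + half] == w[i + half:i + 2 * half]:
                squares.add(w[i:i + 2 * half])
    return len(squares)

print(distinct_squares("aabab"))   # 2 ('aa' and 'abab'), below 2n = 10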