This volume surveys important topics in singularity theory, with a particular focus on computational aspects of the subject. The contributors to this volume include R. O. Buchweitz, Y. A. Drozd, W. Ebeling, H. A. Hamm, Le D. T., I. Luengo, F.-O. Schreyer, E. Shustin, J. H. M. Steenbrink, D. van Straten, B. Teissier and J. Wahl. Together they describe the development of various areas of singularity theory over many years and discuss a range of open questions. Research workers in singularity theory, computer algebra or related subjects will find that this book contains a wealth of valuable information.
This book discusses recent research in the theoretical foundations of several subjects of importance for the design of hardware, and for computer science in general. The physical technologies of very large scale integration (VLSI) are having major effects on the electronics industry. The potential diversity and complexity of digital systems have set off a revolution in the technologies of digital design, involving the application of concepts and methods from algorithms and programming. In turn, the problems of VLSI design have made new subjects important within computer science. Topics covered in this volume include: models of VLSI complexity; complexity theory; systolic algorithm design; specification theory; verification theory; design by stepwise refinement and transformations. A thorough literature survey with an exhaustive bibliography is also included. The book has grown from a workshop held at the Centre for Theoretical Computer Science at Leeds University and organised by the editors.
Many aspects of the internal and external workings of computers can be viewed as a series of communication processes. Communication complexity is the mathematical theory of such communication processes. It is also often used as an abstract model of other aspects of computation. This book surveys this mathematical theory, concentrating on the question of how much communication is necessary for any particular process. The first part of the book is devoted to the simple two-party model introduced by Yao in 1979, which is still the most widely studied model. The second part treats newer models developed to deal with more complicated communication processes. Finally, applications of these models, including computer networks, VLSI circuits, and data structures, are treated in the third part of the book. This is an essential resource for graduate students and researchers in theoretical computer science, circuits, networks and information theory.
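As a concrete illustration of the two-party model mentioned above (a standard textbook formulation, recalled here from general knowledge rather than quoted from the book): Alice holds $x \in \{0,1\}^n$, Bob holds $y \in \{0,1\}^n$, and they exchange bits according to an agreed protocol until both know $f(x,y)$; $D(f)$ denotes the number of bits exchanged in the worst case by the best deterministic protocol. For the equality function $\mathrm{EQ}_n(x,y) = 1$ iff $x = y$, a fooling-set argument over the $2^n$ pairs $(x,x)$ gives $D(\mathrm{EQ}_n) \ge n$, whereas a randomized fingerprinting protocol succeeds with high probability after exchanging only $O(\log n)$ bits.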
This article presents uniform random generators of plane partitions according to size (the number of cubes in the 3D interpretation). Combining a bijection of Pak with the method of Boltzmann sampling, we obtain random samplers that are slightly superlinear: the complexity is O(n (ln n)^3) in approximate-size sampling and O(n^{4/3}) in exact-size sampling (under a real-arithmetic computation model). To our knowledge, these are the first polynomial-time samplers for plane partitions according to size (there exist polynomial-time samplers of another type, which draw plane partitions that fit inside a fixed bounding box). The same principles yield efficient samplers for (a × b)-boxed plane partitions (plane partitions with two dimensions bounded), and for skew plane partitions. The random samplers allow us to perform simulations and observe limit shapes and frozen boundaries, which have been analysed recently by Cerf and Kenyon for plane partitions, and by Okounkov and Reshetikhin for skew plane partitions.
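By MacMahon's formula, the generating function of plane partitions by size is the product over k ≥ 1 of (1 - x^k)^{-k}, so plane partitions of size n are equinumerous with multisets in which a part of size k comes in k colours; a bijection such as Pak's is what turns a sample of the latter into an actual plane partition. The sketch below is a toy illustration of the Boltzmann principle only, not the article's tuned sampler: the parameter x, the truncation bound kmax and the omission of the bijection step are all simplifying assumptions.

    import random

    def boltzmann_coloured_parts(x, kmax=200):
        """Boltzmann sampler for the class with generating function
        prod_{k>=1} (1 - x^k)^(-k): multisets in which a part of size k
        comes in k colours (equinumerous with plane partitions by size).
        kmax is an illustrative truncation, not part of the article's analysis."""
        parts = []
        for k in range(1, kmax + 1):
            p = x ** k                      # probability of drawing one more copy
            for colour in range(k):
                copies = 0
                while random.random() < p:  # geometric number of copies of (k, colour)
                    copies += 1
                if copies:
                    parts.append((k, colour, copies))
        return parts

    sample = boltzmann_coloured_parts(x=0.9)
    print(sum(k * copies for k, _, copies in sample))   # size of the drawn object

The size of the output is random; approximate-size sampling keeps any draw whose size is close to the target n, while exact-size sampling rejects until the size is exactly n.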
Formal specification is a method for precisely modelling computer-based systems that combines concepts from software engineering and mathematical logic. In this book the authors describe algebraic and state-based specification techniques from the unified view of the Common Object-oriented Language for Design, COLD, a wide-spectrum language in the tradition of VDM and Z. The kernel language is explained in detail, with many examples, including: set representation, a display device, an INGRES-like database system, and a line editor. Fundamental techniques such as initial algebra semantics, loose semantics, partial functions, hiding, sharing, predicate and dynamic logic, abstraction functions, representation of invariants and black-box correctness are also presented. More advanced ideas, for example Horn logic, and large systems are given in the final part. Appendices contain full details of the language's syntax and a specification library. Techniques for software development and design are emphasised throughout, so the book will be an excellent choice for courses in these areas.
Reasoning under uncertainty, that is, making judgements with only partial knowledge, is a major theme in artificial intelligence. Professor Paris provides here an introduction to the mathematical foundations of the subject. It is suited for readers with some knowledge of undergraduate mathematics but is otherwise self-contained, collecting together the key results on the subject and formalizing within a unified framework the main contemporary approaches and assumptions. The author has concentrated on giving clear mathematical formulations, analyses, justifications and consequences of the main theories about uncertain reasoning, so the book can serve as a textbook for beginners or as a starting point for further basic research into the subject. It will be welcomed by graduate students and research workers in logic, philosophy and computer science as an account of how mathematics and artificial intelligence can complement and enrich each other.
Declarative programs consist of mathematical functions and relations and are amenable to formal specification and verification, since the methods of logic and proof can be applied to the programs in a well-defined manner. Here Dr Padawitz emphasizes verification based on logical inference rules, i.e. deduction (in contrast with model-theoretic approaches, deductive methods can be automated to some extent). His treatment of the subject differs from others in that he tries to capture the actual styles and applications of programming; neither too general with respect to the underlying logic, nor too restrictive for the practice of programming. He generalizes and unifies results from classical theorem-proving and term rewriting to provide proof methods tailored to declarative program synthesis and verification. Detailed examples accompany the development of the methods, whose use is supported by a documented prototyping system. The book can be used for graduate courses or as a reference for researchers in formal methods, theorem-proving and declarative languages.
Petri nets are a popular and powerful formal model for the analysis and modelling of concurrent systems, and a rich theory has developed around them. Petri nets are taught to undergraduates, and also used by industrial practitioners. This book focuses on a particular class of Petri nets, free-choice Petri nets, which play a central role in the theory. The text is very clearly organised, with every notion carefully explained and every result proved. Clear exposition is given for place invariants, siphons, traps and many other important analysis techniques. The material is organised along the lines of a course book, and each chapter contains numerous exercises, making this book ideal for graduate students and research workers alike.
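As a small illustration of one of the analysis techniques listed above, the sketch below checks a place invariant (a weighting of the places whose weighted token count is preserved by every transition firing) against the incidence matrix of a toy net; the net and the candidate weight vectors are invented for the example and do not come from the book.

    import numpy as np

    # Hypothetical net: places p0..p2, transitions t0, t1.
    # C[p, t] = tokens produced minus tokens consumed at place p when t fires.
    C = np.array([[-1,  1],    # p0: consumed by t0, produced by t1
                  [ 1, -1],    # p1: produced by t0, consumed by t1
                  [ 0,  0]])   # p2: untouched

    def is_place_invariant(weights, C):
        """weights is a place invariant iff weights^T . C = 0, i.e. the sum of
        weights[p] * marking[p] over all places is unchanged by every firing."""
        return bool(np.all(weights @ C == 0))

    print(is_place_invariant(np.array([1, 1, 0]), C))  # True: p0 + p1 is conserved
    print(is_place_invariant(np.array([1, 0, 0]), C))  # False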
Randomized algorithms have become a central part of the algorithms curriculum, based on their increasingly widespread use in modern applications. This book presents a coherent and unified treatment of probabilistic techniques for obtaining high probability estimates on the performance of randomized algorithms. It covers the basic toolkit from the Chernoff–Hoeffding bounds to more sophisticated techniques like martingales and isoperimetric inequalities, as well as some recent developments like Talagrand's inequality, transportation cost inequalities and log-Sobolev inequalities. Along the way, variations on the basic theme are examined, such as Chernoff–Hoeffding bounds in dependent settings. The authors emphasise comparative study of the different methods, highlighting respective strengths and weaknesses in concrete example applications. The exposition is tailored to discrete settings sufficient for the analysis of algorithms, avoiding unnecessary measure-theoretic details, thus making the book accessible to computer scientists as well as probabilists and discrete mathematicians.
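For orientation, the basic bound named above can be stated as follows (a standard form of the Chernoff–Hoeffding bound, recalled from general knowledge rather than quoted from the book): if $X_1, \dots, X_n$ are independent random variables with $X_i \in [0,1]$, $X = \sum_i X_i$ and $\mu = \mathbb{E}[X]$, then for $0 < \delta \le 1$,
$$\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu / 3}, \qquad \Pr[X \le (1-\delta)\mu] \le e^{-\delta^2 \mu / 2}.$$
Taking $\mu = c \ln n$ already makes both tails polynomially small in $n$, which is the kind of high-probability guarantee needed when analysing randomized algorithms.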
Mathematicians from Leibniz to Hilbert have sought to mechanise the verification of mathematical proofs. Developments arising out of Gödel's proof of his incompleteness theorem showed that no computer program could automatically prove true all the theorems of mathematics. In practice, however, there are a number of sophisticated automated reasoning programs that are quite effective at checking mathematical proofs. Now in paperback, this book describes the use of a computer program to check the proofs of several celebrated theorems in metamathematics including Gödel's incompleteness theorem and the Church–Rosser theorem. The computer verification using the Boyer–Moore theorem prover yields precise and rigorous proofs of these difficult theorems. It also demonstrates the range and power of automated proof checking technology. The mechanisation of metamathematics itself has important implications for automated reasoning since metatheorems can be applied as labour-saving devices to simplify proof construction. The book should be accessible to scientists and philosophers with some knowledge of logic and computing.
Polynomial equations have long been studied, both theoretically and with a view to solving them. Until recently, manual computation was the only solution method and the theory was developed to accommodate it. With the advent of computers, the situation changed dramatically. Many classical results can be more usefully recast within a different framework, which in turn lends itself to further theoretical development tuned to computation. This first book in a trilogy is devoted to the new approach. It is a handbook covering the classical theory of finding roots of a univariate polynomial, emphasising computational aspects, especially the representation and manipulation of algebraic numbers, enlarged by more recent representations such as the Duval Model and the Thom Codification. Mora aims to show that solving a polynomial equation really means finding algorithms that help one manipulate roots rather than simply computing them; to that end he also surveys algorithms for factorizing univariate polynomials.
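To illustrate the 'manipulating roots rather than computing them' viewpoint, here is a minimal sketch (not Mora's notation, nor the Duval or Thom representations) of the classical device of representing a real algebraic number by a defining polynomial together with an isolating interval, refined on demand by bisection; the example polynomial is chosen arbitrarily.

    from fractions import Fraction

    def poly_eval(coeffs, x):
        """Exactly evaluate c0 + c1*x + c2*x^2 + ... at a rational x."""
        result = Fraction(0)
        for c in reversed(coeffs):
            result = result * x + c
        return result

    class AlgebraicNumber:
        """A real root given by a defining polynomial and an isolating
        interval (a, b) on which the polynomial changes sign exactly once."""
        def __init__(self, coeffs, a, b):
            self.coeffs, self.a, self.b = coeffs, Fraction(a), Fraction(b)

        def refine(self, bits=30):
            """Shrink the isolating interval by bisection; the root is never
            written out, only approximated to the requested precision."""
            while self.b - self.a > Fraction(1, 2 ** bits):
                m = (self.a + self.b) / 2
                if poly_eval(self.coeffs, self.a) * poly_eval(self.coeffs, m) <= 0:
                    self.b = m
                else:
                    self.a = m
            return self.a, self.b

    root = AlgebraicNumber([-2, 0, 1], 1, 2)   # sqrt(2) as the root of x^2 - 2 in (1, 2)
    lo, hi = root.refine()
    print(float(lo), float(hi))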
The authors describe here a framework in which the type notation of functional languages is extended to include a notation for binding times (that is, run-time and compile-time) that distinguishes between them. Consequently, the ability to specify code and verify program correctness can be improved. Two developments are needed, the first of which introduces the binding-time distinction into the lambda calculus, in a manner analogous to the introduction of types into the untyped lambda calculus. Methods are also presented for introducing run-time combinators. The second concerns the interpretation of the resulting language, which is known as the mixed lambda-calculus and combinatory logic. The notion of 'parametrized semantics' is used to describe code generation and abstract interpretation. The code generation is for a simple abstract machine designed for the purpose; it is close to the categorical abstract machine. The abstract interpretation focuses on a strictness analysis that generalises Wadler's analysis for lists.
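As a very rough illustration of what a binding-time distinction buys (this is not the book's mixed lambda-calculus or its abstract machine, just a toy specializer in which literals are compile-time and free variables are run-time), consider:

    def specialize(expr, static_env):
        """Reduce the compile-time parts of expr; leave run-time parts as residual code.
        Expressions are tuples: ('lit', n), ('var', name), ('add', e1, e2)."""
        kind = expr[0]
        if kind == 'lit':
            return expr
        if kind == 'var':
            name = expr[1]
            return ('lit', static_env[name]) if name in static_env else expr
        if kind == 'add':
            left = specialize(expr[1], static_env)
            right = specialize(expr[2], static_env)
            if left[0] == 'lit' and right[0] == 'lit':
                return ('lit', left[1] + right[1])   # reduced at compile time
            return ('add', left, right)              # residual run-time code
        raise ValueError(kind)

    # x is run-time, so only the static subterm 1 + 2 is folded:
    print(specialize(('add', ('var', 'x'), ('add', ('lit', 1), ('lit', 2))), {}))
    # -> ('add', ('var', 'x'), ('lit', 3))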
We investigate the rank of random (symmetric) sparse matrices. Our main finding is that, with high probability, any dependency that occurs in such a matrix is formed by a small set of rows that contains an overwhelming number of zeros. This allows us to obtain an exact estimate for the co-rank.
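A small experiment conveys the object under study; the model below (a symmetric 0/1 matrix in which each off-diagonal entry is 1 with probability c/n) is a common choice of sparse model and may differ from the paper's, and the floating-point rank computation is purely illustrative.

    import numpy as np

    def random_sparse_symmetric(n, c, rng):
        """Symmetric 0/1 matrix with zero diagonal; each off-diagonal entry
        is 1 with probability c/n, so rows have about c nonzeros on average."""
        upper = rng.random((n, n)) < c / n
        A = np.triu(upper, 1)
        return (A + A.T).astype(float)

    rng = np.random.default_rng(0)
    n, c = 300, 2.0
    A = random_sparse_symmetric(n, c, rng)
    print("co-rank:", n - np.linalg.matrix_rank(A))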
This book develops the theory of typed feature structures, a data structure that generalizes both first-order terms and feature structures of unification-based grammars to include inheritance, typing, inequality, cycles and intensionality. The resulting synthesis serves as a logical foundation for grammars, logic programming and constraint-based reasoning systems. A logical perspective is adopted which employs an attribute-value description language along with complete equational axiomatizations of the various systems of feature structures. At the same time, efficiency concerns are kept in mind and complexity and representability results are provided. The application of feature structures to phrase structure grammars is described and completeness results are shown for standard evaluation strategies. Definite clause logic programs are treated as a special case of phrase structure grammars. Constraint systems are introduced and an enumeration technique is developed for solving arbitrary attribute-value logic constraints. This book, with its innovative approach to data structures, will be essential reading for researchers in computational linguistics, logic programming and knowledge representation. Its self-contained presentation makes it flexible enough to serve as both a research tool and a textbook.
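As a rough illustration of the kind of operation such structures support, the sketch below unifies two attribute-value structures represented as nested dictionaries; typing, inheritance, inequations, cycles and intensionality, which are central to the book, are deliberately left out.

    def unify(f, g):
        """Unify two untyped feature structures given as nested dicts with
        atomic leaves; return None on a clash. Structure sharing and types,
        essential in the book's framework, are omitted here."""
        if isinstance(f, dict) and isinstance(g, dict):
            result = dict(f)
            for feature, value in g.items():
                if feature in result:
                    merged = unify(result[feature], value)
                    if merged is None:
                        return None              # clash on a shared feature
                    result[feature] = merged
                else:
                    result[feature] = value
            return result
        return f if f == g else None             # atoms unify only when equal

    print(unify({'agr': {'num': 'sg'}, 'cat': 'np'}, {'agr': {'per': '3'}}))
    # -> {'agr': {'num': 'sg', 'per': '3'}, 'cat': 'np'}
    print(unify({'agr': {'num': 'sg'}}, {'agr': {'num': 'pl'}}))   # -> None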
A string graph is the intersection graph of a collection of continuous arcs in the plane. We show that any string graph with m edges can be separated into two parts of roughly equal size by the removal of vertices. This result is then used to deduce that every string graph with n vertices and no complete bipartite subgraph K_{t,t} has at most c_t n edges, where c_t is a constant depending only on t. Another application shows that locally tree-like string graphs are globally tree-like: for any ε > 0, there is an integer g(ε) such that every string graph with n vertices and girth at least g(ε) has at most (1 + ε)n edges. Furthermore, the number of such labelled graphs is at most (1 + ε)^n T(n), where T(n) = n^{n-2} is the number of labelled trees on n vertices.
This book discusses the connection between two areas of semantics, namely the semantics of databases and the semantics of natural language, and links them via a common view of the semantics of time. It is argued that a coherent theory of the semantics of time is an essential ingredient for the success of efforts to incorporate more 'real world' semantics into database models. This idea is a relatively recent concern of database research, but it is receiving growing interest. The book begins with a discussion of database querying which motivates the use of the paradigm of Montague Semantics and discusses the details of the intensional logic IL_s. This is followed by a description of the author's own model, the Historical Relational Data Model (HRDM), which extends the RDM to include a temporal dimension. Finally, the database querying language QE-III is defined and examples illustrate its use. A formal model for the interpretation of questions is presented in this work, which will form the basis for much further research.
In 1989, Michael Rabin proposed a fundamentally new approach to the problems of fault-tolerant routing and memory management in parallel computation, based on the idea of information dispersal. Yuh-Dauh Lyuu developed this idea in a number of new and exciting ways in his PhD thesis. Further work has led to extensions of these methods to other applications such as shared memory emulations. This volume presents an extended and updated printing of Lyuu's thesis. It gives a detailed treatment of the information dispersal approach to the problems of fault-tolerance and distributed representations of information which have resisted rigorous analysis by previous methods.
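To convey the basic idea of information dispersal (in the spirit of Rabin's scheme but simplified, and not taken from the thesis), the sketch below splits a block into n pieces over a small prime field so that any m of them suffice for reconstruction; the field GF(257), the block encoding and the function names are illustrative choices.

    P = 257  # small prime field for illustration; practical schemes use larger fields

    def disperse(block, n):
        """Split a block of m field elements into n pieces; piece i is the value
        at x = i of the polynomial whose coefficients are the block entries,
        so any m pieces determine the polynomial and hence the block."""
        return [(x, sum(c * pow(x, j, P) for j, c in enumerate(block)) % P)
                for x in range(1, n + 1)]

    def reconstruct(pieces, m):
        """Recover the block from any m pieces by Lagrange interpolation mod P."""
        pieces = pieces[:m]
        coeffs = [0] * m
        for i, (xi, yi) in enumerate(pieces):
            num, denom = [1], 1          # build prod_{j != i} (x - xj) and its scaling
            for j, (xj, _) in enumerate(pieces):
                if j == i:
                    continue
                num = [(c1 - xj * c0) % P for c0, c1 in zip(num + [0], [0] + num)]
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, P - 2, P) % P
            for k in range(m):
                coeffs[k] = (coeffs[k] + scale * num[k]) % P
        return coeffs

    block = [72, 101, 108, 108, 111]        # "Hello" as byte values
    pieces = disperse(block, n=8)           # 8 pieces; any 5 reconstruct the block
    print(reconstruct([pieces[1], pieces[3], pieces[4], pieces[6], pieces[7]], m=5))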
The book combines topics in mathematics (geometry and topology), computer science (algorithms), and engineering (mesh generation). The motivation for these topics is the difficulty, both conceptual and technical, of combining elements of combinatorial and numerical algorithms. Mesh generation is a topic where a meaningful combination of these different approaches to problem solving is inevitable. The book develops methods from both areas that are amenable to combination, and explains breakthrough solutions to meshing that fit into this category. This book emphasizes topics that are elementary, attractive, useful and interesting, and that lend themselves to teaching, making it an ideal graduate text for courses on mesh generation.