This paper deals with the partition function of the Ising model from statistical mechanics, which is used to study phase transitions in physical systems. A special case of interest is that of the Ising model with constant energies and external field. Such an Ising system can be viewed as a simple graph together with vertex and edge weights. When these weights are treated as indeterminates, the partition function for the constant case is a trivariate polynomial Z(G;x,y,z). This polynomial was studied with respect to its approximability by Goldberg, Jerrum and Paterson. Z(G;x,y,z) generalizes a bivariate polynomial Z(G;t,y), which was studied by Andrén and Markström.
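For orientation, the underlying physical quantity, in one common convention (an illustration; the paper's exact normalization may differ), is

```latex
Z(G) \;=\; \sum_{\sigma\colon V\to\{-1,+1\}}
  \exp\!\Bigl(\beta J \sum_{\{u,v\}\in E}\sigma_u\sigma_v
            \;+\; \beta H \sum_{v\in V}\sigma_v\Bigr),
```

where J is the constant interaction energy and H the constant external field. Replacing the resulting edge and vertex Boltzmann weights by indeterminates is what turns Z into a polynomial such as the trivariate Z(G;x,y,z) above.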
We consider the complexity of Z(G;t,y) and Z(G;x,y,z) in comparison to that of the Tutte polynomial, which is well known to be closely related to the Potts model in the absence of an external field. We show that Z(G;x,y,z) is #P-hard to evaluate at all points in ℚ³, except those in an exceptional set of low dimension, even when restricted to simple graphs that are bipartite and planar. A counting version of the Exponential Time Hypothesis, #ETH, was introduced by Dell, Husfeldt and Wahlén in order to study the complexity of the Tutte polynomial. In analogy to their results, we give, under #ETH, a dichotomy theorem stating that evaluations of Z(G;t,y) either take time exponential in the number of vertices of G to compute, or can be done in polynomial time. Finally, we give an algorithm for computing Z(G;x,y,z) in polynomial time on graphs of bounded clique-width, which is not known for the Tutte polynomial.
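As a concrete baseline for these hardness statements, a brute-force evaluator is easy to write. The sketch below is mine, and its monomial convention (x tracks edges with agreeing spins, y edges with disagreeing spins, z vertices with spin +1) is an illustrative assumption rather than the paper's definition:

```python
from itertools import product

def ising_partition(vertices, edges, x, y, z):
    """Naive evaluation of a trivariate Ising-style partition function.

    Convention (illustrative): x counts edges whose endpoints agree,
    y counts edges whose endpoints disagree, z counts +1 spins.
    Runs in O(2^n * m) time -- the exponential baseline that the
    #ETH lower bound says is essentially unavoidable in the worst case.
    """
    total = 0
    for spins in product((-1, +1), repeat=len(vertices)):
        sigma = dict(zip(vertices, spins))
        agree = sum(1 for u, v in edges if sigma[u] == sigma[v])
        disagree = len(edges) - agree
        ups = sum(1 for s in spins if s == +1)
        total += x**agree * y**disagree * z**ups
    return total

# Example: a triangle, evaluated at the point (2, 1, 1).
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("a", "c")]
print(ising_partition(V, E, 2, 1, 1))
```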
We consider random permutations derived by sampling from stick-breaking partitions of the unit interval. The cycle structure of such a permutation can be associated with the path of a decreasing Markov chain on n integers. Under certain assumptions on the stick-breaking factor we prove a central limit theorem for the logarithm of the order of the permutation, thus extending the classical Erdős–Turán law for uniform random permutations and its generalization to Ewens' permutations associated with sampling from the PD/GEM(θ) distribution. Our approach is based on using perturbed random walks to obtain the limit laws for the sum of logarithms of the cycle lengths.
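A small simulation may make the setup concrete. In the sketch below, the Beta(1, θ) stick-breaking factor (the GEM(θ) case) and all names are illustrative assumptions; the blocks of the sampled partition are read as cycle lengths, and the order of the permutation is the lcm of those lengths:

```python
import math
import random

def gem_sticks(theta, eps=1e-12):
    """Break the unit interval: W_i ~ Beta(1, theta),
    P_i = W_i * prod_{j<i} (1 - W_j).  Truncated once the
    remaining mass is negligible (fine for simulation)."""
    sticks, remaining = [], 1.0
    while remaining > eps:
        w = random.betavariate(1.0, theta)
        sticks.append(remaining * w)
        remaining *= 1.0 - w
    return sticks

def log_order(n, theta):
    """Sample n points from the stick-breaking partition, read block
    sizes as cycle lengths, return log of the lcm of the lengths."""
    sticks = gem_sticks(theta)
    counts = {}
    for _ in range(n):
        u, acc = random.random(), 0.0
        for i, p in enumerate(sticks):
            acc += p
            if u < acc:
                counts[i] = counts.get(i, 0) + 1
                break
        # (mass beyond the eps truncation is negligible)
    order = 1
    for c in counts.values():
        order = order * c // math.gcd(order, c)  # running lcm
    return math.log(order)

samples = [log_order(1000, 1.0) for _ in range(50)]
print(sum(samples) / len(samples))  # empirical mean of log(order)
```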
We study distance properties of a general class of random directed acyclic graphs (dags). In a dag, many natural notions of distance are possible, for there exist multiple paths between pairs of nodes. The distance of interest for circuits is the maximum length of a path between two nodes. We give laws of large numbers for the typical depth (distance to the root) and the minimum depth in a random dag. This completes the study of natural distances in random dags initiated (in the uniform case) by Devroye and Janson. We also obtain large deviation bounds for the minimum of a branching random walk with constant branching, which can be seen as a simplified version of our main result.
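As an illustration, the following sketch simulates one natural model of this kind. The attachment rule (each new node picks k distinct uniformly random earlier nodes) and the decision to track both the longest-path and shortest-path depth recursions are assumptions made for the example; the paper specifies which notion each theorem uses:

```python
import random

def random_dag_depths(n, k=2, seed=None):
    """Grow a random dag: node 0 is the root; node i >= 1 chooses
    min(i, k) distinct parents uniformly among nodes 0..i-1.
    Two natural depths are tracked per node:
      long_d[i]  = length of the longest path from i to the root,
      short_d[i] = length of the shortest path from i to the root."""
    rng = random.Random(seed)
    long_d, short_d = [0], [0]
    for i in range(1, n):
        parents = rng.sample(range(i), min(i, k))
        long_d.append(1 + max(long_d[p] for p in parents))
        short_d.append(1 + min(short_d[p] for p in parents))
    return long_d, short_d

long_d, short_d = random_dag_depths(100_000, k=2, seed=1)
print("depth of last node (longest path): ", long_d[-1])
print("depth of last node (shortest path):", short_d[-1])
```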
Probabilistic operational semantics for a nondeterministic extension of pure λ-calculus is studied. In this semantics, a term evaluates to a (finite or infinite) distribution of values. Small-step and big-step semantics, inductively and coinductively defined, are given. Moreover, small-step and big-step semantics are shown to produce identical outcomes, both in call-by-value and in call-by-name. Plotkin’s CPS translation is extended to accommodate the choice operator and shown correct with respect to the operational semantics. Finally, the expressive power of the obtained system is studied: the calculus is shown to be sound and complete with respect to computable probability distributions.
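A minimal big-step, call-by-value evaluator conveys the flavour of such a semantics. The sketch below is mine (term encoding, fair binary choice, and names are assumptions), and it only handles terms whose value distribution is finite, whereas the paper also treats infinite distributions:

```python
from collections import defaultdict

# Terms: ('var', x) | ('lam', x, body) | ('app', f, a) | ('choice', l, r)

def subst(term, x, val):
    """Naive substitution [val/x]term; adequate for the closed,
    capture-free examples below."""
    tag = term[0]
    if tag == 'var':
        return val if term[1] == x else term
    if tag == 'lam':
        _, y, body = term
        return term if y == x else ('lam', y, subst(body, x, val))
    _, l, r = term
    return (tag, subst(l, x, val), subst(r, x, val))

def evaluate(term, prob=1.0, out=None):
    """Big-step call-by-value evaluation to a finite map from
    values (lambda-abstractions) to probabilities."""
    out = defaultdict(float) if out is None else out
    tag = term[0]
    if tag == 'lam':
        out[term] += prob
    elif tag == 'choice':                 # fair probabilistic choice
        evaluate(term[1], prob / 2, out)
        evaluate(term[2], prob / 2, out)
    elif tag == 'app':
        for f, pf in dict(evaluate(term[1])).items():
            for a, pa in dict(evaluate(term[2])).items():
                _, x, body = f
                evaluate(subst(body, x, a), prob * pf * pa, out)
    else:
        raise ValueError("free variable reached during evaluation")
    return out

# The identity applied to a coin flip between two distinct values:
I = ('lam', 'x', ('var', 'x'))
K = ('lam', 'y', ('lam', 'z', ('var', 'y')))
for value, p in evaluate(('app', I, ('choice', I, K))).items():
    print(p, value)   # probability 0.5 each for I and K
```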
The infinite Post Correspondence Problem (ωPCP) was shown to be undecidable in general by Ruohonen (1985). Blondel and Canterini [Theory Comput. Syst. 36 (2003) 231–245] showed that ωPCP is undecidable for domain alphabets of size 105, and Halava and Harju [RAIRO–Theor. Inf. Appl. 40 (2006) 551–557] showed that ωPCP is undecidable for domain alphabets of size 9. By designing a special coding, we delete a letter from Halava and Harju's construction, thus proving that ωPCP is undecidable for domain alphabets of size 8.
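For readers unfamiliar with the problem, ωPCP admits the following standard formulation:

```latex
\textsc{Instance:}\ \text{two morphisms } g,h \colon A^{*} \to B^{*},
  \text{ where } A \text{ is the domain alphabet.}
\qquad
\textsc{Question:}\ \text{does there exist an infinite word }
  \omega = a_1 a_2 a_3 \cdots \in A^{\omega}
  \text{ with } g(\omega) = h(\omega)\,?
```

The result above shows that this question is undecidable already when |A| = 8.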
We initiate the theory and applications of biautomata. A biautomaton can read a word alternately from the left and from the right. We assign to each regular language L its canonical biautomaton. This structure plays, among all biautomata recognizing the language L, the same role as the minimal deterministic automaton plays among all deterministic automata recognizing L. We expect that the graph structure of this automaton can be used to decide membership of a given language in certain significant classes of languages. We present the first two results of this kind: a language L is piecewise testable if and only if the canonical biautomaton of L is acyclic. From this result, Simon's famous characterization of piecewise testable languages follows easily. The second class of languages characterizable by the graph structure of their biautomata is that of prefix-suffix testable languages.
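To make the alternating left/right reading concrete, here is a small sketch built from a DFA; the encoding and names are illustrative assumptions, and merging states that represent the same language (needed for the canonical biautomaton) is omitted, so this over-approximates the canonical structure:

```python
def biautomaton(dfa_delta, q0, finals, alphabet):
    """Reachable part of a biautomaton derived from a DFA.

    A state is a pair (p, F): p is the DFA state reached by the prefix
    read so far from the left, and F is the set of DFA states from
    which the suffix read so far from the right leads into `finals`.
    The pair (p, F) represents the two-sided quotient u^{-1} L v^{-1}:
    its language is { w : delta(p, w) in F }.
    """
    start = (q0, frozenset(finals))
    states, todo = {start}, [start]
    left, right = {}, {}
    while todo:
        state = todo.pop()
        p, F = state
        for a in alphabet:
            nl = (dfa_delta[p][a], F)                      # read a on the left
            nr = (p, frozenset(q for q in dfa_delta
                               if dfa_delta[q][a] in F))   # read a on the right
            left[(state, a)], right[(state, a)] = nl, nr
            for s in (nl, nr):
                if s not in states:
                    states.add(s)
                    todo.append(s)
    accepting = {s for s in states if s[0] in s[1]}        # epsilon in the quotient
    return states, left, right, accepting

# Tiny DFA for the language a*b* over {a, b}:
delta = {0: {'a': 0, 'b': 1}, 1: {'a': 2, 'b': 1}, 2: {'a': 2, 'b': 2}}
states, left, right, acc = biautomaton(delta, 0, {0, 1}, 'ab')
print(len(states), "biautomaton states;", len(acc), "accepting")
```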
This textbook, for second- or third-year students of computer science, presents insights, notations, and analogies to help them describe and think about algorithms like an expert, without grinding through lots of formal proof. Solutions to many problems are provided to let students check their progress, while class-tested PowerPoint slides are on the web for anyone running the course. By looking at both the big picture and easy step-by-step methods for developing algorithms, the author guides students around the common pitfalls. He stresses paradigms such as loop invariants and recursion to unify a huge range of algorithms into a few meta-algorithms. The book fosters a deeper understanding of how and why each algorithm works. These insights are presented in a careful and clear way, helping students to think abstractly and preparing them for creating their own innovative ways to solve problems.
This book provides an innovative, hands-on introduction to techniques for specifying the behaviour of software components. It is primarily intended for use as a textbook for a course in the second or third year of computer science and computer engineering programs, but it is also suitable for self-study. Using this book will help the reader improve programming skills and gain a sound foundation and motivation for subsequent courses in advanced algorithms and data structures, software design, formal methods, compilers, programming languages, and theory. The presentation is based on numerous examples and case studies appropriate to the level of programming expertise of the intended readership. The main topics covered are techniques for using programmer-friendly assertional notations to specify, develop, and verify small but non-trivial algorithms and data representations, and the use of state diagrams, grammars, and regular expressions to specify and develop recognizers for formal languages.
Written for graduate students and advanced undergraduates in computer science, A Second Course in Formal Languages and Automata Theory treats topics in the theory of computation not usually covered in a first course. After a review of basic concepts, the book covers combinatorics on words, regular languages, context-free languages, parsing and recognition, Turing machines, and other language classes. Many topics often absent from other textbooks, such as repetitions in words, state complexity, the interchange lemma, 2DPDAs, and the incompressibility method, are covered here. The author places particular emphasis on the resources needed to represent certain languages. The book also includes a diverse collection of more than 200 exercises, suggestions for term projects, and research problems that remain open.
The new edition of this successful and established textbook retains its two original intentions of explaining how to program in the ML language, and teaching the fundamentals of functional programming. The major change is the early and prominent coverage of modules, which are extensively used throughout. In addition, the first chapter has been totally rewritten to make the book more accessible to those without experience of programming languages. The main features of the new Standard Library for the revised version of ML are described, and many new examples are given, while references have also been updated. Dr Paulson has extensive practical experience of ML and has stressed its use as a tool for software engineering; the book contains many useful pieces of code, which are freely available (via the Internet) from the author. He shows how to use lists, trees, higher-order functions and infinite data structures. Many illustrative and practical examples are included. Efficient functional implementations of arrays, queues, priority queues, etc. are described. Larger examples include a general top-down parser, a lambda-calculus reducer and a theorem prover. The combination of careful explanation and practical advice will ensure that this textbook continues to be the preferred text for many courses on ML.
C# programmers: no more translating data structures from C++ or Java to use in your programs! Mike McMillan provides a tutorial on how to use data structures and algorithms plus the first comprehensive reference for C# implementations of data structures and algorithms found in the .NET Framework library, as well as those developed by the programmer. The approach is very practical, using timing tests rather than Big O notation to analyze the efficiency of an approach. Coverage includes arrays and array lists, linked lists, hash tables, dictionaries, trees, graphs, and sorting and searching algorithms, as well as more advanced algorithms such as probabilistic algorithms and dynamic programming. This is the perfect resource for C# professionals and students alike.
Are all film stars linked to Kevin Bacon? Why do the stock markets rise and fall sharply on the strength of a vague rumour? How does gossip spread so quickly? Are we all related through six degrees of separation? There is a growing awareness of the complex networks that pervade modern society. We see them in the rapid growth of the internet, the ease of global communication, the swift spread of news and information, and in the way epidemics and financial crises develop with startling speed and intensity. This introductory book on the new science of networks takes an interdisciplinary approach, using economics, sociology, computing, information science and applied mathematics to address fundamental questions about the links that connect us, and the ways that our decisions can have consequences for others.
This introduction to the basic ideas of structural proof theory contains a thorough discussion and comparison of various types of formalization of first-order logic. Examples are given of several areas of application, namely: the metamathematics of pure first-order logic (intuitionistic as well as classical); the theory of logic programming; category theory; modal logic; linear logic; first-order arithmetic and second-order logic. In each case the aim is to illustrate the methods in relatively simple situations and then apply them elsewhere in much more complex settings. There are numerous exercises throughout the text. In general, the only prerequisite is a standard course in first-order logic, making the book ideal for graduate students and beginning researchers in mathematical logic, theoretical computer science and artificial intelligence. For the new edition, many sections have been rewritten to improve clarity, new sections have been added on cut elimination, and solutions to selected exercises have been included.
This book was first published in 2003. Combinatorica, an extension to the popular computer algebra system Mathematica®, is the most comprehensive software available for teaching and research applications of discrete mathematics, particularly combinatorics and graph theory. This book is the definitive reference/user's guide to Combinatorica, with examples of all 450 Combinatorica functions in action, along with the associated mathematical and algorithmic theory. The authors cover classical and advanced topics on the most important combinatorial objects: permutations, subsets, partitions, and Young tableaux, as well as all important areas of graph theory: graph construction operations, invariants, embeddings, and algorithmic graph theory. In addition to being a research tool, Combinatorica makes discrete mathematics accessible in new and exciting ways to a wide variety of people, by encouraging computational experimentation and visualization. The book contains no formal proofs, but enough discussion to understand and appreciate all the algorithms and theorems it contains.
Shimon Even's Graph Algorithms, published in 1979, was a seminal introductory book on algorithms read by everyone engaged in the field. This thoroughly revised second edition, with a foreword by Richard M. Karp and notes by Andrew V. Goldberg, continues the exceptional presentation from the first edition and explains algorithms in a formal but simple language with a direct and intuitive presentation. The book begins by covering basic material, including graphs and shortest paths, trees, depth-first search and breadth-first search. The main part of the book is devoted to network flows and applications of network flows, and it ends with chapters on planar graphs and testing graph planarity.
Discrete optimization problems are everywhere, from traditional operations research planning problems (scheduling, facility location, and network design) to computer science problems in databases to advertising issues in viral marketing. Yet most such problems are NP-hard; unless P = NP, there are no efficient algorithms to find optimal solutions. This book shows how to design approximation algorithms: efficient algorithms that find provably near-optimal solutions. The book is organized around central algorithmic techniques for designing approximation algorithms, including greedy and local search algorithms, dynamic programming, linear and semidefinite programming, and randomization. Each chapter in the first section is devoted to a single algorithmic technique applied to several different problems, with more sophisticated treatment in the second section. The book also covers methods for proving that optimization problems are hard to approximate. Designed as a textbook for graduate-level algorithm courses, it will also serve as a reference for researchers interested in the heuristic solution of discrete optimization problems.
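As a taste of the kind of guarantee the book is about, here is the classic maximal-matching 2-approximation for minimum vertex cover (a standard textbook example, not necessarily presented this way in the book):

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover: repeatedly pick
    an uncovered edge and add BOTH endpoints.  The chosen edges form a
    matching, and any cover must contain at least one endpoint of each
    matched edge, so the result is at most twice optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover_2approx(edges))  # a cover of size at most 2 * OPT
```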
Distributed algorithms have been the subject of intense development over the last twenty years. The second edition of this successful textbook provides an up-to-date introduction both to the topic, and to the theory behind the algorithms. The clear presentation makes the book suitable for advanced undergraduate or graduate courses, whilst the coverage is sufficiently deep to make it useful for practising engineers and researchers. The author concentrates on algorithms for the point-to-point message passing model, and includes algorithms for the implementation of computer communication networks. Other key areas discussed are algorithms for the control of distributed applications (wave, broadcast, election, termination detection, randomized algorithms for anonymous networks, snapshots, deadlock detection, synchronous systems), and fault-tolerance achievable by distributed algorithms. The two new chapters on sense of direction and failure detectors are state-of-the-art and will provide an entry to research in these still-developing topics.
The focus of this book is the P versus NP Question and the theory of NP-completeness. It also provides adequate preliminaries regarding computational problems and computational models. The P versus NP Question asks whether or not finding solutions is harder than checking the correctness of solutions. An alternative formulation asks whether or not discovering proofs is harder than verifying their correctness. It is widely believed that the answer to these equivalent formulations is positive, and this is captured by saying that P is different from NP. Although the P versus NP Question remains unresolved, the theory of NP-completeness offers evidence for the intractability of specific problems in NP by showing that they are universal for the entire class. Amazingly enough, NP-complete problems exist, and furthermore hundreds of natural computational problems arising in many different areas of mathematics and science are NP-complete.
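The asymmetry described here is easy to see in code: verifying a proposed solution is fast, while the obvious way to find one tries exponentially many candidates. The SAT sketch below is illustrative; the encoding and names are mine:

```python
from itertools import product

# A CNF formula as a list of clauses; literal +i means variable i,
# -i means its negation.
cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]

def verify(cnf, assignment):
    """Checking a candidate solution: polynomial time in the input."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in cnf)

def search(cnf, n_vars):
    """Finding a solution the obvious way: up to 2^n candidates."""
    for bits in product((False, True), repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if verify(cnf, assignment):
            return assignment
    return None

print(search(cnf, 3))  # prints the first satisfying assignment found
```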