Satisfiability Modulo Theories (SMT) extends Propositional Satisfiability with logical theories that allow us to express relations over various types of variables, such as arithmetic constraints or equalities over uninterpreted functions. SMT solvers are widely used in areas such as software verification, where they often solve, surprisingly efficiently, problems that appear hard, if not undecidable. This chapter presents a general introduction to SMT solving. It then focuses on one important theory, equality, and gives both a detailed understanding of how it is solved and a theoretical justification of why the procedure is practically effective.
Introduction
Our starting point is research and experience in the context of the state-of-the-art SMT solver Z3 [13], developed by the authors at Microsoft Research. We first cover a selection of the main challenges and techniques for making SMT solving practical, integrating algorithms for tractable subproblems, and the pragmatics and heuristics used in practice. We then take a proof-theoretical perspective on the power and scope of the engines used by SMT solvers. Most modern SMT solvers are built around a tight integration with efficient SAT solving. The framework is commonly referred to as DPLL(T), where T refers to a theory or a combination of theories. The theoretical result we present compares DPLL(T) with unrestricted resolution. A straightforward adaptation of DPLL(T) provides a weaker proof system than unrestricted resolution, and we investigate an extension we call Conflict Directed Theory Resolution as a candidate method for bridging this gap. Our results apply to the case where T is the theory of equality.
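As a small illustration of the theory of equality with uninterpreted functions handled by such solvers, the following sketch uses the z3-solver Python bindings; the specific constraints are our own toy example, not a fragment of the chapter. From f applied three times and five times both returning x, the equality engine (congruence closure) can derive f(x) = x, so the added disequality makes the constraints unsatisfiable.

```python
# Minimal sketch, assuming the z3-solver Python package; illustrative example only.
from z3 import DeclareSort, Function, Const, Solver

S = DeclareSort('S')                 # an uninterpreted sort
f = Function('f', S, S)              # an uninterpreted unary function symbol
x = Const('x', S)

s = Solver()
s.add(f(f(f(x))) == x)               # f^3(x) = x
s.add(f(f(f(f(f(x))))) == x)         # f^5(x) = x
s.add(f(x) != x)                     # ... and yet f(x) != x ?
print(s.check())                     # unsat: f^3 = f^5 = identity on x forces f(x) = x
```

Within a DPLL(T)-style architecture, the SAT core handles the Boolean structure over such atoms while the equality solver checks the theory consistency of each candidate assignment.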
Markov Random Fields (MRFs) have been successfully applied to many computer vision problems such as image segmentation, 3D reconstruction, and stereo. The problem of estimating the Maximum a Posteriori (MAP) solution of models such as MRFs can be formulated as a function minimization problem, which has made function minimization an indispensable tool in computer vision. The problem of minimizing a function of discrete variables is, in general, NP-hard. However, functions belonging to certain classes, such as submodular functions, can be minimized in polynomial time. In this chapter, we discuss examples of popular models used in computer vision for which the MAP inference problem results in a tractable function minimization problem. We also discuss how algorithms used in computer vision overcome the challenges introduced by the scale and form of the function minimization problems encountered in practice.
Labeling Problems in Computer Vision
Many problems in computer vision and scene understanding can be formulated in terms of finding the most probable values of certain hidden or unobserved variables. These variables encode some property of the scene and can be continuous or discrete. Such problems are commonly referred to as labelling problems, as they involve assigning a label to each hidden variable. Labelling problems occur in many forms, from lattice-based problems of dense stereo and image segmentation discussed in [6, 40] to the use of pictorial structures for object recognition as done by [10]. Some examples of problems which can be formulated in this manner are shown in Figure 10.1.
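To make the submodularity condition mentioned above concrete, here is a short hedged sketch (our own illustration, not code from the chapter) of the standard test under which a pairwise energy over binary labels is submodular, and hence exactly minimizable in polynomial time, for example by graph cuts.

```python
# Hedged illustration: a pairwise term theta over binary labels is submodular iff
#     theta(0, 0) + theta(1, 1) <= theta(0, 1) + theta(1, 0).
# Energies built from such terms can be minimized exactly in polynomial time.

def is_submodular(theta):
    """theta[(a, b)] is the cost of assigning labels a and b to the two endpoints."""
    return theta[(0, 0)] + theta[(1, 1)] <= theta[(0, 1)] + theta[(1, 0)]

# A Potts-style smoothness term (penalty 1 for disagreeing labels) is submodular:
potts = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 1.0, (1, 0): 1.0}
print(is_submodular(potts))      # True

# A term that rewards disagreement is not, and minimization becomes hard in general:
antipotts = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}
print(is_submodular(antipotts))  # False
```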
One approach for dealing with intractability is to utilize representations that permit certain queries of interest to be computable in polytime. Such tractable representations will ultimately be exponential in size for certain problems and they may also not be suitable for direct specification by users. Hence, they are typically generated from other specifications through a process known as knowledge compilation. In this chapter, we review a subset of these tractable representations, known as decomposable negation normal forms (DNNFs), which have proved influential in a number of applications, including formal verification, model-based diagnosis and probabilistic reasoning.
Introduction
Many areas of computer science have shown a great interest in tractable and canonical representations of propositional knowledge bases (a.k.a. Boolean functions). The ordered binary decision diagram (OBDD) is one such representation that received much attention and proved quite influential in a variety of areas [13]. Within AI, the study of tractable representations has also had a long tradition (e.g., [61, 30, 31, 49, 62, 14, 28, 19, 13, 52, 66, 50]). This area of research, which is also known as knowledge compilation, has become more systematic since [28], which showed that many known and useful representations are subsets of negation normal form (NNF) and correspond to imposing specific properties on NNF. The most fundamental of these properties turned out to be decomposability and determinism, giving rise to the corresponding language of DNNF and its subset, d-DNNF. This chapter is dedicated to DNNF and its subsets, which also include the influential language of OBDDs, and the more recently introduced sentential decision diagrams (SDDs).
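As a hedged illustration of why decomposability and determinism make queries tractable, the following sketch (our own toy example, not taken from the chapter) counts the models of a small circuit in negation normal form, assuming its AND nodes are decomposable (children mention disjoint variables) and its OR nodes are deterministic (children are pairwise inconsistent). Under these assumptions, model counting is a single bottom-up pass.

```python
# Hedged sketch: model counting on a circuit assumed to be a d-DNNF.

def count(node):
    """Return (number of models, set of variables mentioned) for a node."""
    kind = node[0]
    if kind == 'lit':                       # ('lit', variable, polarity)
        return 1, frozenset([node[1]])
    if kind == 'and':                       # decomposable conjunction: multiply
        models, variables = 1, frozenset()
        for child in node[1:]:
            m, v = count(child)
            models, variables = models * m, variables | v
        return models, variables
    if kind == 'or':                        # deterministic disjunction: add
        counts = [count(child) for child in node[1:]]
        variables = frozenset().union(*(v for _, v in counts))
        # pad each child's count over the variables it does not mention
        models = sum(m * 2 ** (len(variables) - len(v)) for m, v in counts)
        return models, variables
    raise ValueError(kind)

# (A and B) or (not A and C): deterministic on A, with decomposable conjunctions.
circuit = ('or',
           ('and', ('lit', 'A', True),  ('lit', 'B', True)),
           ('and', ('lit', 'A', False), ('lit', 'C', True)))
print(count(circuit))   # (4, frozenset({'A', 'B', 'C'})): 4 of the 8 assignments
```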
Preprocessing or data reduction means reducing a problem to something simpler by solving an easy part of the input. This type of algorithm is used in almost every application. In spite of the wide practical applications of preprocessing, a systematic theoretical study of such algorithms remains elusive. The framework of parameterized complexity can be used as an approach to analysing preprocessing algorithms. In this framework, the algorithms have, in addition to the input, an extra parameter that is likely to be small. This has resulted in a study of preprocessing algorithms that reduce the size of the input to a function of the parameter alone (independent of the input size). Such preprocessing algorithms are called kernelization algorithms. In this survey we give an overview of some classical and new techniques in the design of such algorithms.
Introduction
Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is used in many situations. The history of this approach can be traced back to the 1950s [34], when truth functions were simplified using reduction rules. A natural question arises: how can we measure the quality of preprocessing rules proposed for a specific problem? For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this oversight was the following impossibility result: if, starting with an instance I of an NP-hard problem, we could compute in polynomial time an instance I′ equivalent to I and with |I′| < |I|, then it would follow that P = NP, thereby contradicting classical complexity assumptions.
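As a concrete, hedged illustration of kernelization, consider a classical example not specific to this survey: Buss's rule for Vertex Cover parameterized by the cover size k. Any vertex of degree greater than k must belong to every cover of size at most k, and once such vertices are removed a yes-instance can retain at most k² edges, a size depending on the parameter alone.

```python
# Hedged sketch of the classical Buss kernelization for Vertex Cover(k);
# our own illustration. Input: a set of undirected edges and the parameter k.
# Output: an equivalent instance of size bounded by a function of k, or 'no'.

def buss_kernel(edges, k):
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}   # drop self-loops
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        # A vertex of degree > k must be in every cover of size <= k:
        # put it in the cover, delete its incident edges, decrease the budget.
        v = high[0]
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return 'no', set(), k
    if len(edges) > k * k:        # every remaining vertex covers <= k edges
        return 'no', set(), k
    return 'reduced', edges, k    # kernel: at most k^2 edges

print(buss_kernel({(1, 2), (1, 3), (1, 4), (2, 3)}, k=2))
```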
In mathematics and computer science, optimization is the process of finding the best solution from a set of alternatives that satisfy some constraints. Many applications in allied fields of computer science, like machine learning, computer vision, and bioinformatics, involve the solution of an optimization problem. For instance, optimization is used to schedule trains and airplanes, allocate the advertisements we see on television or in connection with internet search results, find the optimal placement of sensors to detect and neutralize security threats, or even to make decisions on the best way to perform medical surgery on a patient.
Optimization problems are generally hard to solve: their solution may involve exhaustively searching over a set of solutions whose size can increase exponentially with the number of variables whose values we want to infer. That said, in practice many of these problems can be solved with remarkable efficiency. This is usually done by dedicated techniques, developed in each application domain, that exploit the “properties” of the problems encountered in practice.
Over the last few decades, researchers working in a number of different disciplines have tried to solve the optimization problems encountered in their respective fields by exploiting some structure or properties inherent in the problems. In some cases, they have been able to isolate classes of optimization problems that can be solved optimally in time polynomial in the number of variables, while in other cases, they have been able to develop efficient algorithms that produce solutions that, although not optimal, are good enough.
Classical computer science textbooks tell us that some problems are 'hard'. Yet many areas, from machine learning and computer vision to theorem proving and software verification, have developed their own sets of tools for effectively solving complex problems. Tractability provides an overview of these different techniques, and of the fundamental concepts and properties used to tame intractability. This book will help you understand what to do when facing a hard computational problem. Can the problem be modelled by convex, or submodular functions? Will the instances arising in practice be of low treewidth, or exhibit another specific graph structure that makes them easy? Is it acceptable to use scalable, but approximate algorithms? A wide range of approaches is presented through self-contained chapters written by authoritative researchers on each topic. As a reference on a core problem in computer science, this book will appeal to theoreticians and practitioners alike.
We present a two-parameter family $(G_{m,k})_{m, k \in \mathbb{N}_{\geq 2}}$ of finite, non-abelian random groups and propose that, for each fixed k, as $m \to \infty$ the commuting graph of $G_{m,k}$ is almost surely connected and of diameter k. We present heuristic arguments in favour of this conjecture, following the lines of classical arguments for the Erdős–Rényi random graph. As well as being of independent interest, our groups would, if our conjecture is true, provide a large family of counterexamples to the conjecture of Iranmanesh and Jafarzadeh that the commuting graph of a finite group, if connected, must have a bounded diameter. Simulations of our model yielded explicit examples of groups whose commuting graphs have all diameters from 2 up to 10.
We address the problem of determining whether the relation induced by a one-rule length-preserving rewrite system is rational. We partially answer a conjecture of Éric Lilin, who conjectured in 1991 that a one-rule length-preserving rewrite system is a rational transduction if and only if the left-hand side u and the right-hand side v of the rule of the system are not quasi-conjugate or are equal; that is, if u and v are distinct, there do not exist words x, y and z such that u = xyz and v = zyx. We prove the ‘only if’ part of this conjecture and identify two non-trivial cases where the ‘if’ part is satisfied.
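The notion of quasi-conjugacy used in the conjecture is easy to test directly; the following short sketch (our own illustration, in Python) checks by brute force over the two cut points whether words u and v admit a factorization u = xyz and v = zyx.

```python
# Hedged illustration: u and v are quasi-conjugate if there exist (possibly empty)
# words x, y, z with u = xyz and v = zyx. This test ignores the distinctness
# condition, so equal words trivially pass (take y = u and x = z empty).

def quasi_conjugate(u, v):
    if len(u) != len(v):
        return False
    n = len(u)
    for i in range(n + 1):            # u = x y z with x = u[:i]
        for j in range(i, n + 1):     #               y = u[i:j], z = u[j:]
            x, y, z = u[:i], u[i:j], u[j:]
            if z + y + x == v:
                return True
    return False

print(quasi_conjugate("abba", "baab"))   # True: x = 'ab', y = '', z = 'ba'
print(quasi_conjugate("aab", "abb"))     # False: no rearrangement zyx gives v
```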
We present a new incremental algorithm for minimising deterministic finite automata. It runs in quadratic time for any practical application and may be halted at any point, returning a partially minimised automaton. Hence, the algorithm may be applied to a given automaton at the same time as it is processing a string for acceptance. We also include some experimental comparative results.
In a 1976 paper published in Science, Knuth presented an algorithm to sample (non-uniform) self-avoiding walks crossing a square of side k. From this sample, he constructed an estimator for the number of such walks. The quality of this estimator is directly related to the (relative) variance of a certain random variable $X_k$. From his experiments, Knuth suspected that this variance was extremely large (so that the estimator would not be very efficient). But how large? For the analogous Rosenbluth algorithm, which samples unconfined self-avoiding walks of length n, the variance of the corresponding estimator is believed to be exponential in n.
A few years ago, Bassetti and Diaconis showed that, for a sampler à la Knuth that generates walks crossing a k × k square and consisting of North and East steps, the relative variance is only $O(\sqrt k)$. In this note we take one step further and show that, for walks consisting of North, South and East steps, the relative variance jumps to $2^{k(k+1)}/(k+1)^{2k}$. This is exponential in the average length of the walks, which is of order $k^2$. We also obtain partial results for general self-avoiding walks crossing a square, suggesting that the relative variance could be exponential in $k^2$ (which is again the average length of these walks).
Knuth's algorithm is a basic example of a widely used technique called sequential importance sampling. The present paper, following the paper by Bassetti and Diaconis, is one of very few examples where the variance of the estimator can be found.
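To make the sampling scheme concrete, here is a hedged sketch (our own code, not from the paper) of a Knuth-style sequential importance sampler for the simplest case mentioned above: monotone walks with North and East steps crossing a k × k square. The walk is grown step by step, choosing uniformly among the currently allowed moves, and the product of the number of choices is an unbiased estimate of the number of walks, whose exact value C(2k, k) lets us check the estimator.

```python
# Hedged illustration of sequential importance sampling in the North/East case.
import random
from math import comb
from statistics import mean

def knuth_estimate(k, rng=random):
    """One run: grow a monotone walk from (0, 0) to (k, k), picking uniformly
    among the allowed steps; return the product of the choice counts."""
    x = y = 0
    weight = 1
    while (x, y) != (k, k):
        moves = []
        if x < k:
            moves.append((x + 1, y))   # East
        if y < k:
            moves.append((x, y + 1))   # North
        weight *= len(moves)
        x, y = rng.choice(moves)
    return weight

k, n_samples = 5, 10_000
estimate = mean(knuth_estimate(k) for _ in range(n_samples))
print(estimate, comb(2 * k, k))        # exact count is C(2k, k) = 252 for k = 5
```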
A rational polyhedron $P \subseteq \mathbb{R}^n$ is a finite union of simplexes in $\mathbb{R}^n$ with rational vertices. P is said to be $\mathbb{Z}$-homeomorphic to the rational polyhedron $Q \subseteq \mathbb{R}^m$ if there is a piecewise linear homeomorphism η of P onto Q such that each linear piece of η and $\eta^{-1}$ has integer coefficients. When n = m, $\mathbb{Z}$-homeomorphism amounts to continuous $\mathcal{G}_n$-equidissectability, where $\mathcal{G}_n = GL(n,\mathbb{Z}) \ltimes \mathbb{Z}^{n}$ is the affine group over the integers, i.e., the group of all affinities on $\mathbb{R}^{n}$ that leave the lattice $\mathbb{Z}^{n}$ invariant. $\mathcal{G}_n$ yields a geometry on the set of rational polyhedra. For each d = 0, 1, 2, . . ., we define a rational measure $\lambda_d$ on the set of rational polyhedra, and show that any two $\mathbb{Z}$-homeomorphic rational polyhedra $P \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^m$ satisfy $\lambda_d(P) = \lambda_d(Q)$. $\lambda_n(P)$ coincides with the n-dimensional Lebesgue measure of P. If $0 \le \dim P = d < n$ then $\lambda_d(P) > 0$. For rational d-simplexes T lying in the same d-dimensional affine subspace of $\mathbb{R}^n$, $\lambda_d(T)$ is proportional to the d-dimensional Hausdorff measure of T. We characterize $\lambda_d$ among all unimodular invariant valuations.
Based on the formal framework of reaction systems by Ehrenfeucht and Rozenberg [Fund. Inform. 75 (2007) 263–280], reaction automata (RAs) have been introduced by Okubo et al. [Theoret. Comput. Sci. 429 (2012) 247–257] as language acceptors with a multiset rewriting mechanism. In this paper, we continue the investigation of RAs with a focus on the two manners of rule application: maximally parallel and sequential. Considering restrictions on the workspace and the λ-input mode, we introduce the corresponding variants of RAs and investigate their computational powers. In order to explore Turing machines (TMs) that correspond to RAs, we also introduce a new variant of TMs with restricted workspace, called s(n)-restricted TMs. The main results include the following: (i) for a language L and a function s(n), L is accepted by an s(n)-bounded RA with λ-input mode in sequential manner if and only if L is accepted by a log s(n)-bounded one-way TM; (ii) if a language L is accepted by a linear-bounded RA in sequential manner, then L is also accepted by a P automaton [Csuhaj-Varjú and Vaszil, vol. 2597 of Lect. Notes Comput. Sci. Springer (2003) 219–233] in sequential manner; (iii) the class of languages accepted by linear-bounded RAs in maximally parallel manner is incomparable to the class of languages accepted by RAs in sequential manner.
Several types of systems of parallel communicating restarting automata are introduced and studied. The main result shows that, for all types of restarting automata X, centralized systems of restarting automata of type X have the same computational power as non-centralized systems of restarting automata of the same type and the same number of components. This result is proved by a direct simulation. In addition, it is shown that for one-way restarting automata without auxiliary symbols, systems of degree at least two are more powerful than the component automata themselves. Finally, a lower and an upper bound are given for the expressive power of systems of parallel communicating restarting automata.
The recently introduced model of transducing by observing is compared with traditional models for computing transductions on the one hand and the recently introduced restarting transducers on the other hand. Most noteworthy, transducing observer systems with length-reducing rules are almost equivalent to RRWW-transducers. With painter rules we obtain a larger class of relations that additionally includes nearly all rational relations.