Markov Random Fields have been successfully applied to many computer vision problems such as image segmentation, 3D reconstruction, and stereo. The problem of estimating the Maximum a Posteriori (MAP) solution of models such as Markov Random Fields (MRFs) can be formulated as a function minimization problem, which has made function minimization an indispensable tool in computer vision. Minimizing a function of discrete variables is, in general, NP-hard. However, functions belonging to certain classes, such as submodular functions, can be minimized in polynomial time. In this chapter, we discuss examples of popular models used in computer vision for which the MAP inference problem results in a tractable function minimization problem. We also discuss how algorithms used in computer vision overcome the challenges introduced by the scale and form of the function minimization problems encountered in practice.
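As a hedged illustration of this connection, the minimal sketch below (not taken from the chapter; the toy energies, node names, and use of networkx are illustrative assumptions) minimizes a small binary, pairwise-submodular energy exactly by reducing it to an s–t minimum cut:

```python
# Minimal sketch: exact MAP inference for a binary, pairwise-submodular MRF
# via the standard s-t min-cut construction. The tiny 1-D "image" and the
# costs below are illustrative, not data from the chapter.
import networkx as nx

# Unary costs theta[p] = (cost of label 0, cost of label 1) for pixel p.
theta = {0: (0.0, 2.0), 1: (1.5, 0.5), 2: (2.0, 0.0)}
# Pairwise Potts weights w[p, q] >= 0 (submodular in the binary case).
w = {(0, 1): 1.0, (1, 2): 1.0}

G = nx.DiGraph()
for p, (cost0, cost1) in theta.items():
    G.add_edge('s', p, capacity=cost1)   # edge cut iff p takes label 1
    G.add_edge(p, 't', capacity=cost0)   # edge cut iff p takes label 0
for (p, q), weight in w.items():
    G.add_edge(p, q, capacity=weight)    # cut iff the labels of p and q differ
    G.add_edge(q, p, capacity=weight)

cut_value, (source_side, _) = nx.minimum_cut(G, 's', 't')
labeling = {p: (0 if p in source_side else 1) for p in theta}
print(cut_value, labeling)               # the cut value equals the minimum energy
```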
Labeling Problems in Computer Vision
Many problems in computer vision and scene understanding can be formulated in terms of finding the most probable values of certain hidden or unobserved variables. These variables encode some property of the scene and can be continuous or discrete. Such problems are commonly referred to as labeling problems, as they involve assigning a label to each hidden variable. Labeling problems occur in many forms, from the lattice-based problems of dense stereo and image segmentation discussed in [6, 40] to the use of pictorial structures for object recognition as done by [10]. Some examples of problems which can be formulated in this manner are shown in Figure 10.1.
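For concreteness, a labeling problem is commonly written as the minimization of an energy over a discrete label set; the form below is a generic textbook formulation, not necessarily the precise notation of [6, 40]:

```latex
\hat{x} \;=\; \arg\min_{x \in \mathcal{L}^n} E(x),
\qquad
E(x) \;=\; \sum_{p \in \mathcal{V}} \phi_p(x_p)
        \;+\; \sum_{(p,q) \in \mathcal{E}} \psi_{pq}(x_p, x_q),
```

where the hidden variables $x_p$ take values in a label set $\mathcal{L}$, the $\phi_p$ are unary (data) costs, and the $\psi_{pq}$ are pairwise (smoothness) costs defined on the edges $\mathcal{E}$ of the underlying lattice or graph.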
One approach for dealing with intractability is to utilize representations that permit certain queries of interest to be computable in polytime. Such tractable representations will ultimately be exponential in size for certain problems and they may also not be suitable for direct specification by users. Hence, they are typically generated from other specifications through a process known as knowledge compilation. In this chapter, we review a subset of these tractable representations, known as decomposable negation normal forms (DNNFs), which have proved influential in a number of applications, including formal verification, model-based diagnosis and probabilistic reasoning.
Introduction
Many areas of computer science have shown a great interest in tractable and canonical representations of propositional knowledge bases (aka, Boolean functions). The ordered binary decision diagram (OBDD) is one such representation that received much attention and proved quite influential in a variety of areas [13]. Within AI, the study of tractable representations has also had a long tradition (e.g., [61, 30, 31, 49, 62, 14, 28, 19, 13, 52, 66, 50]). This area of research, which is also known as knowledge compilation, has become more systematic since [28], which showed that many known and useful representations are subsets of negation normal form (NNF) and correspond to imposing specific properties on NNF. The most fundamental of these properties turned out to be decomposability and determinism, giving rise to the corresponding language of DNNF and its subset, d-DNNF. This chapter is dedicated to DNNF and its subsets, which also include the influential language of OBDDs, and the more recently introduced sentential decision diagrams (SDDs).
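To make decomposability concrete, here is a minimal sketch (an illustration under our own assumptions, not code from the chapter) that represents an NNF circuit and checks the defining property of DNNF: every AND node's conjuncts must mention pairwise disjoint sets of variables.

```python
# Minimal sketch of the decomposability check that distinguishes DNNF from
# general NNF. Node layout and the example formula are illustrative.
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Node:
    kind: str                       # 'lit', 'and', or 'or'
    var: str = ''                   # variable name for literals
    neg: bool = False               # True for a negative literal
    children: Tuple['Node', ...] = ()

def variables(n: Node) -> FrozenSet[str]:
    if n.kind == 'lit':
        return frozenset({n.var})
    return frozenset().union(*(variables(c) for c in n.children))

def is_decomposable(n: Node) -> bool:
    if n.kind == 'lit':
        return True
    if n.kind == 'and':
        seen = set()
        for c in n.children:
            vs = variables(c)
            if seen & vs:
                return False        # two conjuncts share a variable: not DNNF
            seen |= vs
    return all(is_decomposable(c) for c in n.children)

# (A and B) or (not A and C): each AND ranges over disjoint variables, so DNNF.
lit = lambda v, neg=False: Node('lit', v, neg)
f = Node('or', children=(Node('and', children=(lit('A'), lit('B'))),
                         Node('and', children=(lit('A', True), lit('C')))))
print(is_decomposable(f))           # True
```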
Preprocessing or data reduction means reducing a problem to something simpler by solving an easy part of the input. This type of algorithm is used in almost every application. In spite of the wide practical applicability of preprocessing, a systematic theoretical study of such algorithms remains elusive. The framework of parameterized complexity can be used as an approach to analysing preprocessing algorithms. In this framework, the algorithms have, in addition to the input, an extra parameter that is likely to be small. This has resulted in a study of preprocessing algorithms that reduce the size of the input to a quantity bounded by a function of the parameter alone (independent of the input size). Such preprocessing algorithms are called kernelization algorithms. In this survey we give an overview of some classical and new techniques in the design of such algorithms.
Introduction
Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is used in many situations. The history of this approach can be traced back to the 1950s [34], when truth functions were simplified using reduction rules. A natural question arises: how can we measure the quality of preprocessing rules proposed for a specific problem? For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this oversight was the following impossibility result: if, starting with an instance I of an NP-hard problem, we could always compute in polynomial time an instance I′ equivalent to I and with |I′| < |I|, then repeatedly applying this compression would shrink any instance to constant size and yield a polynomial-time algorithm for the problem; it would follow that P=NP, thereby contradicting classical complexity assumptions.
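As a concrete example of such reduction rules, the sketch below implements the classical Buss kernelization for Vertex Cover (illustrative code, not taken from the survey): a vertex whose degree exceeds the remaining budget must be in any small cover, and once no such vertex exists, more edges than the square of the budget rule out a solution.

```python
# Minimal sketch of Buss's kernelization for Vertex Cover: reduction rules
# shrink the instance to a size depending only on the parameter k.
def buss_kernel(edges, k):
    """Return (kernel_edges, remaining_budget, forced) or None for a no-instance."""
    edges = {frozenset(e) for e in edges}
    forced = set()                                   # vertices that must be in the cover
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k - len(forced):                  # v covers more edges than the budget can miss
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    budget = k - len(forced)
    if budget < 0 or len(edges) > budget * budget:   # too many edges left: no k-cover exists
        return None
    return edges, budget, forced

# Toy usage: a star with 5 leaves plus a disjoint triangle, parameter k = 3.
print(buss_kernel([(0, i) for i in range(1, 6)] + [(6, 7), (7, 8), (6, 8)], k=3))
```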
In mathematics and computer science, optimization is the process of finding the best solution from a set of alternatives that satisfy some constraints. Many applications in allied fields of computer science, such as machine learning, computer vision, and bioinformatics, involve the solution of an optimization problem. For instance, optimization is used to schedule trains and airplanes, allocate the advertisements we see on television or in connection with internet search results, find the optimal placement of sensors to detect and neutralize security threats, or even to make decisions about the best way to perform medical surgery on a patient.
Optimization problems are generally hard to solve – their solution may involve exhaustively searching over a set of solutions whose size could increase exponentially with the number of variables whose values we may want to infer. That said, in practice, many of these problems can often be solved with remarkable efficiency. This is usually done by dedicated techniques, developed in each and every application domain, that exploit the “properties” of the problems encountered in practice.
Over the last few decades, researchers working in a number of different disciplines have tried to solve optimization problems that are encountered in their respective fields by exploiting some structure or properties inherent in the problems. In some cases, they have been able to isolate classes of optimization problems that can be solved optimally in time polynomial in the number of variables, while in other cases, they have been able to develop efficient algorithms that can produce solutions that, although not optimal, are good enough.
This paper presents a reference framework for the configuration process. The reference framework is established through an extensive review of existing literature, and as such consolidates an extensive theoretical base. The review of the literature shows a broadening of the understanding of the configuration task. The definition of the configuration task is somewhat ambiguous because different research groups define configuration tasks differently. This paper proposes a reference framework for configuration that permits a more precise understanding of a configuration task, a definition of the basic concepts in product configuration, and a total configuration system view that describes how operators come together to perform the configuration task in the configuration process. We will define the product, the product model, the configuration task, and the configuration system, and place them in perspective within the theory of technical systems, where we describe the configuration process and the different abstraction levels of configurations. We will also use the resulting framework to describe sales configuration, technical configuration, and reconfiguration. We do this to synthesize previous work, to clarify and make coherent the definitions of relevant terms, to extend the definition of product configuration to include “softer” products such as information and services, and finally, to provide a comparative framework for analyzing work done in the field of product configuration. The total configuration system, together with the definition of key concepts, constitutes a strong reference framework for working with, developing, and analyzing configuration systems.
In the world of Facebook, Twitter and Yelp, water-cooler conversations with co-workers and backyard small talk with neighbors have moved from the physical world to the digital arena. In this new landscape, organizations ranging from Fortune 500 companies to government agencies to political campaigns continuously monitor online opinions in an effort to guide their actions. Are consumers satisfied with our product? How are our policies perceived? Do voters agree with our platform? Measuring online opinion is more complex than just reading a few posted reviews. Social media is replete with noise and chatter that can contaminate monitoring efforts. By knowing what shapes online opinions, organizations can better uncover the valuable insights hidden in the social media chatter and better inform strategy. This book can help anyone facing the challenge of making sense of social media data to move beyond the current practice of social media monitoring to a more comprehensive use of social media intelligence.
SuperFractals, first published in 2006, is the successor to Fractals Everywhere, in which the power and beauty of Iterated Function Systems were introduced and applied to producing startling and original images that reflect complex structures found for example in nature. This provoked the question of whether there is a deeper connection between topology, geometry, IFS and codes on the one hand and biology, DNA and protein development on the other. Now, 20 years later, Barnsley explains how IFS have developed in order to address this issue. Ideas such as fractal tops and superIFS are introduced, and the classical deterministic approach is combined with probabilistic ideas to produce new mathematics and algorithms that open a whole theory that could have applications in computer graphics, bioinformatics, economics, signal processing and beyond. For the first time these ideas are explained in book form, and illustrated with breathtaking pictures.
A valuable resource for working programmers, as well as a fount of useful algorithmic tools for computer scientists, this new edition of the popular calendars book expands the treatment of the previous edition to new calendar variants: generic cyclical calendars and astronomical lunar calendars as well as the Korean, Vietnamese, Aztec, and Tibetan calendars. The authors frame the calendars of the world in a completely algorithmic form, allowing easy conversion among these calendars and the determination of secular and religious holidays. LISP code for all the algorithms is available on the Web.
A novel swarm intelligence approach for combinatorial optimization is proposed, which we call probability increment based swarm optimization (PIBSO). The population evolution mechanism of PIBSO is described. Each state in the search space has a probability of being chosen, and a rule for increasing the probabilities of states is established. An incremental factor is proposed to update the probability of a state, with its value determined by the fitness of the state, so that states with better fitness have higher probabilities. The usual roulette wheel selection is employed to select states, and population evolution is driven by roulette wheel selection together with state probability updating. The most distinctive feature of PIBSO is that roulette wheel selection and probability updating produce a trade-off between global and local search. When PIBSO is applied to the printed circuit board assembly optimization problem (PCBAOP), it outperforms an existing genetic algorithm and adaptive particle swarm optimization in both tour length and CPU running time. The reasons for these advantages are analyzed in detail. The success of the PCBAOP application verifies the effectiveness and efficiency of PIBSO and shows that it is a good method for combinatorial optimization in engineering.
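The sketch below illustrates the probability-increment idea in its simplest form; the update rule, parameter names, and toy fitness function are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of the probability-increment mechanism: states with better
# fitness receive larger probability increments, and roulette-wheel selection
# drives the population between exploration and exploitation.
import random

def roulette_select(probs):
    r = random.uniform(0.0, sum(probs.values()))
    acc = 0.0
    for state, p in probs.items():
        acc += p
        if r <= acc:
            return state
    return state                                  # numerical fallback

def pibso_sketch(states, fitness, iters=200, alpha=0.1):
    probs = {s: 1.0 for s in states}              # uniform initial weights
    for _ in range(iters):
        s = roulette_select(probs)
        # The incremental factor grows with the fitness of the chosen state,
        # so better states accumulate probability faster (local search),
        # while roulette selection keeps exploring the others (global search).
        probs[s] += alpha * fitness(s)
    return max(probs, key=probs.get)

best = pibso_sketch(range(10), fitness=lambda s: 1.0 / (1.0 + abs(s - 7)))
print(best)                                       # most likely 7
```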
Classical computer science textbooks tell us that some problems are 'hard'. Yet many areas, from machine learning and computer vision to theorem proving and software verification, have defined their own set of tools for effectively solving complex problems. Tractability provides an overview of these different techniques, and of the fundamental concepts and properties used to tame intractability. This book will help you understand what to do when facing a hard computational problem. Can the problem be modelled by convex, or submodular functions? Will the instances arising in practice be of low treewidth, or exhibit another specific graph structure that makes them easy? Is it acceptable to use scalable, but approximate algorithms? A wide range of approaches is presented through self-contained chapters written by authoritative researchers on each topic. As a reference on a core problem in computer science, this book will appeal to theoreticians and practitioners alike.
We present a two-parameter family $(G_{m,k})_{m, k \in \mathbb{N}_{\geq 2}}$ of finite, non-abelian random groups and conjecture that, for each fixed $k$, as $m \to \infty$ the commuting graph of $G_{m,k}$ is almost surely connected and of diameter $k$. We present heuristic arguments in favour of this conjecture, following the lines of classical arguments for the Erdős–Rényi random graph. As well as being of independent interest, our groups would, if our conjecture is true, provide a large family of counterexamples to the conjecture of Iranmanesh and Jafarzadeh that the commuting graph of a finite group, if connected, must have bounded diameter. Simulations of our model yielded explicit examples of groups whose commuting graphs have all diameters from 2 up to 10.
Beginning with a deterministic distributed feedback control for nonholonomic vehicle formations, we develop a stochastic optimal control approach in which agents enhance their non-optimal controls with additive correction terms based on the Hamilton–Jacobi–Bellman equation, making them optimal and robust to uncertainties. In order to avoid discretization of the high-dimensional cost-to-go function, we exploit the stochasticity and distributed structure of the problem to develop an equivalent Kalman smoothing problem in a continuous state space using a path integral representation. Our approach is illustrated by numerical examples in which agents achieve a formation with their neighbors using only local observations.
Consider the setting of sparse graphs on N vertices, where the vertices have distinct “names”, which are strings of length O(log N) from a fixed finite alphabet. For many natural probability models, the entropy grows as c N log N for some model-dependent rate constant c. The mathematical content of this paper is the (often easy) calculation of c for a variety of models, in particular for various standard random graph models adapted to this setting. Our broader purpose is to publicize this particular setting as a natural setting for future theoretical study of data compression for graphs, and (more speculatively) for discussion of unorganized versus organized complexity.
Many research works on the control of nonholonomic wheeled mobile robots (WMRs) do not consider the actuator saturation problem or the absence of velocity sensors in practice. Actuator saturation deteriorates the tracking performance of the controller, and the use of velocity sensors increases the cost and weight of WMR systems. This paper simultaneously addresses these problems by designing a saturated output feedback controller for uncertain nonholonomic WMRs. First, a second-order input–output model of nonholonomic WMRs is developed by defining a suitable set of output equations. Then a saturated adaptive robust tracking controller is proposed that requires no velocity measurements; for this purpose, a nonlinear saturated observer is used to estimate the robot velocities. The risk of actuator saturation is effectively reduced by utilizing saturation functions in the design of the observer–controller scheme. Semi-global uniform ultimate boundedness of the error signals is guaranteed by Lyapunov stability analysis. Finally, simulation results are provided to show the effectiveness of the proposed controller. A comparative study with a recent work by the authors is also presented to illustrate that the proposed saturated controller is more effective when the WMR actuators are subject to saturation.