This book aims to be an introduction to model theory which can be used without any background in logic. We start from scratch, introducing first-order logic, structures, languages, etc., but move on fairly quickly to the fundamental results in model theory and stability theory. We also decided to cover simple theories and Hrushovski constructions, which over the last decade have developed into an important subject. We try to give the necessary background in algebra, combinatorics and set theory either in the course of the text or in the corresponding section of the appendices. The exercises form an integral part of the book. Some of them are used later on; others complement the text or present aspects of the theory that we felt should not be completely ignored. For the most important exercises (and the more difficult ones) we include (hints for) solutions at the end of the book. Those exercises which will be used in the text have their solution marked with an asterisk.
The book falls into four parts. The first three chapters introduce the basics, as would be covered in a course giving a general introduction to model theory. This first part ends with Chapter 4, which introduces and explores the notion of a type, the topology on the space of types, and a way to ensure that a given type will not be realized in a model to be constructed. The chapter ends with Fraïssé's amalgamation method, a simple but powerful tool for constructing models.
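For orientation, the amalgamation property at the heart of Fraïssé's method can be stated as follows; this is the standard textbook formulation, not a quotation from the book.

```latex
% Amalgamation property (AP) for a class K of finitely generated structures
\forall A, B, C \in \mathcal{K}\ \ \forall f\colon A \hookrightarrow B,\ g\colon A \hookrightarrow C
\ \ \exists D \in \mathcal{K}\ \ \exists f'\colon B \hookrightarrow D,\ g'\colon C \hookrightarrow D
\ \ \text{such that}\ \ f' \circ f = g' \circ g.
```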
Commercial Users of Functional Programming (CUFP) is a yearly workshop that is aimed at the community of software developers who use functional programming in real-world settings. This scribe report covers the talks that were delivered at the 2011 workshop, which was held in association with ICFP in Tokyo. The goal of the report is to give the reader a sense of what went on, rather than to reproduce the full details of the talks. Videos and slides from all the talks are available online at http://cufp.org.
Despite the historical difference in focus between AI planning techniques and Integer Programming (IP) techniques, recent research has shown that IP techniques hold significant promise for solving AI planning problems. This paper provides approaches to encoding AI planning problems as IP problems, describes some of the more significant issues that arise in using IP for AI planning, and discusses promising directions for future research.
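To make the encoding idea concrete, here is a minimal step-indexed 0-1 formulation of STRIPS-style planning, a generic sketch in the spirit of such encodings rather than the paper's exact model. Binary variables x_{a,t} and y_{f,t} assert that action a executes, and that fluent f holds, at step t.

```latex
% Generic step-indexed 0-1 encoding of STRIPS-style planning (a sketch)
\begin{aligned}
\min\ \sum_{a,t} x_{a,t} \quad \text{s.t.} \quad
  & x_{a,t} \le y_{f,t-1} \quad (f \in \mathrm{pre}(a)), \\
  & y_{f,t} \ge x_{a,t} \quad (f \in \mathrm{add}(a)), \qquad
    y_{f,t} \le 1 - x_{a,t} \quad (f \in \mathrm{del}(a)), \\
  & y_{f,t} \le y_{f,t-1} + \sum_{a \,:\, f \in \mathrm{add}(a)} x_{a,t}
    \quad \text{(frame: a fluent becomes true only via an action)}, \\
  & y_{g,T} = 1 \ \ (g \in \mathrm{Goal}), \qquad x_{a,t},\, y_{f,t} \in \{0,1\}.
\end{aligned}
```

Conflicting actions at the same step additionally require mutual-exclusion constraints such as \(\sum_a x_{a,t} \le 1\).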
We investigate several geometric models of networks that simultaneously have some nice global properties, including the small-diameter property; the small-community phenomenon, which is defined to capture the common experience that (almost) everyone in society belongs to some meaningful small communities; and the power-law degree distribution, for which our result significantly strengthens those given in van den Esker (2008) and Jordan (2010). These results, together with our previous work in Li and Peng (2011), build a mathematical foundation for the study of both communities and the small-community phenomenon in various networks.
In the proof of the power-law degree distribution, we develop the method of alternating concentration analysis, which builds a concentration inequality by alternately and iteratively applying both the sub- and super-martingale inequalities. This appears to be a powerful technique with further potential applications.
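For orientation, the two one-sided ingredients being alternated are standard Azuma-Hoeffding-type bounds (textbook statements, not the paper's exact inequalities): for a process \((X_t)\) with bounded differences \(|X_t - X_{t-1}| \le c_t\),

```latex
% One-sided Azuma-Hoeffding bounds (standard statements)
\Pr[X_n - X_0 \ge \lambda] \le \exp\!\left(-\frac{\lambda^2}{2\sum_{t=1}^{n} c_t^2}\right)
  \quad \text{if } (X_t) \text{ is a supermartingale},
\qquad
\Pr[X_n - X_0 \le -\lambda] \le \exp\!\left(-\frac{\lambda^2}{2\sum_{t=1}^{n} c_t^2}\right)
  \quad \text{if } (X_t) \text{ is a submartingale}.
```

Applying the two bounds alternately to processes sandwiching the quantity of interest yields two-sided concentration.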
Valentini (1983) presented a proof of cut-elimination for the provability logic GL using a sequent calculus whose sequents are built from sets rather than multisets, thus avoiding an explicit contraction rule. From a formal point of view, it is more satisfying to explicitly identify the applications of the contraction rule that are ‘hidden’ in proofs of cut-elimination for such sequent calculi. There is often an underlying assumption that the move to a proof of cut-elimination for sequents built from multisets is straightforward. Recently, however, it has been claimed that Valentini’s arguments for eliminating cut do not terminate when applied to a multiset formulation of the calculus with an explicit rule of contraction. The claim has led to much confusion, and various authors have sought new proofs of cut-elimination for GL in a multiset setting.
Here we refute this claim by placing Valentini’s arguments in a formal setting and proving cut-elimination for sequents built from multisets. The multiset setting enables us to accurately account for the interplay between the weakening and contraction rules. Furthermore, Valentini’s original proof relies on a novel induction parameter called ‘width’, which is computed ‘globally’, and it is difficult to verify the correctness of his induction argument based on width. In our formulation, however, verification of the induction argument is straightforward. Finally, the multiset setting also introduces a new complication in the case of contractions above cut when the cut-formula is boxed; we deal with this using a new transformation based on Valentini’s original arguments.
Finally, we discuss the possibility of adapting this cut-elimination procedure to other logics axiomatizable by formulae of a syntactically similar form to the GL axiom.
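For reference, the GL axiom (left) and one standard formulation of its characteristic sequent rule (right), of the kind used in Valentini-style calculi, are:

```latex
% GL axiom and a standard sequent rule for it
\Box(\Box A \to A) \to \Box A
\qquad\qquad
\frac{\Gamma,\ \Box\Gamma,\ \Box A \;\Rightarrow\; A}{\Box\Gamma \;\Rightarrow\; \Box A}\;(\mathrm{GLR})
```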
Planning research in Artificial Intelligence (AI) has often focused on problems where there are cascading levels of action choice and complex interactions between actions. In contrast, scheduling research has focused on much larger problems where there is little action choice, but the resulting ordering problem is hard. In this paper, we give an overview of AI planning and scheduling techniques, focusing on their similarities, differences, and limitations. We also argue that many difficult practical problems lie somewhere between planning and scheduling, and that neither area has the right set of tools for solving these vexing problems.
In the philosophy of mathematics, indispensability arguments aim to show that we are justified in believing that mathematical objects exist on the grounds that we make indispensable reference to such objects in our best scientific theories (Quine, 1981a; Putnam, 1979a) and in our everyday reasoning (Ketland, 2005). I wish to defend a particular objection to such arguments, called instrumental nominalism. Existing formulations of this objection are either insufficiently precise or themselves make reference to mathematical objects or possible worlds. I show how to formulate the position precisely without making any such reference. To do so, it is necessary to supplement the standard modal operators with two new operators that allow us to shift the locus of evaluation for a subformula. I motivate this move and give a semantics for the new operators.
The recent effort to integrate techniques from the fields of artificial intelligence and operations research has been motivated in part by the fact that scientists in each group are often unacquainted with recent (and not so recent) progress in the other field. Our goal in this paper is to introduce the artificial intelligence community to pseudo-Boolean representation and cutting plane proofs, and to introduce the operations research community to restricted learning methods such as relevance-bounded learning. Complete methods for solving satisfiability problems are necessarily bounded from below by the length of the shortest proof of unsatisfiability; the fact that cutting plane proofs of unsatisfiability can be exponentially shorter than the shortest resolution proof can thus in theory lead to substantial improvements in the performance of complete satisfiability engines. Relevance-bounded learning is a method for bounding the size of a learned constraint set. It is currently the best artificial intelligence strategy for deciding which learned constraints to retain and which to discard. We believe that these two elements, or some analogous form of them, are necessary ingredients for improving the performance of satisfiability algorithms generally. We also present a new cutting plane proof of the pigeonhole principle that is of size n², and show how to implement some intelligent backtracking techniques using pseudo-Boolean representation.
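To illustrate, using the standard encoding rather than the paper's exact derivation: the pigeonhole principle for n+1 pigeons and n holes is expressed pseudo-Booleanly with variables x_{ij} (‘pigeon i sits in hole j’) as

```latex
% Pseudo-Boolean encoding of the pigeonhole principle PHP^{n+1}_n
\sum_{j=1}^{n} x_{ij} \ge 1 \quad (1 \le i \le n+1), \qquad
\sum_{i=1}^{n+1} x_{ij} \le 1 \quad (1 \le j \le n), \qquad x_{ij} \in \{0,1\}.
```

Summing the first family gives \(\sum_{i,j} x_{ij} \ge n+1\) while summing the second gives \(\sum_{i,j} x_{ij} \le n\), an immediate contradiction reached after only about n² additions over the n(n+1) variables, whereas resolution refutations of the clausal form are exponentially long.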
This paper describes ILP-PLAN, a framework for solving AI planning problems represented as integer linear programs. ILP-PLAN extends the planning as satisfiability framework to handle plans with resources, action costs, and complex objective functions. We show that challenging planning problems can be effectively solved using both traditional branch-and-bound integer programming solvers and efficient new integer local search algorithms. ILP-PLAN can find better-quality solutions for a set of hard benchmark logistics planning problems than any earlier system.
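As a hedged illustration of the framework's flavour, the following toy model is built with the open-source PuLP library; it is not the actual ILP-PLAN system or its benchmarks, and the domain, horizon, and costs are made up. It minimises total action cost subject to reaching a goal within a fixed horizon.

```python
# Toy planning-as-ILP sketch with action costs (hypothetical example).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

T = 3                                    # plan horizon (time steps)
actions = {"drive": 5.0, "fly": 20.0}    # action -> cost (made-up numbers)

prob = LpProblem("toy_ilp_plan", LpMinimize)
x = {(a, t): LpVariable(f"x_{a}_{t}", cat=LpBinary)
     for a in actions for t in range(T)}
at_goal = {t: LpVariable(f"goal_{t}", cat=LpBinary) for t in range(T + 1)}

# Objective: minimise total action cost.
prob += lpSum(actions[a] * x[a, t] for a in actions for t in range(T))

prob += at_goal[0] == 0                  # initially not at the goal
for t in range(T):
    # The goal holds at t+1 only if it already held or an action achieved it.
    prob += at_goal[t + 1] <= at_goal[t] + lpSum(x[a, t] for a in actions)
    for a in actions:                    # either toy action achieves the goal
        prob += at_goal[t + 1] >= x[a, t]
    prob += lpSum(x[a, t] for a in actions) <= 1   # one action per step
prob += at_goal[T] == 1                  # goal must hold at the horizon

prob.solve()
print(LpStatus[prob.status],
      [(a, t) for (a, t) in x if x[a, t].value() == 1])
```

On this instance the solver picks a single ‘drive’ action, the cheapest way to satisfy the goal constraint.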
We study a first-order functional language with the novel combination of the ideas of refinement type (the subset of a type to satisfy a Boolean expression) and type-test (a Boolean expression testing whether a value belongs to a type). Our core calculus can express a rich variety of typing idioms; for example, intersection, union, negation, singleton, nullable, variant, and algebraic types are all derivable. We formulate a semantics in which expressions denote terms, and types are interpreted as first-order logic formulas. Subtyping is defined as valid implication between the semantics of types. The formulas are interpreted in a specific model that we axiomatize using standard first-order theories. On this basis, we present a novel type-checking algorithm able to eliminate many dynamic tests and to detect many errors statically. The key idea is to rely on a Satisfiability Modulo Theories (SMT) solver to compute subtyping efficiently. Moreover, using an SMT solver allows us to show the uniqueness of normal forms for non-deterministic expressions, provide precise counterexamples when type-checking fails, detect empty types, and compute instances of types statically and at run-time.
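The reduction of subtyping to satisfiability can be sketched in a few lines. The snippet below uses the z3-solver Python package as an illustrative SMT solver, an assumption for this sketch rather than a claim about the paper's implementation: T1 <: T2 exactly when the formula ‘[[T1]] and not [[T2]]’ has no model, and any model found is a concrete counterexample.

```python
# Illustrative sketch: deciding refinement subtyping with an SMT solver,
# using the z3-solver package (an assumed stand-in, not the paper's code).
from z3 import Int, And, Not, Solver, sat

x = Int("x")
pos    = x > 0    # semantics of the refinement type {x : Int | x > 0}
nonneg = x >= 0   # semantics of the refinement type {x : Int | x >= 0}

def subtype(t1, t2):
    """t1 <: t2 iff 't1(x) and not t2(x)' is unsatisfiable."""
    s = Solver()
    s.add(And(t1, Not(t2)))
    if s.check() == sat:
        print("counterexample:", s.model())   # witness that t1 is not <: t2
        return False
    return True

print(subtype(pos, nonneg))   # True: every positive integer is non-negative
print(subtype(nonneg, pos))   # False, with counterexample x = 0
```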
Optimization and constraint satisfaction methods are complementary to a large extent, and there has been much recent interest in combining them. Yet no generally accepted principle or scheme for their merger has evolved. We propose a scheme based on two fundamental dualities: the duality of search and inference, and the duality of strengthening and relaxation. Optimization as well as constraint satisfaction methods can be seen as exploiting these dualities in their respective ways. Our proposal is that rather than employ either type of method exclusively, one can focus on how these dualities can be exploited in a given problem class. The resulting algorithm is likely to contain elements from both optimization and constraint satisfaction, and perhaps new methods that belong to neither.
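To ground the two dualities in a concrete and deliberately tiny example of our own choosing, not one from the paper: in the branch-and-bound sketch below for 0/1 knapsack, search branches on items, relaxation (the fractional-knapsack bound) strengthens the incumbent and prunes the tree, and a small inference step fixes items that cannot fit in the remaining capacity.

```python
# Toy branch-and-bound combining relaxation-based pruning with inference.
def knapsack(values, weights, capacity):
    # Consider items in decreasing value-per-weight order.
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = [0]

    def bound(k, cap, val):
        # LP relaxation: fill greedily, taking a fraction of the last item.
        for i in items[k:]:
            if weights[i] <= cap:
                cap, val = cap - weights[i], val + values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    def search(k, cap, val):
        best[0] = max(best[0], val)              # val is a feasible solution
        if k == len(items) or bound(k, cap, val) <= best[0]:
            return                               # relaxation-based pruning
        i = items[k]
        if weights[i] > cap:                     # inference: i is forced out
            search(k + 1, cap, val)
            return
        search(k + 1, cap - weights[i], val + values[i])   # branch: take i
        search(k + 1, cap, val)                            # branch: skip i

    search(0, capacity, 0)
    return best[0]

print(knapsack([60, 100, 120], [10, 20, 30], 50))   # expected 220
```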
Both the Artificial Intelligence (AI) and the Operations Research (OR) communities are interested in developing techniques for solving hard combinatorial problems, in particular in the domain of planning and scheduling. AI approaches encompass a rich collection of knowledge representation formalisms for dealing with a wide variety of real-world problems. Some examples are constraint programming representations, logical formalisms, declarative and functional programming languages such as Prolog and Lisp, Bayesian models, rule-based formalisms, etc. The downside of such rich representations is that in general they lead to intractable problems, and we therefore often cannot use such formalisms for handling realistic-size problems. OR, on the other hand, has focused on more tractable representations, such as linear programming formulations. OR-based techniques have demonstrated the ability to identify optimal and locally optimal solutions for well-defined problem spaces. In general, however, OR solutions are restricted to rigid models with limited expressive power. AI techniques, by contrast, provide richer and more flexible representations of real-world problems, supporting efficient constraint-based reasoning mechanisms as well as mixed-initiative frameworks that allow human expertise to be in the loop. The challenge lies in providing representations that are expressive enough to describe real-world problems while at the same time guaranteeing good and fast solutions.
There are two radically different approaches to robot navigation: the first uses a map of the robot's environment; the second uses a set of action reflexes to enable a robot to react rapidly to local sensory information. Hybrid approaches combining features of both also exist. This book is the first to propose a method for evaluating the different approaches that shows how to decide which is the most appropriate for a given situation. It begins by describing a complete implementation of a mobile robot, including sensor modelling, map-building (a feature-based map and a grid-based free-space map), localisation, and path-planning. Exploration strategies are then tested experimentally in a range of environments and starting positions. The author shows that the most promising results come from hybrid exploration strategies, which combine the robustness of reactive navigation with the directive power of map-based strategies.
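As a minimal illustration of one ingredient named above, the grid-based free-space map, here is a textbook log-odds occupancy-grid update with made-up sensor-model numbers; this is a sketch, not the book's actual implementation.

```python
# Log-odds occupancy-grid update (illustrative inverse sensor model).
import math

L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)
grid = [[0.0] * 10 for _ in range(10)]   # log-odds; 0.0 == probability 0.5

def update(cells_hit, cells_passed):
    """Fold one range reading into the map."""
    for (r, c) in cells_hit:
        grid[r][c] += L_OCC              # beam endpoint: more likely occupied
    for (r, c) in cells_passed:
        grid[r][c] += L_FREE             # beam traversed: more likely free

def probability(r, c):
    # Convert log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(grid[r][c]))

update(cells_hit=[(5, 5)], cells_passed=[(5, k) for k in range(5)])
print(round(probability(5, 5), 2), round(probability(5, 0), 2))  # 0.7 0.3
```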
Machine learning is an interdisciplinary field of science and engineering that studies mathematical theories and practical applications of systems that learn. This book introduces theories, methods and applications of density ratio estimation, a newly emerging paradigm in the machine learning community. Various machine learning problems such as non-stationarity adaptation, outlier detection, dimensionality reduction, independent component analysis, clustering, classification and conditional density estimation can be systematically solved via the estimation of probability density ratios. The authors offer a comprehensive introduction to various density ratio estimators, including methods via density estimation, moment matching, probabilistic classification, density fitting and density ratio fitting, and describe how these can be applied to machine learning. The book provides mathematical theories for density ratio estimation, including parametric and non-parametric convergence analysis and numerical stability analysis, completing the first definitive treatment of the entire framework of density ratio estimation in machine learning.
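One of the estimator families listed, density ratio estimation via probabilistic classification, is easy to sketch: train a classifier to separate samples of the numerator density p from samples of the denominator density q; Bayes' rule then turns the class-probability odds into an estimate of p(x)/q(x). The snippet below uses scikit-learn's logistic regression as an illustrative choice of classifier, an assumption of this sketch rather than a prescription of the book.

```python
# Density-ratio estimation via probabilistic classification (a sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_num = rng.normal(0.0, 1.0, size=(1000, 1))   # samples from numerator p
x_den = rng.normal(0.5, 1.0, size=(1000, 1))   # samples from denominator q

X = np.vstack([x_num, x_den])
y = np.concatenate([np.ones(len(x_num)), np.zeros(len(x_den))])
clf = LogisticRegression().fit(X, y)

def ratio(x):
    # r(x) = p(x)/q(x) = (n_q/n_p) * P(y=1|x) / P(y=0|x), by Bayes' rule.
    p1 = clf.predict_proba(x)[:, 1]
    return (len(x_den) / len(x_num)) * p1 / (1.0 - p1)

print(ratio(np.array([[0.0], [1.0]])))   # >1 where p dominates, <1 where q does
```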