This paper provides an overview of recent work by the authors and others on two topics in the model theory of finite structures. The point of view here differs from that usually associated with the term ‘finite model theory’, as presented for example in [21] or [46], in which the emphasis and motivation come primarily from computer science. Instead, the inspiration for this work has its origins in contemporary (infinite) model theoretic themes such as dimension, independence, and various measures of the complexity of definable sets. Each of the topics deals with classes of finite structures for first-order logic that are isolated by conditions drawn from these model-theoretic considerations. Moreover, in both cases, connections exist to areas in infinite model theory such as stability and simplicity theory, and o-minimality. This survey is intended for both mathematical logicians and computer scientists whose work focuses on logical aspects of the subject.
The first theme concerns asymptotic classes of finite structures. This subject has its origins in the model theory of finite fields, via the work of Chatzidakis, van den Dries and Macintyre [13] (see Theorem 4.2.1) and the earlier model theory of finite fields developed by Ax [4], and ultimately rests on the Lang-Weil bounds for the number of points, in a finite field, of an irreducible variety defined over that field.
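To indicate the shape of the estimate that underlies this notion (stated here only roughly, as a paraphrase rather than the precise formulation of [13]): for every formula φ(x, ȳ) in the language of rings there are a constant C > 0 and a finite set D of pairs (d, μ), with d a non-negative integer and μ a positive rational, such that for every finite field F_q and every parameter tuple ā the definable set φ(F_q, ā) is either empty or satisfies

    \bigl|\, |\varphi(\mathbb{F}_q, \bar a)| - \mu q^{d} \,\bigr| \;\le\; C\, q^{\,d - 1/2}

for some (d, μ) in D. It is this uniform asymptotic behaviour of definable sets that the notion of an asymptotic class abstracts.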
In this chapter, we consider spatial databases that are modeled as semi-algebraic sets and we present some logic-based languages to query them. We discuss various properties of these query languages, mainly concerning their expressive power.
The basic query language in this context is first-order logic over the real numbers extended with predicates to address the spatial database relations (Section 2.2). We discuss geometric properties that are expressible in this logic (Section 2.3) and then focus on first-order expressible topological properties of 2-dimensional spatial datasets. A property is called topological if it is invariant under homeomorphisms of the ambient space. We give a characterization of topological elementary equivalence and present a point-based language, called cone logic, which captures exactly the topological queries expressible in first-order logic over the reals (Sections 2.4 and 2.7). Next, we present another point-based language that captures the first-order queries that are invariant under affinities (Section 2.6).
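To give one standard example of a geometric property of this kind (an illustration chosen here for concreteness, not necessarily one treated in the chapter): convexity of a planar spatial relation S is expressible in first-order logic over the reals extended with the predicate S by the sentence

    \forall p_1 \forall p_2 \forall q_1 \forall q_2 \forall \lambda \; \bigl( S(p_1,p_2) \wedge S(q_1,q_2) \wedge 0 \le \lambda \le 1 \;\rightarrow\; S(\lambda p_1 + (1-\lambda) q_1,\; \lambda p_2 + (1-\lambda) q_2) \bigr).

Convexity is preserved by affinities but not by arbitrary homeomorphisms of the plane, so it is an example of an affine-invariant property that is not topological.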
The second half of this chapter is devoted to extensions of first-order logic over the reals with some form of recursion. We briefly discuss two such extensions: spatial Datalog and first-order logic extended with a while-loop (Section 2.8). We discuss in more detail extensions of first-order logic with different types of transitive-closure operators, with or without stop conditions (Section 2.9), and investigate their expressive power (Section 2.10). The evaluation of queries expressed in transitive-closure logic, with or without stop conditions, may be non-terminating.
This volume is based on the satellite workshop on Finite and Algorithmic Model Theory that took place at the University of Durham, January 9–13, 2006, to inaugurate the scientific program Logic and Algorithms held at the Isaac Newton Institute for Mathematical Sciences during the first six months of 2006. The goal of the workshop was to explore the emerging and potential connections between finite and infinite model theory, and their applications to theoretical computer science. The primarily tutorial format introduced researchers and graduate students to a number of fundamental topics. The excellent quality of the tutorials suggested to the program organizers, Anuj Dawar and Moshe Vardi, that a volume based on the workshop presentations could serve as a valuable and lasting reference. They proposed this to the workshop scientific committee; this volume is the outcome.
The Logic and Algorithms program focused on the connection between two chief concerns of theoretical computer science: (i) how to ensure and verify the correctness of computing systems; and (ii) how to measure the resources required for computations and ensure their efficiency. Historically, the two areas have interacted little with each other, partly because of the divergent mathematical techniques they have employed. More recently, areas of research in which model-theoretic methods play a central role have reached across both sides of this divide. The results and techniques that have been developed have found applications in fields such as database theory, complexity theory, and verification.
Algorithmic meta-theorems are general algorithmic results applying to a whole range of problems, rather than to a single problem alone. They often have a logical and a structural component; that is, they are results of the form: every computational problem that can be formalised in a given logic ℒ can be solved efficiently on every class C of structures satisfying certain conditions.
This paper gives a survey of algorithmic meta-theorems obtained in recent years and the methods used to prove them. As many meta-theorems use results from graph minor theory, we give a brief introduction to the theory developed by Robertson and Seymour for their proof of the graph minor theorem and state the main algorithmic consequences of this theory as far as they are needed in the theory of algorithmic meta-theorems.
Introduction
Algorithmic meta-theorems are general algorithmic results applying to a whole range of problems, rather than to a single problem alone. In this paper we will concentrate on meta-theorems that have a logical and a structural component; that is, on results of the form: every computational problem that can be formalised in a given logic ℒ can be solved efficiently on every class C of structures satisfying certain conditions.
The first such theorem is Courcelle's well-known result [13] stating that every problem definable in monadic second-order logic can be solved efficiently on any class of graphs of bounded tree-width.
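For instance, 3-colourability is definable in monadic second-order logic by the sentence

    \exists X_1 \exists X_2 \exists X_3 \Bigl( \forall v \, (X_1 v \vee X_2 v \vee X_3 v) \;\wedge\; \bigwedge_{i=1}^{3} \forall u \forall v \, \bigl( E(u,v) \rightarrow \neg (X_i u \wedge X_i v) \bigr) \Bigr),

so Courcelle's theorem immediately yields an efficient algorithm for 3-colourability on every class of graphs of bounded tree-width, although the problem is NP-complete on arbitrary graphs.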
The model theory of finite structures is intimately connected to various fields in computer science, including complexity theory, databases, and verification. In particular, there is a close relationship between complexity classes and the expressive power of logical languages, as witnessed by the fundamental theorems of descriptive complexity theory, such as Fagin's Theorem and the Immerman-Vardi Theorem (see [78, Chapter 3] for a survey).
However, for many applications, the strict limitation to finite structures has turned out to be too restrictive, and there have been considerable efforts to extend the relevant logical and algorithmic methodologies from finite structures to suitable classes of infinite ones. In particular this is the case for databases and verification where infinite structures are of crucial importance [130]. Algorithmic model theory aims to extend in a systematic fashion the approach and methods of finite model theory, and its interactions with computer science, from finite structures to finitely-presentable infinite ones.
There are many possibilities to present infinite structures in a finite manner. A classical approach in model theory concerns the class of computable structures; these are countable structures, on the domain of natural numbers, say, with a finite collection of computable functions and relations. Such structures can be finitely presented by a collection of algorithms, and they have been intensively studied in model theory since the 1960s. However, from the point of view of algorithmic model theory the class of computable structures is problematic.
Most of the work in model theory has, so far, considered infinite structures, and the methods and results that have been worked out in this context cannot usually be transferred to the study of finite structures in an obvious way. In addition, some basic results from infinite model theory fail within the context of finite models. The theory of finite structures has largely been developed in connection with theoretical computer science, in particular complexity theory [12]. The question arises whether these two “worlds”, the study of infinite structures and the study of finite structures, can be woven together in some way and enrich each other. In particular, one may ask whether it is possible to adapt notions and methods which have played an important role in infinite model theory to the context of finite structures, and in this way get a better understanding of fairly large and sufficiently well-behaved classes of finite structures.
If we are to study structures in relation to some formal language, the question arises of which one to choose. Most of infinite model theory considers first-order logic. Within finite model theory, various restrictions and extensions of first-order logic have been considered, since first-order logic may be considered both too strong and too weak (in different senses) for the study of finite structures.
Some prominent fragments of first-order logic are discussed from a game-oriented and modal point of view, with an emphasis on model theoretic techniques for the non-classical context. This includes the context of finite model theory as well as the model theory of other natural non-elementary classes of structures. We stress the modularity and compositionality of the games as a key ingredient in the exploration of the expressive power of logics over specific classes of structures. The leading model theoretic theme is expressive completeness, that is, the characterisation of fragments of first-order logic as expressively complete over some class of (finite) structures for first-order properties with some prescribed semantic preservation behaviour. In contrast with classical expressive completeness arguments, the emphasis here is on explicit model constructions and transformations, which are guided by the game analysis of both first-order logic and the imposed semantic constraints.
keywords: finite model theory, model theoretic games, bisimulation, modal and guarded logic, expressive completeness, preservation and characterisation theorems
Introduction
Expressiveness over restricted classes of structures
The purpose of this survey is to highlight game-oriented methods and explicit model constructions for the analysis of fragments of first-order logic, in particular in restriction to non-elementary classes of structures. The following is meant to highlight and preview some key points in terms of both the material to be covered and the perspective that we want to adopt in its presentation.
The Annual European Meeting of the Association for Symbolic Logic, also known as the Logic Colloquium, is among the most prestigious annual meetings in the field. The current volume, Logic Colloquium 2007, with contributions from plenary speakers and selected special session speakers, contains both expository and research papers by some of the best logicians in the world. This volume covers many areas of contemporary logic: model theory, proof theory, set theory, and computer science, as well as philosophical logic, including tutorials on cardinal arithmetic, on Pillay's conjecture, and on automatic structures. This volume will be invaluable for experts as well as those interested in an overview of central contemporary themes in mathematical logic.
Nash equilibrium is the most commonly-used notion of equilibrium in game theory. However, it suffers from numerous problems. Some are well known in the game theory community; for example, the Nash equilibrium of the repeated prisoner's dilemma is neither normatively nor descriptively reasonable. However, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate ‘faulty’ or ‘unexpected’ behaviour), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game. Solution concepts that try to address these shortcomings of Nash equilibrium are discussed.
Introduction
Nash equilibrium is the most commonly-used notion of equilibrium in game theory. Intuitively, a Nash equilibrium is a strategy profile (a collection of strategies, one for each player in the game) such that no player can do better by deviating. The intuition behind Nash equilibrium is that it represents a possible steady state of play. It is a fixed point where each player holds correct beliefs about what the other players are doing, and plays a best response to those beliefs. Part of what makes Nash equilibrium so attractive is that in games where each player has only finitely many possible deterministic strategies, and we allow mixed (i.e., randomised) strategies, there is guaranteed to be a Nash equilibrium [Nash, 1950a] (this was, in fact, the key result of Nash's thesis).
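As an elementary illustration of these definitions (a minimal sketch for orientation, not taken from the chapter), the following Python function enumerates the pure-strategy Nash equilibria of a finite two-player game given by payoff matrices; matching pennies has none, which is why mixed strategies are needed for Nash's existence theorem, while the prisoner's dilemma has exactly one.

    from itertools import product

    def pure_nash_equilibria(payoffs_1, payoffs_2):
        """Return all pure-strategy Nash equilibria of a two-player game.

        payoffs_1[i][j] and payoffs_2[i][j] are the payoffs of players 1 and 2
        when player 1 plays row i and player 2 plays column j.
        """
        rows, cols = len(payoffs_1), len(payoffs_1[0])
        equilibria = []
        for i, j in product(range(rows), range(cols)):
            best_for_1 = all(payoffs_1[i][j] >= payoffs_1[k][j] for k in range(rows))
            best_for_2 = all(payoffs_2[i][j] >= payoffs_2[i][l] for l in range(cols))
            if best_for_1 and best_for_2:
                equilibria.append((i, j))
        return equilibria

    # Matching pennies: no pure equilibrium; the mixed profile (1/2, 1/2) is one.
    print(pure_nash_equilibria([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))   # []

    # Prisoner's dilemma (0 = cooperate, 1 = defect): mutual defection is the
    # unique pure equilibrium.
    print(pure_nash_equilibria([[3, 0], [5, 1]], [[3, 5], [0, 1]]))       # [(1, 1)]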
This chapter gives an introduction to the connection between automata theory and the theory of two-player games of infinite duration. We illustrate how the theory of automata on infinite words can be used to solve games with complex winning conditions, for example those specified by logical formulae. Conversely, infinite games are a useful tool to solve problems for automata on infinite trees such as complementation and the emptiness test.
Introduction
The aim of this chapter is to explain some interesting connections between automata theory and games of infinite duration. The context in which these connections have been established is the problem of automatic circuit synthesis from specifications, as posed by Church [1962]. A circuit can be viewed as a device that transforms input sequences of bit vectors into output sequences of bit vectors. If the circuit acts as a kind of control device, then these sequences are assumed to be infinite because the computation should never halt.
The task in synthesis is to construct such a circuit based on a formal specification describing the desired input/output behaviour. This problem setting can be viewed as a game of infinite duration between two players: The first player provides the bit vectors for the input, and the second player produces the output bit vectors. The winning condition of the game is given by the specification. The goal is to find a strategy for the second player such that all pairs of input/output sequences that can be produced according to the strategy satisfy the specification.
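To give a toy illustration of what such a strategy can look like (a hypothetical example chosen for illustration, not one from the chapter), consider the specification ‘the output bit at every step equals the input bit of the previous step’. A winning strategy for the second player is a finite-state transducer with a single bit of memory, sketched here in Python:

    class DelayedCopyStrategy:
        """A one-bit Mealy machine realising the toy specification
        'output at step n equals input at step n-1' (output 0 at step 0)."""

        def __init__(self):
            self.prev = 0  # memory: the last input bit seen

        def step(self, input_bit):
            output_bit = self.prev
            self.prev = input_bit
            return output_bit

    # A finite prefix of one play of the infinite game: the environment
    # supplies input bits, the strategy answers with output bits.
    strategy = DelayedCopyStrategy()
    inputs = [1, 0, 1, 1, 0]
    outputs = [strategy.step(b) for b in inputs]
    print(outputs)  # [0, 1, 0, 1, 1]

Finite-state strategies of this kind are the typical outcome of the classical solutions to Church's problem, although for realistic specifications the construction of the machine is far less immediate.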
We study observation-based strategies for two-player turn-based games played on graphs with parity objectives. An observation-based strategy relies on imperfect information about the history of a play, namely, on the past sequence of observations. Such games occur in the synthesis of a controller that does not see the private state of the plant. Our main results are twofold. First, we give a fixed-point algorithm for computing the set of states from which a player can win with a deterministic observation-based strategy for a parity objective. Second, we give an algorithm for computing the set of states from which a player can win with probability 1 with a randomised observation-based strategy for a reachability objective. This set is of interest because in the absence of perfect information, randomised strategies are more powerful than deterministic ones.
Introduction
Games are natural models for reactive systems. We consider zero-sum, two-player, turn-based games of infinite duration played on finite graphs. One player represents a control program, and the second player represents its environment. The graph describes the possible interactions of the system, and the game is of infinite duration because reactive systems are usually not expected to terminate. In the simplest setting, the game is turn-based and played with perfect information, meaning that the players have full knowledge of both the game structure and the sequence of moves played by the adversary. The winning condition in a zero-sum graph game is defined by a set of plays that the first player aims to enforce and that the second player aims to avoid.
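For the special case of perfect information and a reachability objective, the winning region of the first player can be computed by the classical backward fixed-point (attractor) construction, sketched below. This generic illustration assumes a finite game graph in which every vertex has at least one successor; it is not the algorithm of this chapter, which deals with imperfect information.

    def attractor(vertices, edges, owner, target):
        """Winning region of player 0 for the objective 'reach the target set'.

        edges maps each vertex to its (non-empty) list of successors and
        owner[v] in {0, 1} says which player moves at v.
        """
        win = set(target)
        changed = True
        while changed:
            changed = False
            for v in vertices:
                if v in win:
                    continue
                successors = edges[v]
                can_force = (any(s in win for s in successors) if owner[v] == 0
                             else all(s in win for s in successors))
                if can_force:
                    win.add(v)
                    changed = True
        return win

    # Tiny example: player 0 owns a and c, player 1 owns b; the goal is d.
    V = ['a', 'b', 'c', 'd']
    E = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['d'], 'd': ['d']}
    owner = {'a': 0, 'b': 1, 'c': 0, 'd': 0}
    print(attractor(V, E, owner, {'d'}))  # all four vertices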
In this chapter we discuss relationships between logic and games, focusing on first-order logic and fixed-point logics, and on reachability and parity games. We discuss the general notion of model-checking games. While it is easily seen that the semantics of first-order logic can be captured by reachability games, more effort is required to see that parity games are the appropriate games for evaluating formulae from least fixed-point logic and the modal µ-calculus. The algorithmic consequences of this result are discussed. We also explore the reverse relationship between games and logic, namely the question of how winning regions in games are definable in logic. Finally the connections between logic and games are discussed for more complicated scenarios provided by inflationary fixed-point logic and the quantitative µ-calculus.
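A standard example of this connection, recalled here only for orientation, is the modal µ-calculus formula

    \mu X.\,(p \vee \Diamond X),

which holds exactly at the states from which a state satisfying p is reachable; its evaluation as a least fixed point, starting from X_0 = ∅ and setting X_{i+1} = [[p]] ∪ { s : s has a successor in X_i }, mirrors the attractor computation used to solve reachability games.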
Introduction
The idea that logical reasoning can be seen as a dialectic game, where a proponent attempts to convince an opponent of the truth of a proposition, is very old. Indeed, it can be traced back to the studies of Zeno, Socrates, and Aristotle on logic and rhetoric. Modern manifestations of this idea are the presentation of the semantics of logical formulae by means of model-checking games and the algorithmic evaluation of logical statements via the synthesis of winning strategies in such games.
Model-checking games are two-player games played on an arena formed as the product of a structure and a formula ψ, where one player, called the Verifier, attempts to prove that ψ is true in the structure while the other player, the Falsifier, attempts to refute this.
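The division of moves can be made explicit in a small sketch of this game semantics for first-order logic; the encoding of structures and formulae below is a toy one chosen for illustration, not the chapter's. The Verifier moves at disjunctions and existential quantifiers, the Falsifier at conjunctions and universal quantifiers, and the Verifier has a winning strategy exactly when the formula is true.

    def verifier_wins(structure, formula, assignment):
        """Does the Verifier win the model-checking game for `formula`?

        structure = (universe, relations); formulae are nested tuples built
        from ('rel', name, vars...), 'or', 'and', ('exists', var, body) and
        ('forall', var, body); negation is omitted to keep the sketch short.
        """
        universe, relations = structure
        kind = formula[0]
        if kind == 'rel':                    # atomic: no moves left, just check
            _, name, *variables = formula
            return tuple(assignment[v] for v in variables) in relations[name]
        if kind == 'or':                     # Verifier picks a disjunct
            return any(verifier_wins(structure, f, assignment) for f in formula[1:])
        if kind == 'and':                    # Falsifier picks a conjunct
            return all(verifier_wins(structure, f, assignment) for f in formula[1:])
        if kind == 'exists':                 # Verifier picks a witness
            _, var, body = formula
            return any(verifier_wins(structure, body, {**assignment, var: a})
                       for a in universe)
        if kind == 'forall':                 # Falsifier picks a challenge
            _, var, body = formula
            return all(verifier_wins(structure, body, {**assignment, var: a})
                       for a in universe)
        raise ValueError(kind)

    # A directed 3-cycle satisfies 'every vertex has an outgoing edge'.
    G = ({0, 1, 2}, {'E': {(0, 1), (1, 2), (2, 0)}})
    phi = ('forall', 'x', ('exists', 'y', ('rel', 'E', 'x', 'y')))
    print(verifier_wins(G, phi, {}))  # True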
This is a short introduction to the subject of strategic games. We focus on the concepts of best response, Nash equilibrium, strict and weak dominance, and mixed strategies, and study the relation between these concepts in the context of the iterated elimination of strategies. Also, we discuss some variants of the original definition of a strategic game. Finally, we introduce the basics of mechanism design and use pre-Bayesian games to explain it.
Introduction
Mathematical game theory, as launched by Von Neumann and Morgenstern in their seminal book, von Neumann and Morgenstern [1944], followed by Nash's contributions Nash [1950, 1951], has become a standard tool in economics for the study and description of various economic processes, including competition, cooperation, collusion, strategic behaviour and bargaining. Since then it has also been successfully used in biology, political sciences, psychology and sociology. With the advent of the Internet, game theory became increasingly relevant in computer science.
One of the main areas in game theory is that of strategic games (sometimes also called non-cooperative games), which form a simple model of interaction between profit-maximising players. In strategic games each player has a payoff function that he aims to maximise, and the value of this function depends on the decisions taken simultaneously by all players. Such a simple description is still amenable to various interpretations, depending on the assumptions about the existence of private information.
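To make the iterated elimination of strategies concrete, the following sketch (an illustration for orientation, not taken from the chapter; it considers dominance by pure strategies only, not by mixed ones) repeatedly removes strictly dominated rows and columns from a two-player game given by payoff matrices. On the prisoner's dilemma only the profile (defect, defect) survives, which is also its unique Nash equilibrium.

    def iterated_elimination(payoffs_1, payoffs_2):
        """Iteratively remove strictly dominated pure strategies.

        Returns the indices of the surviving rows (player 1's strategies)
        and columns (player 2's strategies).
        """
        rows = list(range(len(payoffs_1)))
        cols = list(range(len(payoffs_1[0])))

        def row_dominated(i):
            return any(all(payoffs_1[k][j] > payoffs_1[i][j] for j in cols)
                       for k in rows if k != i)

        def col_dominated(j):
            return any(all(payoffs_2[i][l] > payoffs_2[i][j] for i in rows)
                       for l in cols if l != j)

        changed = True
        while changed:
            changed = False
            for i in list(rows):
                if len(rows) > 1 and row_dominated(i):
                    rows.remove(i)
                    changed = True
            for j in list(cols):
                if len(cols) > 1 and col_dominated(j):
                    cols.remove(j)
                    changed = True
        return rows, cols

    # Prisoner's dilemma (0 = cooperate, 1 = defect): cooperation is strictly
    # dominated for both players.
    print(iterated_elimination([[3, 0], [5, 1]], [[3, 5], [0, 1]]))  # ([1], [1])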