In this chapter, we consider spatial databases that are modeled as semi-algebraic sets and we present some logic-based languages to query them. We discuss various properties of these query languages, mainly concerning their expressive power.
The basic query language in this context is first-order logic over the real numbers extended with predicates to address the spatial database relations (Section 2.2). We discuss geometric properties that are expressible in this logic (Section 2.3) and then focus on first-order expressible topological properties of 2-dimensional spatial datasets. A property is called topological if it is invariant under homeomorphisms of the ambient space. We give a characterization of topological elementary equivalence and present a point-based language, called cone logic, that captures exactly the topological queries expressible in first-order logic over the reals (Sections 2.4 and 2.7). Next, we present another point-based language that captures the first-order queries that are invariant under affinities (Section 2.6).
The second half of this chapter is devoted to extensions of first-order logic over the reals with some form of recursion. We briefly discuss two such extensions: spatial Datalog and first-order logic extended with a while-loop (Section 2.8). We discuss in more detail extensions of first-order logic with different types of transitive-closure operators, with or without stop conditions (Section 2.9), and investigate their expressive power (Section 2.10). The evaluation of queries expressed in transitive-closure logic, with or without stop conditions, may be non-terminating.
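As an illustration (ours, not an example from the chapter): with a binary relation symbol S denoting a spatial relation in the plane, the topological interior of S is definable in first-order logic over the reals by

```latex
\varphi_{\mathrm{int}}(x,y) \;\equiv\;
\exists \varepsilon \,\Bigl( \varepsilon > 0 \;\wedge\;
\forall x'\,\forall y'\,\bigl( (x-x')^{2} + (y-y')^{2} < \varepsilon^{2}
\;\rightarrow\; S(x',y') \bigr) \Bigr),
```

which asks for an open disk around (x, y) that lies entirely inside S; the boundary and closure of S are definable in the same style.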
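A standard example of the kind of query that motivates such extensions (our sketch, in notation that need not match the chapter's): topological connectivity of a planar set S is not first-order expressible over the reals, but it becomes expressible with a transitive-closure operator. Let

```latex
L(\mathbf{p},\mathbf{q}) \;\equiv\;
\forall \lambda \,\bigl( 0 \le \lambda \le 1 \;\rightarrow\;
S(\lambda\,\mathbf{p} + (1-\lambda)\,\mathbf{q}) \bigr)
```

state that the straight segment from p to q lies inside S; then the sentence ∀p ∀q (S(p) ∧ S(q) → [TC L](p, q)) expresses that any two points of S are joined by a polygonal path within S, i.e., that S is (path-)connected.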
This volume is based on the satellite workshop on Finite and Algorithmic Model Theory that took place at the University of Durham, January 9–13, 2006, to inaugurate the scientific program Logic and Algorithms held at the Isaac Newton Institute for Mathematical Sciences during the first six months of 2006. The goal of the workshop was to explore the emerging and potential connections between finite and infinite model theory, and their applications to theoretical computer science. The primarily tutorial format introduced researchers and graduate students to a number of fundamental topics. The excellent quality of the tutorials suggested to the program organizers, Anuj Dawar and Moshe Vardi, that a volume based on the workshop presentations could serve as a valuable and lasting reference. They proposed this to the workshop scientific committee; this volume is the outcome.
The Logic and Algorithms program focused on the connection between two chief concerns of theoretical computer science: (i) how to ensure and verify the correctness of computing systems; and (ii) how to measure the resources required for computations and ensure their efficiency. The two areas historically have interacted little with each other, partly because of the divergent mathematical techniques they have employed. More recently, areas of research in which model-theoretic methods play a central role have reached across both sides of this divide. Results and techniques that have been developed have found applications to fields such as database theory, complexity theory, and verification.
Algorithmic meta-theorems are general algorithmic results applying to a whole range of problems, rather than just to a single problem alone. They often have a logical and a structural component, that is, they are results of the form: every computational problem that can be formalised in a given logic ℒ can be solved efficiently on every class C of structures satisfying certain conditions.
This paper gives a survey of algorithmic meta-theorems obtained in recent years and the methods used to prove them. As many meta-theorems use results from graph minor theory, we give a brief introduction to the theory developed by Robertson and Seymour for their proof of the graph minor theorem and state the main algorithmic consequences of this theory as far as they are needed in the theory of algorithmic meta-theorems.
Introduction
Algorithmic meta-theorems are general algorithmic results applying to a whole range of problems, rather than just to a single problem alone. In this paper we will concentrate on meta-theorems that have a logical and a structural component, that is, on results of the form: every computational problem that can be formalised in a given logic ℒ can be solved efficiently on every class C of structures satisfying certain conditions.
The first such theorem is Courcelle's well-known result [13] stating that every problem definable in monadic second-order logic can be solved efficiently on any class of graphs of bounded tree-width.
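To make the shape of such a result concrete (our illustration, not an example from the paper): 3-colourability, which is NP-complete on arbitrary graphs, is definable in monadic second-order logic by

```latex
\exists R\,\exists G\,\exists B\;\Bigl(
\forall x\,\bigl(R(x) \vee G(x) \vee B(x)\bigr)
\;\wedge\;
\forall x\,\forall y\,\Bigl(E(x,y) \rightarrow
\neg\bigl(R(x)\wedge R(y)\bigr) \wedge
\neg\bigl(G(x)\wedge G(y)\bigr) \wedge
\neg\bigl(B(x)\wedge B(y)\bigr)\Bigr)\Bigr),
```

where R, G, B are quantified sets of vertices; by Courcelle's theorem this sentence, like any MSO sentence, can be evaluated in linear time on every class of graphs of bounded tree-width.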
The model theory of finite structures is intimately connected to various fields in computer science, including complexity theory, databases, and verification. In particular, there is a close relationship between complexity classes and the expressive power of logical languages, as witnessed by the fundamental theorems of descriptive complexity theory, such as Fagin's Theorem and the Immerman-Vardi Theorem (see [78, Chapter 3] for a survey).
However, for many applications, the strict limitation to finite structures has turned out to be too restrictive, and there have been considerable efforts to extend the relevant logical and algorithmic methodologies from finite structures to suitable classes of infinite ones. In particular this is the case for databases and verification where infinite structures are of crucial importance [130]. Algorithmic model theory aims to extend in a systematic fashion the approach and methods of finite model theory, and its interactions with computer science, from finite structures to finitely-presentable infinite ones.
There are many possibilities to present infinite structures in a finite manner. A classical approach in model theory concerns the class of computable structures; these are countable structures, on the domain of natural numbers, say, with a finite collection of computable functions and relations. Such structures can be finitely presented by a collection of algorithms, and they have been intensively studied in model theory since the 1960s. However, from the point of view of algorithmic model theory the class of computable structures is problematic.
Most of the work in model theory has, so far, considered infinite structures and the methods and results that have been worked out in this context cannot usually be transferred to the study of finite structures in an obvious way. In addition, some basic results from infinite model theory fail within the context of finite models. The theory about finite structures has largely developed in connection with theoretical computer science, in particular complexity theory [12]. The question arises whether these two “worlds”, the study of infinite structures and the study of finite structures, can be woven together in some way and enrich each other. In particular, one may ask if it is possible to adapt notions and methods which have played an important role in infinite model theory to the context of finite structures, and in this way get a better understanding of fairly large and sufficiently well-behaved classes of finite structures.
If we are to study structures in relation to some formal language, then the question arises which one to choose. Most of infinite model theory considers first-order logic. Within finite model theory various restrictions and extensions of first-order logic have been considered, since first-order logic may be considered as being both too strong and too weak (in different senses) for the study of finite structures.
Some prominent fragments of first-order logic are discussed from a game-oriented and modal point of view, with an emphasis on model theoretic techniques for the non-classical context. This includes the context of finite model theory as well as the model theory of other natural non-elementary classes of structures. We stress the modularity and compositionality of the games as a key ingredient in the exploration of the expressive power of logics over specific classes of structures. The leading model theoretic theme is expressive completeness – or the characterisation of fragments of first-order logic as expressively complete over some class of (finite) structures for first-order properties with some prescribed semantic preservation behaviour. In contrast with classical expressive completeness arguments, the emphasis here is on explicit model constructions and transformations, which are guided by the game analysis of both first-order logic and of the imposed semantic constraints.
keywords: finite model theory, model theoretic games, bisimulation, modal and guarded logic, expressive completeness, preservation and characterisation theorems
Introduction
Expressiveness over restricted classes of structures
The purpose of this survey is to highlight game-oriented methods and explicit model constructions for the analysis of fragments of first-order logic, in particular in restriction to non-elementary classes of structures. The following is meant to highlight and preview some key points in terms of both the material to be covered and the perspective that we want to adopt in its presentation.
We show that if G is a simple outerplanar graph and H is a graph with the same Tutte polynomial as G, then H is also outerplanar. Examples show that the condition of G being simple cannot be omitted.
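The abstract does not define the Tutte polynomial; as background, here is a minimal evaluator based on the standard deletion-contraction recurrence (our sketch, not code from the paper — exponential time, so only for small multigraphs):

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial T(G; x, y) of a multigraph given as a
    tuple of edges (u, v), via the deletion-contraction recurrence:
      T = 1                   if G has no edges
      T = y * T(G - e)        if e is a loop
      T = x * T(G / e)        if e is a bridge
      T = T(G - e) + T(G / e) otherwise
    """
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                       # loop: delete it, multiply by y
        return y * tutte(rest, x, y)
    # e = (u, v) is a bridge iff u and v are disconnected in G - e;
    # check by searching for v from u using only the remaining edges.
    seen, frontier = {u}, [u]
    while frontier:
        a = frontier.pop()
        for p, q in rest:
            for b in ((q,) if p == a else (p,) if q == a else ()):
                if b not in seen:
                    seen.add(b)
                    frontier.append(b)
    # Contract e: merge endpoint v into u in the remaining edges.
    contracted = tuple((u if a == v else a, u if b == v else b)
                       for a, b in rest)
    if v not in seen:                # bridge: contract it, multiply by x
        return x * tutte(contracted, x, y)
    return tutte(rest, x, y) + tutte(contracted, x, y)
```

For the triangle K3, T(K3; x, y) = x² + x + y, so `tutte(((0,1),(1,2),(0,2)), 1, 1)` returns 3, the number of spanning trees, one of the many graph invariants the Tutte polynomial specialises to.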
A vast amount of usable electronic data is in the form of unstructured text. The relation extraction task aims to identify useful information in text (e.g. PersonW works for OrganisationX, GeneY encodes ProteinZ) and recode it in a format such as a relational database or RDF triplestore that can be more effectively used for querying and automated reasoning. A number of resources have been developed for training and evaluating automatic systems for relation extraction in different domains. However, comparative evaluation is impeded by the fact that these corpora use different markup formats and notions of what constitutes a relation. We describe the preparation of corpora for comparative evaluation of relation extraction across domains based on the publicly available ACE 2004, ACE 2005 and BioInfer data sets. We present a common document type using token standoff and including detailed linguistic markup, while maintaining all information in the original annotation. The subsequent reannotation process normalises the two data sets so that they comply with a notion of relation that is intuitive, simple and informed by the semantic web. For the ACE data, we describe an automatic process that converts many relations involving nested, nominal entity mentions to relations involving non-nested, named or pronominal entity mentions. For example, the first entity is mapped from ‘one’ to ‘Amidu Berry’ in the membership relation described in ‘Amidu Berry, one half of PBS’. Moreover, we describe a comparably reannotated version of the BioInfer corpus that flattens nested relations, maps part-whole to part-part relations and maps n-ary to binary relations. Finally, we summarise experiments that compare approaches to generic relation extraction, a knowledge discovery task that uses minimally supervised techniques to achieve maximally portable extractors. These experiments illustrate the utility of the corpora.
Two players share a connected graph with non-negative weights on the vertices. They alternately take the vertices (one in each turn) and collect their weights. The rule they have to obey is that the remaining part of the graph must be connected after each move. We conjecture that the first player can get at least half of the weight of any tree with an even number of vertices. We provide a strategy for the first player to get at least 1/4 of an even tree. Moreover, we confirm the conjecture for subdivided stars. The parity condition is necessary: Alice gets nothing on a three-vertex path with all the weight at the middle. We suspect a kind of general parity phenomenon, namely, that the first player can gather a substantial portion of the weight of any ‘simple enough’ graph with an even number of vertices.
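Since the game is fully specified, its optimal outcome on small instances can be checked by exhaustive game-tree search. A brute-force sketch (our illustration, not a construction from the paper):

```python
from functools import lru_cache

def first_player_value(adj, weights):
    """Optimal total weight the first player collects in the vertex-taking
    game: players alternately remove one vertex (collecting its weight),
    and the subgraph induced by the remaining vertices must stay connected.
    adj: dict vertex -> iterable of neighbours; weights: dict vertex -> weight.
    Exhaustive search, so only feasible for small graphs.
    """
    def connected(vs):
        # The empty vertex set counts as (vacuously) connected.
        if not vs:
            return True
        start = next(iter(vs))
        seen, stack = {start}, [start]
        while stack:
            a = stack.pop()
            for b in adj[a]:
                if b in vs and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen == vs

    @lru_cache(maxsize=None)
    def best(remaining):
        # Max weight the player to move can still collect from `remaining`.
        moves = [v for v in remaining if connected(remaining - {v})]
        if not moves:
            return 0
        total = sum(weights[v] for v in remaining)
        # Taking v leaves the opponent best(remaining - {v}); we get the rest.
        return max(total - best(remaining - {v}) for v in moves)

    return best(frozenset(adj))
```

On the three-vertex path with all the weight at the middle, the first player indeed gets nothing (the middle vertex can never be taken first, since removing it disconnects the graph), matching the parity remark above; on small even trees the search can be used to test the half-of-the-weight conjecture instance by instance.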
Over the past few years, an increasingly diverse and ever-changing wireless spectrum has created a need for cognitive radio networks. Such networks leverage spectrum sensing and information from each layer in the protocol stack to overcome spectrum diversity by adapting all layers (e.g., the MAC and PHY) on the fly. By doing so, cognitive radios can achieve the greatest level of performance, given the current networking conditions. For example, in areas where access to the spectrum is highly contended, the radio can switch from using a carrier sense multiple access (CSMA) MAC protocol to a time division multiple access (TDMA) protocol that reduces overhead in accessing the spectrum to increase capacity and reduce collisions. Despite the increased recent activity in cognitive radio networks, supporting the development of protocols at the MAC and PHY layers, as well as cross-layer optimizations for such networks, has been extremely challenging. Commodity wireless hardware does not facilitate such development, because the majority of MAC functionality is placed on the network interface card (NIC) hardware, where programmability is limited and access to the software that runs on the NIC is often restricted.
The limited programmability of wireless NICs makes Software-Defined Radios (SDRs) an attractive alternative for building cognitive radio network protocols. SDRs implement the majority of functionality, including the physical and link layers, in software running on commodity hardware, making all layers of the protocol stack easy to modify.
In the previous chapters of this book, we have covered a broad range of networking requirements for emerging wireless scenarios along with the protocol features needed to support them. Clearly, not all of these requirements will be reflected in the general purpose architecture of the Internet, but it may be expected that many of the core capabilities will gradually migrate into main-stream networking protocols that will be in use ten to twenty years into the future. In this concluding chapter, we provide a brief discussion of the roadmap for network evolution or revolution in response to the changes in usage and technology that have been identified in this book.
Although it is impossible to predict exactly how the future Internet of the year 2025 will be realized, we can still enumerate a few alternative scenarios by which the Internet might evolve to meet the many challenges of cellular convergence and mobility. These are:
(1) Incremental evolution of IP features: This scenario assumes that the IP standardization process (e.g., IETF and ITU) will anticipate a reasonable set of future requirements and incorporate them into next-generation standards. This would be similar in spirit to IPv6, which improved on IPv4 by providing key features for addressing, mobility, and security. As discussed in Chapter 2, standards processes are already responding to emerging wireless technologies (such as IP-based cellular networks) and usage scenarios (such as multihop wireless access).
Ad hoc and multihop wireless networks are becoming increasingly important for a variety of applications ranging from tactical military networks, to metro area WiFi networks, to sensor applications. Multihop wireless is motivated by the fact that many embedded wireless devices are power-limited and cannot communicate directly with a distant base station or access point. In addition, ad hoc network formation is motivated by mobile service scenarios, such as tactical or vehicular. Protocol design considerations are given for both mobile ad hoc network (MANET) and static (planned) mesh network scenarios. These include self-organization, resource discovery, medium access control, and routing. Existing routing protocols for MANETs, including Destination Sequenced Distance Vector (DSDV), Dynamic Source Routing (DSR), and Ad hoc On-demand Distance Vector (AODV), are described and performance comparisons are given. More recent work on cross-layer mesh-routing protocols is introduced, including cross-layer metrics such as Airtime or the PHY/MAC Aware Routing Metric for Ad hoc networks (PARMA), as well as Integrated Routing and Medium Access (IRMA) control. The chapter concludes with implications for future IP protocols that would allow for seamless integration of multihop wired and wireless networks.
Introduction and Motivation
Wireless ad hoc and mesh networks have been an important research area for about two decades. Research topics like the network architecture and design, integration with TCP/IP, routing, and medium access control in the shared wireless medium have been discussed at length. However, the mobile and dynamic nature of the network introduces new challenges in self-organization, including neighbor and topology discovery, network management, and disconnected operation.
Vehicular networks are expected to be one of the major new application areas for wireless and Internet services. There are more than 600 million vehicles worldwide and these will be networked to achieve improvements to safety, traffic management, navigation, and user convenience. Vehicular networks (VANETs) have several elements in common with ad hoc mesh networks, but also have unique new requirements including high mobility, rapidly changing topology, multiple usage modes (vehicle-to-infrastructure [V2I] and vehicle-to-vehicle [V2V]), and the central importance of geo-location.
In the first part of this chapter, emerging VANETs are shown to be unique in the broad family of MANETs (Mobile Ad Hoc Networks). VANET services are reviewed and classified. A location-aware content distribution (“car-torrent”) is then presented. Next, vehicle urban sensing is showcased for applications that range from traffic congestion/pollution measurements to distributed civilian surveillance. MobEyes, an urban surveillance application that supports forensic investigations, is then described and contrasted to other urban sensing projects.
In the second part of the chapter, the enabling VANET protocols are reviewed. First, physical and MAC layer standards for vehicular communications (DSRC, WAVE, and IEEE 802.11p) are reviewed. Then, new VANET network level protocol requirements are identified and solutions are discussed. Geo-location-based protocol architectures are introduced, and complementary techniques such as geo-based handoff and geo-based beam adaptation for smart antennas are briefly touched on. Security and privacy issues are addressed, with particular attention to location privacy. These protocols are illustrated with urban sensing applications.
The number of endpoints connected wirelessly to the Internet has long overtaken the number of wired endpoints, and the difference between the two is widening. Wireless mesh networks, sensor networks, and vehicular networks represent some of the new growth segments in wireless networking in addition to mobile data networks, which is currently the fastest-growing segment in the wireless industry. Wireless networks with time-varying bandwidth, error rate, and connectivity beg for opportunistic transport, especially when the link bandwidth is high, the error rate is low, and the endpoint is connected to the network, in contrast to when the link bandwidth is low, the error rate is high, and the endpoint is not connected to the network. “Connected” is a binary attribute in TCP/IP, meaning one is either part of the Internet and can talk to everything or is isolated. In addition, connecting requires a globally unique IP address that is topologically stable on routing timescale (minutes to hours). This makes it difficult and inefficient to handle mobility and opportunistic transport in the Internet. Clearly we need a new networking paradigm that avoids a heavyweight operation like end-to-end connection and enables opportunistic transport. In addition to these scenarios, given that the predominant use of the Internet today is for content distribution and content retrieval, there is a need for handling dissemination of content in an efficient manner. This chapter describes a network architecture that addresses the previously mentioned unique requirements.
The current Internet is an outgrowth of the ARPANET (Advanced Research Projects Agency Network) that was initiated four decades ago. The TCP/IP (Transmission Control Protocol/Internet Protocol) designed by Vinton Cerf and Robert Kahn in 1973 did not anticipate, quite understandably, such extensive use of wireless channels and mobile terminals as we are witnessing today. The packet-switching technology for the ARPANET was not intended to support real-time applications that are sensitive to delay jitter. Furthermore, the TCP/IP designers assumed that its end users – researchers at national laboratories and universities in the United States, who would exchange their programs, data, and email – would be trustworthy; thus, security was not their concern, although reliability was one of the key considerations in the design and operation of the network.
It is amazing, therefore, that given the age of TCP/IP, the Internet has successfully continued to grow by supporting the ever increasing numbers of end users and new applications, with a series of ad hoc modifications and extensions made to the original protocol. In recent years, however, many in the Internet research community began to wonder how long they could continue to do “patch work” to accommodate new applications and their requirements. New research initiatives have been launched within the past several years, aimed at a grand design of “a future Internet.” Such efforts include the NSF's FIND (Future Internet Design) and GENI (Global Environment for Network Innovations), the European Community's FP7 (Seventh Framework Programme), Germany's G-Lab, and Japan's NWGN (New Generation Network).