In this chapter we introduce logical reasoning and the idea of mechanizing it, touching briefly on important historical developments. We lay the groundwork for what follows by discussing some of the most fundamental ideas in logic as well as illustrating how symbolic methods can be implemented on a computer.
What is logical reasoning?
There are many reasons for believing that something is true. It may seem obvious or at least immediately plausible, we may have been told it by our parents, or it may be strikingly consistent with the outcome of relevant scientific experiments. Though often reliable, such methods of judgement are not infallible, having been used, respectively, to persuade people that the Earth is flat, that Santa Claus exists, and that atoms cannot be subdivided into smaller particles.
What distinguishes logical reasoning is that it attempts to avoid any unjustified assumptions and confine itself to inferences that are infallible and beyond reasonable dispute. To avoid making any unwarranted assumptions, logical reasoning cannot rely on any special properties of the objects or concepts being reasoned about. This means that logical reasoning must abstract away from all such special features and be equally valid when applied in other domains. Arguments are accepted as logical based on their conformance to a general form rather than because of the specific content they treat. For instance, compare this traditional example:
All men are mortal
Socrates is a man
Therefore Socrates is mortal
with the following reasoning drawn from mathematics:
All positive integers are the sum of four integer squares
15 is a positive integer
Therefore 15 is the sum of four integer squares
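As a quick sanity check of the arithmetic premise (Lagrange's four-square theorem guarantees at least one decomposition), a brute-force search in Python confirms the conclusion for 15; this sketch and its function name are mine, not part of the original argument:

```python
from itertools import product

def four_square_decompositions(n):
    """All (a, b, c, d) with 0 <= a <= b <= c <= d and
    a^2 + b^2 + c^2 + d^2 == n."""
    bound = int(n ** 0.5)
    return [(a, b, c, d)
            for a, b, c, d in product(range(bound + 1), repeat=4)
            if a <= b <= c <= d and a*a + b*b + c*c + d*d == n]

print(four_square_decompositions(15))  # [(1, 1, 2, 3)]: 1 + 1 + 4 + 9 = 15
```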
These two arguments are both correct, and both share a common pattern:
All X are Y
a is an X
Therefore a is Y
In the XML standard, data are represented as unranked labeled ordered trees. Regular unranked tree automata provide a useful formalism for the validation of schemas enforcing regular structural constraints on XML documents. However, some concrete application contexts need the expression of more general constraints than the regular ones. In this paper we propose a new framework in which context-free style structural constraints can be expressed and validated. This framework is characterized by: (i) the introduction of a new notion of trees, the so-called typed unranked labeled trees (tulab trees for short), in which each node receives one of three possible types (up, down or fix), and (ii) the definition of a new notion of tree automata, the so-called nested sibling tulab tree automata, able to enforce context-free style structural constraints on tulab tree languages. During their structural control process, such automata use visibly pushdown languages of words [R. Alur and P. Madhusudan, Visibly pushdown languages, 36th ACM Symposium on Theory of Computing, Chicago, USA (2004) 202–211] on their alphabet of states. We show that the resulting class NSTL of tulab tree languages recognized by nested sibling tulab tree automata is robust, i.e. closed under Boolean operations and with decision procedures for the classical membership, emptiness and inclusion problems. We then give three characterizations of NSTL: a logical characterization by defining an adequate logic in which NSTL happens to coincide with the models of monadic second-order sentences; the two other characterizations use adequate encodings and map together languages of NSTL with some regular sets of 3-ary trees or with particular sets of binary trees.
By their very nature, sensor network applications are often platform-specific: the application uses a particular set of sensors on a mote-specific sensor board, to measure application-specific conditions. The hardware, and hence the application, is typically not directly reusable on another mote platform. Furthermore, some applications may want to push a mote platform to its limits, e.g. to maximize sampling rate, or minimize the latency in reacting to an external event. Getting to these limits normally requires extensive platform-specific tuning, including platform-specific code (possibly even written in assembly language).
Conversely, large portions of sensor network applications are portable: multi-hop network protocols, radio stacks for commonly available radio chips, signal processing, etc. need little or no change for a new platform. Thus, while a sensor network application is not typically directly portable, TinyOS should make it easy to port applications to new platforms by minimizing the extent of the necessary changes.
Portability and the hardware abstraction architecture
TinyOS's main tool to maximize portability while maintaining easy access to platform-specific features is a multi-level hardware abstraction architecture (HAA), shown in Figure 12.1. The device driver components that give access to a mote's various hardware resources are divided into three categories:
The hardware interface layer (HIL): a device driver is part of the HIL if it provides access to a device (radio, storage, timers, etc.) in a platform-independent way, using only hardware-independent interfaces.
We've considered various algorithms (tableaux, resolution, etc.) for verifying that a first-order formula is logically valid, if indeed it is. But these will not in general tell us when a formula is not valid. We'll see in Chapter 7 that there is no systematic procedure for doing so. However, there are procedures that work for certain special classes of formulas, or for validity in certain special (classes of) models, and we discuss some of the more important ones in this chapter. Often these naturally generalize common decision problems in mathematics and universal algebra such as equation-solving or the ‘word problem’.
The decision problem
There are three natural and closely connected problems for first-order logic for which we might want an algorithmic solution. By negating the formula, we can, according to taste, present them in terms of validity or unsatisfiability.
1. Confirm that a logically valid (or unsatisfiable) formula is indeed valid (resp. unsatisfiable), and never confirm an invalid (satisfiable) one.
2. Confirm that a logically invalid (or satisfiable) formula is indeed invalid (resp. satisfiable), and never confirm a valid (unsatisfiable) one.
3. Test whether a formula is valid or invalid (or whether it is satisfiable or unsatisfiable).
Evidently (3) encompasses both (1) and (2). Conversely, solutions to both (1) and (2) could be used together to solve (3): just run the verification procedures for validity and invalidity (or satisfiability and unsatisfiability) in parallel. Now, we have presented explicit solutions to (1), such as tableaux or resolution. But these do not solve (3). Given a satisfiable formula, these algorithms, while at least not incorrectly claiming it is unsatisfiable, will not always terminate.
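The parallel-composition idea can be sketched concretely. The following Python fragment (an illustrative sketch, not the book's code) dovetails two semi-decision procedures, each modelled as a generator that yields None while still searching and True when it has confirmed its answer; the toy even/odd witness searches below are hypothetical stand-ins for the validity and satisfiability verifiers:

```python
def decide_by_dovetailing(confirm_yes, confirm_no):
    """Interleave two semi-decision procedures, each a generator yielding
    None while still working and True once it succeeds. Assuming exactly
    one of them terminates on any input, this always gives an answer."""
    while True:
        if next(confirm_yes) is True:
            return True    # the "valid/unsatisfiable" verifier succeeded
        if next(confirm_no) is True:
            return False   # the "invalid/satisfiable" verifier succeeded

# Toy stand-ins (hypothetical): unbounded searches for a parity witness.
def is_even_witness(n):
    k = 0
    while True:
        if 2 * k == n:
            yield True
        yield None
        k += 1

def is_odd_witness(n):
    k = 0
    while True:
        if 2 * k + 1 == n:
            yield True
        yield None
        k += 1

print(decide_by_dovetailing(is_even_witness(7), is_odd_witness(7)))  # False
```

Run separately, either witness search alone would loop forever on the "wrong" input; interleaved, the pair always terminates, which is exactly why (1) and (2) together would solve (3).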
In this appendix we collect together some useful mathematical background. Readers may prefer to read the main text and refer to this appendix only if they get stuck. We do not give much in the way of proofs and the style is terse and rather dull, so this is not a substitute for standard texts. For example, Forster (2003) discusses in detail almost all the topics here, as well as much relevant material in logic and computability and some more advanced topics in set theory.
Mathematical notation and terminology
We use ‘iff’ as a shorthand for ‘if and only if’ and ‘w.r.t.’ for ‘with respect to’. We write x | y, read ‘x divides y’, to mean that y is an integer multiple of x, e.g. 3 | 6, 1 | x and x | 0. We use the usual arithmetic operations (‘+’ etc.) on numbers; we generally write xy for the product of x and y, but sometimes write x · y to emphasize that there is an operation involved and make the syntax more regular. An operation such as addition for which the order of the two arguments is irrelevant (x + y = y + x) is called commutative, and an operation where the association does not matter (x + (y + z) = (x + y) + z) is said to be associative. We also use the conventional equality and inequality relations (‘=’, ‘≤’ etc.) on numbers, and sometimes emphasize that an equation is the definition of a concept by decorating the equality sign with def, e.g. tan(x) =def sin(x)/cos(x).
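These conventions can be made concrete with a small Python check (a sketch of my own, not part of the text; the `divides` helper is an assumed name):

```python
def divides(x, y):
    """x | y: y is an integer multiple of x (so x | 0 for every x)."""
    return y == 0 if x == 0 else y % x == 0

print(divides(3, 6), divides(1, 42), divides(7, 0))  # True True True

# Addition is commutative and associative; subtraction is neither:
x, y, z = 2, 5, 9
print(x + y == y + x, (x + y) + z == x + (y + z))  # True True
print(x - y == y - x)                              # False
```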
This chapter presents TinyOS's execution model, which is based on split-phase operations, run-to-completion tasks and interrupt handlers. Chapter 3 introduced components and modules, and Chapter 4 showed how to connect components together through wiring. This chapter goes into how these components execute, and how you can manage the concurrency between them in order to keep a system responsive. This chapter focuses on tasks, the basic concurrency mechanism in nesC and TinyOS. We defer discussion of concurrency issues relating to interrupt handlers and resource sharing to Chapter 11, as these typically only arise in very high-performance applications and low-level drivers.
Overview
As we saw in Section 3.4, all TinyOS I/O (and long-running) operations are split-phase, avoiding the need for threads and allowing TinyOS programs to execute on a single stack. In place of threads, all code in a TinyOS program is executed either by a task or an interrupt handler. A task is in effect a lightweight deferred procedure call: a task can be posted at any time, and posted tasks are executed later, one at a time, by the TinyOS scheduler. Interrupts, in contrast, can occur at any time, interrupting tasks or other interrupt handlers (except when interrupts are disabled).
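The task model can be illustrated with a small scheduler sketch in Python (a conceptual analogue of my own, not TinyOS/nesC code): posting enqueues a callable, and the scheduler later runs posted tasks one at a time, each to completion:

```python
from collections import deque

task_queue = deque()

def post(task):
    """Post a task: it is queued now and executed later by the scheduler."""
    task_queue.append(task)

def run_scheduler():
    """Run posted tasks one at a time, each to completion. No task preempts
    another; in real TinyOS only interrupt handlers can preempt a task."""
    log = []
    while task_queue:
        task = task_queue.popleft()
        log.append(task())
    return log

def blink():
    post(lambda: "deferred work")  # a running task may post further tasks
    return "blink"

post(blink)
print(run_scheduler())  # ['blink', 'deferred work']
```

Note this sketch deliberately omits a TinyOS detail: a real TinyOS task can be pending at most once, so re-posting an already-queued task has no effect.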
While a task or interrupt handler is declared within a particular module, its execution may cross component boundaries when it calls a command or signals an event (Figure 5.1). As a result, it isn't always immediately clear whether a piece of code is only executed by tasks or if it can also be executed from an interrupt handler.
This chapter describes components, the building blocks of nesC programs. Every component has a signature, which describes the functions it needs to call as well as the functions that others can call on it. A component declares its signature with interfaces, which are sets of functions for a complete service or abstraction. Modules are components that implement and call functions in C-like code. Configurations connect components into larger abstractions. This chapter focuses on modules, and covers configurations only in enough depth to modify and extend existing applications; Chapter 4 covers writing new configurations from scratch.
Component signatures
A nesC program is a collection of components. Every component is in its own source file, and there is a one-to-one mapping between component and source file names. For example, the file LedsC.nc contains the nesC code for the component LedsC, while the component PowerupC can be found in the file PowerupC.nc. Components in nesC reside in a global namespace: there is only one PowerupC definition, and so the nesC compiler loads only one file named PowerupC.nc.
There are two kinds of components: modules and configurations. Modules and configurations can be used interchangeably when combining components into larger services or abstractions. The two types of components differ in their implementation sections. Module implementation sections consist of nesC code that looks like C. Module code declares variables and functions, calls functions, and compiles to assembly code. Configuration implementation sections consist of nesC wiring code, which connects components together. Configurations are the major difference between nesC and C (and other C derivatives).
In this chapter, we look at the design and implementation of SoundLocalizer, a somewhat more complex sensor network application. SoundLocalizer implements a coordinated event detection system where a group of motes detect a particular event – a loud sound – and then communicate amongst themselves to figure out which mote detected the event first and is therefore presumed closest to where the event occurred. To ensure timely event detection, and to accurately compare event detection times, this application needs to use some low-level interfaces from the platform's hardware abstraction and hardware presentation layers (HAL, HPL, as described in the previous chapter). As a result, this application is not directly portable – we implement it here for micaz motes with an mts300 sensor board. In the design and implementation descriptions below, we discuss how the application and code are designed to simplify portability and briefly describe what would be involved in porting this application to another platform.
The HAL and HPL components used by SoundLocalizer offer lower-level interfaces (interrupt-driven, controlled by a Resource interface, etc.) than the high-level HIL components we used to build the AntiTheft application of Chapter 6. As a result, SoundLocalizer's implementation must use atomic statements and arbitration to prevent concurrency-induced problems, as we saw in Chapter 11.
The complete code for SoundLocalizer is available from TinyOS's contributed code directory (under “TinyOS Programming”).
Sound Localizer design
Figure 13.1 shows a typical setup for the SoundLocalizer application. A number of detector motes are placed on a surface a couple of feet apart. When the single coordinator mote is switched on, it sends a series of radio packets that let the detector motes synchronize their clocks.
This book is about writing TinyOS systems and applications in the nesC language. This chapter gives a brief overview of TinyOS and its intended uses. TinyOS is an open-source project to which a large number of research universities and companies contribute. The main TinyOS website, www.tinyos.net, has instructions for downloading and installing the TinyOS programming environment. The website has a great deal of useful information which this book doesn't cover, such as common hardware platforms and how to install code on a node.
Networked, embedded sensors
TinyOS is designed to run on small, wireless sensors. Networks of these sensors have the potential to revolutionize a wide range of disciplines, fields, and technologies. Recent example uses of these devices include:
Golden Gate Bridge safety High-speed accelerometers collect synchronized data on the movement of and oscillations within the structure of San Francisco's Golden Gate Bridge. This data allows the maintainers of the bridge to easily observe the structural health of the bridge in response to events such as high winds or traffic, as well as quickly assess possible damage after an earthquake [10]. Being wireless avoids the need for installing and maintaining miles of wires.
Volcanic monitoring Accelerometers and microphones observe seismic events on the Reventador and Tungurahua volcanoes in Ecuador. Nodes locally compare when they observe events to determine their location, and report aggregate data to a camp several kilometers away using a long-range wireless link. Small, wireless nodes allow geologists and geophysicists to install dense, remote scientific instruments [30], obtaining data that answers questions about otherwise unapproachable environments.
To quote the Gang of Four, design patterns are “descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context.” [3] In the components we've seen so far, we see several recurring patterns, such as the use of parameterized interfaces to implement services with multiple clients (VirtualizeTimerC, Section 9.1.3), or one component wrapping another (RandomC, Section 4.2). In this chapter, in the spirit of the Gang of Four's original design patterns work, we attempt to formalize a number of these patterns, based on our observations during TinyOS's development.
This chapter presents eight nesC design patterns: three behavioral patterns (relating to component interaction): Dispatcher, Decorator, and Adapter; three structural patterns (relating to how applications are structured): Service Instance, Placeholder, and Facade; and two namespace patterns (management of identifiers such as message types): Keyspace and Keymap. Each pattern's presentation follows the model of the Design Patterns book. Each one has an Intent, which briefly describes its purpose. A more in-depth Motivation follows, providing an example drawn from TinyOS. Applicable When provides a succinct list of conditions for use, and a component diagram shows the Structure of how components in the pattern interact. In addition to our usual conventions for component diagrams, we attach folded sub-boxes to components to show relevant code snippets (a floating folded box represents source code in some other, unnamed, component). The diagram is followed by a Participants list explaining the role of each component. Sample Code shows an example nesC implementation, and Known Uses points to some uses of the pattern in TinyOS. Consequences describes how the pattern achieves its goals, and notes issues to consider when using it.
We now move from propositional logic to richer first-order logic, where propositions can involve non-propositional variables that may be universally or existentially quantified. We show how proof in first-order logic can be mechanized naively via Herbrand's theorem. We then introduce various refinements, notably unification, that help make automated proof more efficient.
First-order logic and its implementation
Propositional logic only allows us to build formulas from primitive propositions that may independently be true or false. However, this is too restrictive to capture patterns of reasoning where the truth or falsity of propositions depends on the values of non-propositional variables. For example, a typical proposition about numbers is ‘m < n’, and its truth depends on the values of m and n. If we simply introduce a distinct propositional variable for each such proposition, we lose the ability to interrelate different instances according to the variables they contain, e.g. to assert that ¬(m < n ∧ n < m). First-order (predicate) logic extends propositional logic in two ways to accommodate this need:
• the atomic propositions can be built up from non-propositional variables and constants using functions and predicates;
• the non-propositional variables can be bound with quantifiers.
We make a syntactic distinction between formulas, which are intuitively intended to be true or false, and terms, which are intended to denote ‘objects’ in the domain being reasoned about (numbers, people, sets or whatever). Terms are built up from (object-denoting) variables using functions. In discussions we use f(s, t, u) for a term built from subterms s, t and u using the function f, or sometimes infix notation like s + t rather than +(s, t) where it seems more natural or familiar.
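The term/formula distinction is easy to mirror in a program. The book's implementation language is OCaml; the following Python analogue (an illustrative sketch of my own, with my own names `Var`, `Fn` and `term_str`) shows terms built from variables and function applications, with ‘+’ printed infix:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    name: str
    args: tuple  # tuple of sub-terms (Var or Fn)

def term_str(t):
    """Render a term, using infix notation for binary '+'."""
    if isinstance(t, Var):
        return t.name
    if t.name == "+" and len(t.args) == 2:
        return f"({term_str(t.args[0])} + {term_str(t.args[1])})"
    return f"{t.name}({', '.join(term_str(a) for a in t.args)})"

# s + t is really the application +(s, t):
s, t, u = Var("s"), Var("t"), Var("u")
print(term_str(Fn("+", (s, t))))   # (s + t)
print(term_str(Fn("f", (s, t, u))))  # f(s, t, u)
```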