A basic problem that must be addressed in any design of a distributed network is the routing of messages. That is, if a node in the network wants to send a message to some other node in the network or receives a message destined for some other node, a method is needed to enable the node to decide over which outgoing link it has to send this message. Algorithms for this problem are called routing algorithms. In the sequel we will only consider distributed routing algorithms which are determined by the cooperative behavior of the local routing protocols of the nodes in order to guarantee effective message handling and delivery.
Desirable properties of routing algorithms are for example correctness, optimality, and robustness. Correctness seems easy to achieve in a static network, but the problem is far less trivial in case links and nodes are allowed to go down and come up as they can do in practice. Optimality is concerned with finding the “quickest” routes. Ideally, a route should be chosen for a message on which it will encounter the least delay but, as this depends on the amount of traffic on the way, such routes are hard to predict and hence the goal is actually difficult to achieve. A frequent compromise is to minimize the number of hops, i.e., the number of links over which the message travels from origin to destination. We will restrict ourselves to minimum-hop routing. Robustness is concerned with the ease with which the routing scheme is adapted in case of topological changes.
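To make minimum-hop routing concrete, the following sketch (not taken from the book; the graph, names, and function are illustrative only) computes, by breadth-first search from one node, the hop count to every destination together with the outgoing link on a minimum-hop path. A distributed routing algorithm has to obtain the same information cooperatively at every node.

```python
from collections import deque

def min_hop_routing_table(graph, source):
    """Centralized sketch: for every destination, the hop count and the
    neighbour of `source` over which a minimum-hop path starts."""
    table = {source: (0, None)}          # destination -> (hops, outgoing link)
    queue = deque([source])
    while queue:
        u = queue.popleft()
        hops, link = table[u]
        for v in graph[u]:
            if v not in table:
                # the first hop towards v is v itself if u is the source,
                # otherwise it is inherited from the path to u
                table[v] = (hops + 1, v if u == source else link)
                queue.append(v)
    return table

# A small example network, given as an adjacency list.
network = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['A', 'C']}
print(min_hop_routing_table(network, 'A'))
# {'A': (0, None), 'B': (1, 'B'), 'D': (1, 'D'), 'C': (2, 'B')}
```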
Originally, the research reported in this book was motivated by the way the material used for an introductory course on Distributed Computing (taught in the spring of 1985) was presented in the literature. The teacher of the course, Jan van Leeuwen, and I felt that many results were presented in a way that needed clarifying, and that correctness proofs, if existent, were often far from convincing, if correct at all. Thus we started to develop correctness proofs for some distributed protocols. Gradually a methodology emerged for such proofs, based on the idea of “protocol skeletons” and “system-wide invariants”. Similar ideas were developed by others in the context of formal proof systems for parallel and distributed programs.
I thank the ESPRIT Basic Research Action No. 7141 (project ALCOM II: Algorithms and Complexity), the Netherlands Organization for Scientific Research (NWO) under contract NF 62-376 (NFI project ALADDIN: Algorithmic Aspects of Parallel and Distributed Systems), and the Department of Computer Science at Utrecht University for giving me the opportunity to do research on this topic, and for providing such a stimulating environment. I thank my coauthors Jan van Leeuwen, Hans Bodlaender, and Gerard Tel, and also Petra van Haaften, Hans Zantema, and Netty van Gasteren for all the discussions we had.
Later the idea came up to write a thesis about this subject, and I especially thank my thesis advisor Jan van Leeuwen for his stimulating support. The first four chapters of the thesis served as preliminary versions for the first four chapters of this book, while chapter 5 on commit protocols was added later.
Consider a communication network in which processors want to transmit many short messages to each other. The processors are not necessarily connected by a communication channel. Usually this service is provided by protocols in the transport layer. A protocol can incorporate such a message in a packet and send the packet to the destination processor. As discussed in chapter 1, in the transport layer it is again necessary to consider communication errors, even though we can assume that the communication over channels is handled correctly by the lower layers. Thus we have to assume that the communication network can lose packets, copy packets (due to necessary retransmissions), delay packets arbitrarily long, and deliver packets in a different order than the order in which they were sent.
We consider the design of some protocols that handle the communication of messages correctly, in the sense that there is no loss or duplication of messages (cf. Belsnes [Bel76]). To specify this more precisely, suppose processor i wants to transmit a message m to processor j. The message m is said to be lost if i thinks that j received m while this is not the case, and m is said to be duplicated if j receives two or more copies of m from i and thinks that they are different messages.
If a processor i has a message or a sequence of messages to send to j, it sets up a temporary connection with j, which is closed as soon as i knows that j received the message(s) (or that j is not in a position to receive them).
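As a toy illustration of the receiver-side bookkeeping this requires (not one of the protocols developed here; all names are hypothetical), packets can carry a connection identifier and a per-connection sequence number, so that retransmitted copies are acknowledged but delivered at most once:

```python
class Receiver:
    """Delivers each (connection, sequence number) pair at most once; guarding
    against loss is the sender's task, by retransmitting until acknowledged."""

    def __init__(self):
        self.seen = {}                       # connection id -> delivered sequence numbers

    def on_packet(self, conn_id, seq, message):
        delivered = self.seen.setdefault(conn_id, set())
        if seq in delivered:
            return None                      # duplicate copy: do not deliver again
        delivered.add(seq)
        return message                       # hand the message to the application

r = Receiver()
assert r.on_packet(1, 0, "hello") == "hello"
assert r.on_packet(1, 0, "hello") is None    # a retransmitted copy is filtered out
```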
In this chapter we consider some link-level protocols and show their partial correctness by assertional verification. Link-level protocols, i.e., protocols residing in the data link layer, are designed to control the exchange of information between two computing stations (e.g. computers or processors) over a full-duplex link. They should guard against the loss of information when the transmission medium is unreliable. We only discuss transmission errors that occur while the link is up, and thus use the model of a static network consisting of two nodes i and j and a bidirectional link (i, j). We will not deal with the problems caused by links or nodes going down, nor with the termination of a protocol. These issues will be dealt with, in a different context, in later chapters.
In section 2.1 we discuss a generalization of the sliding window protocol. This protocol is meant to control the exchange of messages in an asynchronous environment. Although sliding window protocols belong to the data link layer, we will see in chapter 4 that the generalization can also be used as a basis for connection management, which belongs to the transport layer. We show that the alternating bit protocol and the “balanced” two-way sliding window protocol are both instances of one general protocol skeleton, which contains several further parameters for tuning the simultaneous transmission of data over a full-duplex link. After proving the partial correctness of the protocol skeleton, we discuss how the optimal choice of these parameters depends on the propagation delay of the link, the transmission speed of the senders, and the error rate of the link.
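For concreteness, here is a minimal simulation sketch of the simplest instance mentioned above, the alternating bit protocol (an illustration under simplifying assumptions, not the protocol skeleton of section 2.1): each data packet carries one bit, the receiver accepts only the bit it expects, and the sender retransmits until the acknowledgement shows that the receiver has flipped its bit.

```python
import random

def alternating_bit_transfer(messages, loss_rate=0.3, seed=1):
    """Deliver `messages` in order over a channel that may lose packets and
    acknowledgements.  One packet in flight at a time; a sketch only."""
    rng = random.Random(seed)
    delivered = []
    send_bit = recv_bit = 0
    for m in messages:
        while True:
            if rng.random() < loss_rate:      # data packet lost: time out, retransmit
                continue
            if send_bit == recv_bit:          # receiver sees the expected bit
                delivered.append(m)           # deliver exactly once
                recv_bit ^= 1
            ack = recv_bit                    # acknowledgement carries the next expected bit
            if rng.random() < loss_rate:      # acknowledgement lost: retransmit
                continue
            if ack != send_bit:               # packet confirmed: flip the sender's bit
                send_bit ^= 1
                break
    return delivered

print(alternating_bit_transfer(["a", "b", "c"]))   # ['a', 'b', 'c']
```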
In the past two decades, distributed computing has evolved rapidly from a virtually non-existent to an important area in computer science research. As hardware costs declined, single mainframe computers with a few simple terminals were replaced by all kinds of general and special purpose computers and workstations, as the latter became more cost effective. At many sites it became necessary to interconnect all these computers to make communication and file exchanges possible, thus creating a computer network. Given a set of computers that can communicate, it is also desirable that they can cooperate in some sense, for example, to contribute to one and the same computation. Thus a network of computers is turned into a distributed system, capable of performing distributed computations. The field of distributed computing is concerned with the problems that arise in the cooperation and coordination between computers in performing distributed tasks.
Distributed algorithms (or: protocols) range from algorithms for communication to algorithms for distributed computations. These algorithms in a distributed system appear to be conceptually far more complex than in a single processing unit environment. With a single processing unit only one action can occur at a time, while in a distributed system the number of possibilities of what can happen when and where at a time tends to be enormous, and our human minds are just not able to keep track of all of them.
This leads to the problem of determining whether the executions of a distributed algorithm indeed have the desired effect in all possible circumstances and combinations of events. Testing such algorithms exhaustively is completely infeasible: some form of “verification” is the only way out.
The notion of a recursively enumerable (r.e.) set, i.e. a set of integers whose members can be effectively listed, is a fundamental one. Another way of approaching this definition is via an approximating function {A_s}_{s∈ω} to the set A in the following sense: we begin by guessing x ∉ A at stage 0 (i.e. A_0(x) = 0); when x later enters A at a stage s + 1, we change our approximation from A_s(x) = 0 to A_{s+1}(x) = 1. Note that this approximation (for fixed x) may change at most once as s increases, namely when x enters A. An obvious variation on this definition is to allow more than one change: a set A is 2-r.e. (or d-r.e.) if, for each x, A_s(x) changes at most twice as s increases. This is equivalent to requiring the set A to be the difference A_1 − A_2 of two r.e. sets. (Similarly, one can define n-r.e. sets by allowing at most n changes for each x.)
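In symbols, the approximation characterization reads as follows (a compact restatement of the definition just given, in the notation used above):

```latex
% A is n-r.e. iff it has a computable approximation that starts at 0 and
% changes its mind at most n times on each argument x:
\[
  A \text{ is } n\text{-r.e.} \iff
  \exists \text{ computable } \{A_s\}_{s\in\omega}\ \forall x\ \Bigl[
    A_0(x) = 0,\ \ A(x) = \lim_{s} A_s(x),\ \
    \bigl|\{\, s : A_{s+1}(x) \neq A_s(x) \,\}\bigr| \le n \Bigr].
\]
% In particular the 1-r.e. sets are exactly the r.e. sets, and the 2-r.e.
% (d-r.e.) sets are exactly the differences A_1 \setminus A_2 of r.e. sets.
```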
The notion of d-r.e. and n-r.e. sets goes back to Putnam [1965] and Gold [1965] and was investigated (and generalized) by Ershov [1968a, b, 1970]. Cooper showed that even in the Turing degrees, the notions of r.e. and d-r.e. differ:
Theorem 1.1. (Cooper [1971]) There is a properly d-r.e. degree, i.e. a Turing degree containing a d-r.e. but no r.e. set.
In the eighties, various structural differences between the r.e. and the d-r.e. degrees were exhibited by Arslanov [1985], Downey [1989], and others.
Resource-bounded genericity concepts have been introduced by Ambos-Spies, Fleischhack and Huwig [AFH84], [AFH88], Lutz [Lu90], and Fenner [Fe91]. Though it was known that some of these concepts are incompatible, the relations among these notions were not fully understood. Here we survey these notions and clarify the relations among them by specifying the types of diagonalizations captured by the individual concepts. Moreover, we introduce two new, stronger resource-bounded genericity concepts corresponding to fundamental diagonalization concepts in complexity theory. First we define general genericity, which generalizes all of the previous concepts and captures both standard finite extension arguments and slow diagonalizations. The second new concept, extended genericity, is actually a hierarchy of genericity concepts for a given complexity class which extends general genericity and in addition captures delayed diagonalizations. Moreover, this hierarchy will show that in general there is no strongest genericity concept for a complexity class. A similar hierarchy of genericity concepts was independently introduced by Fenner [Fe95].
Finally we study some properties of the Baire category notions on E induced by the genericity concepts and we point out some relations between resource-bounded genericity and resource-bounded randomness.
Introduction
The finite extension method is a central diagonalization technique in computability theory (see e.g. [Ro67], [Od89], [Le83], [So87]). In a standard finite extension argument a set A of strings (or equivalently of numbers) is inductively defined by specifying longer and longer initial segments of it. The global property to be ensured for A by the construction is split into countably many subgoals, given as a list {R_e : e ≥ 0} of so-called requirements.
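Schematically (a standard presentation of the method, not specific to any particular construction in this paper), the set is obtained as the union of a chain of finite initial segments, where the extension chosen at stage e + 1 secures requirement R_e:

```latex
% Finite extension argument: A is the union of a chain of finite initial
% segments; the extension chosen at stage e+1 secures requirement R_e.
\[
  \sigma_0 \subseteq \sigma_1 \subseteq \sigma_2 \subseteq \cdots, \qquad
  A = \bigcup_{e \ge 0} \sigma_e, \qquad
  \sigma_{e+1} \supseteq \sigma_e \text{ chosen so that } R_e
  \text{ holds for every set extending } \sigma_{e+1}.
\]
```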
A set A ⊆ ω is computably enumerable (c.e.), also called recursively enumerable (r.e.), or simply enumerable, if there is a computable algorithm to list its members. Let ε denote the structure of the c.e. sets under inclusion. Starting with Post [1944] there has been much interest in relating the definable (especially ε-definable) properties of a c.e. set A to its “information content”, namely its Turing degree, deg(A), under ≤_T, the usual Turing reducibility [Turing 1939]. Recently, Harrington and Soare answered a question arising from Post's program by constructing a nonempty ε-definable property Q(A) which guarantees that A is incomplete (A <_T K). The property Q(A) is of the form (∃C)[A ⊂_m C & Q⁻(A, C)], where A ⊂_m C abbreviates “A is a major subset of C”, and Q⁻(A, C) contains the main ingredient for incompleteness.
A dynamic property P(A), such as prompt simplicity, is one which is defined by considering how fast elements enter A relative to some simultaneous enumeration of all c.e. sets. If some set in deg(A) is promptly simple then A is prompt, and otherwise tardy. We introduce here two new tardiness notions, small-tardy(A, C) and Q-tardy(A, C). We begin by proving that small-tardy(A, C) holds iff A is small in C (A ⊂_s C) as defined by Lachlan [1968]. Our main result is that Q-tardy(A, C) holds iff Q⁻(A, C). Therefore, the dynamic property Q-tardy(A, C), which is more intuitive and easier to work with than its ε-definable counterpart Q⁻(A, C), is exactly equivalent to it and captures the same incompleteness phenomenon.
This is an informal list of some open problems in recursion theory, based on the list of open problems compiled during the Leeds Recursion Theory Year. Solutions have been announced for some of these problems. The current status of the questions below and of questions added after July 1995 can be found on the World Wide Web at http://www.math.uchicago.edu/~ted.
Solutions and new problems are welcome and should be directed to T. A. Slaman at ted@math.uchicago.edu.
It is shown that there is a non-trivial obstruction that blocks the extension of embeddings on the quotient structure of the recursively enumerable degrees modulo the cappable degrees. Therefore, Shoenfield's conjecture fails on that structure, which answers a question of Ambos-Spies, Jockusch, Shore, and Soare [1984], and Schwarz [1984] (also see Slaman [1994]).
Introduction
The recursively enumerable (r.e.) sets are those subsets of the natural numbers (denoted ω) which can be enumerated by an effective procedure (or a computable function from ω to ω). There is a notion of relative computability (or Turing reducibility) among the r.e. sets (in fact, among all sets of natural numbers), which captures the idea that one set is more complicated, or harder to compute, than another. The equivalence classes of r.e. sets under this notion of relative computability are called the r.e. (Turing) degrees. The set of all r.e. degrees (denoted R) is made into a partial ordering with least (0, the equivalence class of all recursive sets) and greatest (0′, the equivalence class which contains the halting problem) elements in the natural way, namely, the reducibility relation between r.e. sets induces a partial ordering on degrees. It is readily shown that finite suprema always exist in R. Therefore, R forms an upper semilattice.
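The existence of finite suprema is witnessed by the effective join of representatives (a standard fact, spelled out here for completeness):

```latex
% The supremum of two r.e. degrees is realized by the effective join of
% representatives: A join B is r.e. whenever A and B are.
\[
  A \oplus B = \{\, 2x : x \in A \,\} \cup \{\, 2x+1 : x \in B \,\}, \qquad
  \deg(A) \vee \deg(B) = \deg(A \oplus B),
\]
% since A \le_T A \oplus B, B \le_T A \oplus B, and any set computing both
% A and B computes A \oplus B.
```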
ABSTRACT: We describe a general method to separate relativizations of structures arising from computability theory. The method is applied to the lattice of r.e. sets, and the partial orders of r.e. m–degrees and T–degrees. We also consider classes of oracles where all relativizations are elementarily equivalent. We hope that the paper can also serve as an introduction to coding in these structures.
Introduction. The relativization of a concept from computability theory to an oracle set Z is obtained by expanding the underlying concept of computation in such a way that, at any step of the computation procedure, tests of the form “n ∈ Z”, where n is some number obtained previously in the computation, are allowed. For instance, the relativization of the concept of r.e. sets to Z is “set r.e. in Z”. In this paper, we study to what extent the isomorphism type and the theory of the relativization A^Z of a structure A from computability theory depend on the oracle set Z. We consider mainly the case that A is the structure ε of r.e. sets under inclusion or a degree structure on r.e. sets, but first discuss the case that A is the structure D_T of all T–degrees or D_m of all m–degrees. In this case, D_m^Z is the structure of degrees of subsets of ω under many–one reductions via (total) functions recursive in Z, while D_T^Z is simply the upper cone of D_T above the T–degree of Z.
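The last remark can be made precise as follows (a routine observation, stated here in symbols for the reader's convenience):

```latex
% Relativizing the Turing degrees to an oracle Z gives, up to isomorphism,
% the upper cone of D_T above deg_T(Z):
\[
  \mathcal{D}_T^{Z} \;\cong\; \{\, \mathbf{d} \in \mathcal{D}_T :
      \mathbf{d} \ge \deg_T(Z) \,\}, \qquad
  \text{via} \quad \deg_T^{Z}(A) \;\mapsto\; \deg_T(A \oplus Z).
\]
```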