9 - Distributed mutual exclusion algorithms
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 305-351
Summary
Introduction
Mutual exclusion is a fundamental problem in distributed computing systems: it ensures that concurrent access by processes to a shared resource or data is serialized, that is, executed in a mutually exclusive manner. In a distributed system, mutual exclusion requires that only one process execute the critical section (CS) at any given time. Shared variables (semaphores) and a local kernel cannot be used to implement mutual exclusion in a distributed system; message passing is the sole means of implementing it. The decision as to which process is allowed to access the CS next is arrived at by message passing, through which each process learns about the state of all other processes in some consistent way. The design of distributed mutual exclusion algorithms is complex because these algorithms have to deal with unpredictable message delays and incomplete knowledge of the system state. There are three basic approaches for implementing distributed mutual exclusion:
Token-based approach.
Non-token-based approach.
Quorum-based approach.
In the token-based approach, a unique token (also known as the PRIVILEGE message) is shared among the sites. A site is allowed to enter its CS if it possesses the token and it continues to hold the token until the execution of the CS is over. Mutual exclusion is ensured because the token is unique.
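The token-passing idea can be sketched in a few lines of Python. The following is a minimal illustrative simulation, not an algorithm from the book; the ring-forwarding policy, the `Site` class, and `circulate_token` are hypothetical simplifications:

```python
class Site:
    """A site may execute its critical section only while holding the unique token."""
    def __init__(self, site_id):
        self.site_id = site_id
        self.has_token = False

    def enter_cs(self, log):
        # Mutual exclusion follows from the uniqueness of the token.
        assert self.has_token, f"site {self.site_id} has no token"
        log.append(self.site_id)


def circulate_token(sites, requests):
    """Forward the PRIVILEGE token around a ring, serving CS requests in order."""
    log = []
    holder = 0
    sites[holder].has_token = True
    for req in requests:
        while holder != req:                 # pass the token toward the requester
            sites[holder].has_token = False
            holder = (holder + 1) % len(sites)
            sites[holder].has_token = True
        sites[holder].enter_cs(log)
    return log


sites = [Site(i) for i in range(4)]
print(circulate_token(sites, [2, 0, 3]))  # [2, 0, 3]: CS entries are serialized
```

Because exactly one `Site` ever holds the token, CS entries are trivially serialized; real token-based algorithms differ chiefly in how requests are routed to the current token holder.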
12 - Distributed shared memory
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 410-455
Summary
Abstraction and advantages
Distributed shared memory (DSM) is an abstraction provided to the programmer of a distributed system. It gives the impression of a single monolithic memory, as in the traditional von Neumann architecture. Programmers access the data across the network using only read and write primitives, as they would in a uniprocessor system. Programmers do not have to deal with send and receive communication primitives and the ensuing complexity of dealing explicitly with synchronization and consistency in the message-passing model. The DSM abstraction is illustrated in Figure 12.1. A part of each computer's memory is earmarked for shared space, and the remainder is private memory. To provide programmers with the illusion of a single shared address space, a memory-mapping management layer is required to manage the shared virtual memory space.
DSM has the following advantages:
Communication across the network is achieved by the read/write abstraction, which simplifies the programmer's task.
A single address space is provided, making it possible to avoid data movement across multiple address spaces and simplifying pass-by-reference and the passing of complex data structures containing pointers.
If a block of data needs to be moved, the system can exploit locality of reference to reduce the communication overhead.
DSM is often cheaper than using dedicated multiprocessor systems, because it uses simpler software interfaces and off-the-shelf hardware.
There is no bottleneck presented by a single memory access bus.
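As a rough illustration of the read/write abstraction, the sketch below hides a memory partitioned across nodes behind plain `read` and `write` calls. The `SharedVirtualMemory` class and its page-placement rule are invented for illustration, and it ignores caching, replication, and consistency entirely:

```python
class SharedVirtualMemory:
    """Toy mapping layer: one shared address space backed by per-node memories."""
    def __init__(self, num_nodes, page_size):
        self.page_size = page_size
        # Each node earmarks part of its memory for the shared space.
        self.node_memory = [dict() for _ in range(num_nodes)]

    def _locate(self, address):
        # The mapping layer decides which node's memory backs each page
        # (a simplistic round-robin page placement, assumed for illustration).
        page = address // self.page_size
        return self.node_memory[page % len(self.node_memory)]

    def write(self, address, value):
        self._locate(address)[address] = value

    def read(self, address):
        return self._locate(address).get(address)


dsm = SharedVirtualMemory(num_nodes=3, page_size=4)
dsm.write(10, "hello")   # the programmer sees only read/write, no send/receive
print(dsm.read(10))      # hello
```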
[…]
17 - Self-stabilization
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 631-676
Summary
Introduction
The idea of self-stabilization in distributed computing was first proposed by Dijkstra in 1974. The concept of self-stabilization is that, regardless of its initial state, the system is guaranteed to converge to a legitimate state in a bounded amount of time by itself, without any outside intervention. A non-self-stabilizing system may never reach a legitimate state, or it may reach a legitimate state only temporarily. The main complication in designing a self-stabilizing distributed system is that nodes do not have a global memory that they can access instantaneously. Each node must make decisions based on the local knowledge available to it, and the actions of all nodes must achieve a global objective.
The definition of legitimate and illegitimate states depends on the particular application; generally, the illegitimate states are defined to be all those states which are not legitimate. Dijkstra illustrated the concept of self-stabilization using a self-stabilizing token ring system: in a token ring, global states in which there are multiple tokens or no token are illegitimate. In a distributed system where a large number of computers are widely distributed and communicate with each other by message passing or through shared memory, there is a possibility of the system entering an illegitimate state, for example, if a message is lost. The concept of self-stabilization helps such a system recover on its own.
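Dijkstra's token ring example can be rendered directly in code. The sketch below is a minimal version of his K-state machine protocol, assuming K ≥ n and that the lowest-numbered privileged machine fires at each step; a machine "holds a token" exactly when its guard is enabled:

```python
def privileged(state):
    """Machines currently holding a privilege (a 'token') in Dijkstra's ring."""
    n = len(state)
    privs = [i for i in range(1, n) if state[i] != state[i - 1]]
    if state[0] == state[n - 1]:
        privs.append(0)                    # the bottom machine's guard
    return privs


def step(state, k):
    """Let the lowest-numbered privileged machine make its move."""
    i = min(privileged(state))
    nxt = list(state)
    if i == 0:
        nxt[0] = (state[-1] + 1) % k       # bottom machine increments modulo K
    else:
        nxt[i] = state[i - 1]              # other machines copy their left neighbor
    return nxt


# An arbitrary initial state with several privileges (an illegitimate state).
state, k = [2, 1, 0, 1], 5
for _ in range(20):
    state = step(state, k)
print(len(privileged(state)))  # 1: exactly one token remains
```

Starting from an arbitrary state with several privileges, the ring converges to the legitimate states, in which exactly one privilege exists and circulates forever.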
7 - Termination detection
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 241-281
Summary
Introduction
In distributed processing systems, a problem is typically solved in a distributed manner with the cooperation of a number of processes. In such an environment, inferring if a distributed computation has ended is essential so that the results produced by the computation can be used. Also, in some applications, the problem to be solved is divided into many subproblems, and the execution of a subproblem cannot begin until the execution of the previous subproblem is complete. Hence, it is necessary to determine when the execution of a particular subproblem has ended so that the execution of the next subproblem may begin. Therefore, a fundamental problem in distributed systems is to determine if a distributed computation has terminated.
The detection of the termination of a distributed computation is non-trivial since no process has complete knowledge of the global state, and global time does not exist. A distributed computation is considered to be globally terminated if every process is locally terminated and there is no message in transit between any processes. A “locally terminated” state is a state in which a process has finished its computation and will not restart any action unless it receives a message. In the termination detection problem, a particular process (or all of the processes) must infer when the underlying computation has terminated.
A termination detection algorithm is used to infer when the underlying computation has ended.
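The global termination condition stated above — every process locally terminated and no message in transit — can be expressed directly. The sketch below assumes an omniscient observer with access to the true global state, which is exactly what a real detection algorithm cannot have and must approximate by message passing:

```python
def globally_terminated(local_states, messages_sent, messages_received):
    """True iff every process is locally terminated (passive) and the channels
    are empty, i.e., every message sent has also been received."""
    all_passive = all(state == "passive" for state in local_states)
    none_in_transit = messages_sent == messages_received
    return all_passive and none_in_transit


# One message still in transit: its arrival could reactivate a passive process.
print(globally_terminated(["passive", "passive"], 5, 4))  # False
print(globally_terminated(["passive", "passive"], 5, 5))  # True
```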
16 - Authentication in distributed systems
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 598-630
Summary
Introduction
A fundamental concern in building a secure distributed system is the authentication of local and remote entities in the system. In a distributed system, the hosts communicate by sending and receiving messages over the network. Various resources (such as files and printers) distributed among the hosts are shared across the network in the form of network services provided by servers. The entities in a distributed system, such as users, clients, servers, and processes, are collectively referred to as principals. A distributed system is susceptible to a variety of threats mounted by intruders as well as legitimate users of the system.
In an environment where a principal can impersonate another principal, principals must adopt a mutually suspicious attitude toward one another and authentication becomes an important requirement. Authentication is a process by which one principal verifies the identity of another principal. For example, in a client–server system, the server may need to authenticate the client. Likewise, the client may want to authenticate the server so that it is assured that it is talking to the right entity. Authentication is needed for both authorization and accounting functions. In one-way authentication, only one principal verifies the identity of the other principal, while in mutual authentication both communicating principals verify each other's identity. A user gains access to a distributed system by logging on to a host in the system.
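One common way to realize mutual authentication is challenge–response over a shared key. The sketch below uses HMAC and simulates both principals in one place; it is an illustrative pattern under that assumption, not a protocol from the book (in a real exchange, each principal would compute its own response and send it over the network):

```python
import hashlib
import hmac
import os


def respond(key, challenge):
    """Prove knowledge of the shared key by MACing the peer's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def mutual_authenticate(key_a, key_b):
    """Each principal challenges the other; both must verify (mutual auth)."""
    nonce_a, nonce_b = os.urandom(16), os.urandom(16)
    # A checks B's response to A's challenge, and B checks A's response to B's.
    a_trusts_b = hmac.compare_digest(respond(key_a, nonce_a), respond(key_b, nonce_a))
    b_trusts_a = hmac.compare_digest(respond(key_b, nonce_b), respond(key_a, nonce_b))
    return a_trusts_b and b_trusts_a


k = b"shared-secret"
print(mutual_authenticate(k, k))            # True: both hold the key
print(mutual_authenticate(k, b"impostor"))  # False: impersonation is detected
```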
Frontmatter
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. i-vi
Distributed Computing
- Principles, Algorithms, and Systems
- Ajay D. Kshemkalyani, Mukesh Singhal
- Published online: 05 June 2012
- Print publication: 17 April 2008
Designing distributed computing systems is a complex process requiring a solid understanding of the design problems and the theoretical and practical aspects of their solutions. This comprehensive textbook covers the fundamental principles and models underlying the theory, algorithms and systems aspects of distributed computing. Broad and detailed coverage of the theory is balanced with practical systems-related issues such as mutual exclusion, deadlock detection, authentication, and failure recovery. Algorithms are carefully selected, lucidly presented, and described without complex proofs. Simple explanations and illustrations are used to elucidate the algorithms. Important emerging topics such as peer-to-peer networks and network security are also considered. With vital algorithms, numerous illustrations, examples and homework problems, this textbook is suitable for advanced undergraduate and graduate students of electrical and computer engineering and computer science. Practitioners in data networking and sensor networks will also find this a valuable resource. Additional resources are available online at www.cambridge.org/9780521876346.
1 - Introduction
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 1-38
Summary
Definition
A distributed system is a collection of independent entities that cooperate to solve a problem that cannot be individually solved. Distributed systems have been in existence since the start of the universe. From a school of fish to a flock of birds and entire ecosystems of microorganisms, there is communication among mobile intelligent agents in nature. With the widespread proliferation of the Internet and the emerging global village, the notion of distributed computing systems as a useful and widely deployed tool is becoming a reality. For computing systems, a distributed system has been characterized in one of several ways:
You know you are using one when the crash of a computer you have never heard of prevents you from doing work.
A collection of computers that do not share common memory or a common physical clock, that communicate by message passing over a communication network, and where each computer has its own memory and runs its own operating system. Typically the computers are semi-autonomous and loosely coupled while they cooperate to address a problem collectively.
A collection of independent computers that appears to the users of the system as a single coherent computer.
A term that describes a wide range of computers, from weakly coupled systems such as wide-area networks, to strongly coupled systems such as local area networks, to very strongly coupled systems such as multiprocessor systems.
Index
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 731-736
2 - A model of distributed computations
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 39-49
Summary
A distributed system consists of a set of processors that are connected by a communication network. The communication network provides the facility of information exchange among processors. The communication delay is finite but unpredictable. The processors do not share a common global memory and communicate solely by passing messages over the communication network. There is no physical global clock in the system to which processes have instantaneous access. The communication medium may deliver messages out of order, messages may be lost, garbled, or duplicated due to timeout and retransmission, processors may fail, and communication links may go down. The system can be modeled as a directed graph in which vertices represent the processes and edges represent unidirectional communication channels.
A distributed application runs as a collection of processes on a distributed system. This chapter presents a model of a distributed computation and introduces several terms, concepts, and notations that will be used in the subsequent chapters.
A distributed program
A distributed program is composed of a set of n asynchronous processes p1, p2, …, pi, …, pn that communicate by message passing over the communication network. Without loss of generality, we assume that each process is running on a different processor. The processes do not share a global memory and communicate solely by passing messages. Let Cij denote the channel from process pi to process pj and let mij denote a message sent by pi to pj. The communication delay is finite and unpredictable.
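The model above — processes p1, …, pn, unidirectional channels Cij, and messages mij — can be mirrored in a few lines of code. The sketch below represents each channel as a FIFO queue, which is an extra assumption for illustration, since the model itself also admits non-FIFO channels:

```python
from collections import deque


class DistributedSystem:
    """Directed-graph model: vertices are processes, edges are one-way channels C_ij."""
    def __init__(self, n):
        self.n = n
        # channels[(i, j)] holds messages sent on C_ij but not yet received.
        self.channels = {(i, j): deque()
                         for i in range(n) for j in range(n) if i != j}

    def send(self, i, j, message):
        self.channels[(i, j)].append(message)    # m_ij enters the channel

    def receive(self, i, j):
        return self.channels[(i, j)].popleft()   # delivery after a finite delay


sys_ = DistributedSystem(3)
sys_.send(0, 2, "m02")
print(sys_.receive(0, 2))  # m02
```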
11 - Global predicate detection
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 379-409
Summary
Stable and unstable predicates
Specifying predicates on the system state provides an important handle to specify, observe, and detect the behavior of a system. This is useful in formally reasoning about the system behavior. By being able to detect a specified predicate in the execution, we gain the ability to monitor the execution. Predicate specification and detection has uses in distributed debugging, sensor networks used for sensing in various applications, and industrial process control. As an example, in a manufacturing process, a system may be monitoring the pressure of Reagent A and the temperature of Reagent B. Only when ψ1 = (PressureA > 240 kPa) ∧ (TemperatureB > 300 °C) should the two reagents be mixed. As another example, consider a distributed execution where variables x, y, and z are local to processes Pi, Pj, and Pk, respectively. An application might be interested in detecting the predicate ψ2 = xi + yj + zk < −125. In a nuclear power plant, sensors at various locations would monitor relevant parameters such as the radioactivity level and the temperature at multiple locations within the reactor.
Observe that the “predicate detection” problem is inherently different from the global snapshot problem. A global snapshot gives one of the possible states that could have existed during the period of the snapshot execution. Thus, a snapshot algorithm can observe only one of the predicate values that could have existed during the algorithm execution.
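The reagent predicate ψ1 from the example can be written as an ordinary boolean function evaluated over observed global states; the sample states below are invented for illustration (what makes the problem hard in a distributed setting is obtaining consistent global states to evaluate it on, not the evaluation itself):

```python
def psi1(pressure_a_kpa, temperature_b_c):
    """The conjunctive predicate from the text: mix only when both conditions hold."""
    return pressure_a_kpa > 240 and temperature_b_c > 300


# Hypothetical observed global states: (PressureA in kPa, TemperatureB in deg C).
states = [(230, 310), (250, 290), (250, 310)]
detected = [psi1(p, t) for p, t in states]
print(detected)  # [False, False, True]: only the last state satisfies psi1
```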
4 - Global state and snapshot recording algorithms
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 87-125
Summary
Recording the global state of a distributed system on-the-fly is an important paradigm when one is interested in analyzing, testing, or verifying properties associated with distributed executions. Unfortunately, the lack of both a globally shared memory and a global clock in a distributed system, added to the fact that message transfer delays in these systems are finite but unpredictable, makes this problem non-trivial.
This chapter first defines consistent global states (also called consistent snapshots) and discusses the issues that have to be addressed to compute consistent distributed snapshots. Then several algorithms for determining such snapshots on-the-fly are presented for several types of networks, classified according to the properties of their communication channels: FIFO, non-FIFO, and causal delivery.
Introduction
A distributed computing system consists of spatially separated processes that do not share a common memory and communicate asynchronously with each other by message passing over communication channels. Each component of a distributed system has a local state. The state of a process is characterized by the state of its local memory and a history of its activity. The state of a channel is characterized by the set of messages sent along the channel less the messages received along the channel. The global state of a distributed system is a collection of the local states of its components.
Recording the global state of a distributed system is an important paradigm and it finds applications in several aspects of distributed system design.
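The channel-state definition above — the messages sent along the channel less those received along it — is a multiset difference, which can be sketched directly with Python's `Counter`:

```python
from collections import Counter


def channel_state(sent, received):
    """Messages sent along the channel less the messages received along it."""
    return list((Counter(sent) - Counter(received)).elements())


# m1 was delivered; m2 and m3 are still in transit on the channel.
print(channel_state(["m1", "m2", "m3"], ["m1"]))  # ['m2', 'm3']
```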
10 - Deadlock detection in distributed systems
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 352-378
Summary
Introduction
Deadlocks are a fundamental problem in distributed systems and deadlock detection in distributed systems has received considerable attention in the past. In distributed systems, a process may request resources in any order, which may not be known a priori, and a process can request a resource while holding others. If the allocation sequence of process resources is not controlled in such environments, deadlocks can occur. A deadlock can be defined as a condition where a set of processes request resources that are held by other processes in the set.
Deadlocks can be dealt with using any one of three strategies: deadlock prevention, deadlock avoidance, and deadlock detection. Deadlock prevention is commonly achieved either by having a process acquire all the needed resources simultaneously before it begins execution or by pre-empting a process that holds a needed resource. In the deadlock avoidance approach, a resource is granted to a process only if the resulting global system state is safe. Deadlock detection requires an examination of the status of the process–resource interactions for the presence of a deadlock condition; to resolve a detected deadlock, we have to abort a deadlocked process.
In this chapter, we study several distributed deadlock detection techniques based on various strategies.
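A standard way to detect the condition defined above is to look for a cycle in a wait-for graph, where an edge from one process to another means the first is blocked waiting on a resource the second holds. The sketch below uses depth-first search and is a centralized illustration; the chapter's algorithms solve the same problem in a distributed fashion:

```python
def has_deadlock(wait_for):
    """Detect a deadlock as a cycle in the wait-for graph
    (mapping: process -> list of processes it is waiting on)."""
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, []):
            if q in on_stack or (q not in visited and dfs(q)):
                return True          # back edge: a cycle, hence a deadlock
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)


# P1 waits on P2, P2 on P3, P3 on P1: a deadlocked set of processes.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```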
System model
A distributed system consists of a set of processors that are connected by a communication network. The communication delay is finite but unpredictable.
15 - Failure detectors
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 567-597
Summary
Introduction
This chapter deals with the design of fault-tolerant distributed systems. It is widely known that the design and verification of fault-tolerant distributed systems is a difficult problem. Consensus and atomic broadcast are two important paradigms in the design of fault-tolerant distributed systems, and they find wide application. Consensus allows a set of processes to reach a common decision or value that depends upon the initial values at the processes, regardless of failures. In atomic broadcast, processes reliably broadcast messages such that they agree on the set of messages delivered and the order of message deliveries.
This chapter focuses on solutions to the consensus and atomic broadcast problems in asynchronous distributed systems. In an asynchronous distributed system, there is no bound on the time it takes for a process to execute a computation step or for a message to travel from its sender to its receiver: there is no upper bound on relative processor speeds, execution times, clock drifts, or message transmission delays, although all of these are finite. This asynchrony is mainly caused by unpredictable loads on the system, and one cannot make timing assumptions of any type. Synchronous systems, in contrast, are characterized by strict bounds on execution times and message transmission delays.
3 - Logical time
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 50-86
Summary
Introduction
The concept of causality between events is fundamental to the design and analysis of parallel and distributed computing and operating systems. Usually causality is tracked using physical time. However, in distributed systems it is not possible to have global physical time; only an approximation of it can be realized. As asynchronous distributed computations make progress in spurts, logical time, which advances in jumps, turns out to be sufficient to capture the fundamental monotonicity property associated with causality in distributed systems. This chapter discusses three ways to implement logical time, namely scalar time, vector time, and matrix time, that have been proposed to capture causality between the events of a distributed computation.
Causality (or the causal precedence relation) among events in a distributed system is a powerful concept for reasoning, analyzing, and drawing inferences about a computation. Knowledge of the causal precedence relation among the events of processes helps solve a variety of problems in distributed systems. Examples of some of these problems are as follows:
Distributed algorithms design The knowledge of the causal precedence relation among events helps ensure liveness and fairness in mutual exclusion algorithms, helps maintain consistency in replicated databases, and helps design correct deadlock detection algorithms to avoid phantom and undetected deadlocks.
[…]
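Scalar (Lamport) time, the first of the three mechanisms, can be sketched in a few lines: each process increments its clock at every event, and a receive event jumps the clock past the timestamp piggybacked on the message, which preserves monotonicity along causal chains:

```python
class ScalarClock:
    """Lamport's scalar clock: advances in jumps, tracking causal order."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule for an internal event: increment the local clock.
        self.time += 1
        return self.time

    def send(self):
        # A send event is timestamped; the timestamp rides on the message.
        return self.tick()

    def receive(self, msg_time):
        # Rule for a receive event: jump past the sender's clock, then increment.
        self.time = max(self.time, msg_time) + 1
        return self.time


p1, p2 = ScalarClock(), ScalarClock()
t = p1.send()          # p1's send event gets timestamp 1
print(p2.receive(t))   # 2: p2's receive is causally after the send
```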
18 - Peer-to-peer computing and overlay graphs
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 677-730
Summary
Introduction
Peer-to-peer (P2P) network systems use an application-level organization of the network overlay to flexibly share resources (e.g., files and multimedia documents) stored across network-wide computers. In contrast to the client–server model, any node in a P2P network can act as a server to others and, at the same time, act as a client. Communication and exchange of information is performed directly between the participating peers, and the relationships between the nodes in the network are equal. Thus, P2P networks differ from other Internet applications in that they tend to share data from a large number of end users rather than from more central machines and web servers. Several well-known P2P networks that allow P2P file sharing include Napster, Gnutella, Freenet, Pastry, Chord, and CAN.
Traditional distributed systems used DNS (domain name service) to provide a lookup from host names (logical names) to IP addresses. Special DNS servers are required, and manual configuration of the routing information is necessary to allow requesting client nodes to navigate the DNS hierarchy. Further, DNS is confined to locating hosts or services (not data objects that have to be a priori associated with specific computers), and host names need to be structured as per administrative boundary regulations. P2P networks overcome these drawbacks, and, more importantly, allow the location of arbitrary data objects.
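A Chord-style object lookup illustrates how structured P2P overlays locate arbitrary data objects without DNS: keys and nodes are hashed onto one identifier ring, and a key is owned by its clockwise successor. The ring size and node identifiers below are toy values chosen for illustration:

```python
import hashlib


def node_for_key(key, node_ids, ring_bits=8):
    """Chord-style lookup sketch: a key belongs to its clockwise successor node."""
    ring = 2 ** ring_bits
    key_id = int(hashlib.sha1(key.encode()).hexdigest(), 16) % ring
    candidates = sorted(node_ids)
    for nid in candidates:
        if nid >= key_id:
            return nid               # first node at or after the key's position
    return candidates[0]             # wrap around the identifier ring


nodes = [12, 70, 150, 230]
owner = node_for_key("song.mp3", nodes)
print(owner in nodes)  # True: any peer can resolve the object's location
```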
5 - Terminology and basic algorithms
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 126-188
Summary
In this chapter, we first study a methodical framework in which distributed algorithms can be classified and analyzed. We then consider some basic distributed graph algorithms. We then study synchronizers, which provide the abstraction of a synchronous system over an asynchronous system. Finally, we look at some practical graph problems, to appreciate the necessity of designing efficient distributed algorithms.
Topology abstraction and overlays
The topology of a distributed system can typically be viewed as an undirected graph in which the nodes represent the processors and the edges represent the links connecting the processors. Weights on the edges can represent a cost function that the application needs to model. There are usually three (not necessarily distinct) levels of topology abstraction that are useful in analyzing the distributed system or a distributed application. These are now described using Figure 5.1. To keep the figure simple, only the relevant end hosts participating in the application are shown. The WANs are indicated by ovals drawn using dashed lines. The switching elements inside the WANs, and other end hosts that are not participating in the application, are not shown even though they belong to the physical topological view. Similarly, all the edges connecting all end hosts and all the edges connecting to the switching elements inside the WANs also belong to the physical topology view, even though only some edges are shown.
Preface
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. xv-xviii
Summary
Background
The field of distributed computing covers all aspects of computing and information access across multiple processing elements connected by any form of communication network, whether local or wide-area in coverage. Since the advent of the Internet in the 1970s, there has been a steady growth of new applications requiring distributed processing. This has been enabled by advances in networking and hardware technology, the falling cost of hardware, and greater end-user awareness. These factors have contributed to making distributed computing a cost-effective, high-performance, and fault-tolerant reality. Around the turn of the millennium, there was an explosive growth in the expansion and efficiency of the Internet, which was matched by increased access to networked resources through the World Wide Web, all across the world. Coupled with an equally dramatic growth in the wireless and mobile networking areas, and the plummeting prices of bandwidth and storage devices, we are witnessing a rapid spurt in distributed applications and an accompanying interest in the field of distributed computing in universities, government organizations, and private institutions.
Advances in hardware technology have suddenly made sensor networking a reality, and embedded and sensor networks are rapidly becoming an integral part of everyone's life – from the home network with the interconnected gadgets to the automobile communicating by GPS (global positioning system), to the fully networked office with RFID monitoring. In the emerging global village, distributed computing will be the centerpiece of all computing and information access sub-disciplines within computer science.
14 - Consensus and agreement algorithms
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 510-566
Summary
Problem definition
Agreement among the processes in a distributed system is a fundamental requirement for a wide range of applications. Many forms of coordination require the processes to exchange information to negotiate with one another and eventually reach a common understanding or agreement, before taking application-specific actions. A classical example is that of the commit decision in database systems, wherein the processes collectively decide whether to commit or abort a transaction that they participate in. In this chapter, we study the feasibility of designing algorithms to reach agreement under various system models and failure models, and, where possible, examine some representative algorithms to reach agreement.
We first state some assumptions underlying our study of agreement algorithms:
Failure models Among the n processes in the system, at most f processes can be faulty. A faulty process can behave in any manner allowed by the failure model assumed. The various failure models – fail-stop, send omission and receive omission, and Byzantine failures – were discussed in Chapter 5. Recall that in the fail-stop model, a process may crash in the middle of a step, which could be the execution of a local operation or processing of a message for a send or receive event. In particular, it may send a message to only a subset of the destination set before crashing. In the Byzantine failure model, a process may behave arbitrarily.
[…]
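As a concrete (if idealized) instance, the classic flooding algorithm reaches consensus in a synchronous system despite up to f crash failures by running f + 1 rounds of value exchange. The sketch below shows a failure-free run and is a textbook-style illustration under that assumption, not necessarily an algorithm from this chapter:

```python
def flooding_consensus(initial_values, f):
    """Synchronous flooding consensus sketch: each process broadcasts every
    value it knows for f + 1 rounds, then decides the minimum value seen."""
    n = len(initial_values)
    known = [{v} for v in initial_values]
    for _ in range(f + 1):
        # Failure-free round: every process receives every other's full set.
        everything = set().union(*known)
        known = [set(everything) for _ in range(n)]
    return [min(k) for k in known]


print(flooding_consensus([3, 1, 2], f=1))  # [1, 1, 1]: all processes agree
```

The f + 1 rounds matter precisely when crashes do occur mid-broadcast: they guarantee at least one round in which no process crashes, after which all surviving processes hold the same set of values.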
13 - Checkpointing and rollback recovery
- Ajay D. Kshemkalyani, University of Illinois, Chicago; Mukesh Singhal, University of Kentucky
- Book: Distributed Computing
- Published online: 05 June 2012
- Print publication: 17 April 2008, pp. 456-509
Summary
Introduction
Distributed systems today are ubiquitous and enable many applications, including client–server systems, transaction processing, the World Wide Web, and scientific computing, among many others. Distributed systems are not inherently fault-tolerant, and the vast computing potential of these systems is often hampered by their susceptibility to failures. Many techniques have been developed to add reliability and high availability to distributed systems, including transactions, group communication, and rollback recovery; these techniques have different tradeoffs and areas of focus. This chapter covers rollback recovery protocols, which restore the system to a consistent state after a failure.
Rollback recovery treats a distributed system application as a collection of processes that communicate over a network. It achieves fault tolerance by periodically saving the state of a process during the failure-free execution, enabling it to restart from a saved state upon a failure to reduce the amount of lost work. The saved state is called a checkpoint, and the procedure of restarting from a previously checkpointed state is called rollback recovery. A checkpoint can be saved on either the stable storage or the volatile storage depending on the failure scenarios to be tolerated.
In distributed systems, rollback recovery is complicated because messages induce inter-process dependencies during failure-free operation. Upon a failure of one or more processes in a system, these dependencies may force some of the processes that did not fail to roll back, creating what is commonly called a rollback propagation.
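For a single process, the checkpoint/rollback cycle looks as follows; stable storage is simulated by an in-memory copy, and the inter-process dependency problem described above is exactly what this single-process sketch leaves out:

```python
import copy


class Process:
    """A process that checkpoints its state to (simulated) stable storage."""
    def __init__(self):
        self.state = {"work_done": 0}
        self.stable_storage = None

    def compute(self, units):
        self.state["work_done"] += units

    def checkpoint(self):
        # Save a snapshot of the state during failure-free execution.
        self.stable_storage = copy.deepcopy(self.state)

    def rollback(self):
        # On failure, restart from the last checkpoint; later work is lost.
        self.state = copy.deepcopy(self.stable_storage)


p = Process()
p.compute(10)
p.checkpoint()
p.compute(5)     # this work is lost when the failure strikes
p.rollback()
print(p.state["work_done"])  # 10: restored to the checkpointed state
```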