In the preceding chapters we learned that resources in a network must be shared among different classes of customers. This sharing is performed through scheduling at each station in the network.
Scheduling is just one of many decision processes encountered in typical applications. In the Internet there are many paths between nodes, and hence protocols must be constructed to determine appropriate routes. In a power distribution system there may be many generators that can meet current demands distributed across the power grid. A manufacturing system may have redundant processing equipment, or multiple vendors, leading to a network somewhat more complex than those considered in the previous two chapters.
Figure 6.1 shows a network with eight nodes, four arrival streams, and ten links. The high congestion between nodes 1 and 3 can be modeled through an additional linear constraint on the rate vector ζ in a fluid model. There are many routes from node 1 to node 8, even though node 4 is temporarily unavailable. This example demonstrates that there may be many equilibria in a routing model; the best route for a given user will depend upon the current environment.
Most of the concepts introduced in previous chapters, such as stabilizability and workload relaxations, will be extended to this more general setting. Consideration is largely restricted to a fluid model since the definitions are most transparent in this setting.
Network models are used to describe power grids, cellular telecommunications systems, large-scale manufacturing processes, computer systems, and even systems of elevators in large office buildings. Although the applications are diverse, there are many common goals:
(i) In any of these applications one is interested in controlling delay, inventory, and loss. The crudest issue is stability: do delays remain bounded, perhaps in the mean, for all time?
(ii) Estimating performance, or comparing the performance of one policy with another. Performance is of course context-dependent, but common metrics are average delay, loss probabilities, or backlog.
(iii) Prescriptive approaches to policy synthesis are required. A policy should have reasonable complexity; it should be flexible and robust. Robustness means that the policy will be effective even under significant modeling error. Flexibility requires that the system respond appropriately to changes in network topology, or other gross structural changes.
In this chapter we begin in Section 1.1 with a survey of a few network applications, and the issues to be explored within each application. This is far from comprehensive. In addition to the network examples described in the Preface, we could fill several books with applications to computer networks, road traffic, air traffic, or occupancy evolution in a large building.
Although complexity of the physical system is both intimidating and unavoidable in typical networks, for the purposes of control design it is frequently possible to construct models of reduced complexity that lead to effective control solutions for the physical system of interest.
In this chapter we introduce many of the modeling and control concepts to be developed in this book through several examples. The examples in this chapter are extremely simple, but are intended to convey key concepts that can be generalized to more complex networks. We will return to each of these examples over the course of the book to illustrate various techniques.
A natural starting point is the single server queue.
Modeling the single server queue
The single server queue illustrated in Fig. 2.1 is a useful model for a range of very different physical systems. The most familiar example is the single-line queue at a bank: Customers arrive to the bank in a random manner, wait until they reach the head of the line, are served according to their needs and the abilities of the teller, and then exit the system. In the single server queue we assume that there is a single line, and only one bank teller. To understand how delays develop in this system we must look at average time requirements of customers and the rate of arrivals to the bank. Also, variability of service times or interarrival times of customers has a detrimental effect on average delays.
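The effect of variability on delay can be seen in a minimal simulation based on the standard Lindley recursion for waiting times in a FIFO single server queue. The function name `average_wait` and the specific parameter values below are illustrative assumptions, not taken from the text; both scenarios have the same load ρ = 0.8, differing only in variability:

```python
import random

def average_wait(service_times, interarrival_times):
    """Average waiting time in a FIFO single server queue, computed
    via the Lindley recursion W_{n+1} = max(W_n + S_n - A_n, 0)."""
    w = 0.0
    total = 0.0
    for s, a in zip(service_times, interarrival_times):
        total += w
        w = max(w + s - a, 0.0)
    return total / len(service_times)

random.seed(0)
n = 50_000
# Deterministic service and arrivals: mean service 0.8, arrival rate 1.
det_wait = average_wait([0.8] * n, [1.0] * n)
# Exponential (high-variability) service and arrivals with the same means.
exp_wait = average_wait([random.expovariate(1 / 0.8) for _ in range(n)],
                        [random.expovariate(1.0) for _ in range(n)])
# With identical load, the variable system suffers far larger delays.
```

With deterministic inter-arrival and service times the queue never builds up (`det_wait` is zero), while the exponential case exhibits a substantial average wait, illustrating the detrimental effect of variability described above.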
Even in this very simple system there are control and design issues to consider. Is it in the best interest of the bank to reserve a teller to take care of customers with short time requirements?
This chapter develops extensions of the fluid and stochastic network models to capture a wider range of activities. As in the previous chapters we allow scheduling and routing. In the demand-driven models considered in this chapter we also permit “admission control” of raw material arriving to the network so that the total amount of material in the system can be regulated. Although manufacturing systems will motivate most of the discussion in this chapter, power distribution systems as described in Section 2.7 and some communication systems can be modeled as demand-driven networks.
Figure 7.1 illustrates a typical example of the class of models to be investigated. In this 16-buffer network there are two sources of exogenous demand, and the release of two different types of raw material into the system is controlled. At two of the five stations there are multiple buffers so that scheduling is required, and routing is controlled at the exit of Station 3.
This is the most complex example considered in any detail in this book, although it is far simpler than a typical semiconductor wafer fab as described in Section 1.1.1. The International Technology Roadmap for Semiconductors (ITRS) provides an annual assessment of the challenges facing the semiconductor industry [279]. In recent years their reports have contained some recurring themes:
Contention for resources. There may be dozens of different product flows in a single factory.
Chapter 4 touches on many of the techniques to be developed in this book for controlling large interconnected networks. The fluid model was highlighted precisely because control is most easily conceptualized when variability is disregarded. The infinite-horizon optimal control problem with objective function defined in (4.37) can be recast as an infinite-dimensional linear program when c is linear. In many examples, such as the simple re-entrant line introduced in Section 2.8, a solution is explicitly computable. The MaxWeight policy and its generalizations are universally stabilizing, in the sense that a single policy is stabilizing for any CRW scheduling model satisfying the load condition ρ• < 1 along with the second moment constraint E[∥A(1)∥²] < ∞.
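One common instance of the MaxWeight idea for a single-station scheduling model is sketched below: at each decision epoch the server works on the buffer with the largest weight μᵢqᵢ (service rate times queue length). The function name `maxweight_choice` and the tie-breaking rule are illustrative assumptions; the policy in the text is stated more generally for CRW network models:

```python
def maxweight_choice(queues, rates):
    """Return the index of the buffer maximizing the weight mu_i * q_i,
    a simple single-station instance of the MaxWeight policy.
    Ties go to the lowest index; returns None (idle) if all queues are empty."""
    best, best_weight = None, 0.0
    for i, (q, mu) in enumerate(zip(queues, rates)):
        weight = mu * q
        if weight > best_weight:
            best, best_weight = i, weight
    return best

# Buffer 0 has weight 2.0 * 3 = 6, the largest, so it is served first.
choice = maxweight_choice([3, 5, 0], [2.0, 1.0, 4.0])
```

The appeal of the rule is exactly the universality noted above: the same myopic computation stabilizes the system whenever it is stabilizable, without requiring knowledge of the arrival rates.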
What is missing at this stage is any intuition regarding the structure of “good policies” for a network with many stations and buffers. In this chapter we introduce one of the most important concepts in this book, the workload relaxation. Our main goal is to construct a model of reduced dimension to simplify computation of policies, and to better visualize network behavior.
In the theory of optimization, a relaxation of a given model is simply a new model obtained by removing constraints. In the case of networks there are several classes of constraints that complicate analysis:
(i) The integer constraint on buffer levels.
(ii) Constraints on the allocation sequence determined by the constituency matrix.
(iii) State space constraints, including positivity of buffer levels, as well as strict upper limits on available storage.
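The value of removing a constraint can be seen in a minimal fluid example. Consider a single station whose server must split its effort between two buffers (constraint (ii) above); relaxing that constituency constraint lets each buffer drain at its full rate in parallel, yielding a lower bound on the draining time. The function name `draining_time` and the two-buffer setting are illustrative assumptions:

```python
def draining_time(x, mu, relaxed=False):
    """Minimal time to drain fluid buffer levels x with service rates mu.
    With the constituency constraint a single server must split its effort,
    so the minimal draining time is sum(x_i / mu_i).  In the relaxation the
    constraint is removed and buffers drain in parallel: max(x_i / mu_i)."""
    times = [xi / mi for xi, mi in zip(x, mu)]
    return max(times) if relaxed else sum(times)

# Two buffers at one station: levels 2 and 3, unit service rates.
constrained = draining_time([2, 3], [1.0, 1.0])            # 2 + 3 = 5
relaxed = draining_time([2, 3], [1.0, 1.0], relaxed=True)  # max(2, 3) = 3
```

The relaxed model can never drain more slowly than the original, which is what makes relaxations useful: they give tractable lower bounds and a reduced-dimension picture of the bottleneck behavior.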