This chapter summarizes the basic technical concepts used throughout this book. As stated in the Introduction, this book focuses on modeling, so most algorithmic aspects are left “under the hood.” Because this book is intended to appeal to anyone familiar with power systems or optimization, background material on both topics is covered, albeit at the minimum depth necessary to access the later material.
Convexity and computational complexity
We begin with a few core concepts. A point x0 ∈ X is a global minimum of the function f(x) over the set X ⊆ ℝn if f(x0) ≤ f(x) for all x ∈ X. If f(x) is continuous and X is compact, which is to say closed and bounded, such a point is guaranteed to exist. x0 is a local minimum of f(x) if there exists an ε > 0 for which f(x0) ≤ f(x) for all x ∈ X satisfying ∥x − x0∥ ≤ ε. All minima of a convex function achieve the same function value and are therefore global. In general, a function may have multiple local and global minima.
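The distinction between local and global minima can be seen numerically. The following sketch (the example functions are illustrative choices, not taken from the text) counts strict local minima of a sampled function on a grid; the convex parabola has exactly one, while a nonconvex quartic has two, only one of which is global.

```python
# Illustrative sketch (assumed example functions): a convex function has only
# global minima, while a nonconvex function can have several local minima.

def local_minima(f, xs):
    """Indices i where f(xs[i]) lies strictly below both grid neighbors."""
    vals = [f(x) for x in xs]
    return [i for i in range(1, len(xs) - 1)
            if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]

xs = [i * 0.01 - 3.0 for i in range(601)]        # grid on [-3, 3]

convex = lambda x: (x - 1.0) ** 2                # single global minimum at x = 1
nonconvex = lambda x: x ** 4 - 3 * x ** 2 + x    # two local minima, one global

print(len(local_minima(convex, xs)))     # 1
print(len(local_minima(nonconvex, xs)))  # 2
```

A descent method started near the shallower basin of the quartic would stall at the non-global local minimum, which is exactly the failure mode convexity rules out.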
To find a global minimum of a convex function, choose a descending algorithm, let it run free, and in a perfect world it will eventually end up there. (In the real world, large problem sizes or bad numerical conditioning can derail any algorithm.) The intuitive simplicity of convexity translates to a genuine computational advantage, which is evinced by the powerful algorithms that exist for convex optimization and the extreme difficulty of nonconvex optimization. This section gives some basic characterizations of convexity and describes the varieties of convex optimization problems encountered in this book. For more comprehensive coverage, the reader is referred to endnotes [1–5], and to endnotes [6, 7] for theoretical treatments of convex functions and sets.
A function f is convex if, for any two points x and y in its domain,
f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) for all α ∈ [0, 1].
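The defining inequality can be probed numerically. This is a minimal sampling-based check, not a proof of convexity (the functions and interval are our own example choices): it searches for a violation of the inequality at randomly drawn points and mixing weights.

```python
# Sketch: sample the convexity inequality f(a*x + (1-a)*y) <= a*f(x) + (1-a)*f(y).
# Passing all trials only suggests convexity; a single violation disproves it.
import random

def is_convex_on_samples(f, lo, hi, trials=2000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        a = rng.random()
        if f(a * x + (1 - a) * y) > a * f(x) + (1 - a) * f(y) + 1e-12:
            return False  # found a chord lying below the graph
    return True

print(is_convex_on_samples(lambda x: x * x, -5, 5))   # True
print(is_convex_on_samples(lambda x: x ** 3, -5, 5))  # False: cubic is concave on x < 0
```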
Many models in economics are formulated as dynamic optimization problems in which the objective functional is a sum (often discounted) of the instantaneous utilities derived over infinitely many periods and in which the system variables of adjacent periods are linked via intertemporal constraints. In this chapter, we present different solution methods for this class of problems, which we refer to as ‘standard problems’.
We start in section 5.1 by describing the structure of the standard problem and by defining what we mean by its solution. After formulating the problem in primitive form (containing both state variables and control variables), we also present a reduced form that contains only state variables. In section 5.2, we discuss a solution method for the reduced form problem that is based on the Euler equation and the transversality condition. In section 5.3, we return to the primitive form of the dynamic optimization problem and show how it can be solved by a Lagrangian approach. Both approaches, the one based on the Euler equation and the transversality condition as well as the Lagrangian approach, are variational techniques that require differentiability assumptions. In section 5.4, we contrast these approaches to the technique of dynamic programming which can, in principle, be applied without any smoothness assumptions. Dynamic programming is a recursive solution technique, the heart of which is formed by the Bellman equation and the optimal value function. It is particularly powerful in the class of stationary discounted problems which will be discussed in section 5.5.
Model formulation and terminology
In the present section, we formulate a dynamic optimization problem that appears frequently in economics. As in part I of the book, we assume that the economic system under consideration is described by a finite set of system variables. In what follows, however, we shall distinguish between two kinds of system variables: state variables and control variables. The state variables describe the state of the system at the beginning of period t before the decision maker can make any choices.
In the previous chapter, we analyzed models in which several decision makers interact strategically. They solve interconnected dynamic optimization problems, and the strategic nature of the interaction arises from the assumption that each decision maker (player) is aware of and takes into account the influence of his or her own strategy choice on the decision problems of the other agents. The present chapter deals with interconnected dynamic optimization problems as well, but now we assume that there is no strategic interaction. Every agent's optimization problem depends on an aggregate system variable, which itself is influenced by the actions of all agents together. Nevertheless, every agent takes the dynamic evolution of the aggregate system variable as given: we say that the agents act competitively. These assumptions are usually justified by the fact that there exists a very large number of decision makers (often infinitely many of them), all of whom are ‘small’ relative to the size of the entire economy. In such a situation, every individual decision maker can safely neglect his or her own influence on the aggregate system and on the other decision makers' optimization problems.
In the first two sections of this chapter, we present two different definitions of dynamic competitive equilibrium. In section 8.1, the decision makers take the trajectory of the aggregate system variable as given, whereas in section 8.2 they take the law of motion of this variable as given. In sections 8.3 and 8.4, we assume that in addition to the many competitively behaving agents there exists an additional ‘big’ player (typically a policy maker) who is aware of his or her influence on the evolution of the aggregate system and who can also affect the decision problems of the competitive agents. This player is assumed to act like a Stackelberg leader, that is, he or she chooses a strategy before the competitive agents do so.
The power system of yore was a jungle. In cities like New York and Chicago, dozens of companies haphazardly strung up their own wires, each carrying its own special blend of electricity. In the United States, Samuel Insull, who began working for Edison in the 1880s, recognized the need for standardization. By the 1920s, he owned a significant share of Chicago's electric power industry, which he made extremely profitable through uniformity, scale, and shady financial practices. His company imploded during the Great Depression, and by the 1930s he had fled to Europe with a villainous reputation. He was acquitted of all charges shortly thereafter but died in 1938 with a debt of nearly $14 million.
Progressive or scoundrel, Samuel Insull's actions paved the way for the Public Utility Holding Company Act of 1935, which effectively made power systems government-regulated, vertically integrated monopolies. Over the next forty years, the power system developed into the dependable infrastructure we know today. Concurrently, it became an inefficient, rigid, environmentally hostile dinosaur. In 1978, the U.S. Congress passed the Public Utility Regulatory Policies Act, which forced utilities to purchase power from less expensive independent power producers at their avoided costs of generation, marking the first incursion of competition into the electricity sector in decades [1, 2]. In spurts over the next twenty years, the North American electric power system was deregulated to the applause of economists and chagrin of many engineers.
Then strange things began to happen, the most notable of which was the California Electricity Crisis [3, 4]. At the center of the crisis was the Enron scandal, and at the center of that, Kenneth Lay, perceived by some as a modern, mirrored version of Samuel Insull. Enron and its ilk gamed every aspect of the vulnerable new electricity markets, leading to extreme price volatility and blackouts.
Linear difference equations form a very special but important class of dynamical systems. The theory of linear difference equations is well developed and provides a complete characterization of the solutions of these equations. In the present chapter, we discuss some elements of this theory that are of high relevance for economists. The importance of linear difference equations stems, on the one hand, from numerous applications in many areas and, on the other hand, from the fact that under certain conditions the solutions to non-linear difference equations can be approximated by those of suitably linearized equations. Before we turn to non-linear equations in chapter 3, we must therefore deal with the linear case.
We start in section 2.1 by discussing the difference between homogeneous and non-homogeneous linear difference equations and by showing that the set of trajectories of a homogeneous linear difference equation forms a finite-dimensional vector space. This allows us to represent every trajectory of such an equation as a linear combination of finitely many basis solutions. In section 2.2, we derive explicit formulas for the basis solutions of homogeneous linear difference equations with constant coefficients, and in section 2.3 we deal with the important special case of a two-dimensional system domain. Finally, in section 2.4 we present two different approaches for solving non-homogeneous linear difference equations.
Terminology and general results
Throughout this chapter, we assume that the system domain X is the entire n-dimensional Euclidean space ℝn. A difference equation with system domain X is said to be linear if it is of the form
xt+1 = A(t)xt + b(t) (2.1)
where A : ℕ0 → ℝn × n and b : ℕ0 → ℝn are a matrix-valued function and a vector-valued function, respectively. In words, the difference equation xt+1 = f(xt, t) is linear if the law of motion f is an (affine) linear function of the system variables.
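In the special case of constant coefficients, the behaviour of equation (2.1) is easy to simulate. The sketch below (a 2×2 example with values we chose for illustration) iterates the map and compares the result with the fixed point x* = (I − A)⁻¹b, to which every trajectory converges when all eigenvalues of A lie strictly inside the unit circle.

```python
# Sketch (assumed 2x2 constant-coefficient example): iterating
# x_{t+1} = A x_t + b converges to the fixed point x* = (I - A)^{-1} b
# when the spectral radius of A is below one.
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])     # eigenvalues approx. 0.57 and 0.23
b = np.array([1.0, 2.0])

x = np.zeros(2)
for _ in range(200):
    x = A @ x + b              # one step of the difference equation (2.1)

x_star = np.linalg.solve(np.eye(2) - A, b)   # fixed point of the affine map
print(np.allclose(x, x_star))                # True
```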
The application of optimization to power systems has become so common that it deserves treatment as a distinct subject. The abundance of optimization problems in power systems can give the impression of diversity, but in truth most are merely layers on a common core: the steady-state description of power flow in a network. In this book, many of the most prominent examples of optimization in power systems are unified under this perspective.
As suggested by the title, this book focuses exclusively on convex frameworks, which by reputation are phenomenally powerful but often too restrictive for realistic, non-convex power system models. In Chapter 3, the application of classical and recent mathematical techniques yields a rich spectrum of convex power flow approximations ranging from high tractability and low accuracy to slightly reduced tractability and high accuracy. The remaining chapters explore problems in power system operation, planning, and economics, each consisting of details layered on top of the convex power flow approximations. Because all formulations can be solved using standard software packages, only models are presented, which is a departure from most books on power systems. It is a major perk of convex optimization that the user often does not need to program an algorithm to proceed.
I should comment that this book is not an up-to-date exposition of power system applications or optimization theory and that, inevitably, many important topics in both fields have been omitted. My intention has rather been to bridge modern convex optimization and power systems in a rigorous manner. While I have attempted to be mathematically self-contained, the pace assumes an advanced undergraduate level of mathematical exposure (linear algebra, calculus, and some probability) as well as familiarity with power systems and optimization. This book could be used in a course on power system optimization or as a mathematical supplement to a course in power system design, operation, or economics. It is my hope that it will also prove useful to researchers in power systems with an interest in optimization and vice versa, and to industry practitioners seeking firm foundations for their optimization applications.
Streetlights, subways, the Internet, this book you are reading now – it is difficult to imagine life without such amenities, all enabled by electric power. To support such a vast set of technologies, electric power systems have grown into some of the most complex and expensive machines in existence. While much of this growth resembles an organic process more than deliberate design, the advent of computing is enabling us more and more to direct the evolution of power systems toward greater efficiency, reliability, and versatility.
At the time of writing, the complexity of power systems is poised to take off. This is largely due to shifts toward renewable energy production and the active involvement of power consumers through demand response, as well as our still-developing handle on economic deregulation. To meet these challenges, new computational tools will be developed, and the most ubiquitous computation in power systems is optimization. An objective of this book is to simplify and unify various topics in power system optimization so as to provide a firm foundation for future developments.
At the heart of most power system optimizations are the equations of the steady-state, single-phase approximation to alternating current power flow in a network. Well-known problems like optimal power flow, reconfiguration, and transmission planning all consist of details layered on top of power flow. Nodal prices, a core component of electricity markets, are obtained from the dual of optimal power flow. It is therefore most unfortunate that the power flow equations are nonconvex, making all of these optimizations extremely difficult. We are thus faced with a tradeoff between realistic models that are too hard to solve at practical scales and tractable approximations.
For many years, linear programming (LP) was the most general efficiently solvable optimization class, and so many large-scale power system models were based on linear power flow approximations or even simpler descriptions like network flow or a real power balance. At the other extreme, a number of nonlinear programming (NLP) algorithms were developed for exact, nonconvex models.
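The simplest of the LP formulations mentioned above, a real power balance with capacity limits and no network model, can be sketched in a few lines. The generator costs, load, and limits below are hypothetical values chosen only to make the example concrete.

```python
# Hedged sketch of a minimal LP dispatch: two hypothetical generators meet a
# fixed load at least cost, subject only to a real power balance and capacity
# limits (no power flow model). All numbers are assumed for illustration.
from scipy.optimize import linprog

cost = [20.0, 50.0]              # $/MWh for generators g1, g2
load = 150.0                     # MW of demand
bounds = [(0, 100), (0, 100)]    # generator capacity limits in MW

res = linprog(c=cost,
              A_eq=[[1.0, 1.0]], b_eq=[load],   # power balance: g1 + g2 = load
              bounds=bounds)

# The cheap unit runs at its limit; the expensive unit covers the remainder.
print(res.x)      # [100., 50.]
print(res.fun)    # 20*100 + 50*50 = 4500.0
```

Linear power flow approximations extend this pattern by adding linearized network constraints to the same LP skeleton.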
In this chapter, we turn to the analysis of non-linear difference equations. Since most applications in economics assume a stationary environment, that is, an environment that does not change over time, the focus of our study will be on autonomous equations. Such equations typically admit constant solutions – so-called fixed points – or periodic solutions, and the first step of the analysis of an autonomous difference equation is often the identification of these simple types of solutions. Therefore, we collect in section 3.1 a number of results about fixed points and periodic points.
As a next step, we turn to the investigation of the dynamics locally around the fixed points and the periodic points, respectively. This is greatly facilitated by local linearization techniques and the Hartman–Grobman theorem, which we present in section 3.2. In section 3.3, we introduce the important notion of the stability of a fixed point or a periodic point and we derive stability criteria. Some of these criteria are based on local linearization techniques, whereas others involve Lyapunov functions.
As we shall see in section 3.4, the appropriate definition of stability for an economic problem depends on how many of the system variables are pre-determined and how many are jump variables. This will lead us to the concept of saddle point stability. Finally, in section 3.5 we demonstrate that in the case of ‘too much stability’, a notion that we will formally define, purely deterministic economic models can admit stochastic solutions if we properly take into account the influence that expectations have on the behaviour of economic agents.
Invariant sets, fixed points, and periodic points
In this section, we consider autonomous difference equations of the form
xt+1 = f(xt), (3.1)
where the law of motion f : X → X is a given function and the system domain X ⊆ ℝn is a non-empty set.
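A fixed point of (3.1) is a point x* with f(x*) = x*, and when f is a contraction near x*, direct iteration of the equation locates it. A minimal sketch, using f = cos on X = ℝ as an assumed example law of motion:

```python
# Sketch: finding a fixed point of the autonomous equation x_{t+1} = f(x_t)
# by direct iteration, assuming f is a contraction near the fixed point.
import math

f = math.cos        # example law of motion (|f'| < 1 near the fixed point)
x = 1.0             # initial condition x_0
for _ in range(200):
    x = f(x)        # iterate the difference equation

print(abs(f(x) - x) < 1e-9)   # True: x satisfies the fixed-point condition
```

Chapter 3's stability criteria make precise when such iteration converges and how fast.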