In this chapter, we introduce sensitivity analysis in linear programming. That is, we study the effect that small changes in the statement of the problem have on the optimal solution. We consider changes in the objective function coefficients and changes in the constraint inequalities (which are often determined by supplies of raw materials in real-life problems). The latter changes are closely tied to the notion of marginal values (shadow prices), which we also introduce and compute. We define and compute stable ranges: the ranges of values of a parameter over which certain features of the solution remain unchanged. Finally, we study duality, which fits here as a topic because the decision variables in the dual problem are the marginal values of the original problem.
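Marginal values of this kind can be read off from a solver's dual values. The following is a minimal sketch, not from the book (which uses Excel for such computations): it assumes a recent SciPy whose HiGHS-based `linprog` exposes constraint marginals, and the problem data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative problem: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so we negate the objective.
c = np.array([3., 2.])
A_ub = np.array([[1., 1.], [1., 3.]])
b_ub = np.array([4., 6.])
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, method="highs")

optimal_value = -res.fun                 # 12, attained at (x, y) = (4, 0)
# HiGHS reports dual values (marginals) for the inequality constraints;
# negating them recovers the shadow prices of the original max problem:
# the first constraint binds (shadow price 3), the second is slack (shadow price 0).
shadow_prices = -res.ineqlin.marginals
```

Raising the first right-hand side from 4 to 5 would raise the optimal value by the shadow price, from 12 to 15, which is exactly the marginal-value interpretation discussed in the chapter.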
In this chapter, we return to linear programming. We begin by covering Phase I pivoting, so that readers can now solve any linear programming problem, whether or not it is in standard form. Then we return to sensitivity analysis; with the benefit of the simplex algorithm, we show how much easier it becomes to calculate revised optimal solutions and stable ranges of parameters using some simple matrix algebra. Finally, we consider integer programming, where we require the solution values of some or all of the decision variables to be integers. While there are several possible approaches, we stick to the branch and bound method in this text. We also observe that integer constraints can be easily handled when solving integer programming problems using Microsoft Excel.
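The branch and bound idea can be sketched compactly: solve the LP relaxation, and if some variable comes out fractional, branch into two subproblems that round it down and up. This is our own minimal illustration (using SciPy for the relaxations; the book itself works by hand and in Excel), not production code.

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Maximize c.x subject to A_ub x <= b_ub, x integer within bounds.
    Illustrative sketch of branch and bound; returns (best value, best x)."""
    best = [-math.inf, None]

    def solve(bounds):
        # LP relaxation; linprog minimizes, so negate c for maximization
        res = linprog(-np.asarray(c, float), A_ub=A_ub, b_ub=b_ub,
                      bounds=bounds, method="highs")
        if not res.success:
            return                      # infeasible subproblem: prune
        value, x = -res.fun, res.x
        if value <= best[0] + 1e-9:
            return                      # bound: relaxation cannot beat incumbent
        frac = [i for i, xi in enumerate(x) if abs(xi - round(xi)) > 1e-6]
        if not frac:
            best[0], best[1] = value, np.round(x)   # integer solution found
            return
        i = frac[0]
        lo, hi = bounds[i]
        # branch on the first fractional variable: x_i <= floor and x_i >= ceil
        solve(bounds[:i] + [(lo, math.floor(x[i]))] + bounds[i+1:])
        solve(bounds[:i] + [(math.ceil(x[i]), hi)] + bounds[i+1:])

    solve(list(bounds))
    return best[0], best[1]

# Invented example: maximize 5x1 + 4x2 s.t. 6x1 + 4x2 <= 24, x1 + 2x2 <= 6, x integer >= 0.
# The LP relaxation peaks at (3, 1.5) with value 21; branching finds the integer optimum.
val, x = branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6], [(0, None), (0, None)])
```

Here branching on the fractional x2 leads to the integer optimum (4, 0) with value 20.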
In this appendix, we collect some basic material that readers should know before studying this book: sets, enumeration, and probability. In practice, we have found that students have often already been exposed to this material and need only a quick review, so we did not want to include another introductory chapter delaying the study of our main topics. We cover sets, subsets, universal sets, unions, intersections and complements, Cartesian products, power sets, and rules such as De Morgan's laws. For enumeration, we cover the addition and multiplication rules and special cases such as permutations and combinations. For probability, although our coverage is less detailed than in other chapters, we actually cover more than we need for the rest of the book, including conditional probability and Bayes's theorem. Most readers will be comfortable just using this chapter as a reference or review.
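For readers who like to check such facts mechanically, the named rules are easy to verify on small examples; this sketch (our own, with invented sets) uses Python's built-in set operations and counting functions.

```python
import math

# De Morgan's law: the complement of a union is the intersection of complements
U = set(range(10))              # a small universal set
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
assert U - (A | B) == (U - A) & (U - B)
assert U - (A & B) == (U - A) | (U - B)

# The multiplication rule specializes to permutations and combinations
ordered = math.perm(5, 2)       # ordered selections of 2 from 5: 5 * 4 = 20
unordered = math.comb(5, 2)     # unordered selections of 2 from 5: 20 / 2! = 10
```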
In this chapter, we introduce the simplex algorithm for solving linear programming problems. We confine the chapter to Phase II pivoting, which is valid for any standard form maximization problem. We handle standard form minimization problems by using duality, since the dual problem of a standard form minimization problem is a standard form maximization problem. Since all linear programming problems that arise in game theory are standard form, the material of this chapter is sufficient for the study of game theory. Phase I pivoting, which is used for problems with mixed constraints, and integer programming are deferred until a later chapter, after we explore game theory. We end the chapter with a section on using software packages such as Microsoft Excel and Wolfram Mathematica to solve linear programming problems.
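The duality trick described here, solving a standard form minimization by solving its dual maximization, can be checked numerically. This is a hedged sketch with invented data, using SciPy rather than the Excel/Mathematica tools the chapter itself covers.

```python
import numpy as np
from scipy.optimize import linprog

# Standard form minimization: minimize 4u + 6v s.t. u + v >= 3, u + 3v >= 2, u, v >= 0.
A = np.array([[1., 1.], [1., 3.]])
b = np.array([3., 2.])
c = np.array([4., 6.])

# Solve it directly: rewrite A x >= b as -A x <= -b for linprog (which minimizes).
primal = linprog(c, A_ub=-A, b_ub=-b, method="highs")

# Its dual is a standard form maximization: maximize b.y s.t. A^T y <= c, y >= 0.
# Again negate the objective because linprog minimizes.
dual = linprog(-b, A_ub=A.T, b_ub=c, method="highs")

# Strong duality: both optimal values are 12 for this example.
```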
We introduce the mathematical modeling process. We also set the stage for the rest of the book by discussing systems of linear equations and their solutions, matrices, Gauss-Jordan elimination, linear combinations of vectors, basis vectors of Euclidean space, and their connection to basic solutions of a linear system. We conclude with simple optimization problems with quadratic functions.
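The Gauss-Jordan elimination mentioned here can be sketched in a few lines. This is our own minimal NumPy illustration (with partial pivoting, and assuming a square invertible system), not the book's presentation.

```python
import numpy as np

def gauss_jordan(A, b):
    """Solve A x = b by reducing the augmented matrix [A | b] to
    reduced row echelon form. Illustrative; assumes A is square and invertible."""
    n = len(b)
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    for col in range(n):
        # partial pivoting: bring up the row with the largest entry in this column
        pivot = col + int(np.argmax(np.abs(M[col:, col])))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the rest of the column
    return M[:, -1]

# invented example: 2x + y = 5, x + 3y = 10 has solution (1, 3)
x = gauss_jordan([[2, 1], [1, 3]], [5, 10])
```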
We continue the study of constant-sum games by illustrating how to solve them when the payoff matrix is larger than 2 x 2. We derive the method of equalizing expectation to solve such games, cover Williams's method of oddments, and finally show how to solve any constant-sum game using linear programming. This provides us with a full proof of the minimax theorem. Also, using linear programming, we can prove the square subgame theorem, which states that the solution to any constant-sum game is the same as a solution to one of its subgames that has a square payoff matrix. We then illustrate how to use Microsoft Excel or Wolfram Mathematica to solve such games. In the final section of the chapter, we study variable-sum games and introduce the notions of the payoff polygon and Pareto efficiency of an outcome. We show that not every such game has a universally accepted solution, so there is no analog of the minimax theorem for such games. In the 2 x 2 case, we show how to find a Nash equilibrium, using mixed strategies if necessary (Nash proved that every game has one). However, the equilibrium point so obtained may not be Pareto efficient, so it may not be a good "solution" to the game.
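The linear programming formulation for a constant-sum game can be sketched concretely: maximize the game value v subject to each column of the payoff matrix giving the row player at least v in expectation. This is our own SciPy illustration with an invented 2 x 2 matrix (the book uses Excel and Mathematica for such computations).

```python
import numpy as np
from scipy.optimize import linprog

# Invented constant-sum game, payoffs to the row player, no saddle point:
payoff = np.array([[2., -1.], [-1., 1.]])

# Variables: p1, p2 (row probabilities) and v (game value).
# Maximize v s.t. expected payoff against each column >= v, p1 + p2 = 1, p >= 0.
# linprog minimizes, so the objective is -v, and "payoff >= v" becomes "v - payoff <= 0".
c = np.array([0., 0., -1.])
A_ub = np.hstack([-payoff.T, np.ones((2, 1))])
b_ub = np.zeros(2)
A_eq = np.array([[1., 1., 0.]])
b_eq = np.array([1.])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None), (None, None)], method="highs")
p, value = res.x[:2], res.x[2]   # optimal mixed strategy and game value
```

For this matrix the optimal mixed strategy is (0.4, 0.6) with game value 0.2, matching what equalizing expectation gives by hand.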
We introduce the notion of a mathematical game. We give examples and classify them into various types, such as two-person games vs. n-person games (where n > 2), and zero-sum vs. constant-sum vs. variable-sum games. We carefully delineate the assumptions under which we operate in game theory. We illustrate how two-person games can be described by payoff matrices or by game trees. Using examples, including an analysis of the Battle of the Bismarck Sea from World War II, we develop the notions of a strategy, dominant strategy, and Nash equilibrium point of a game. Specializing to constant-sum games, we show the equivalence between a Nash equilibrium and a saddle point of a payoff matrix. We then consider games where the payoff matrix has no saddle point and develop the notion of a mixed strategy, after a quick review of some basic probability notions. Finally, we introduce the minimax theorem, which states that all constant-sum games have an optimal solution, and give a novel proof of the theorem in the case where the payoff matrix is 2 x 2.
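In the 2 x 2 case the equalizing mixed strategy has a well-known closed form, which a short function can demonstrate. This sketch (function name and example matrix are our own) assumes the matrix has no saddle point, which is exactly when the denominator below is nonzero.

```python
def mixed_strategy_2x2(a, b, c, d):
    """Row player's equalizing mixed strategy and game value for the
    constant-sum payoff matrix [[a, b], [c, d]] with no saddle point.
    Illustrative sketch; p is the probability of playing row 1."""
    denom = (a - b) + (d - c)
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return p, value

# invented matrix [[3, -1], [-2, 4]]: p = 0.6, value = 1.0
p, value = mixed_strategy_2x2(3, -1, -2, 4)
```

Playing row 1 with probability 0.6 makes the row player's expected payoff equal to 1.0 against either column, which is the equalization idea behind mixed strategies.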
This chapter provides a basic introduction to matrices, including the following: scaling, transposing, adding, and subtracting matrices; multiplying matrices and applications; finding determinants and inverses of 2 x 2 matrices; and solving systems of equations by matrix inversion. Applications of matrix algebra, including applications to cryptography and to Leontief economic models, are discussed. A method for finding inverses of larger matrices using elimination is developed in a series of exercises.
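The 2 x 2 inverse and its use in solving a system can be shown in a few lines. This is our own NumPy sketch with an invented matrix, using the adjugate formula the chapter describes.

```python
import numpy as np

# 2x2 inverse via the adjugate formula, then solve A x = b by x = A^{-1} b
A = np.array([[2., 1.], [5., 3.]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # determinant: ad - bc (here 1)
inv = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]]) / det   # swap diagonal, negate off-diagonal

b = np.array([4., 11.])
x = inv @ b   # solves 2x + y = 4, 5x + 3y = 11, giving (1, 2)
```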
In this chapter, we introduce linear programming. We start with a simple algebra problem that can be modeled as a system of equations. However, that problem has no solution, so we modify the model to make it more realistic. The result is a standard form linear programming problem. This shows how linear programming arises naturally from the modeling process. Next, we show how to solve these problems in the case when there are two decision variables by the standard graphical technique, which we call graphing in the decision space. This leads naturally to a discussion of the corner point theorem. Finally, we exhibit an alternate graphical approach to solving these problems in the case when there are just two constraints but an arbitrary number of decision variables. We call this method graphing in the constraint space. This technique is a feature of this text; it is not covered in most texts. It is based on linear combinations of vectors in a plane and basic solutions to systems of linear equations, and it sets the stage for the simplex algorithm in Chapter 5.
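The corner point theorem suggests a brute-force check for two-variable problems: intersect every pair of boundary lines, keep the feasible intersections, and evaluate the objective at each. This is our own NumPy sketch with invented data, a numerical stand-in for graphing in the decision space.

```python
import itertools
import numpy as np

# Constraints a1*x + a2*y <= b, with x >= 0 and y >= 0 folded in as -x <= 0, -y <= 0:
# x + y <= 4 and x + 3y <= 6
A = np.array([[1., 1.], [1., 3.], [-1., 0.], [0., -1.]])
b = np.array([4., 6., 0., 0.])
c = np.array([3., 2.])   # maximize 3x + 2y

best_val, best_pt = -np.inf, None
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                              # the two boundary lines are parallel
    p = np.linalg.solve(M, b[[i, j]])         # intersection of the two boundary lines
    if np.all(A @ p <= b + 1e-9):             # keep only feasible corner points
        val = c @ p
        if val > best_val:
            best_val, best_pt = val, p
```

For this data the feasible corners are (0, 0), (4, 0), (3, 1), and (0, 2), and the maximum 12 occurs at the corner (4, 0), as the corner point theorem guarantees.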
In this chapter, we vary the assumptions under which we play a game, so the chapter can be regarded as a sort of sensitivity analysis of game theory. We illustrate reverse induction to solve games of perfect information if play is sequential rather than simultaneous. We also consider changes such as what happens if the players are allowed to communicate with each other or if your opponent is indifferent (such as nature) as opposed to a rational player. We consider ordinal games (where the outcomes are just ranked in order of preference rather than having numerical payoffs). Here is where we cover the famous dilemmas of game theory, such as the prisoner's dilemma, and discuss applications to politics and international relations (the arms race, the Cuban Missile Crisis, and federal government shutdowns due to budget gaps). We discuss the theory of moves, proposed by Brams in 1994 as a way of making ordinal game models more realistic, and offer our own small adjustment to this theory. We conclude the chapter with brief mention of n-person games and discuss games in characteristic function form. Examples include legislative voting systems, where we introduce power indices.
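Power indices for weighted voting systems lend themselves to a small computation. This sketch is our own illustration of one such index, the Banzhaf index, which counts how often each voter is critical to a winning coalition; the voting body used is invented.

```python
from itertools import combinations

def banzhaf(quota, weights):
    """Banzhaf power index for the weighted voting system [quota; weights].
    A voter's swing count is the number of winning coalitions that
    become losing if that voter leaves. Illustrative sketch."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:                        # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:    # voter i is critical
                        swings[i] += 1
    s = sum(swings)
    return [sw / s for sw in swings]

# invented system [51; 49, 49, 2]: despite the lopsided weights,
# all three voters turn out to have equal power
index = banzhaf(51, [49, 49, 2])
```

Examples like this, where a 2-vote player is exactly as powerful as a 49-vote player, are why power indices rather than raw weights are the right measure for legislative voting systems.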