The Accountability of Politicians in International Crises and the Nature of Audience Cost: Supplementary Materials


 We study the problem of how citizens should punish or reward a leader's choices during international crises. Audiences should impose costs rooted in citizens' preferences over policy outcomes, but that need not mean that these costs directly reflect the citizens' preferences over actions. Instead, rewards and punishments are valued for their equilibrium consequences. To understand how citizens' policy preferences shape electoral accountability, we characterize the retention strategies that maximize citizen welfare. In the optimal strategy, citizens always punish leaders who initiate crises and then back down. This finding is robust, and it holds even though the citizens have no intrinsic preference for policy consistency. Whether citizens punish leaders for backing down rather than going to war, on the other hand, depends on the status quo and on the costs of war. Importantly, these strategies of rewarding and punishing leaders need not have any immediate connection to voters' ex ante preferences over war and peace, even though preferences over policy outcomes ultimately motivate citizen behavior. This has important implications for interpreting empirical and experimental results related to audience costs.


Proposition 4: Maximally Extractive Retention Strategies
Proof of Proposition 4. The Home citizen's payoff is bounded above by a function with two terms. The first is the payoff from types who keep the status quo: py. The second comes from types who initiate a crisis and (for the upper bound) are fully extracted: (1 − p) min{1, (1 + p)/2 + c_F}. All together, the payoff at p is

F(p) = py + (1 − p) min{1, (1 + p)/2 + c_F}.
We will proceed in two steps. First, we will find the values of p that maximize F. Second, we will extend the construction from the proof of Proposition 2 to a complete retention strategy.
Step 1: Unpacking the minimization and simplifying gives

F(p) = 1/2 + c_F + (y − c_F)p − (1/2)p²   if p < 1 − 2c_F,
F(p) = py + (1 − p)                        if p ≥ 1 − 2c_F.

The function F is continuous since the two defining expressions are equal when p = 1 − 2c_F. The function 1/2 + c_F + (y − c_F)p − (1/2)p² is strictly concave in p, and is maximized at p = y − c_F. This gives three cases for F:

(i) y ≤ c_F. In this case, F is decreasing on its entire domain, and is maximized at p = 0.

(ii) y ≥ 1 − c_F. In this case, F is increasing on the interval [0, 1 − 2c_F). Thus the function is maximized at p = 1 − 2c_F.

(iii) c_F < y < 1 − c_F. In this case, the maximizer y − c_F of the concave expression lies in [0, 1 − 2c_F), so F is maximized at p = y − c_F.
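The case analysis for the maximizer of F(p) = py + (1 − p) min{1, (1 + p)/2 + c_F} can be checked numerically. The following is a sketch only: the function names and the parameter values (y, c_F) are illustrative, and the grid search simply locates the maximizer predicted in each case.

```python
def F(p, y, cF):
    # Upper bound on the Home citizen's payoff at entry threshold p:
    # status-quo types contribute p*y; initiating types contribute
    # (1 - p) * min(1, (1 + p)/2 + cF) under full extraction.
    return p * y + (1 - p) * min(1.0, (1 + p) / 2 + cF)

def argmax_F(y, cF, n=100000):
    # Brute-force grid search for the maximizer of F on [0, 1].
    best_p, best_v = 0.0, F(0.0, y, cF)
    for i in range(1, n + 1):
        p = i / n
        v = F(p, y, cF)
        if v > best_v:
            best_p, best_v = p, v
    return best_p

# Case (i): y <= cF, maximizer p = 0.
print(argmax_F(y=0.05, cF=0.1))  # -> 0.0

# Case (ii): y >= 1 - cF, maximizer p = 1 - 2*cF.
print(argmax_F(y=0.95, cF=0.1))  # -> 0.8

# Case (iii): cF < y < 1 - cF, maximizer p = y - cF.
print(argmax_F(y=0.5, cF=0.1))   # -> 0.4
```

Each parameter pair falls into exactly one of the three cases, and the grid search recovers the corresponding closed-form maximizer.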
Step 2: From the previous step, we have to consider three cases: (i) p = 0, (ii) p = 1 − 2c_F, and (iii) p = y − c_F. In each, we must extend the pair (r_S, r_W) to a complete retention strategy in such a way that the appropriate p is part of the equilibrium.
The first case, p = 0, is easy: set r_Q = 0.
For the second and third cases, we will use:

Lemma 9. Suppose the citizen uses a fully extractive cutoff reward scheme (r_S, r_W) with cutoff given by x†. This strategy can be extended to a complete retention strategy (r_Q, r_S, r_W) that induces an interior entry threshold p if and only if x† − y + r ≤ 1.
Proof. Notice that type p is indifferent between the status quo and initiation followed by acceptance if and only if y + r_Q = x† + r, which can be rearranged to give r_Q = x† − y + r. This is a feasible retention probability only if x† − y + r ≤ 1.

Now we consider the two remaining cases in turn.
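The construction in this proof is mechanical enough to state as a check. The following is a sketch: the function name and the numeric inputs are illustrative, x† is written `x_dagger`, and the feasibility test is exactly the condition x† − y + r ≤ 1 from Lemma 9 (together with nonnegativity of the probability).

```python
def extend_retention(x_dagger, y, r):
    """Choose r_Q so the threshold type is indifferent between the
    status quo (payoff y + r_Q) and initiating then accepting
    (payoff x_dagger + r). Returns r_Q, or None when r_Q is not a
    feasible retention probability."""
    r_Q = x_dagger - y + r
    return r_Q if 0 <= r_Q <= 1 else None

# With x_dagger = 1 and r = c_F (as in case (ii) of the proof),
# feasibility reduces to 1 - y + c_F <= 1, i.e. y >= c_F.
print(extend_retention(x_dagger=1.0, y=0.2, r=0.1))   # -> 0.9 (feasible)
print(extend_retention(x_dagger=1.0, y=0.05, r=0.1))  # -> None (y < c_F)
```

The second call fails precisely because the required r_Q would exceed 1, mirroring the role of the critical condition in the two remaining cases.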
(ii) p = 1 − 2c_F. From the proof of Proposition 2, r = c_F. Thus the critical condition from Lemma 9 is 1 − y + c_F ≤ 1, or y ≥ c_F, which holds in this case.

(iii) p = y − c_F. Again r = c_F, and here x† = (1 + y + c_F)/2, so the critical condition from Lemma 9 is (1 + y + c_F)/2 − y + c_F ≤ 1, or y ≥ 3c_F − 1, which holds in this case since y > c_F and c_F < 1/2.
So in each case, Proposition 2 and Lemma 9 imply that there is a retention strategy that is maximally extractive with the indicated properties.
In this appendix, we sketch the argument that our analysis in the main text also covers the case in which the Voter can condition retention on the outcome of a war.
We do so for the case of secret settlements.
A retention strategy is now a 4-tuple (r_Q, r_S, r_V, r_D), where r_V is the probability of retention in the event that Home wins a war and r_D is the probability of retention in the event that Home loses a war. Otherwise, the model is as in Section 1.
For a fixed retention strategy, the analysis of equilibrium is similar to that laid out in Section 2. The key difference is in the Home Leader's acceptance decision.
She accepts offer x if and only if her type p satisfies

x + r_S ≥ p(1 + r_V) + (1 − p) r_D − c_H.

An argument like the one leading to Lemma 2 characterizes Foreign's optimal offer. Now we can give an example to show that rewarding losers (r_D > r_V) can increase the Home citizen's payoff, relative to treating the war outcomes symmetrically.

Example 1
Example 1. Let c_H = 1/2, c_F = 1/10, and y = 1/10. Then Lemma 6 says that, when r_V is constrained to equal r_D, the optimal retention strategy induces settlement with probability 1, conditional on initiation. From the proof of Lemma 7, we can calculate the Home citizen's payoff in this case; call it V. To show that this is no longer optimal, we do not have to solve the complete optimization problem; it suffices to display a sample strategy that is better. Since the optimal strategy with unrestricted r must give payoffs at least this great, this shows that allowing leaders to be rewarded for losing can make the citizen better off.
Here is the strategy. Set ∆ = 0 and have everyone enter, i.e. set p = 0. The equilibrium offer from Foreign is then the minimum of two expressions. The first argument of the min is strictly increasing in α while the second argument is decreasing in α, so x* is maximized where they are equal. Solving this equation gives α = 3/2. Since every type of Home enters and accepts the offer, Home's payoff is x* = 2/3 > V. As α > 1, this strategy rewards losers.
The idea of rewarding losers seems very strange, but there is actually a compelling intuition for it. The leader's payoff to war as a function of his signal p is

p(1 + r_V) + (1 − p) r_D − c_H.

This is, of course, increasing in strength as measured by p. The rate of this increase with p is governed by the difference r_V − r_D: the smaller this difference, the less the war payoff increases with p. This implies that the fraction of types bought out of war when Foreign increases the offer by a fixed increment is greater the lower is the difference r_V − r_D. Consequently, increasing the reward to losing (relative to the reward to winning) increases Foreign's marginal incentive to make larger offers.
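This comparative static can be illustrated numerically. The following is a sketch under stated assumptions: the war payoff p(1 + r_V) + (1 − p) r_D − c_H follows the expression in this appendix, the indifferent type for an offer x solves x + r_S = (war payoff), and the function names and parameter values are illustrative.

```python
def war_payoff(p, r_V, r_D, c_H):
    # Leader's expected payoff from fighting: win with probability p
    # (prize 1 plus retention r_V), lose with probability 1 - p
    # (retention r_D only), minus the cost of war c_H.
    return p * (1 + r_V) + (1 - p) * r_D - c_H

def types_bought_out(dx, r_V, r_D):
    # Increase in the indifferent type per fixed increment dx in the
    # offer: dp = dx / (1 + r_V - r_D), since 1 + r_V - r_D is the
    # slope of the war payoff in p.
    return dx / (1 + r_V - r_D)

# Shrinking r_V - r_D flattens the war payoff in p, so the same offer
# increment buys out more types:
print(types_bought_out(0.1, r_V=1.0, r_D=0.0))  # -> 0.05
print(types_bought_out(0.1, r_V=0.5, r_D=0.5))  # -> 0.1
```

With r_V − r_D = 1 a one-tenth increase in the offer buys out half as many types as it does when r_V = r_D, which is exactly the channel through which rewarding losers raises Foreign's marginal incentive to make larger offers.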
So, within the current model, rewarding losers makes a lot of intuitive sense.
But doing so may be a bad idea for reasons neglected by the model. For example, rewarding losers of wars might create perverse incentives for leaders to mismanage conflicts.¹ Thus we turn to the question of what is the optimal strategy subject to the constraint that r_V ≥ r_D. We show that the answer to that question is identical to the one we derived before, when the citizen could not make rewards contingent on the war's outcome.

Proposition 1: Re-selection with war outcome contingent schemes
Proposition 1. Assume the citizen can distinguish victory from defeat, but is constrained to retain victors at least as often as losers (r_V ≥ r_D). The optimal incentive strategy is that described in Appendix B.
Sketch of the Proof. In this new setup, the intermediate program from the proof of Lemma 5 becomes:

Lemma 1. At a solution, the first inequality constraint binds.

Proof. If neither inequality constraint binds, the solution includes p = 0, x = 1, α = 1, and ∆ = 0. But then the first inequality constraint is violated.

¹ While it is not obvious that the leader has bad incentives when losers are rewarded (he prefers to win as long as 1 + r_V > r_D, which is consistent with r_V < r_D), no one has worked out the incentives to manage a war and, therefore, we cannot be sure of the consequences.
A similar argument works if the second inequality constraint binds but the first does not.
Thus we obtain a relaxed program. If neither constraint binds, the first-order conditions imply that α = 0. And if only the first constraint binds, the first-order conditions again imply that α = 0. In neither case is the second constraint satisfied, as we now verify.

Suppose neither constraint binds. Substituting for the derivatives in the first-order conditions yields a system with two solutions: (α, ∆, x) = (0, c_H + y, y + c_F + c_H) and (α, ∆, x) = (0, c_H + y, y).
Neither satisfies the second constraint.
Suppose only the first constraint binds. Then the Kuhn-Tucker conditions, where λ_1 is the multiplier on the constraint 1 − ∆ − αx ≥ 0, again pin down the candidate solutions.
And again, neither satisfies the second constraint.
Thus one of the inequalities in the second constraint must be an equality. If α = 1/2, the Kuhn-Tucker conditions rule out a solution: if the first constraint is slack, there is no solution with λ_2 nonnegative; and if the first constraint binds, the resulting expression for λ_2 has a right-hand side that is strictly decreasing in c_F and strictly increasing in y, so λ_2 is bounded above by 0.
So the only candidates for a solution have α = 1. But this is the same as the program in which rewards cannot be conditioned on the war's outcome, so the optimum coincides with the strategy described in Appendix B.