Imperfect Knowledge and Monetary Policy

Introduction




Uncertainty is a pervasive fact of life. Many decisions have to be taken with limited information and imperfect knowledge, in an ever-changing environment.

   The decision to purchase any consumption good is always taken on the basis of limited information about, for example, the distribution of prices across retailers. One can cross-check prices at a couple of small shops and large retailers, but very quickly the costs of gathering and processing new information become prohibitively high. In the words of F. H. Knight, “it is evident that the rational thing to do is to be irrational, where deliberation and estimation cost more than they are worth.”1

   Imperfect knowledge also characterizes most important decisions in life. The choice of a university degree course is often made without knowing one’s own chances of actually attaining the degree, the impact of the degree on future job opportunities, or its merits relative to the alternatives.

   A by-product of imperfect knowledge is that evaluations of future outcomes may be formed in ways that are not necessarily correct. Different people may have different perceptions of the best degree and university. In turn, people’s perceptions may affect outcomes, so that “the truth” will not be independent of the learning process by which perceptions are formed. To continue the university analogy, perceptions about the least useful degrees may change over time. A degree perceived as unattractive may actually become unattractive ex post – even if it was not ex ante – because the demand for graduates will be diverted toward other degree holders and the quality of candidates will deteriorate as fewer and fewer students are interested in a course that “proves” to be unpopular.

   The ensuing vicious circle of adverse selection is an example of endogenous dynamics due to the interaction between individual learning, which is necessary to form beliefs about the future, and outcomes. One could speculate that such dynamics would tend to become less relevant over time, as beliefs are validated and decisions improve. The problem, however, is that in a world of learning under imperfect knowledge structural change must be recurrent. As a result, learning dynamics are arguably a perennial feature of the real world. After identifying the cheapest retailer of a specific consumption good, one may realize that a better price is available over the Internet. The learning process must therefore be restarted along this new dimension.

   The combination of imperfect knowledge, limited information and learning implies that we are often unable to characterize uncertainty precisely. At the individual level, a large body of experimental evidence has in fact emphasized a number of puzzles emerging when individuals are assumed to behave in ways consistent with the postulates of expected utility theory.2 The importance and frequency of paradoxes relating to inference and to economic behavior under uncertainty clearly show the limits of intuition on these matters.

   As do all other decision makers, central banks have to face these daunting dimensions of uncertainty. More specifically, central banks have limited information on the state of the economy and on the nature, size, and persistence of various disturbances. At the same time, central banks are extremely uncertain about the exact functioning of economies, and notably about the extent and timing of the propagation mechanism of policy actions. While economic research, conducted both in academia and in central banks,3 has helped to uncover some broad features of the transmission mechanism, recurrent structural breaks imply that what we have learned from the past cannot be trusted to remain useful.

   In central banks, moreover, uncertainty reaches a different, more complex dimension. The main reason is that central banks are important players affecting the overall behavior of the economic system. The result is that, for a central bank, the problem of taking decisions under uncertainty is compounded by that of understanding how private agents’ behavior will react to such decisions. More concretely, this problem amounts to ensuring that agents’ expectations, which will themselves be formed under uncertain conditions and as a result of some learning mechanism, remain appropriately anchored.

   Given this powerful interaction of limited information, imperfect knowledge and endogenous expectations under recurrent structural breaks, what is the appropriate course of action for a central bank?

   The academic literature has long striven to provide an answer.

   A first tentative answer can be obtained by ignoring the complications of imperfect knowledge and structural breaks and focusing on some dimensions of limited information. More specifically, a large literature has provided policy prescriptions based on the assumption that the central bank and private agents have perfect knowledge of the mechanisms that regulate the functioning of the economy. Some of these mechanisms, however, are assumed to be hidden, or observed with noise, and they therefore need to be inferred from other available information.

   This sort of uncertainty can arise at different levels. First, it can be due to imperfections in the quality of the data. Some variables may be known only with a time lag; others may be subject to substantial revision over time. Second, uncertainty concerns the level of unobservable variables that can be defined within specific models of the economy. Well-known examples are the output gap, the equilibrium real interest rate, the equilibrium exchange rate, and various measures of excess liquidity conditions. Third, and finally, economic models typically include disturbances whose nature, identification, and interpretation are also uncertain. Here uncertainty involves, for example, assessing whether the shocks occur on the demand side or the supply side of the economy and whether they are transitory or persistent.

   In these cases, under some additional conditions, the response of theory is that certainty equivalence holds.4 The certainty equivalence principle simply states that optimal policy can be determined as in a world of certainty, provided that unknown variables are replaced by their expected or estimated values. Certainty equivalence also implies that estimation can be separated from the policy decision. The central bank can therefore try to form the best possible assessment of the evolution of the unobservable variables in a first logical stage, and then decide policy, in a second stage, as if the estimated values of the unobservable variables were certain. In this respect, uncertainty causes no problem for decision-making; it just introduces an estimation problem as an extra step.
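
   As a minimal sketch of this principle (the notation here is ours, not the book’s), consider the standard linear-quadratic problem

\[
\min_{\{u_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t \left( x_t' Q x_t + u_t' R u_t \right)
\quad \text{subject to} \quad
x_{t+1} = A x_t + B u_t + \varepsilon_{t+1},
\]

where the state $x_t$ is observed only imperfectly. Certainty equivalence states that the optimal policy is $u_t = -F \hat{x}_{t|t}$, where $\hat{x}_{t|t} = E[x_t \mid I_t]$ is the best estimate of the state given the available information (obtained, for instance, with the Kalman filter) and $F$ is the same feedback matrix that would be optimal if $x_t$ were perfectly observed. Estimation and control can thus be carried out in two separate stages, exactly as described above.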

   Certainty equivalence shows that some dimensions of uncertainty can be tackled more easily than one might conjecture ex ante. The drawback is that the associated policy prescriptions are applicable only when the correct mechanisms that regulate the functioning of the economy are indeed known, that is, when there is no “model uncertainty.” The same policy prescriptions may cause large mistakes when model uncertainty is great. An example based on Larson (1999) is illustrative in this respect. He states: “In the age of certainty, at the gateways of the twentieth century, the expected was as good as fact.” He then continues: “To turn was every storm’s destiny.” Larson is telling the story of the hurricane of 8 September 1900 that destroyed most of Galveston.5

   Conscious of the limitations of the assumption of perfect knowledge of the economy, the academic literature has moved on to consider the implications of deeper forms of uncertainty.

   In particular, academia has realized that considerable uncertainty characterizes, first, the quantitative strength of some structural relationships, i.e. the value of parameters which define elasticities and functional dependencies within any particular model. Inevitably, available parameter estimates are affected by data imperfections and by the particular econometric techniques that are employed for estimation. Second, there may be fundamental uncertainty about the overall features of the model that would provide the most appropriate description of the structural relationships in the economy. For example, different variables might be thought to affect the dynamics of inflation, or there may be ambiguity about the exact functional forms of some structural relations.

   Traditionally these deeper forms of uncertainty have been modeled using Bayesian decision theory. Authors solve for decision rules that are desirable under some prior on the model parameters. Uncertainty is usually related to policy multipliers or, more generally, to the transmission mechanism of monetary policy. In an optimal policy context, uncertainty about policy multipliers sometimes leads to a “conservative” result, namely a cautious, more muted response of policy instruments to disturbances to the economy, compared with conditions of certainty. The conservatism principle could be intuitively appealing in some contexts, as argued by Alan Blinder (1998), for example. It is, however, by no means applicable to all cases of parameter uncertainty. Uncertainty concerning the dynamic response of the economy, for example, has been shown to warrant a more forceful policy response to shocks than would be implied by certainty equivalence.6
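
   A textbook version of the multiplier-uncertainty result, in notation of our own rather than the book’s, runs as follows. Suppose inflation is $\pi = u + k r$, where $r$ is the policy instrument, $u$ an observed shock and $k$ an uncertain policy multiplier with mean $\bar{k}$ and variance $\sigma_k^2$. Minimizing the expected loss $E[\pi^2]$ yields

\[
r^* = -\frac{\bar{k}}{\bar{k}^2 + \sigma_k^2}\, u ,
\]

which is smaller in absolute value than the certainty-equivalent response $-u/\bar{k}$ whenever $\sigma_k^2 > 0$: uncertainty about the strength of the policy multiplier mutes the optimal response to the shock.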

   Caution can be taken even less for granted when one analyzes model uncertainty in general. Model uncertainty poses a truly fundamental challenge. It is a very relevant challenge because there seems to be a substantial lack of agreement on the best model to use for policy purposes. As McCallum (1999) clearly stated, “it is not just that the economics profession does not have a well-tested quantitative model of the quarter-to-quarter dynamics, the situation is much worse than that: we do not even have any basic agreement about the qualitative nature of the mechanism.”

   A general problem of trying to deal with model uncertainty is that there is not even consensus on how to describe it analytically. Some authors have chosen to analyze the performance of simple monetary policy rules within different macroeconomic models, selected to reflect a wide range of views on aggregate dynamics. This follows a suggestion by McCallum (1997, p. 355): “Because there is a great deal of professional disagreement as to the proper specification of a structural macroeconomic model, it seems likely to be more fruitful to strive to design a policy rule that works reasonably well in a variety of plausible quantitative models, rather than to derive a rule that is optimal in any one particular model.” Rules of the former sort can be described as “robust”; they tend to share the feature of incorporating a substantial degree of policy inertia.7

   An alternative interpretation of robustness has been given in the context of robust control. Various applications have studied the behavior of a policy maker who uses a macroeconomic model, but recognizes that the model is an approximation and is uncertain about the quality of that approximation. In such circumstances, the policy maker will develop a concern for minimizing losses in the worst-case scenarios. Very often, monetary policies developed using the robust control approach have the property of being more aggressive than the optimal policies absent model uncertainty, thus overturning the conservatism principle.8
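
   Schematically, and again in our own notation, the robust-control policy maker solves a min-max problem of the form

\[
\min_{\{u_t\}} \; \max_{m \in \mathcal{M}} \; E_m \left[ \sum_{t=0}^{\infty} \beta^t L(x_t, u_t) \right],
\]

where $\mathcal{M}$ is a set of models surrounding the reference model, typically constrained so that its members are statistically difficult to distinguish from it. Policy is chosen to perform acceptably against the least favorable model in the set, which is what tends to generate the more aggressive responses mentioned above.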

   A final, recent strand of literature focuses on the aspects of bounded rationality connected to the interplay of adaptive learning and the endogeneity of expectations. This literature also starts from the observation that rational expectations require an unrealistic degree of knowledge of the structure of the true model and of its parameters. It then goes on to suggest that it is more realistic to assume that agents in the economy, much like empirical economists, have to make inferences and run regressions in order to learn parameter values, updating their results as new data become available. The outcome is adaptive learning, a process by which agents adapt their forecast rules over time. Adaptive learning introduces into the models additional dynamics that are not present under full rationality. As agents update their forecasts and expectations, their optimal decisions and, in turn, equilibrium outcomes will also change. As a result, agents’ perceptions of the truth will evolve, generating further changes in behavior and outcomes. If the economy occasionally experiences structural shifts, these additional dynamics will not disappear over time, because agents will have to relearn the relevant parameters and processes. In the field of monetary policy, this literature shows that policy must react aggressively to inflationary shocks; the aggressive reaction ensures that agents’ expectations, which are endogenous, remain anchored on the central bank’s inflation objective.9
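
   As a purely illustrative sketch, the following fragment of Python code shows the kind of recursive updating that adaptive-learning models assume; the simple self-referential model and the parameter values are ours and are not taken from the lecture. Agents forecast inflation with a simple rule, re-estimate that rule by constant-gain least squares each period, and realized inflation in turn depends on their forecast, so that beliefs and outcomes evolve together.

import numpy as np

rng = np.random.default_rng(0)

# A stylized self-referential economy (an assumption made for illustration only):
# inflation_t = mu + alpha * expected_inflation_t + shock_t
mu, alpha, sigma = 1.0, 0.6, 0.5

# Agents forecast inflation with a constant (their "perceived law of motion") and
# update it by constant-gain least squares: recent observations receive more weight,
# so learning never switches off, which matters when structural breaks are recurrent.
gain = 0.05
belief = 0.0

for t in range(500):
    expected = belief                                   # agents' forecast for period t
    inflation = mu + alpha * expected + sigma * rng.standard_normal()
    belief = belief + gain * (inflation - belief)       # update the forecast rule

# Under rational expectations the fixed point is mu / (1 - alpha) = 2.5; constant-gain
# learning keeps beliefs fluctuating around that value rather than settling on it.
print(f"belief after learning: {belief:.2f}")

With a gain that declines over time (for example, 1/t) the recursion reduces to ordinary recursive least squares and beliefs converge; with a constant gain, as here, the additional dynamics described in the text persist, especially when the economy is subject to recurrent structural change.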

   To summarize, the burgeoning literature on monetary policy under uncertainty has unveiled a number of important results which can represent useful benchmarks in policy discussions.10 In spite of tremendous progress in a number of directions, however, from an applied perspective most results continue to be model-dependent or based on specific assumptions on the degree of policy makers’ and/or individuals’ knowledge of the functioning of the economy.


Outline of the two lectures

The pervasive nature of uncertainty in actual monetary policy decision making is the key stylized fact motivating both lectures in this volume. How should central banks set monetary policy? In which way should they take uncertainty into account when designing a monetary policy strategy? Should they be aggressive or cautious in their response to shocks?

   Ideally, central banks would like to be able to rely on a set of analytical tools of universal relevance. One example of such an analytical tool is the system of national accounts, which Richard Stone helped to develop from the early 1940s. Sixty years later it is clear that national accounts systems have greatly contributed to empirical and policy-relevant economic research. Timely and accurate statistics are, in Europe and elsewhere, as important as at the time of Stone’s original contributions. Even more remarkably, these contributions, which were part of the dissemination of Keynesian ideas, have been accepted and used by all schools of economic thought. They are, in this respect, robust analytical tools of the type most useful and necessary to all fields of economics, but especially to monetary policy analysis, in view of our great ignorance about the features of real-world economies.

   Very few tools in monetary policy analysis possess the same degree of robustness as national accounts systems. A fully scientific approach to monetary policy is therefore impossible, given the current state of knowledge.

   The first lecture, a collaboration by Otmar Issing and Oreste Tristani, delivered by Otmar Issing, suggests one possible approach to dealing with the difficulty of bridging the gap between academic knowledge and real-world problems. The approach involves two main components: first, a firm reliance on the few fundamental and robust results of monetary economics; second, a pragmatic attitude to the implementation of policy, which takes due consideration of the lessons learned from central banking experience.

   Reliance on the fundamental and robust results of monetary economics entails, first and foremost, ensuring that price expectations remain anchored and that the probability of occurrence of events such as the Great Depression, German hyperinflation and the great inflation of the 1970s remains negligible. In order to achieve this end, it is crucial to base decisions on a clear price stability objective and to attach paramount importance to the goal of maintaining credibility. An important role for money also helps. Reliance on the strong long-run link between money and prices can help to preempt sustained inflation or deflation and to avoid the sort of monetary collapse associated with the Great Depression.

   The lessons from central banking experience can help offset our lack of knowledge of the determinants of short-run economic dynamics. This implies that it would be a mistake to focus on any single view of the functioning of monetary economies. When designing its reaction to the day-to-day evolution of economic variables, a central bank will have to rely on its judgment and adopt an eclectic approach. These judgmental elements, and the explicit role for the policy maker’s beliefs that they entail, will play an important role in the actual decision-making process. As a result, monetary policy making acquires the “artistic” features that are often emphasized in the literature.

   These two elements complement each other. The results on central bank independence and on the fundamental importance of price stability can be reflected in the institutional framework of the central bank, thus guarding society against the risks of a fully discretionary approach to policy. Centuries ago, Kant (1793) remarked that “No man has the right to pretend that he is practically expert in a science and yet show contempt for theory without revealing that he is an ignoramus in his field.” At the same time, the judgmental elements are necessary to bridge the gap between the simplifications of monetary theory and the complexities of real-world decision making.

   Rather than making the argument along purely conceptual lines, the first lecture couches it in the context of the practical experience of two central banks in two specific historical episodes. The central banks are the Bundesbank and the European Central Bank (ECB) and the episodes are German monetary unification and the start of the single monetary policy in Europe. These episodes have been selected because they are especially interesting in terms of the dimension of uncertainty faced by the central banks. From the viewpoint of monetary policy, both episodes can in fact be classified as examples of “uncharted territory,” i.e. of exceptional shocks, from a historical and political viewpoint, that posed a particularly high challenge for the monetary policy makers. They are therefore particularly useful to illustrate how to keep a firm sense of direction while relying on judgment because of limited information and knowledge.

   The second lecture, a collaboration by Vítor Gaspar and David Vestin, delivered by Vítor Gaspar, focuses on these aspects and studies some of the factors that can make stabilization policies destabilizing for economic activity and private-sector expectations. More specifically, the lecture uses a small laboratory model to analyze economic dynamics when private-sector expectations are endogenous and the central bank does not directly observe potential output. Initial results from this line of research have been obtained by Orphanides and Williams (2002a, 2003a, 2003b), Gaspar and Smets (2002), and Gaspar, Smets and Vestin (2003).

   Based on this literature, the second lecture analyzes the consequences of alternative ways of making private-sector expectations endogenous, the role of asymmetric information, and the relevance of how policy makers make inferences about the economy. The results suggest that, when information lags and the possibility of misperceptions of unobservable variables are taken into account, anchoring inflation expectations is of paramount importance for the central bank in order to avoid a sharp deterioration in economic performance.

   In order to achieve this objective, the second lecture argues that the central bank should be conservative in the Rogoff (1985) sense. A weight on output-gap stabilization that is very low, and much smaller than society’s, is most often the only way to avoid excess output volatility ex post, because it ensures that persistent inflationary or deflationary episodes are avoided and that the economy does not have to bear the large costs of the subsequent return to price stability.
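
   In terms of the standard quadratic loss function (the notation is ours), society’s period loss can be written as $L = (\pi - \pi^*)^2 + \lambda (y - y^*)^2$; a central bank that is conservative in the Rogoff (1985) sense acts as if its weight on the output gap, $\lambda^{CB}$, were well below society’s $\lambda$.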

   Many of the central themes of the two lectures echo recommendations put forward by Milton Friedman a long time ago and largely confirmed by recent developments in monetary theory. The importance of setting feasible objectives for monetary policy or, as in Friedman (1968), the awareness of “what monetary policy can and cannot do,” is a clear example of this correspondence. After a long period of inattention to nominal variables, Friedman reminded economists and central banks that “the monetary authority controls nominal quantities . . . In principle, it can use this control to peg a nominal quantity . . . or to peg the rate of change in a nominal quantity . . . It cannot use its control over nominal quantities to peg a real quantity.” This is true even though “monetary policy can and does have important effects on these real magnitudes” (Friedman, 1968, p. 11).

