This chapter presents an account of our experiences in modeling and forecasting the annual output growth rates of 18 industrialized countries. A structural econometric modeling, time-series analysis (SEMTSA) approach is described and contrasted with other approaches. Theoretical and applied results relating to variable and model selection and to point and turning-point forecasting are discussed. A summary of results and directions for future research concludes the chapter.
Introduction
In this chapter we shall provide an account of some of our experiences in modeling, forecasting, and interpreting time series data. Since the literature on these topics is so extensive, a comprehensive survey would require one or probably more volumes. Thus, we have decided to describe our approach, the experience that we have had with it, and its relation to a part of the statistical and econometric time series literature.
Obtaining good macroeconomic, microeconomic, and other time series models is important since such models are useful in explanation, prediction, and policy-making or control. The basic issue addressed in this chapter is how to produce such good time series models. Our SEMTSA approach (see, e.g., Zellner and Palm 1974, 1975; Plosser 1976, 1978; Zellner 1979, 1984, 1991; Palm 1983; Wallis 1983; Webb 1985; Manas-Anton 1986; Hong 1989; Min 1991) will be briefly compared to several other approaches that have emerged in the literature. Rather than just present theoretical procedures that may be useful in producing good models, an account will be given of both theoretical and applied results in what follows.
By
Arnold Zellner, Professor of Economics and Statistics, Graduate School of Business, University of Chicago; Professor, University of California at Berkeley,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Chansik Hong, Department of Economics, Sookmyung Women's University, Seoul,
Gaurand M. Gulati, Georgetown University, Law Center, Washington, DC
In a letter commenting on a draft of Zellner (1987), Barnard (1987) wrote, “I very much liked your emphasis on the need for sophisticated, simple model building and testing in social science.” Apparently, Barnard and many other scientists are disturbed by the complexity of many models put forward in econometrics and other social sciences. And indeed we think that they should be disturbed, since not a single complicated model has worked very well in explaining past data and in predicting as yet unobserved data. In view of this fact, in Garcia-Ferrer et al. (1987) and Zellner and Hong (1989), a relatively simple, one-equation model for forecasting countries' annual output growth rates was formulated, applied, and found to produce good forecasts year by year, 1974–84, for eighteen countries. This experience supports Barnard's and many others' preference for the use of sophisticatedly simple models and methods. See Zellner (1988) for further discussion of this issue.
In the present chapter, we extend our previous work to consider the problem of forecasting future values and turning points of economic time series given explicit loss structures. Kling (1987, pp. 201–4) has provided a good summary of past work on forecasting turning points by Moore (1961, 1983), Zarnowitz (1967), Wecker (1979), Moore and Zarnowitz (1982), Neftci (1982), and others. In this work there is an emphasis on the importance and difficulty of forecasting turning points.
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Bin Chen, Chicago Partners, LLC, Chicago, IL
For many years, theoretical and empirical workers have tried to model national economies in order to (1) understand how they operate, (2) forecast future outcomes, and (3) evaluate alternative economic policies. While much progress has been made in the decades since Tinbergen's pioneering work (1940), no generally accepted model has yet appeared. On the theoretical side, there are monetary, neo-monetary, Keynesian, neo-Keynesian, real business cycle, generalized real business cycle, and other theoretical models (see Belongia and Garfinkel 1992 for an excellent review of many of these models, and Min 1992 for a description of a generalized real business cycle model). Some empirical testing of alternative models has appeared in the literature. However, in Fair (1992) and Zellner (1992) (invited contributions to a St. Louis Federal Reserve Bank conference on alternative macroeconomic models), it was concluded that there is a great need for additional empirical testing of alternative macroeconomic models and for the production of improved models.
Over the years many structural econometric and empirical statistical models have been constructed and used. These include large structural econometric models (e.g. the Tinbergen, Klein, Brookings–SSRC, Federal Reserve–MIT–PENN, OECD, Project Link, and other models). While progress has been made, there does not yet appear to be a structural model that performs satisfactorily in point and turning point forecasting.
By
Arnold Zellner, Professor of Economics and Statistics, Graduate School of Business, University of Chicago; Professor, University of California at Berkeley,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
In the early 1970s we were concerned about the relationships between multivariate and univariate time series models, such as those brilliantly analyzed by Quenouille (1957) and Box and Jenkins (1970), and multivariate dynamic structural econometric models that had been and are widely employed in explanation, prediction, and policy-making. Fortunately, we discovered the relationships and reported them in our paper, Zellner and Palm (1974), which is included in part I of this volume (chapter 1). See also the other general chapters in part I discussing general features of our approach, the reactions of leading researchers, and many useful references to the literature.
Having discovered the algebraic relations connecting statistical time series and structural econometric models, we next considered how this discovery might be used to produce improved models. In this connection, we thought it important not only to emphasize a philosophical preference for sophisticatedly simple models that is discussed in several chapters in part I and Zellner, Keuzenkamp, and McAleer (2001), but also operational techniques that would help researchers actually produce improved models. As illustrated in the chapters included in this volume, our approach involves (1) deducing algebraically the implied marginal processes and transfer functions for individual variables in a multi-equation model, e.g. a vector autoregression (VAR) or a structural econometric model (SEM), and (2) comparing these derived equations' forms and properties with those derived from the data by use of empirical model identification and testing techniques.
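To make step (1) concrete in the simplest case, consider a bivariate VAR(1), $(I - \Phi L)\,z_t = e_t$, where $L$ is the lag operator and $e_t$ is white noise; the notation here is a generic reconstruction rather than the chapter's own. Multiplying by the adjoint matrix yields the implied marginal processes:

\[
|I - \Phi L|\, z_t = \operatorname{adj}(I - \Phi L)\, e_t .
\]

Since $|I - \Phi L|$ is a scalar polynomial of degree two in $L$ and each element of $\operatorname{adj}(I - \Phi L)\,e_t$ is at most an MA(1) process, each variable $z_{it}$ has an implied ARMA(2,1) marginal process whose autoregressive part is common to both variables. It is exactly this kind of derived restriction that step (2) confronts with univariate identification and testing results.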
By
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University,
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL
In this chapter we consider large-sample estimation and testing procedures for parameters of dynamic equation systems with moving average error terms that are frequently encountered in econometric work (see, e.g., Quenouille 1957 and Zellner and Palm 1974). As pointed out in Zellner and Palm (1974), three equation systems that are particularly relevant in econometric model building are (1) the final equations (FEs), (2) the transfer functions (TFs), and (3) the structural equations (SEs). In the present work, we specify these equation systems and develop large-sample “joint” or “system” estimation and testing procedures for each system of equations. These “joint” or “system” estimation procedures are iterative. They provide asymptotically efficient estimates of the parameters at the second step of iteration. The maximum likelihood (ML) estimator is obtained by iterating until convergence. The “joint” estimation methods provide parameter estimates that are more precise in large samples than those provided by single-equation procedures, and the “joint” testing procedures are more powerful in large samples than those based on single-equation methods.
The aim of the chapter is to present a unified approach for estimating and testing FE, TF, and dynamic SE systems. In the chapter we use the results of previous work on the asymptotic properties of the ML estimator of the parameters of a dynamic model. We extend the recent work on efficient two-step estimation of dynamic models (e.g. Dhrymes and Taylor 1976, Hatanaka 1976, Reinsel 1976, 1977, Palm 1977a).
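For orientation, the three systems can be written compactly in operator notation; the symbols below are illustrative and consistent with, though not necessarily identical to, those of Zellner and Palm (1974). With endogenous variables $y_t$, exogenous variables $x_t$, matrix lag polynomials $H(L)$, $F(L)$, $G(L)$, and white noise $e_t$, the SEs are

\[
H(L)\, y_t + F(L)\, x_t = G(L)\, e_t .
\]

Premultiplying by the adjoint matrix $H^{a}(L)$, so that only the scalar polynomial $|H(L)|$ operates on $y_t$, gives the TFs,

\[
|H(L)|\, y_t = -\,H^{a}(L) F(L)\, x_t + H^{a}(L) G(L)\, e_t ,
\]

and if the exogenous variables are generated by an ARMA process $\Phi(L)\,x_t = \Theta(L)\,u_t$, substituting $x_t$ out yields the FEs,

\[
|\Phi(L)|\,|H(L)|\, y_t = -\,H^{a}(L) F(L)\, \Phi^{a}(L)\, \Theta(L)\, u_t + |\Phi(L)|\, H^{a}(L) G(L)\, e_t ,
\]

in which each endogenous variable follows an ARMA scheme driven by the innovations alone. The moving average error terms appearing on the right-hand sides are what make the “joint” estimation and testing procedures of this chapter necessary.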
By
Arnold Zellner, Professor of Economics and Statistics, Graduate School of Business, University of Chicago; Professor, University of California at Berkeley,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
Historical studies of periods of rapid and sustained inflation as well as empirical studies of the demand for and supply of money have convinced monetarists that a strong link exists between money and prices. Indeed, Friedman (1968) has argued that movements in the money supply dominate movements in the price level.
One study that monetarists have often cited as strong evidence for this link between money and prices is Phillip Cagan's (1956) study of hyperinflations. With data on the money supplies and price levels of six countries in the throes of hyperinflation, Cagan finds that the hyperinflations were apparently caused by the pressure of a rapidly growing and exogenous money supply against a stable demand for real money balances. Unfortunately, the statistical procedures available to Cagan did not enable him to test the specification of his model adequately. In particular, he tested neither his specification of the mechanism generating expected inflation rates nor his specification of an exogenous money supply. Indeed, he failed even to test for serial correlation of the error terms. The Monte Carlo experiments of Granger and Newbold (1974) amply demonstrate that the goodness of fit of a regression is often greatly overstated when serial correlation is present in the error term. It is therefore desirable to reassess Cagan's study.
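For reference, Cagan's money demand specification is usually written as follows; this is the textbook form rather than necessarily the exact notation of the chapter. With $m_t$ and $p_t$ the logarithms of the money supply and the price level,

\[
m_t - p_t = -\alpha\, \pi^{e}_t + \gamma + \varepsilon_t , \qquad \alpha > 0 ,
\]

where $\pi^{e}_t$ is the expected inflation rate, which Cagan assumed to be revised adaptively,

\[
\pi^{e}_t - \pi^{e}_{t-1} = \beta\, \bigl( \pi_t - \pi^{e}_{t-1} \bigr) , \qquad 0 < \beta \le 1 ,
\]

with $\pi_t = p_t - p_{t-1}$. Model MC replaces the adaptive scheme by the rational expectation $\pi^{e}_t = E(\pi_{t+1} \mid I_t)$, where $I_t$ denotes the information available at time $t$.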
In this chapter, I apply the technique advocated by Zellner and Palm (1974, 1975) to test three specifications of the dynamics of the German hyperinflation:
Model C, Cagan's original model;
Model MC, a modification of Cagan's original model in which expectations of inflation are rational in the sense of Muth (1961); and
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Chansik Hong, Department of Economics, Sookmyung Women's University, Seoul,
Chung-ki Min, Department of Economics, Hankuk University of Foreign Studies, Seoul
In previous work (Zellner, Hong, and Gulati 1990; Zellner and Hong 1989), the problem of forecasting turning points in economic time series was formulated and solved in a Bayesian decision theoretic framework. The methodology was applied using a fixed-parameter autoregressive leading indicator (ARLI) model and unpooled data for eighteen countries to forecast turning points over the period 1974–85. In the present chapter, we investigate the extent to which use of exponential weighting, time-varying parameter ARLI models, and pooling techniques leads to improved results in forecasting turning points for the same eighteen countries over a slightly extended period, 1974–86.
The methodology employed in this work has benefited from earlier work of Wecker (1979), Moore and Zarnowitz (1982), Moore (1983), Zarnowitz (1985), and Kling (1987). Just as Wecker and Kling have done, we employ a model for the observations and an explicit definition of a turning point, for example a downturn (DT) or an upturn (UT). Along with Kling, we allow for parameter uncertainty by adopting a Bayesian approach and computing probabilities of a DT or UT given past data from a model's predictive probability density function (pdf) for future observations. Having computed such probabilities from the data, we use them in a decision theoretic framework with given loss structures to obtain optimal turning point forecasts which can readily be computed.
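The resulting decision rule is simple to state; the two-state loss structure below is a generic illustration rather than the chapter's specific table. Let $P$ denote the probability of a DT computed from the predictive pdf, let $L_1$ be the loss from forecasting no downturn when a DT occurs, and let $L_2$ be the loss from forecasting a downturn when none occurs, with correct forecasts incurring zero loss. Minimizing expected loss gives

\[
\text{forecast DT} \iff P\, L_1 > (1 - P)\, L_2 \iff P > \frac{L_2}{L_1 + L_2} ,
\]

so that with a symmetric loss structure ($L_1 = L_2$) the optimal rule is to forecast a downturn whenever $P > 1/2$.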
The plan of our chapter is as follows. In section 2, we explain our models and methods. Section 3 is devoted to a description of our data.
Through the encompassing principle, univariate ARIMA analysis can provide an important tool for the diagnosis of VAR models: the univariate ARIMA models implied by the VAR should explain the results of univariate analysis. This comparison is seldom performed, possibly because of the paradox that, while the implied ARIMA models typically contain a very large number of parameters, univariate analysis yields highly parsimonious models. Using a VAR application to six French macroeconomic variables, it is shown that the encompassing check is straightforward to perform and surprisingly accurate.
Introduction
After the crisis of traditional structural econometric models, a particular multivariate time series specification, the vector autoregression (VAR) model, has become a standard tool for testing macroeconomic hypotheses. Zellner and Palm (1974, 1975) showed that the reduced form of a dynamic structural econometric model has a multivariate time series model expression, and that this relationship can be exploited empirically as a diagnostic tool for assessing the appropriateness of a structural model. As Hendry and Mizon (1992) state, a well-specified structural model should encompass the results obtained with a VAR model; similar analyses are also found in Monfort and Rabemananjara (1990), Clements and Mizon (1991), and Palm (1986).
It is also well known that a multivariate time series model implies a set of univariate models for each of the series. Thus, as argued by Palm (1986), univariate results can, in turn, provide a benchmark for multivariate models, and should be explained by them.
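As a rough sketch of how the first ingredient of such an encompassing check can be mechanized, the code below recovers the autoregressive polynomial that a fitted VAR implies for every univariate series from the eigenvalues of its companion matrix. It uses simulated stand-in data and the statsmodels library; none of the names, data, or settings come from the chapter itself.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulated stand-in data: T observations on n stationary series
# (illustrative placeholders for the six French macroeconomic variables).
rng = np.random.default_rng(0)
data = np.diff(rng.standard_normal((201, 3)).cumsum(axis=0), axis=0)

res = VAR(data).fit(maxlags=2, ic=None)   # fits a VAR(2) here
p, n = res.k_ar, res.neqs

# Companion matrix C of the fitted VAR(p).  The determinantal polynomial
# det(I - A(L)) factors as prod_i (1 - lam_i * L), where lam_i are the
# eigenvalues of C; it is the common AR polynomial of every implied
# univariate ARMA model.
A = res.coefs                              # shape (p, n, n)
top = np.hstack(list(A))                   # [A_1 ... A_p], shape (n, n*p)
C = np.vstack([top, np.eye(n * (p - 1), n * p)])

ar_poly = np.array([1.0 + 0j])
for lam in np.linalg.eigvals(C):
    ar_poly = np.convolve(ar_poly, [1.0, -lam])   # multiply by (1 - lam*L)

# Each series is at most ARMA(n*p, (n-1)*p): here an AR polynomial of
# degree 6, to be confronted with the short, parsimonious models that
# univariate Box-Jenkins identification typically selects for the data.
print("implied common AR order:", len(ar_poly) - 1)
print("AR coefficients:", np.round(ar_poly.real, 3))
```

The implied AR coefficients are real up to rounding error, since complex eigenvalues of the companion matrix come in conjugate pairs.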
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Chansik Hong, Department of Economics, Sookmyung Women's University, Seoul
In our past work, Garcia-Ferrer et al. (1987), we employed several methods to forecast growth rates of real output (GNP or GDP) for eight European Economic Community (EEC) countries and the United States year by year for the period 1974–81. It was found that diffuse prior or least squares forecasts based on an autoregressive model of order 3 including leading indicator variables, denoted by AR(3)LI, were reasonably good in terms of forecast root mean-squared error (RMSE) relative to those of three naive models and of AR(3) models without leading indicator variables. Also, it was found that certain shrinkage forecasting techniques produced improved forecasting results for many countries and that our simple mechanical forecasts compared favorably with Organization for Economic Cooperation and Development (OECD) annual forecasts that were constructed using elaborate models and judgmental adjustments.
In the present chapter our main objectives are to extend our earlier work by (1) providing further analysis of shrinkage forecasting techniques, (2) providing forecasting results for an extended time period, 1974–84, for our past sample of nine countries, (3) applying our forecasting techniques to data relating to nine additional countries, and (4) reporting results of forecasting experiments using a simple modification of our AR(3)LI model.
The importance of checking the forecasting performance of our techniques using new data is reflected in objectives (2) and (3) above.
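For concreteness, the AR(3)LI specification has the following general form; the particular leading indicator variables shown, lagged growth rates of real stock prices ($SR$) and of real money ($GM$), follow Garcia-Ferrer et al. (1987), though the exact lag pattern should be read as illustrative. With $y_t$ the output growth rate,

\[
y_t = \beta_0 + \beta_1 y_{t-1} + \beta_2 y_{t-2} + \beta_3 y_{t-3} + \gamma_1 SR_{t-1} + \gamma_2 SR_{t-2} + \gamma_3 GM_{t-1} + \varepsilon_t .
\]

Setting the $\gamma$'s to zero gives the benchmark AR(3) model against which the forecast RMSEs are compared.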
An important and difficult part of econometric modeling is the specification of the model. Any applied econometrician knows how troublesome it can be to obtain a satisfactory specification of the model. While the problem of specification analysis has received increasing attention in econometric research in recent years, many of the existing econometric textbooks provide few guidelines on how to obtain a satisfactory specification. This is surprising as the specification of the model is necessary in order to justify the choice of an estimation or testing procedure among the large variety of existing procedures, the properties of which are well established given that the true model is known. The consequences of misspecification errors due to the exclusion of relevant explanatory variables are more extensively discussed in standard textbooks on econometrics. Misspecification tests such as the Durbin–Watson test belong to the tools of any empirical econometrician. Among the exceptions to what has been said about the treatment of specification analysis in textbooks, we should mention the book by Leamer (1978), in which he distinguishes six types of specification searches and presents solutions for each of them within a Bayesian framework. But the present state of econometric modeling leads us to stress once more Zellner's (1979, p. 640) conclusion concerning the research on structural econometric models (SEMs): “Most serious is the need for formal, sequential statistical procedures for constructing SEMs.”
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Justin Tobias, Department of Economics, University of California, Irvine, CA
By
André J. Hoogstrate, Ministry of Justice, Netherlands Forensic Institute, Rijswijk,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Universiteit Maastricht,
Gerard A. Pfann
In this chapter, we analyze issues of pooling models for a given set of N individual units observed over T periods of time. When the parameters of the models are different but exhibit some similarity, pooling may lead to a reduction of the mean squared error of the estimates and forecasts. We investigate theoretically and through simulations the conditions that lead to improved performance of forecasts based on pooled estimates. We show that the superiority of pooled forecasts in small samples can deteriorate as the sample size grows. Empirical results for postwar international real gross domestic product growth rates of 18 Organization for Economic Cooperation and Development (OECD) countries, using a model put forward by Garcia-Ferrer, Highfield, Palm, and Zellner and by Zellner and Hong, among others, illustrate these findings. When allowing for contemporaneous residual correlation across countries, pooling restrictions and criteria have to be rejected when formally tested, but generalized least squares (GLS)-based pooled forecasts are found to outperform GLS-based individual forecasts and ordinary least squares-based pooled and individual forecasts.
Panel data are used more and more frequently in business and economic studies. Sometimes a given number of entities is observed over a longer period of time, whereas traditionally panel data are available for a large and variable number of entities observed for a fixed number of time periods (see, e.g., Baltagi 1995 for an … overview; Maddala 1991; Maddala, Trost, and Li 1994).
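The basic trade-off can be seen in a stripped-down version of the problem; the simple means example below is our illustration, not the chapter's model. Suppose $y_{it} = \mu_i + \varepsilon_{it}$ with $\varepsilon_{it}$ independently distributed with mean zero and variance $\sigma^2$, $i = 1, \ldots, N$ and $t = 1, \ldots, T$. The individual estimator $\bar{y}_i$ of $\mu_i$ is unbiased with

\[
\mathrm{MSE}(\bar{y}_i) = \frac{\sigma^2}{T} ,
\]

while the pooled estimator $\bar{y}$, the grand mean, is biased for $\mu_i$ when the means differ:

\[
\mathrm{MSE}(\bar{y}) = (\mu_i - \bar{\mu})^2 + \frac{\sigma^2}{NT} , \qquad \bar{\mu} = \frac{1}{N}\sum_{j=1}^{N} \mu_j .
\]

Pooling therefore pays when the heterogeneity $(\mu_i - \bar{\mu})^2$ is small relative to $\sigma^2 / T$; and since the variance terms vanish as $T$ grows while the bias term does not, the advantage of pooling can disappear in large samples, which is the pattern investigated in this chapter.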
The traditional literature on seasonality has mainly focused attention on various statistical procedures for obtaining a seasonally adjusted time series from an observed time series that exhibits seasonal variation. Many of these procedures rely on the notion that an observed time series can be meaningfully divided into several unobserved components. Usually, these components are taken to be a trend or cyclical component, a seasonal component, and an irregular or random component. Unfortunately, this simple specification, in itself, is not sufficient to identify a unique seasonal component, given an observed series. Consequently, there are difficult problems facing those wishing to obtain a seasonally adjusted series. For example, the econometrician or statistician involved in this adjusting process is immediately confronted with several issues. Are the components additive or multiplicative? Are they deterministic or stochastic? Are they independent or are there interaction effects? Are they stable through time or do they vary through time? Either explicitly or implicitly, these types of questions must be dealt with before one can obtain a seasonally adjusted series.
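In symbols, the two leading decompositions of an observed series $y_t$ into trend or cyclical ($T_t$), seasonal ($S_t$), and irregular ($I_t$) components are (our notation)

\[
y_t = T_t + S_t + I_t \qquad \text{or} \qquad y_t = T_t \times S_t \times I_t ,
\]

and the identification problem is immediate: in the additive case, for example, any reallocation $T_t' = T_t + c_t$, $S_t' = S_t - c_t$ leaves the observed $y_t$ unchanged, so the split is arbitrary without further assumptions on the components.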
One approach to answering some of these questions would be to incorporate subject-matter considerations into the decision process. In particular, economic concepts may be useful in arriving at a better understanding of seasonality. Within the context of an economic structure (e.g. a simple supply and demand model), the seasonal variation in one set of variables, or in one market, should have implications for the seasonal variation in closely related variables and markets.
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
In previous work, Zellner and Palm (1974), an approach for building and analyzing dynamic econometric models was presented that is a blend of recently developed time series techniques and traditional econometric methods. This approach was applied in analyzing dynamic variants of a small Keynesian macroeconometric model formulated by Haavelmo (1947). In the present chapter, we apply our approach in the analysis of variants of a dynamic monetary model formulated by Friedman (1970, 1971).
We commence our present analysis by presenting the structural equations of an initial variant of Friedman's model, denoted S0, that is viewed as a starting point for our analyses. That is, as in previous work we set forth a number of testable implications of S0, in particular the implications of S0 for the forms of the final and transfer equations for the variables of S0. Using monthly data for the US economy, 1953–72, and time series analysis, the implications of S0 are checked against the information in the data. As will be seen, some of S0's implications do not square with the information in the data. This leads us to consider other variants of the model whose implications can be checked with the data. In this way we attempt to iterate in on a variant of the model that is in accord with the information in the data. When a variant has been obtained that is in accord with the data information, it can be checked further with new sample information.
We analyse a class of randomized Least Recently Used (LRU) cache replacement algorithms under the independent reference model with generalized Zipf's law request probabilities. The randomization was recently proposed for Web caching as a mechanism that discriminates between different document sizes. In particular, the cache maintains an ordered list of documents in the following way. When a document of size $s$ is requested and found in the cache, then with probability $p_s$ it is moved to the front of the cache; otherwise the cache stays unchanged. Similarly, if the requested document of size $s$ is not found in the cache, the algorithm places it with probability $p_s$ at the front of the cache or leaves the cache unchanged with the complementary probability $(1-p_s)$. The successive randomized decisions are independent, and the corresponding success probabilities $p_s$ are completely determined by the size of the currently requested document. In the case of a replacement, as many of the documents least recently moved to the front of the cache as necessary are removed in order to accommodate the newly placed document.
In this framework, we provide an explicit asymptotic characterization of the cache fault probability. Using the derived result, we prove that the asymptotic performance of this class of algorithms is optimized when the randomization probabilities are chosen to be inversely proportional to document sizes. In addition, for this optimized and easy-to-implement policy, we show that its performance is within a constant factor of that of the optimal static algorithm.
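A compact sketch of the replacement policy just described is given below; the code is our illustration, with made-up document identifiers and an arbitrary size-to-probability map, not an implementation from the chapter.

```python
import random
from collections import OrderedDict

class RandomizedLRU:
    """Randomized LRU as described above: on each request for a document
    of size s, the cache list is updated only with probability p(s)."""

    def __init__(self, capacity, move_prob):
        self.capacity = capacity     # total size budget of the cache
        self.move_prob = move_prob   # p(s): maps a document size into [0, 1]
        self.items = OrderedDict()   # doc_id -> size; front = most recent
        self.used = 0

    def request(self, doc_id, size):
        """Process one request; return True on a hit, False on a fault."""
        hit = doc_id in self.items
        if random.random() < self.move_prob(size):
            if hit:
                # Move the requested document to the front of the list.
                self.items.move_to_end(doc_id, last=False)
            else:
                # Evict the documents least recently moved to the front
                # until the new document fits (assumes size <= capacity).
                while self.used + size > self.capacity and self.items:
                    _, evicted_size = self.items.popitem(last=True)
                    self.used -= evicted_size
                self.items[doc_id] = size
                self.items.move_to_end(doc_id, last=False)
                self.used += size
        # Otherwise the cache stays unchanged, with probability 1 - p(s).
        return hit

# The asymptotically optimal choice derived in the chapter: success
# probabilities inversely proportional to document size.
cache = RandomizedLRU(capacity=100, move_prob=lambda s: min(1.0, 1.0 / s))
for doc_id, size in [("a", 4), ("b", 2), ("a", 4), ("c", 8)]:
    print(doc_id, "hit" if cache.request(doc_id, size) else "fault")
```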