Let $C$ be a code of length $n$ over an alphabet of $q$ letters. A codeword $y$ is called a descendant of a set of $t$ codewords $\{x^1,\dots,x^t\}$ if $y_i \in \{x^1_i,\dots,x^t_i\}$ for all $i=1,\dots,n$. A code is said to have the Identifiable Parent Property of order $t$ if, for any word of length $n$ that is a descendant of at most $t$ codewords (parents), it is possible to identify at least one of them. Let $f_t(n,q)$ be the maximum possible cardinality of such a code. We prove that for any $t,n,q$, $(c_1(t)q)^{\frac{n}{s(t)}} < f_t(n,q) < c_2(t)q^{\lceil{\frac{n}{s(t)}}\rceil}$ where $s(t) = \lfloor(\frac{t}{2}+1)^2 \rfloor -1$ and $c_1(t),c_2(t)$ are some functions of $t$. We also show some bounds and constructions for $f_3(5,q)$, $f_3(6,q)$, and $f_t(n,q)$ when $n < s(t)$.
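For orientation, evaluating the exponent formula above for small $t$ gives
\[
s(2) = \lfloor 2^2 \rfloor - 1 = 3, \qquad s(3) = \lfloor (5/2)^2 \rfloor - 1 = 5, \qquad s(4) = \lfloor 3^2 \rfloor - 1 = 8,
\]
so, for example, $f_3(n,q)$ grows roughly like $q^{n/5}$, which is why the boundary cases $f_3(5,q)$ and $f_3(6,q)$ receive separate treatment.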
This note presents two results on real zeros of chromatic polynomials. The first result states that if $G$ is a graph containing a $q$-tree as a spanning subgraph, then the chromatic polynomial $P(G,\lambda)$ of $G$ has no non-integer zeros in the interval $(0,q)$. Sokal conjectured that for any graph $G$ and any real $\lambda>\Delta(G)$, $P(G,\lambda)>0$. Our second result confirms that it is true if $\Delta(G)\ge \lfloor n/3\rfloor -1$, where $n$ is the order of $G$.
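As an illustration of the first result (using the standard formula for the chromatic polynomial of a $q$-tree, not a claim of the note itself): a $q$-tree on $n$ vertices has
\[
P(G,\lambda) = \lambda(\lambda-1)\cdots(\lambda-q+1)(\lambda-q)^{n-q},
\]
whose zeros are exactly the integers $0,1,\dots,q$; the theorem extends the absence of non-integer zeros in $(0,q)$ from $q$-trees themselves to every graph containing a spanning $q$-tree.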
Two infinite 0–1 sequences are called compatible when it is possible to cast out $0\,$s from both in such a way that they become complementary to each other. Answering a question of Peter Winkler, we show that if the two 0–1 sequences are random i.i.d. and independent of each other, with probability $p$ of $1\,$s, then if $p$ is sufficiently small they are compatible with positive probability. The question is equivalent to a certain dependent percolation with power-law behaviour: the probability that the origin is blocked at distance $n$ but not closer decreases only polynomially fast, not, as usual, exponentially.
A signing of a graph $G=(V,E)$ is a function $s:E \rightarrow \{-1,1\}$. A signing defines a graph $\widehat{G}$, called a {\em 2-lift of $G$}, with vertex set $V(G)\times\{-1,1\}$. The vertices $(u,x)$ and $(v,y)$ are adjacent iff $(u,v) \in E(G)$ and $x \cdot y = s(u,v)$. The corresponding signed adjacency matrix $A_{G,s}$ is a symmetric $\{-1,0,1\}$-matrix, with $(A_{G,s})_{u,v} = s(u,v)$ if $(u,v) \in E$, and $0$ otherwise.
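A minimal sketch (ours, not from the paper) of the construction just defined; the encoding of lift vertices as array indices and all names are our own:

```python
# Sketch: build the adjacency matrix of the 2-lift of G from a signing s.
import itertools

def two_lift(n, edges, s):
    """n: vertices of G labelled 0..n-1; edges: list of pairs (u, v);
    s: dict mapping each edge to +1 or -1. Lift vertex (u, x) is encoded
    as index u (for x = -1) or u + n (for x = +1)."""
    adj = [[0] * (2 * n) for _ in range(2 * n)]
    for (u, v) in edges:
        for x, y in itertools.product((-1, 1), repeat=2):
            if x * y == s[(u, v)]:            # (u,x) ~ (v,y) iff x*y = s(u,v)
                i, j = u + n * (x == 1), v + n * (y == 1)
                adj[i][j] = adj[j][i] = 1
    return adj

# Example: a triangle with a single -1 edge; its 2-lift is a 6-cycle,
# whereas the all-(+1) signing would give two disjoint copies of G.
lift = two_lift(3, [(0, 1), (1, 2), (0, 2)],
                {(0, 1): 1, (1, 2): 1, (0, 2): -1})
```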
By
Francis W. Ahking, Department of Economics, University of Connecticut, Storrs, CT,
Stephen M. Miller, Department of Economics, University of Nevada, Las Vegas, Las Vegas, NV
Zellner and Palm (1974) show that comparing the actual with the implied stochastic processes generating the endogenous variables in a system of dynamic structural equations provides important information about the system's correct specification. We apply their methodology to structural exchange rate models. We find that the log of the bilateral exchange rate is generally well approximated by a random-walk model. This implies that the stochastic processes generating the exogenous variables should also be random-walk models. Our empirical results, however, show that this is not, in general, the case. We conclude by suggesting a reconciliation of our results based on a technique developed by Beveridge and Nelson (1981).
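A minimal sketch (ours, not the authors' code) of how the random-walk approximation can be checked: under a random walk the level of the log exchange rate has a unit root while its first difference is stationary. The simulated series and the use of the augmented Dickey–Fuller test from statsmodels are our assumptions, not part of the chapter:

```python
# Sketch: unit-root checks consistent with a random-walk model for a log rate.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
log_s = np.cumsum(rng.normal(scale=0.01, size=500))  # simulated random-walk log rate

p_level = adfuller(log_s)[1]           # expect high p-value: cannot reject a unit root
p_diff = adfuller(np.diff(log_s))[1]   # expect low p-value: differences look stationary
print(f"level p = {p_level:.3f}, first-difference p = {p_diff:.3f}")
```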
Introduction
… [T]heories of exchange rate determination have emphasized the asset approach to foreign exchange markets. As an asset price, the exchange rate is seen as adjusting rapidly and freely to maintain stock equilibrium. For example, in the pure monetary approach, the exchange rate is determined when the total stocks of outstanding foreign and domestic moneys are held willingly by economic agents (see, e.g., Dornbusch 1976; Bilson 1978; Frenkel 1978; Frankel 1983; Hoffman and Schlagenhauf 1983; Huang 1984). Moreover, expectations of future exchange rate movements play a dominant role in determining the current spot rate. This linkage of expectations to the current spot rate is usually accomplished by assuming that uncovered interest rate parity holds, that the foreign exchange market is efficient, and that expectations about the future spot rate are formed rationally.
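A stylized version of this linkage (textbook notation, not necessarily the exact specification estimated in this chapter): in the flexible-price monetary model the log spot rate $s_t$ satisfies
\[
s_t = (m_t - m_t^{*}) - \phi\,(y_t - y_t^{*}) + \lambda\,(i_t - i_t^{*}),
\]
with asterisks denoting foreign variables, and uncovered interest parity, $i_t - i_t^{*} = E_t s_{t+1} - s_t$, ties the current spot rate to the expected future rate:
\[
s_t = \frac{1}{1+\lambda}\,f_t + \frac{\lambda}{1+\lambda}\,E_t s_{t+1}, \qquad f_t = (m_t - m_t^{*}) - \phi\,(y_t - y_t^{*}),
\]
so that expectations of future fundamentals feed directly into today's rate.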
By
Arnold Zellner, Professor of Economics and Statistics, Graduate School of Business, University of Chicago; Professor, University of California at Berkeley,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Chung-ki Min, Department of Economics, Hankuk University of Foreign Studies, Seoul
In our past work (Zellner, Hong, and Min, 1991), we used variants of a simple autoregressive-leading indicator (ARLI) model and a Bayesian decision theoretic method to obtain correct forecasts in about 70 percent of 158 turning point forecasts for eighteen industrialized countries' annual output growth rates during the period 1974–86. IMF data for 1951–73 were used to estimate our models, which were then employed to forecast downturns and upturns in annual growth rates for the period 1974–86. When Milton Friedman learned of our positive results, he challenged us in a personal communication to check our methods with an extended data set. This is indeed an important challenge, since it is possible that we were just “lucky” in getting the positive results reported above. We had recognized such problems earlier: we began our forecasting experiments with just nine countries' data and forecasted for the period 1974–81, and later, in Zellner and Hong (1989) and in Zellner, Hong, and Min (1991), we expanded the number of countries from nine to eighteen and extended the forecast period to 1974–86 to check that the earlier positive results held up with an expanded sample of countries and data. Fortunately, those results were positive, and we now report new results for eighteen countries' revised data involving 211 turning point episodes during the forecast period 1974–90.
By
James P. LeSage, Professor of Economics, Department of Economics, University of Toledo, Toledo, OH,
Michael Magura, Department of Economics, University of Toledo, Toledo, OH
This chapter adapts to the regional level a multi-country technique … used by Garcia-Ferrer, Highfield, Palm, and Zellner (1987) (hereafter GHPZ) and extended by Zellner and Hong (1987) (hereafter ZH) to forecast the growth rates in GNP across nine countries. We apply this forecasting methodology to a model of payroll formation in seven Ohio metropolitan areas. The technique applied to our regional setting involves using a Bayesian shrinkage scheme that imposes stochastic restrictions that shrink the parameters of the individual metropolitan-area models toward the estimates arising from a pooled model of all areas. This approach is motivated by the prior belief that all of the individual equations of the model reflect the same parameter values. Lindley and Smith (1972) labeled this an “exchangeable” prior.
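In a stylized version of the Lindley–Smith setup (our notation; the chapter's implementation may differ in detail), each area's equation is $y_i = X_i\beta_i + \epsilon_i$ with $\epsilon_i \sim N(0,\sigma^2 I)$, and the exchangeable prior $\beta_i \sim N(\bar\beta,\tau^2 I)$ yields a posterior mean that is a matrix-weighted average,
\[
\tilde\beta_i = \Bigl(\tfrac{1}{\sigma^2}X_i'X_i + \tfrac{1}{\tau^2}I\Bigr)^{-1}\Bigl(\tfrac{1}{\sigma^2}X_i'X_i\,\hat\beta_i + \tfrac{1}{\tau^2}\,\hat{\bar\beta}\Bigr),
\]
shrinking each area's least-squares estimate $\hat\beta_i$ toward the pooled estimate $\hat{\bar\beta}$; the smaller the prior variance $\tau^2$, the stronger the shrinkage toward the common parameter values.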
There are several reasons to believe that the multi-country, exchangeable-priors forecasting methodology introduced by GHPZ will be successful in our multi-regional setting. First, it is well known that dependencies exist among regional economies. Numerous econometric modeling approaches have been proposed to exploit this information. Most multi-regional models take a structural approach, employing linkage variables such as relative cost, adjacent-state demand, and gravity variables. Ballard and Glickman (1977), Ballard, Glickman, and Gustely (1980), Milne, Glickman, and Adams (1980), and Baird (1983) presented multiregional models of this type. LeSage and Magura (1986) investigated a non-structural approach using statistical time series techniques to link regional models.
Modern financial economic theory suggests that changes in speculative prices should follow simple time series processes in an informationally efficient capital market. Moreover, this theoretical implication enjoys substantial support in the empirical financial economic literature (see Fama 1970). Yet, the implications of the observed time series behavior of speculative price changes for the structure of equilibrium models of asset pricing or information theory do not appear to be fully appreciated. Simply stated, financial economists have not attempted to integrate time series analysis with econometric model building along the lines suggested by Zellner (1979b) and Zellner and Palm (1974).
Changes in speculative prices in an efficient capital market: theory and evidence
Fama (1970) has defined an efficient capital market as one in which speculative prices fully (and correctly) reflect available information. In such a market, changes in speculative prices occur only in response to new information or to reassessments of existing information. The pioneering empirical work of Working (1934), Kendall (1953), Roberts (1959), and Bachelier (1964) suggested that changes in various speculative price series appeared to follow simple time series processes or “fair game” models (in particular, random or quasi-random walks). Excepting Bachelier, the first rigorous economic theory consistent with the above empirical evidence was developed by Samuelson (1965).
Samuelson demonstrates that changes in futures prices will follow martingale or submartingale processes under very general assumptions concerning the stochastic behavior of spot commodity prices.
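Stated informally, with $I_t$ the information set at time $t$ and $P_t$ the futures price, the martingale property is
\[
E[P_{t+1} \mid I_t] = P_t, \qquad \text{equivalently} \qquad E[P_{t+1} - P_t \mid I_t] = 0:
\]
expected price changes are zero conditional on available information, which is the ‘fair game’ property referred to above; the submartingale case replaces the equality with $\ge$.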
Zellner, Hong, and Gulati (1990) and Zellner and Hong (1989) formulated the problem of forecasting turning points in economic time series using a Bayesian decision theoretic framework. The methodology was … applied by Zellner, Hong, and Min (1990) (hereafter ZHM) to a host of models to forecast turning points in the international growth rates of real output for eighteen countries over the period 1974–86. They compared the performance of fixed parameter autoregressive leading indicator models (FP/ARLI), time-varying parameter autoregressive leading indicator models (TVP/ARLI), exponentially weighted autoregressive leading indicator models (EW/ARLI), and a version of each of these models that includes a world income variable – FP/ARLI/WI, TVP/ARLI/WI, EW/ARLI/WI. In addition, they implemented a pooling scheme for each of the models. A similar host of models is analyzed here in order to assess whether these techniques hold promise for forecasting turning points in regional labor markets.
The innovative aspect of the ZHM study is not the models employed, but the use of the observations along with an explicit definition of a turning point, either a downturn (DT) or upturn (UT). This allows for a Bayesian computation of probabilities of a DT or UT given the past data from a model's predictive probability density function (pdf) for future observations. After computing these probabilities from the data, they can be used in a decision theoretic framework along with a loss structure in order to produce an optimal turning point forecast.
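In a stylized two-state version of this decision problem (our simplification), let $P = \Pr(\mathrm{DT} \mid \text{data})$ be computed from the predictive pdf, let $a$ be the loss from forecasting a downturn that does not occur and $b$ the loss from missing one that does, with zero loss for correct forecasts. Forecasting DT has expected loss $a(1-P)$, forecasting no downturn has expected loss $bP$, and minimizing expected loss gives
\[
\text{forecast DT} \iff a(1-P) < bP \iff P > \frac{a}{a+b},
\]
which reduces to the familiar $P > 1/2$ rule under symmetric losses.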
In this paper, background information on the origins and features of the Marshallian Macroeconomic Model (MMM) is provided. MMMs based on two alternative production functions are presented and compared. In addition, some empirical forecasting results for one of them are reviewed. Finally, attention is focused on further development and implementation of the MMM.
Introduction
It is an honor and a pleasure to present my paper at this research conference honoring Professor Ryuzo Sato, a superb colleague and most productive scholar. His outstanding research analyzing production and technological change (Sato 1999a, 1999b) has been appreciated worldwide. Indeed, these topics play a central role in almost all models of industrial sectors and economies, including the models to be discussed below.
On the origins of the [Marshallian Macroeconomic Model] (MMM): in my experience, it was a pleasure teaching undergraduate and graduate students the properties and uses of the Marshallian model of a competitive industry. Teaching students macroeconomics, on the other hand, was quite a different matter, since there was no comparable, operationally successful model available. (See, e.g., Belongia and Garfinkel 1992 for an excellent review of alternative macroeconomic models, including monetarist, neo-monetarist, Keynesian, post-Keynesian, and real business cycle models; see also Fair 1992 and Zellner 1992, who pointed out that not enough empirical testing of alternative models had been done and that more is needed to produce macroeconomic models that explain the past, predict well, and are useful in making policy.)
By
Arnold Zellner, Professor of Economics and Statistics, Graduate School of Business, University of Chicago; Professor, University of California at Berkeley,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
Substantial progress has been made in developing data, concepts, and techniques for the construction and statistical analysis of econometric models. Comprehensive data systems, including national income and product accounts, price, wage and interest rate data, monetary data, and many other measures, have been developed for almost all countries. In many cases, annual measurements have been augmented by quarterly and monthly measurements of a broad array of economic variables. In recent years, scientifically designed sample surveys have been employed to expand the data bases of a number of countries. While research continues to improve data bases, we must recognize that the work that produced our current, extensive data bases is a major accomplishment in the field of scientific measurement and enables economic analysts to avoid the charge of “theory without measurement.”
In reviewing the development of concepts for the statistical analysis of econometric models, it is very easy to forget that in the opening decades of [the twentieth] century a major issue was whether a statistical approach was appropriate for the analysis of economic phenomena. Fortunately, the recognition of the scientific value of sophisticated statistical methods in economics and business has buried this issue. To use statistics in a sophisticated way required much research on basic concepts of econometric modeling that we take for granted today. It was necessary to develop fundamental concepts such as complete model, identification, autonomous structural relationships, exogeneity, dynamic multipliers, and stochastic equilibrium, to name a few, that play an important role in linking statistical analyses and economic theory.
By
Chung-Ki Min, Department of Economics, Hankuk University of Foreign Studies, Seoul,
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL
In past work, Garcia-Ferrer et al. (1987) and Zellner and Hong (1989), variants of a relatively simple autoregressive model of order three containing lagged leading indicator variables, called an ARLI model, provided good one-year-ahead forecasts of annual output growth rates for eighteen industrial countries, 1974–84. In Zellner, Hong, and Gulati (1990) and Zellner, Hong, and Min (1991), this ARLI model and variants of it produced good turning point forecasts, about 70–80 percent of 158 turning points correctly forecasted. In Hong (1989), the ARLI model's cyclical properties were analyzed and its forecasting performance was shown to be slightly superior to that of a version of Barro's “money surprise” model. LeSage (1989) and LeSage and Magura (1990) have used ARLI models to forecast employment growth rates and turning points in them for eight metropolitan labor markets with satisfactory results. Blattberg and George (1991) used similar techniques in successfully forecasting sales of different brands of a product.
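A stylized ARLI specification of the kind described above (our notation; the leading indicators vary across these studies) is
\[
y_t = \beta_0 + \beta_1 y_{t-1} + \beta_2 y_{t-2} + \beta_3 y_{t-3} + \gamma' z_{t-1} + \epsilon_t,
\]
where $y_t$ is the annual output growth rate and $z_{t-1}$ is a vector of lagged leading indicators, such as growth rates of real stock prices and of real money.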
Some of our past work has involved use of fixed parameter models (FPMs) and time-varying parameter models (TVPMs). In the present chapter, we derive and compute posterior odds relating to our FPMs and TVPMs using data for eighteen countries, 1973–87. While there are many reasons – Lucas effects, aggregation effects, wars, etc. – for believing that parameters may be time-varying, economic theorists' models are generally fixed parameter models. Our calculated posterior odds will shed some light on the parameter constancy issue and are used to choose between FPMs' and TVPMs' forecasts year by year.
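In outline (standard Bayesian notation, not the chapter's specific derivation), the posterior odds on the fixed parameter model $M_F$ versus the time-varying parameter model $M_T$ are
\[
K_{FT} = \frac{\Pr(M_F)}{\Pr(M_T)} \cdot \frac{p(y \mid M_F)}{p(y \mid M_T)},
\]
the prior odds times the Bayes factor, the ratio of the models' marginal likelihoods; each year's forecast can then be taken from whichever model the current odds favor.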
By
Antonio Garcia-Ferrer, Professor of Economics, Departamento de Análisis Económico: Economía Cuantitativa, Universidad Autónoma de Madrid,
Richard A. Highfield, Dean, School of Business, University at Albany, State University of New York, Albany, NY,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University,
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL
It has long been recognized that national economies are economically interdependent (see, e.g., Burns and Mitchell 1946 for evidence of comovements of business activity in several countries and Zarnowitz 1985 for a summary of recent evidence). Recognition of such interdependence raises the question: Can such interdependence be exploited econometrically to produce improved forecasts of countries' macroeconomic variables such as rates of growth of output, and so forth? This is the problem that we address in this chapter, using annual and quarterly data for a sample of European Economic Community (EEC) countries and the United States.
We recognize that there are several alternative approaches to the problem of obtaining improved international macroeconomic forecasts. First, there is the approach of Project Link that attempts to link together elaborate structural models of national economies in an effort to produce a world structural econometric model. A recent report on this ambitious effort was given by Klein (1985). We refer to this approach as a “top-down” approach, since it uses highly elaborate country models to approach the international forecasting problem. In our work, we report results based on a “bottom-up” approach that involves examining the properties of particular macroeconomic time series variables, building simple forecasting models for them, and appraising the quality of forecasts yielded by them. We regard this as a first step in the process of constructing more elaborate models in the structural econometric modeling time series analysis (SEMTSA) approach described by Palm (1983), Zellner (1979), and Zellner and Palm (1974).
By
Arnold Zellner, Professor Emeritus of Economics and Statistics, Graduate School of Business, University of Chicago, Chicago, IL,
Franz C. Palm, Professor of Econometrics, Faculty of Economics and Business Administration, Maastricht University
In this chapter we take up the analysis of dynamic simultaneous equation models (SEMs) within the context of general linear multiple time series processes such as studied by Quenouille (1957). As noted by Quenouille, if a set of variables is generated by a multiple time series process, it is often possible to solve for the processes generating individual variables, namely the “final equations” of Tinbergen (1940), and these are in the autoregressive-moving average (ARMA) form. ARMA processes have been studied intensively by Box and Jenkins (1970). Further, if a general multiple time series process is appropriately specialized, we obtain a usual dynamic SEM in structural form. By algebraic manipulations, the associated reduced form and transfer function equation systems can be derived. In what follows, these equation systems are presented and their properties and uses are indicated.
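In outline (a sketch in our notation, following the structure just described): write the multiple time series process as
\[
H(L)\,z_t = F(L)\,e_t,
\]
with $L$ the lag operator, $H(L)$ and $F(L)$ matrix polynomials in $L$, and $e_t$ a white-noise vector. Premultiplying by $\operatorname{adj}[H(L)]$ gives
\[
\det[H(L)]\,z_t = \operatorname{adj}[H(L)]\,F(L)\,e_t,
\]
so each component of $z_t$ obeys an ARMA ‘final equation’ whose autoregressive operator is the common scalar polynomial $\det[H(L)]$. Partitioning $z_t$ into endogenous and exogenous variables and restricting $H(L)$ and $F(L)$ appropriately yields the structural SEM, from which the reduced form and transfer function systems follow by the same algebra.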
It will be shown that assumptions about variables being exogenous, about lags in structural equations of SEMs, and about serial correlation properties of structural disturbance terms have strong implications for the properties of transfer functions and final equations that can be tested. Further, we show how large sample posterior odds and likelihood ratios can be used to appraise alternative hypotheses. In agreement with Pierce and Mason (1971), we believe that testing the implications of structural assumptions for transfer functions and, we add, final equations is an important element in the process of iterating in on a model that is reasonably in accord with the information in a sample of data.