This book provides a general framework for specifying, estimating and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, the generalised method of moments, nonparametrics and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships.
In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book is concerned with implementation issues in order to provide a fast track between theory and applied work. Consequently, many of the econometric methods discussed in the book are illustrated by means of a suite of programs written in GAUSS, MATLAB® and R. The computer code emphasises the computational side of econometrics and follows the notation in the book as closely as possible, thereby reinforcing the principles presented in the text. More generally, the computer code also helps to bridge the gap between theory and practice by enabling the reproduction of both theoretical and empirical results published in recent journal articles. The reader, as a result, may build on the code and tailor it to more involved applications.
The vector autoregression (VAR) model discussed in Chapter 13 provides a convenient framework for modelling dynamic systems of equations. Maximum likelihood estimation of the model is performed one equation at a time using ordinary least squares, while the dynamics of the system are analysed using Granger causality, impulse response analysis and variance decompositions. Although the VAR framework is widely applied in econometrics, it requires the imposition of additional structure on the model in order to give the impulse responses and variance decompositions structural interpretations. For example, in macroeconometric applications, the key focus is often on understanding the effects of a monetary shock on the economy, but this requires the ability to identify precisely what the monetary shock is. In Chapter 13, a recursive structure known as a triangular ordering is adopted to identify shocks. This is a purely statistical approach to identification that imposes a strict and rigid structure on the dynamics of the model which may not be consistent with the true structure of the underlying processes. This approach becomes even more problematic when alternative orderings of variables are tried, since the number of possible orderings increases dramatically with the number of variables in the model.
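The estimation and identification steps described above can be sketched in a few lines of Python. This is an illustrative sketch with made-up coefficients, not the book's GAUSS/MATLAB/R code: equation-by-equation OLS recovers the VAR coefficients, and a Cholesky factor of the residual covariance supplies the triangular ordering used to orthogonalise the impulse responses.

```python
import numpy as np

np.random.seed(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t (hypothetical coefficients)
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + np.random.normal(size=2)

# Maximum likelihood here reduces to equation-by-equation OLS of y_t on y_{t-1}
X = y[:-1]                                        # regressors: lagged values
Y = y[1:]                                         # dependent variables
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T    # each row = one equation

# Residual covariance and a triangular (Cholesky) ordering for identification
resid = Y - X @ A_hat.T
Sigma = resid.T @ resid / len(resid)
S = np.linalg.cholesky(Sigma)                     # lower triangular: recursive ordering

# Impulse responses to one-standard-deviation structural shocks, horizons 0..5
irf = [np.linalg.matrix_power(A_hat, h) @ S for h in range(6)]
```

With a recursive ordering, the first variable responds only to its own shock on impact, which is exactly the restriction the lower-triangular factor S imposes.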
Structural vector autoregressive (SVAR) models alleviate the problems of imposing a strict recursive structure on the model by specifying restrictions that, in general, are motivated by economic theory. Four common sets of restrictions are used to identify SVARs, namely, short-run restrictions, long-run restrictions, a combination of the two and sign restrictions. Despite the additional acronyms associated with the SVAR literature and the fact that the nature of the applications may seem different at first glance, SVARs simply represent a subset of the class of dynamic linear simultaneous equations models discussed in Part TWO.
The stationary time series models developed in Part FOUR and the nonstationary time series models developed in Part FIVE are characterised by the mean being a linear function of the lagged dependent variables (autoregressive) and/or the lagged disturbances (moving average). These models are able to capture many of the characteristics observed in time series data, including randomness, cycles and stochastic trends. Where these models come up short, however, is in capturing more extreme events, such as jumps and asymmetric adjustments across cycles, which a linear representation cannot accommodate adequately. This chapter deals with models in which the linear mean specification is augmented by the inclusion of nonlinear terms, so that the conditional mean becomes nonlinear in the lagged dependent variables and lagged disturbances.
Examples of nonlinear models investigated are threshold autoregressive (TAR) models, artificial neural networks (ANN), bilinear models and Markov switching models. Nonparametric methods are also investigated, where a parametric specification of the nonlinearity is not imposed on the structure of the model. Further nonlinear specifications are investigated in Chapters 20 and 21. In Chapter 20, nonlinearities in variance are introduced and developed in the context of GARCH and MGARCH models. In Chapter 21, nonlinearities arise from the specification of time series models of discrete random variables.
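As a concrete illustration of the threshold idea, the following Python sketch (with hypothetical regime parameters) simulates a two-regime TAR model in which persistence depends on the sign of the lagged value. This is the kind of asymmetric adjustment that a single linear AR coefficient cannot represent.

```python
import numpy as np

np.random.seed(1)

# Two-regime threshold autoregressive (TAR) model (illustrative parameters):
#   y_t = 0.8 y_{t-1} + e_t   if y_{t-1} <= 0   (persistent regime)
#   y_t = 0.2 y_{t-1} + e_t   if y_{t-1} >  0   (fast mean reversion)
T = 1000
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if y[t - 1] <= 0 else 0.2
    y[t] = phi * y[t - 1] + np.random.normal()

# The asymmetry shows up in regime-conditional least-squares slopes:
below = y[:-1] <= 0
phi_below = np.sum(y[1:][below] * y[:-1][below]) / np.sum(y[:-1][below] ** 2)
phi_above = np.sum(y[1:][~below] * y[:-1][~below]) / np.sum(y[:-1][~below] ** 2)
```

Fitting a separate autoregressive slope to each regime recovers the asymmetry: the estimated persistence below the threshold is close to 0.8 and above it close to 0.2, whereas a single linear AR(1) fit would average the two.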
Motivating Examples
The class of stationary linear time series models presented in Chapter 13 yields solutions that are characterised by convergence to a single equilibrium point, with the trajectory path exhibiting either monotonic or oscillatory behaviour.
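That convergence behaviour is easy to see by iterating the deterministic part of the model. The Python sketch below (illustrative coefficients, not taken from the text) contrasts the monotonic decay of a stationary AR(1) with the damped oscillations of a stationary AR(2) whose characteristic roots are complex.

```python
import numpy as np

# Deterministic trajectories of stationary linear models (illustrative coefficients):
#   AR(1): y_t = 0.8 y_{t-1}            -> monotonic convergence to equilibrium 0
#   AR(2): y_t = 1.2 y_{t-1} - 0.8 y_{t-2} -> complex roots, damped oscillations
h = 30
mono = np.zeros(h)
mono[0] = 1.0
osc = np.zeros(h)
osc[0] = osc[1] = 1.0
for t in range(1, h):
    mono[t] = 0.8 * mono[t - 1]
for t in range(2, h):
    osc[t] = 1.2 * osc[t - 1] - 0.8 * osc[t - 2]
```

The AR(2) coefficients give characteristic roots of modulus sqrt(0.8) < 1, so the oscillations die out and both paths settle at the single equilibrium point, zero.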
This textbook explains the basic ideas of subjective probability and shows how subjective probabilities must obey the usual rules of probability to ensure coherency. It defines the likelihood function, prior distributions and posterior distributions. It explains how posterior distributions are the basis for inference and explores their basic properties. Various methods of specifying prior distributions are considered, with special emphasis on subject-matter considerations and exchangeability. The regression model is examined to show how analytical methods may fail in the derivation of marginal posterior distributions. The remainder of the book is concerned with applications of the theory to important models that are used in economics, political science, biostatistics and other applied fields. New to the second edition is a chapter on semiparametric regression and new sections on the ordinal probit, item response, factor analysis, ARCH-GARCH and stochastic volatility models. The new edition also emphasizes the R programming language.
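As a minimal illustration of the prior-to-posterior updating such a book builds on, here is a beta-binomial conjugate example in Python (hypothetical hyperparameters and data), one of the few cases where the posterior is available in closed form.

```python
# Beta-binomial conjugacy: a minimal example of posterior ∝ prior × likelihood.
# Prior: theta ~ Beta(a, b); data: s successes in n Bernoulli trials.
a, b = 2, 2          # illustrative prior hyperparameters
n, s = 10, 7         # illustrative observed data

# The posterior is Beta(a + s, b + n - s); its mean has a closed form.
post_a, post_b = a + s, b + n - s
post_mean = post_a / (post_a + post_b)   # (2 + 7) / (2 + 2 + 10) = 9/14
```

When such closed forms are unavailable, as in the marginal posteriors of the regression model mentioned above, simulation methods take over.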
Let k be a positive integer such that k ≡ 3 (mod 4), and let N be a positive square-free integer. In this paper, we compute a basis for the two-dimensional subspace S_{k/2}(Γ_0(4N), F) of half-integral weight modular forms associated, via the Shimura correspondence, to a newform F ∈ S_{k−1}(Γ_0(N)), which satisfies . This is accomplished by using a result of Waldspurger, which allows one to produce a basis for the forms that correspond to a given F via local considerations, once a form in the Kohnen space has been determined.
We revise the matching algorithm of Noeske (LMS J. Comput. Math. 11 (2008) 213–222) and introduce a new approach via composition series to expedite the calculations. Furthermore, we show how the matching algorithm may be applied in the more general and frequently occurring setting in which we are given only subalgebras of the condensed algebras, each containing the separable algebra of one of their Wedderburn–Malcev decompositions.
We study the differential structure of the ring of modular forms for the unit group of the quaternion algebra over ℚ of discriminant 6. Using these results we give an explicit formula for Taylor expansions of the modular forms at the elliptic points. Using appropriate normalizations we show that the Taylor coefficients at the elliptic points of the generators of the ring of modular forms are all rational and 6-integral. This gives a rational structure on the ring of modular forms. We give a recursive formula for computing the Taylor coefficients of modular forms at elliptic points and, as an application, give an algorithm for computing modular polynomials.
Let S_1 = S_1(v_0, …, v_{r+1}) be the space of compactly supported C^0 piecewise linear functions on a mesh M of lines through ℤ^2 in directions v_0, …, v_{r+1}, possibly satisfying some restrictions on the jumps of the first-order derivative. A sequence ϕ = (ϕ_1, …, ϕ_r) of elements of S_1 is called a multi-box spline if every element of S_1 is a finite linear combination of shifts of (the components of) ϕ. We give some examples of multi-box splines and show that they are stable. It is further shown that multi-box splines are not always symmetric.
We present a practical algorithm to compute models of rational functions with minimal resultant under conjugation by fractional linear transformations. We also report on a search for rational functions of degrees 2 and 3 with rational coefficients that have many integers in a single orbit. We find several minimal quadratic rational functions with eight integers in an orbit and several minimal cubic rational functions with ten integers in an orbit. We also make some elementary observations on possibilities of an analogue of Szpiro’s conjecture in a dynamical setting and on the structure of the set of minimal models for a given rational function.
We present a new algorithm for constructing a Chevalley basis for any Chevalley Lie algebra over a finite field. This is a necessary component for some constructive recognition algorithms of exceptional quasisimple groups of Lie type. When applied to a simple Chevalley Lie algebra in characteristic p⩾5, our algorithm has complexity involving the seventh power of the Lie rank, which is likely to be close to best possible.
The previous chapter discussed methods that generate independent observations from standard probability distributions. But you still have the problem of what to do when faced with a nonstandard distribution, such as the posterior distribution of the parameters of the conditionally conjugate linear regression model. Although the methods previously described can, in principle, deal with nonstandard distributions, doing so presents major practical difficulties. In particular, they are not easy to implement in the multivariate case, and finding a suitable importance function for the importance sampling algorithm or a majorizing density for the AR algorithm may require a very large investment of time whenever a new nonstandard distribution is encountered.
These considerations impeded the progress of Bayesian statistics until the development of Markov chain Monte Carlo (MCMC) simulation, a method that became known and available to statisticians in the early 1990s. MCMC methods have proven extremely effective and have greatly increased the scope of Bayesian methods. Although a disadvantage of these methods is that they do not provide independent samples, they have the great advantage of flexibility: they can be implemented for a great variety of distributions without having to undertake an intensive analysis of the special features of the distribution. Note, however, that an analysis of the distribution may shed light on the best algorithm to use when more than one is available.
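A minimal sketch of the flexibility claimed above: a random-walk Metropolis-Hastings sampler needs only the target density up to a normalising constant. The example below (standard normal target, illustrative tuning) produces dependent draws whose long-run averages nonetheless approximate the target's moments.

```python
import math
import random

random.seed(42)

# Random-walk Metropolis-Hastings for a target density known only up to a
# constant (here the standard normal, for checkability): a minimal MCMC sketch.
def log_target(x):
    return -0.5 * x * x      # log density up to an additive constant

draws = []
x = 0.0
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)            # symmetric proposal
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop                                  # accept; otherwise keep x
    draws.append(x)                               # dependent (Markov chain) draws

burned = draws[5000:]                             # discard burn-in
mean = sum(burned) / len(burned)
var = sum((d - mean) ** 2 for d in burned) / len(burned)
```

Because successive draws are correlated, more iterations are needed than with independent sampling, but no importance function or majorizing density has to be constructed.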
Because these methods rely on Markov chains, a type of stochastic process, this chapter presents some basic concepts of the theory, and the next chapter utilizes these concepts to explain MCMC methods.