We show that the problem of deciding if there is a schedule of length three for the multiprocessor scheduling problem on identical machines and unit execution time tasks is NP-complete even for bipartite graphs, i.e. for precedence graphs of depth one. This complexity result extends a classical result of Lenstra and Rinnooy Kan [5].
Single server queues with repeated attempts are useful in the modeling of computer and telecommunication systems. In addition, we consider in this paper the possibility of disasters. When a disaster occurs, all the customers present in the system are destroyed immediately. Using a regenerative approach, we derive a numerically stable recursion scheme for the state probabilities. This model can be employed to analyze the behaviour of a buffer in computers with virus infections.
In this paper, a discrete-event simulation model is coupled with a genetic algorithm to treat highly combinatorial scheduling problems encountered in a production campaign of a fine chemistry plant. The main constraints and features of fine chemistry have been taken into account in the development of the model, thus allowing a realistic evaluation of the objective function used in the stochastic optimization procedure. After a presentation of the problem combinatorics, the coupling strategy is proposed and illustrated by an example of industrial size (24 equipment items, 140 products, 12 different production recipes and 40 products to be recycled during the campaign). This example serves as an incentive to show how the approach can improve production performance. Three technical criteria have been studied: campaign completion time, average product cycle time, and respect of due dates. Two kinds of optimization variables have been considered: product input order and/or allocation of heuristics for conflict treatment. The results obtained are then analysed and some perspectives of this work are presented.
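As a rough illustration of such a coupling, the mutation-only evolutionary sketch below searches over product input orders and delegates the evaluation of each candidate order to a user-supplied simulation routine. The function name simulate_campaign, the keep-best-half selection and the swap mutation are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def genetic_search(simulate_campaign, products, n_gen=50, pop_size=30, seed=0):
    """Sketch of a simulation/GA coupling for product input orders.

    `simulate_campaign(order)` is assumed to return the campaign
    completion time for a given input order (lower is better); it stands
    in for the discrete-event simulation model, which is not reproduced here.
    """
    rng = random.Random(seed)
    population = [rng.sample(products, len(products)) for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(population, key=simulate_campaign)
        parents = scored[: pop_size // 2]                # selection: keep the best half
        children = []
        for p in parents:
            child = p[:]
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]      # mutation: swap two products
            children.append(child)
        population = parents + children
    return min(population, key=simulate_campaign)
```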
The standard multiple criteria optimization starts with an assumption that the criteria are incomparable. However, there are many applications in which the criteria express ideas of allocation of resources meant to achieve some equitable distribution. This paper focuses on solving linear multiple criteria optimization problems with uniform criteria treated in an equitable way. An axiomatic definition of equitable efficiency is introduced as a refinement of Pareto-optimality. Various generation techniques are examined and the structure of the equitably efficient set is analyzed.
In time series analysis, the basic univariate model is the autoregressive moving average (ARMA) model. The estimation of ARMA models has been the subject of a vast literature over many years. If a pure autoregressive (AR) model is considered, then ordinary least squares (OLS) estimation is appropriate and is asymptotically equivalent to maximum likelihood when the errors are normally distributed. However, the introduction of moving average (MA) components complicates the estimation problem because the least squares criterion is no longer linear in the parameters. Both least squares and maximum likelihood estimation for models involving MA terms involve numerical optimisation and are relatively computationally demanding. As a result, a variety of techniques for the estimation of models with MA terms have been suggested that do not involve numerical optimisation. These techniques have generally made use (implicitly or explicitly) of moment conditions implied by the ARMA model, and therefore fall within the class of GMM estimators. This chapter has two aims. The first is to provide an introduction to some of these moment-based estimators. The second is pedagogic: to illustrate the general theory of GMM presented in Chapter 1 as applied to a relatively simple time series model.
An outline of the chapter is as follows. In Section 6.1 we discuss the estimation of pure MA models. For simplicity we focus mostly on first order MA models, and indicate how extensions to higher order models follow.
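As a concrete illustration of a moment-based estimator for a first order MA model x_t = e_t + θe_{t-1}, the sketch below exploits the implied moment condition ρ_1 = θ/(1 + θ²) and picks the invertible root. It is a minimal illustration of the general idea, not any particular estimator developed in the chapter.

```python
import numpy as np

def ma1_moment_estimate(x):
    """Method-of-moments estimate of theta in x_t = e_t + theta * e_{t-1}.

    Uses the MA(1) relation rho_1 = theta / (1 + theta^2) and selects the
    invertible root.  Returns None if |rho_1| > 0.5, where no real
    invertible solution exists.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    gamma0 = np.mean(x * x)                   # sample variance
    gamma1 = np.mean(x[1:] * x[:-1])          # sample autocovariance at lag 1
    rho1 = gamma1 / gamma0
    if rho1 == 0.0:
        return 0.0
    if abs(rho1) > 0.5:
        return None                           # moment condition has no real root
    return (1.0 - np.sqrt(1.0 - 4.0 * rho1 ** 2)) / (2.0 * rho1)

# quick check on simulated data (theta = 0.4 chosen for illustration)
rng = np.random.default_rng(0)
e = rng.standard_normal(10_000)
x = e[1:] + 0.4 * e[:-1]
print(ma1_moment_estimate(x))                 # should be close to 0.4
```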
The estimation of unknown parameters generally involves optimizing a criterion function based on the likelihood function or a set of moment restrictions. Unfortunately, for many econometric models the likelihood function and/or the relevant moment restrictions do not have a tractable analytical form in terms of the unknown parameters, thereby rendering estimation by maximum likelihood (ML) or the generalized method of moments (GMM) infeasible. This estimation problem typically arises when unobservable variables enter the model nonlinearly, leading to multiple integrals in the criterion function which cannot be evaluated by standard numerical procedures. Prominent examples of such models in financial econometrics are continuous-time models of stock prices or interest rates and discrete-time stochastic volatility models.
Until recently, estimation problems due to the lack of a tractable criterion function were often circumvented by using approximations of the model that produce criterion functions simple enough to be evaluated. However, using such approximations may lead to inconsistent estimates of the parameters of interest. An alternative solution, which has received increasing attention over the last few years, is the use of Monte Carlo simulation methods to compute an otherwise intractable criterion function. Seminal for the development of this type of estimation procedure were the contributions of McFadden [1989] and Pakes and Pollard [1989], who introduced the Method of Simulated Moments (MSM) in a cross sectional context. This approach, which was extended to time series applications by Lee and Ingram [1991] and Duffie and Singleton [1993], modifies the traditional GMM estimator by using moments computed from simulated data rather than the analytical ones.
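A stylised sketch of the MSM idea follows: simulation draws are held fixed across parameter values, and a quadratic distance between sample moments and moments of the simulated data is minimised. The toy location-scale model, the identity weighting matrix and the Nelder-Mead optimiser are illustrative assumptions; in a genuine MSM application the targeted moments would have no analytical form.

```python
import numpy as np
from scipy.optimize import minimize

def msm_estimate(data, n_sim=10_000, seed=0):
    """Toy Method of Simulated Moments sketch (illustrative model only).

    Matches the sample mean and variance of `data` to the same moments
    computed from data simulated under candidate parameters (mu, sigma).
    The simulation draws are fixed across parameter values so that the
    criterion varies smoothly with the parameters.
    """
    data = np.asarray(data, dtype=float)
    sample_moments = np.array([data.mean(), data.var()])
    u = np.random.default_rng(seed).standard_normal(n_sim)   # fixed draws

    def criterion(params):
        mu, log_sigma = params
        sim = mu + np.exp(log_sigma) * u          # simulate under candidate parameters
        sim_moments = np.array([sim.mean(), sim.var()])
        diff = sample_moments - sim_moments
        return diff @ diff                         # identity weighting matrix

    result = minimize(criterion, x0=np.zeros(2), method="Nelder-Mead")
    mu_hat, log_sigma_hat = result.x
    return mu_hat, np.exp(log_sigma_hat)
```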
The standard econometric modelling practice for quite a long time was founded on strong assumptions concerning the underlying data generating process. Based on these assumptions, estimation and hypothesis testing techniques were derived with known desirable, and in many cases optimal, properties. Frequently, these assumptions were highly unrealistic and unlikely to be true. These shortcomings were attributed to the simplification involved in any modelling process and were therefore deemed inevitable and acceptable. The crisis of econometric modelling in the seventies led to many well known new, sometimes revolutionary, developments in the way econometrics was undertaken. Unrealistically strong assumptions were no longer acceptable. Techniques and procedures able to deal with data and models within a more realistic framework were badly needed. Just at the right time, i.e., the early eighties when all this became obvious, Lars Peter Hansen's seminal paper on the asymptotic properties of the generalized method of moments (GMM) estimator was published in Econometrica. Although the basic idea of the GMM can be traced back to the work of Denis Sargan in the late fifties, Hansen's paper provided a ready-to-use, very flexible tool applicable to a large number of models, which relied on mild and plausible assumptions. The die was cast. Applications of the GMM approach have since mushroomed in the literature, boosted further, like so many things, by the increased availability of computing power.
Nowadays there are so many different theoretical and practical applications of the GMM principle that it is almost impossible to keep track of them.
Since the mid eighties, alongside the growing literature on GMM, a large number of papers on cointegration have emerged as well. This is due to the fact that cointegration models combine two features which many economic time series possess, i.e., individual random walk behavior and stationary linear combinations of multiple series.
Cointegration models are essentially linear models with reduced rank parameters. The reduced forms of the traditional simultaneous equation models also have this reduced rank property (see Hausman [1983]). The estimation techniques used in cointegration and simultaneous equation models are therefore very similar. Maximum likelihood estimators for both models use, for example, canonical correlations (see Anderson and Rubin [1949] and Johansen [1991]), and maximum likelihood reduced rank regression therefore amounts to the use of canonical correlations and vectors. This chapter shows that GMM reduced rank regression amounts to the use of two stage least squares (2SLS) estimators. The asymptotic properties of the 2SLS estimators used in simultaneous equation models are in general identical to the properties of maximum likelihood estimators (see, for example, Phillips [1983]). This chapter shows that this also holds for cointegration models. Furthermore, the GMM objective function has asymptotic properties which are identical to a likelihood ratio statistic for cointegration, the Johansen trace statistic (Johansen [1991]), and it can thus be used in a similar way. The similarities between GMM and maximum likelihood estimators in reduced rank models are therefore quite large. The GMM, however, also allows for the derivation of the asymptotic properties in the more complex reduced rank models, which is not true for the maximum likelihood estimators.
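For reference, the generic 2SLS estimator alluded to here, beta = (X'P_Z X)^{-1} X'P_Z y with P_Z = Z(Z'Z)^{-1}Z', can be sketched as follows. This is the textbook estimator, not the reduced rank GMM procedure developed in the chapter.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Generic 2SLS estimator: beta = (X'Pz X)^{-1} X'Pz y, Pz = Z(Z'Z)^{-1}Z'.

    y : (n,)   dependent variable
    X : (n, k) explanatory variables (possibly endogenous)
    Z : (n, m) instruments, with m >= k
    """
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    Pz_X = Z @ (ZtZ_inv @ (Z.T @ X))                  # first stage: fitted values of X
    beta = np.linalg.solve(Pz_X.T @ X, Pz_X.T @ y)    # second stage
    return beta
```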
Use of panel data regression methods has become increasingly popular as the availability of longitudinal data sets has grown. Panel data contain repeated time series observations (T) for a large number (N) of cross sectional units (e.g., individuals, households, or firms). An important advantage of using such data is that they allow researchers to control for unobservable heterogeneity, that is, systematic differences across cross sectional units. Regressions using aggregated time series and pure cross section data are likely to be contaminated by these effects, and statistical inferences obtained by ignoring these effects could be seriously biased. When panel data are available, error components models can be used to control for these individual differences. Such a model typically assumes that the stochastic error term has two components: a time invariant individual effect which captures the unobservable individual heterogeneity and the usual random noise term. Some explanatory variables (e.g., years of schooling in the earnings equation) are likely to be correlated with the individual effects (e.g., unobservable talent or IQ). A simple treatment of this problem is the within estimator, which is equivalent to least squares after transformation of the data to deviations from individual means.
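A minimal sketch of the within estimator, assuming the panel is stacked observation by observation with an identifier for each cross sectional unit, is:

```python
import numpy as np

def within_estimator(y, X, ids):
    """Fixed-effects (within) estimator: OLS on data demeaned by individual.

    y   : (n,)   stacked dependent variable
    X   : (n, k) stacked time-varying regressors
    ids : (n,)   cross sectional unit identifier for each observation
    """
    y_dm = np.asarray(y, dtype=float).copy()
    X_dm = np.asarray(X, dtype=float).copy()
    for i in np.unique(ids):
        mask = ids == i
        y_dm[mask] -= y_dm[mask].mean()             # deviation from individual mean
        X_dm[mask] -= X_dm[mask].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X_dm, y_dm, rcond=None)
    return beta
```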
Unfortunately, the within method has two serious defects. First, the within transformation of a model wipes out time invariant regressors as well as the individual effect, so that it is not possible to estimate the effects of time invariant regressors on the dependent variable. Second, consistency of the within estimator requires that all the regressors in a given model be strictly exogenous with respect to the random noise.
Although GMM estimators are consistent and asymptotically normally distributed under general regularity conditions, it has long been recognized that this first–order asymptotic distribution may provide a poor approximation to the finite sample distribution. In particular, GMM estimators may be badly biased, and asymptotic tests based on these estimators may have true sizes substantially different from presumed nominal sizes.
This chapter reviews these finite sample properties, both from a theoretical perspective and from the simulation evidence of Monte Carlo studies. The theoretical literature on the finite sample behavior of instrumental variables estimators and tests is seen to provide valuable insights into the finite sample behavior of GMM estimators and tests.
The chapter then considers Monte Carlo simulation evidence of the finite sample performance of GMM techniques. Such studies have often focussed on applications of GMM to estimating particular models in economics and finance, e.g., business cycle models, inventory models, asset pricing models, and stochastic volatility models. This survey reviews and summarizes the lessons from this simulation evidence.
The final section examines how this knowledge of the finite sample behavior might be used to conduct improved inference. For example, bias corrected estimators may be obtained. Also, properly implemented bootstrap techniques can deliver modified critical values or improved test statistics with rather better finite sample behavior. Alternatively, analytical techniques might be used to obtain corrected test statistics.
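As a minimal illustration of the bootstrap idea for critical values, the sketch below recentres a simple t-statistic for a mean at the sample estimate and takes a quantile of its bootstrap distribution. It stands in for the general idea only and is not one of the GMM-specific procedures surveyed in the chapter.

```python
import numpy as np

def bootstrap_critical_value(x, n_boot=999, level=0.05, seed=0):
    """Bootstrap critical value for a two-sided t-test of the mean.

    Resamples the data, recentres each bootstrap t-statistic at the
    sample mean, and returns the (1 - level) quantile of |t*| to use in
    place of the asymptotic normal critical value.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    x_mean = x.mean()
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)
        se = xb.std(ddof=1) / np.sqrt(n)
        t_star[b] = (xb.mean() - x_mean) / se       # centred at the sample mean
    return np.quantile(np.abs(t_star), 1.0 - level)
```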
In recent years the GMM approach has become increasingly popular for the analysis of panel data (e.g., Avery, Hansen and Hotz [1983], Arellano and Bond [1991], Keane [1989], Lechner and Breitung [1996]). Combining popular nonlinear models used in microeconometric applications with typical panel data features like an error component structure yields complex models which are too complicated or even intractable to be estimated by maximum likelihood. In such cases the GMM approach is an attractive alternative.
A well known example is the probit model, which is one of the workhorses whenever models with binary dependent variables are analyzed. Although the nonrobustness of the probit estimates to the model's tight statistical assumptions is widely acknowledged, the ease of computation of the maximum likelihood estimator (MLE), combined with the availability of specification tests, makes it an attractive choice for many empirical studies based on cross sectional data. The panel data version of the probit model allows for serial correlation of the errors in the latent equations. The problem with these types of specifications is, however, that the MLE becomes much more complicated than in the case of uncorrelated errors.
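For reference, the cross-sectional probit likelihood whose ease of maximisation is mentioned above can be sketched as follows; the panel version with serially correlated errors is precisely where this simplicity is lost.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_mle(y, X):
    """Cross-sectional probit MLE.

    Maximises sum_i [ y_i * log Phi(x_i'beta) + (1 - y_i) * log(1 - Phi(x_i'beta)) ].
    """
    def neg_loglik(beta):
        p = norm.cdf(X @ beta)
        p = np.clip(p, 1e-10, 1 - 1e-10)            # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    beta0 = np.zeros(X.shape[1])
    return minimize(neg_loglik, beta0, method="BFGS").x
```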
Two ways to deal with this sort of general problem have emerged in the literature. One is simulated maximum likelihood estimation (SMLE). The idea of this technique is to find an estimator that only approximates the MLE but retains the asymptotic efficiency property of the exact MLE. SMLE uses stochastic simulation procedures to obtain approximate choice probabilities (see e.g., Börsch-Supan and Hajivassiliou [1993], or Hajivassiliou, McFadden and Ruud [1996]).
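The simulation idea can be illustrated with a crude frequency simulator for a single probit choice probability; actual SMLE applications rely on smooth simulators such as GHK so that the simulated likelihood is differentiable in the parameters.

```python
import numpy as np

def simulated_choice_probability(x, beta, n_draws=1000, seed=0):
    """Crude frequency simulator for a probit choice probability.

    Approximates P(y = 1 | x) = P(x'beta + eps > 0) by the fraction of
    simulated eps draws for which the latent index is positive.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_draws)
    return np.mean(x @ beta + eps > 0)
```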
Simultaneous equations models involving limited dependent variables can have nonunique reduced forms, a problem called logical inconsistency in the econometrics literature. In response to that problem, such models can be compelled to be recursive (Maddala [1983], Amemiya [1985]) or recast in terms of the latent variables (Mallar [1977]). In labor economics and elsewhere, this approach is often contrary to structural modelling; theory involving education, childbearing, and work, for example, naturally leads to models with simultaneously related limited dependent variables. Restricting these models to be recursive is inconsistent with the theory.
It is widely believed among economists that logically inconsistent models cannot be data generating processes (see Amemiya [1974] and Maddala [1983]). However, Jovanovic [1989] showed that the structural form of a model with nonunique reduced forms can be identified. That raises the possibility that these models can produce outcomes which are random variables even if the process is logically inconsistent.
An alternative interpretation of these models is that they can generate more than one equilibrium for the endogenous variables for some values of the exogenous variables and disturbances. Viewed this way, the problem can be solved by using a selection rule (Dagsvik and Jovanovic [1991], Goldfeld and Quandt [1968], Hamilton and Whiteman [1985]) or collapsing the possibly nonunique equilibria into one outcome for purposes of estimation (Bresnahan and Reiss [1991]).
This chapter combines the use of a selection rule to choose among alternative equilibria with the insights of Jovanovic [1989] and standard GMM estimation theory to suggest an alternative to the method of Bresnahan and Reiss [1991] to identify and estimate simultaneous equations models of limited dependent variables.