While traditionally reserved for non-stationary and cointegrated data, De Boef and Keele suggest applying a General Error Correction Model (GECM) to stationary data with or without cointegration. The GECM has since become extremely popular in political science, but practitioners have confused essential points. For one, the model is treated as perfectly flexible when, in fact, the opposite is true. Time series of various orders of integration (stationary, non-stationary, explosive, near- and fractionally integrated) should not be analyzed together, yet researchers consistently make this mistake. That is, without equation balance the model is misspecified, and hypothesis tests and long-run multipliers are unreliable. Another problem is that the sampling distribution of the error correction term shifts dramatically depending on the order of integration, the sample size, the number of covariates, and the boundedness of Y_t. This means that practitioners are likely to overstate evidence of error correction, especially when using a traditional t-test. We evaluate common GECM practices with six types of data, 746 simulations, and five paper replications.
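To make the objects discussed above concrete, the following is a minimal sketch of the single-equation GECM on simulated stationary data. All quantities here are illustrative assumptions, not the paper's actual simulation design: the data-generating process, sample size, and parameter values are invented, and the long-run multiplier is recovered as -b1/a1 from an OLS fit of the standard form dY_t = a0 + a1*Y_{t-1} + b0*dX_t + b1*X_{t-1} + e_t.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate stationary series with a known equilibrium relationship:
# Y error-corrects toward 2*X at rate 0.25, so the true LRM is 2.
# (Hypothetical DGP for illustration only.)
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = y[t - 1] - 0.25 * (y[t - 1] - 2.0 * x[t - 1]) + rng.normal()

# GECM regression: dY_t = a0 + a1*Y_{t-1} + b0*dX_t + b1*X_{t-1} + e_t
dy = np.diff(y)
dx = np.diff(x)
design = np.column_stack([np.ones(T - 1), y[:-1], dx, x[:-1]])
coef, *_ = np.linalg.lstsq(design, dy, rcond=None)
a0, a1, b0, b1 = coef

# a1 is the error correction rate; the long-run multiplier is -b1/a1.
lrm = -b1 / a1
print(f"error correction rate: {a1:.2f}, long-run multiplier: {lrm:.2f}")
```

With these (stationary, well-balanced) inputs, the estimates land near the true values of -0.25 and 2. The abstract's warning applies exactly here: if the simulated series were instead of mixed integration orders, the same regression would run without complaint but its t-test on a1 and its implied long-run multiplier would be unreliable.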