A forecasting model based on economic pessimism, offered with due recognition of the difficulty of forecasting in such atypical times, predicts a narrow loss for the incumbent presidential party and a loss of 12 seats in the House of Representatives. Even with the unusual nature of politics in the United States over the past decade, this model does a good job of predicting election outcomes. The more pessimistic people are, the worse the incumbent party does in presidential and House elections. Moreover, the power of incumbency shows strongly.
At its most basic level, this forecast of the 2012 presidential election performed successfully: the model forecast the reelection of President Obama with 53.8% of the two-party vote. He ended up with 51.8% (as of November 28, 2012), yielding an error of 2.0 percentage points. Prior to 2012, the average out-of-sample forecasting error was 3.34 points, so here the model does better than the average of the previous years by almost a point. Running the model with the added data point does not change matters appreciably: the economic item increases slightly, the logged time-in-the-White-House item weakens slightly, and the overall fit of the model is just a bit lower.
Do our models of political behavior bear any resemblance to reality? Forecasting elections is one opportunity to assess whether our models of voting behavior are accurate. Over the past few decades, political scientists have been willing to put themselves out there to forecast elections. Explaining a past event allows us the ability to retrofit our models before we make them available to the broader community. In short, forecasting elections provides us the opportunity to develop humility. The forecasting community has done a reasonable job over the past few elections. Aside from 2000, forecasters have been largely accurate. Even in 2000, the forecasting community can claim a modest victory. The community was right about the popular vote winner; it just happened that the popular vote winner lost the election that counts: the Electoral College.
Different methodological approaches sometimes lead to different substantive conclusions. Nowhere is this more evident than in studies relating assessments of presidential skill to legislative success. Scholars of the historical, traditionalist school of presidency research argue that presidents who are perceived to be adept at getting what they want are more likely to achieve their legislative goals than are those perceived as less adept. Neustadt identifies perceived skill, or what he calls 'professional reputation', as one of the three resources that are the essence of presidential power. Yet students of the presidency who employ quantitative methods have found little or no systematic relationship between variations in skill evaluations and variations in success. George Edwards reports that similarly situated Congressmen are not especially more likely to support highly esteemed presidents than lowly esteemed presidents. Fleisher and Bond similarly find that once contextual variables have been controlled for, there is no pattern suggesting that presidents thought to be highly skilled do better with Congress.
The October 2008 issue of PS published a symposium of presidential and congressional forecasts made in the summer leading up to the election. This article is an assessment of the accuracy of their models.
At its most basic level, the Economic Expectations and Time-for-a-Change Model performed well in that it successfully forecast a Barack Obama victory. It was in estimating the two-party vote that the model underperformed. The point estimate was off by just under five percentage points. While not terrible, the model did not perform as well as it did in earlier years.
This article is about a simple two-variable equation forecasting presidential election outcomes and a three-variable equation forecasting seat change in House elections. Over the past two decades a cottage industry of political forecasting has developed (Lewis-Beck and Rice 1992; Campbell and Garand 2000). At the 1994 meeting of the Southern Political Science Association, several participants offered their forecasts of the upcoming midterm House elections. Unfortunately, not one of the forecasters was within 20 seats of the actual outcome. If, however, these forecasts had been pooled, as Gaddie (1997) points out, then they would have come remarkably close to the actual seat change that occurred. Moving forward, at the 1996 APSA Annual Meeting the collection of forecasters did a much better job with that year's presidential election. The forecasters also got the overall popular vote outcome correct at the 2000 APSA Annual Meeting for that year's presidential election. We all forecast a victory for Al Gore, with James Campbell coming the closest to the actual total (50.2%) at 52.8%. At the panel at the 2004 APSA Annual Meeting almost every forecaster predicted the actual outcome correctly. Forecasting elections holds us accountable: we cannot go back and change our forecast for an election after it has occurred. Moreover, if we stick with one forecast, it is easy to judge the overall accuracy of our equations.
There are two components to my model of presidential election forecasting. First, given the enormous amount of work on voting behavior that finds prospective assessments of the economy to be strongly related to vote choice (e.g., Abramowitz 1985; Kuklinski and West 1981; Lewis-Beck 1988; Lockerbie 1992), I make use of a prospective economic item from the Survey of Consumer Attitudes and Behavior that asks if the next year will be better, worse, or the same for the respondent. I take the average of the negative responses to this question from the first quarter of the election year as my economic measure. These data are available in late April.
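The measure and the forecasting equation described above can be sketched in a few lines of code. This is purely illustrative: the coefficients, the monthly survey figures, and the exact functional form (including the logged time-in-the-White-House term mentioned elsewhere in the symposium) are placeholder assumptions, not the published estimates.

```python
import math

def pessimism_q1(monthly_pct_worse):
    """Average the share of 'next year will be worse' responses over
    the first quarter (Jan-Mar) of the election year, per the text."""
    q1 = monthly_pct_worse[:3]
    return sum(q1) / len(q1)

def forecast_vote(pessimism, terms, b0=55.0, b1=-0.3, b2=-4.0):
    """Hypothetical two-variable equation for the incumbent party's
    two-party vote share: an intercept, economic pessimism, and the
    log of consecutive terms the party has held the White House.
    All coefficients here are invented for illustration."""
    return b0 + b1 * pessimism + b2 * math.log(terms)

# Hypothetical Jan/Feb/Mar shares of "worse" responses (percent).
pess = pessimism_q1([20.0, 22.0, 24.0])
vote = forecast_vote(pess, terms=1)
```

With these made-up inputs, `pess` is 22.0 and the predicted incumbent-party share is 48.4%; the sign conventions match the text (more pessimism, worse incumbent performance).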
Forecasting provides the opportunity to put oneself to the test. Are our models of voting behavior accurate? It is easy to retrofit an explanation for what has happened in the past. Taking a chance on a forecast that can go wrong does not afford us that luxury. Forecasting can also teach a lesson in humility. Over the last decade, political scientists have been willing to gamble on their models. We have had some success. Everyone on the forecasting panel at the 1996 APSA Annual Meeting correctly forecast a Clinton victory. The forecasting of the 2000 presidential election was clearly a lesson in humility (at least for this author). None of the authors of this symposium forecast a Bush victory. Moreover, many forecast a rather substantial victory for Al Gore.