
Executive Summary

Introduction

In this paper, we report on the first-ever test of the accuracy of public figures who make political predictions. We sampled the predictions of 26 individuals who wrote columns in major newspapers and/or appeared on the three major Sunday television news shows (Face the Nation, Meet the Press, and This Week) over a 16-month period from September 2007 to December 2008. Collectively, we called these pundits and politicians “prognosticators.” We evaluated each of the 472 predictions we recorded for accuracy. Based on an analysis of these predictions, we answer three questions:

  1. Which prognosticators are most accurate? We found wide disparities in the predictive accuracy of these individuals, and we divided them into “the good, the bad, and the ugly.”
     
  2. Which characteristics are associated with predictive accuracy? We examined the effects of age, education, ideology, and other factors on accuracy.
     
  3. What is the purpose of media pundits? We discuss whether the ordinary citizen should look to pundits for deeper analysis of events, or whether pundits are simply a more enjoyable way to learn about the events of the day. We also consider alternative viewpoints, including the notion that pundits are useful as representatives of opposing points of view in the country, and the idea that they are simply entertainers.

Methodology

In order to test the accuracy of prognosticators, we needed a data set of their predictions. We chose to evaluate pundits during the 16-month period from September 2007 through December 2008. This period was selected because we believed many predictions would be made in connection with the 2008 elections. Additionally, choosing a period that ended a few years before our analysis gave each prediction ample time to resolve.

To obtain our final prognosticator sample, we first generated a pool of 22 print media columnists and 36 TV prognosticators, based on who was most widely syndicated and who appeared more than five times on the network Sunday morning talk shows (Meet the Press, This Week, and Face the Nation) within our evaluation period. From this pool of 58 we randomly selected 25 prognosticators. We later added George Will as the 26th prognosticator, due to his enormous presence on the media scene.

We randomly selected a sample of columns and transcripts for each prognosticator, which we scoured for predictions. To identify predictions, we compiled a “dictionary” of predictive language. We rated each predictive word on a scale from 1 (will not happen) to 5 (will absolutely happen), with 2 through 4 ranking words in between.
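
As a rough illustration of this coding scheme, the Python sketch below assigns a certainty score to a sentence from a small word list. The phrases and scores shown are hypothetical stand-ins; the dictionary we actually compiled was far larger.

    # Hypothetical fragment of a predictive-language dictionary, mapping
    # phrases to certainty scores from 1 (will not happen) to 5 (will
    # absolutely happen). Phrases and scores here are illustrative only.
    PREDICTIVE_WORDS = {
        "will never": 1,
        "unlikely": 2,
        "could": 3,
        "probably": 4,
        "certainly": 5,
    }

    def score_prediction(sentence: str):
        """Return the certainty score of the first predictive phrase found, else None."""
        text = sentence.lower()
        for phrase, score in PREDICTIVE_WORDS.items():
            if phrase in text:
                return score
        return None

    print(score_prediction("The governor will certainly seek re-election."))  # -> 5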

For each prediction we also recorded demographic information about the individual who made it, the topic of the prediction, whether it was made on TV or in a newspaper, and a few related variables. Finally, we evaluated each prediction to determine whether it came true.


Results

We discovered that a few factors affected a prediction's accuracy. The first is whether the prediction was conditional: conditional predictions were less likely to come true. The second was partisanship: liberals were more likely than conservatives to predict correctly. The final significant factor was holding a law degree: lawyers predicted incorrectly more often (R-squared of .157). Partisanship had an impact on accuracy even when predictions about the Presidential, Vice Presidential, House, and Senate elections were removed.

A number of factors affected whether a prediction was extreme; that is, we measured whether certain factors led pundits to say that an event absolutely will or will not happen. Again, conditional predictions were significant: being conditional made a prediction more extreme. Predictions about the GOP primaries and about the Vice Presidency were less extreme, while predictions about the partisan makeup of the House were more likely to be extreme. As prognosticators aged, they became less extreme (R-squared of .145).
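
To make the form of these regressions concrete, the Python sketch below fits simplified linear probability models on synthetic stand-in data. The variable names, simulated effect sizes, and omitted controls are assumptions for illustration; they are not our actual data set or full specifications.

    # Sketch of the two regressions on synthetic stand-in data. Variable
    # names and effect sizes are illustrative assumptions only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 472  # number of predictions in the sample
    df = pd.DataFrame({
        "conditional": rng.integers(0, 2, n),    # 1 if the prediction was conditional
        "liberal":     rng.integers(0, 2, n),    # 1 if the prognosticator is liberal
        "law_degree":  rng.integers(0, 2, n),    # 1 if he or she holds a law degree
        "age":         rng.integers(30, 80, n),  # prognosticator's age
    })
    # Outcomes simulated so the signs match the findings reported above.
    df["correct"] = rng.binomial(
        1, 0.5 - 0.15 * df.conditional + 0.15 * df.liberal - 0.10 * df.law_degree)
    df["extreme"] = rng.binomial(1, 0.8 + 0.10 * df.conditional - 0.008 * df.age)

    # Accuracy regression (an R-squared of .157 is reported above)
    print(smf.ols("correct ~ conditional + liberal + law_degree", data=df).fit().summary())
    # Extremity regression (an R-squared of .145 is reported above)
    print(smf.ols("extreme ~ conditional + age", data=df).fit().summary())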


Implications

Our regressions and analysis of the data carry a number of implications. First, six of the analyzed prognosticators are better than a coin flip, with statistical significance; four are significantly worse, and the remaining 16 are statistically indistinguishable from chance. A larger sample could provide better evidence on whether prognosticators as a whole are better than a coin flip. We understand that beating a coin flip is not a high bar, but it is a serious indictment of prognosticators if they are, on average, no better than a flipped coin.
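
One way to check whether a single prognosticator is significantly better than a coin flip is a one-sided binomial test against chance, as in the brief Python sketch below; the counts are hypothetical.

    # Hypothetical example: test whether 15 correct calls out of 20 scored
    # predictions beats a fair coin (p = 0.5), one-sided.
    from scipy.stats import binomtest

    result = binomtest(15, 20, p=0.5, alternative="greater")
    print(result.pvalue)  # ~.021: significantly better than chance at the .05 level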

According to our regression analysis, liberals are better predictors than conservatives, even when the Presidential and Congressional election questions are excluded. Whether this holds from one election season to the next needs further evaluation; liberals may have benefited implicitly from Obama winning the 2008 election. Tentatively, however, we can assert that liberals are better predictors than conservatives. Additionally, individuals with law degrees were less accurate than those without them.

A final important implication is that no type of prediction tended to be more or less accurate than the others. For example, economic predictions were no more accurate than healthcare predictions. This suggests that prognosticators on the whole have no unique expertise in any area, even on political predictions such as the Presidential or party primary elections.
