
538 Is All Wrong About My Turnout Projections

Conventional outlets don't get to look at voter files. I do.


Last month, my company, Clarity Campaign Labs, collaborated with Sasha Issenberg and The New Republic to produce an in-depth analysis of the 2014 electoral landscape, with a particular eye towards the Senate map.

Not surprisingly, the positive feedback we’ve received has been accompanied by some criticism. Harry Enten, a blogger at 538 whose work I deeply admire and enjoy, published one such criticism a week ago, followed by a related piece posted today. In Enten’s original post, he questioned the accuracy of the Senate race rankings that appeared as part of the New Republic story. A large part of his criticism stemmed from his skepticism about the survey we conducted for the piece.

To take a step back: this survey, and the analysis of its results that appeared in the story, was presented very differently from the sort of data most journalists have access to, or are accustomed to seeing. For that reason, I can understand the confusion Enten displayed in his initial criticism.

Our survey, a very large national sample (3,879 respondents) of registered voters, matched every single respondent back to his or her voter file. As has been widely reported, these voter files are rich with individual-level information, from the history of which elections a voter has cast a ballot in, to predictive models of each individual’s partisanship, ideology, and positions on various key issues. As far as I am aware, our story marked the first time that a survey making use of this wealth of data was released for public consumption.
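For readers curious about the mechanics, here is a minimal sketch of what that matching step might look like. The field names (voter_id, voted_2010, partisanship_score) are hypothetical stand-ins for illustration, not Clarity’s actual schema.

    import pandas as pd

    # Hypothetical survey responses, keyed to a voter file ID.
    survey = pd.DataFrame({
        "voter_id": [101, 102, 103],
        "generic_ballot": ["R", "D", "D"],
    })

    # Hypothetical voter file extract: individual vote history plus a
    # modeled partisanship score (0-1, higher meaning more Democratic).
    voter_file = pd.DataFrame({
        "voter_id": [101, 102, 103],
        "voted_2010": [True, False, True],
        "voted_2012": [True, True, True],
        "partisanship_score": [0.31, 0.74, 0.62],
    })

    # Match every respondent back to his or her voter file record.
    matched = survey.merge(voter_file, on="voter_id", how="left")
    print(matched)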

In his May 1st piece, Enten criticized the survey, offering as evidence the large Republican lead our generic congressional ballot test showed among “midterm voters.” That’s not precisely the segmentation our poll used. Issenberg’s New Republic piece relied on a depiction of two broad groups of voters: “reflex” and “unreliable” voters. In our survey, we tagged any respondent who voted in both 2010 and 2012 as a “reflex” voter, and any respondent who voted in 2012 but not in 2010 as an “unreliable” voter (a simple sketch of this tagging appears below). Logic dictates that some percentage of reflex voters will fail to cast a ballot in 2014, just as some percentage of unreliable voters will show up at the polls; campaign activities do have some impact, after all. The distinction between these two broad universes is, however, quite illustrative of the dynamic at play between presidential and off-year elections.
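The tagging rule itself is trivial to express in code. Here is a minimal, self-contained sketch, again using hypothetical vote-history flags rather than actual voter file fields:

    import pandas as pd

    # Hypothetical vote-history flags from matched voter file records.
    voters = pd.DataFrame({
        "voter_id": [101, 102, 103, 104],
        "voted_2010": [True, False, False, True],
        "voted_2012": [True, True, False, False],
    })

    def tag_voter(row):
        # Voted in both 2010 and 2012: a "reflex" voter.
        if row["voted_2010"] and row["voted_2012"]:
            return "reflex"
        # Voted in 2012 but not 2010: an "unreliable" voter.
        if row["voted_2012"]:
            return "unreliable"
        # Everyone else falls outside both groups.
        return "other"

    voters["segment"] = voters.apply(tag_voter, axis=1)
    print(voters[["voter_id", "segment"]])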

Enten takes on this question in his post today, titled “Midterm Election Turnout Isn’t So Different From Presidential Year Turnout.” As the title suggests, Enten posits that, to the extent there is a difference in turnout between presidential and off-year elections, the variation is not all that impactful.

Again, it’s difficult to fault Enten for this analysis. The data he has access to is severely deficient for answering the questions he sets out to address. To make his point, Enten relies entirely upon the Census Bureau’s Current Population Survey. The CPS produces data on voter turnout. Self-reported voter turnout. As the documentation for the CPS dataset notes, the turnout data it reports is subject to both sampling and non-sampling error.

This particular example demonstrates the (generally unavoidable) divide between electoral journalism and campaigns. The disparity in the data and tools available to each side can lead journalists to flawed analysis and conclusions. Issenberg’s New Republic piece stands as a rare glimpse across that divide: while journalists must focus on broad electoral cohorts, campaigns can target at the individual level, unconstrained by the kinds of groupings you would find in census data, or even in traditional poll data. His analysis is based on the notion that campaigns can change the electorate by using the same tools he relied on for his rankings.

Another potential source of significant error in Enten’s calculations is his reliance on broad demographic groups, which forces him to assume that those groups behave monolithically, so that the only real question becomes what share of overall turnout each group comprised. Again, the problem is the shortcomings of the data Enten has access to. Conventional data doesn’t allow him to see the nuanced variations in turnout at the sub-demographic level that drive these broader variations in partisan turnout.

Those of us lucky enough to have access to voter files with individual-level vote history don’t have to rely on the sort of data that Enten uses for this particular analysis. What’s more, we can put theories (like Enten’s) to the test.

The bottom line is that Enten’s theory doesn’t hold up under the scrutiny of individual vote history. For example, Enten looks at the variation in turnout among younger voters between 2010 and 2012, and then considers the partisan vote share of that demographic to gauge the partisan impact of those turnout changes. What he’s missing is an understanding of which younger voters cast a ballot in each year. By using vote history and partisan models, we can get a much better read on this dynamic. In Ohio, for example, the average modeled partisanship of registered voters under the age of 30 who cast a ballot in 2012 was 57.3%; the same statistic for 2010 voters was 50.5%. So even if the overall share of the electorate that younger voters comprised were largely unchanged from one election to the next, that topline stability would mask the sub-demographic shift that is truly impactful from a partisan vote perspective. (A sketch of this calculation appears below.)
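Here is roughly what that calculation looks like in practice: a minimal sketch with a handful of invented records standing in for a statewide voter file. The column names and scores are illustrative, and the printed averages are not the Ohio figures above.

    import pandas as pd

    # Invented records; a real statewide voter file holds millions of rows.
    # "partisanship" is a modeled score from 0 to 100, higher meaning more
    # likely to support Democrats.
    ohio = pd.DataFrame({
        "age": [24, 28, 22, 29, 26],
        "partisanship": [61.0, 55.0, 48.0, 70.0, 52.0],
        "voted_2010": [True, False, False, True, False],
        "voted_2012": [True, True, True, True, False],
    })

    under_30 = ohio[ohio["age"] < 30]

    # Average modeled partisanship of under-30 voters, by election year.
    for year in ("2010", "2012"):
        cohort = under_30[under_30[f"voted_{year}"]]
        print(year, round(cohort["partisanship"].mean(), 1))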

The last six years have seen a trend in journalism toward handicapping election outcomes with an eye to predictive metrics. One shortcoming of these prognostications has been the lack of credible data on which to base their assumptions. They must make do with what is publicly available: a mishmash of public surveys of widely varying quality, and a smattering of internal campaign surveys, generally released selectively in hopes of driving the narrative in a particular race. Issenberg’s piece had access to something much richer.