We have generally avoided presenting “horse race” results from CHIP50. In part, that is because it is not our primary mission; in part because it is a tough business that experienced political pollsters will do better at than we will; and in part because we run our survey over an extended period (a month or more), meaning our numbers represent average opinion over that period, not opinion today, which is what pollsters aim to provide. That said, our data give us an opportunity to reflect on the role of method in the measurement of public opinion.

First, we note that generally we have done a pretty good job at matching objective benchmarks with our data. We did very well matching COVID infection rates during the pandemic, as well as vaccination rates. Indeed, it is likely the CHIP50 data are currently better than the official government data with respect to both COVID infections and vaccination rates. We also match gun ownership rates pretty well. 

Horse race polls, however, are especially tricky, because: (1) there may be selection biases in who takes surveys that are quite plausibly related to voting predisposition; (2) respondents often mislead about whether they are likely to vote; and (3) estimates have to be very precise if the race is close (and, for forecasting, they involve predicting the behavior of voters who are themselves uncertain until the moment they vote). Across time and space, research suggests that surveys are quite good at predicting election outcomes (correctly calling 90+% of races); the dirty little secret of that research, however, is that most races are not close, allowing imprecise surveys to be correct. One of the major problems of 21st-century horse race polling is that Presidential elections have been so very, very close this century, requiring a precision that has never existed in the industry, nor plausibly could.

In Figure 1, we present the predictions that would have been gleaned from the CHIP50 survey conducted during the two months before the 2020 Presidential election, comparing them to actual outcomes. The nice thing about our survey is that we produce predictions for all states (though estimates for some small states are somewhat underpowered), allowing us to do more than just compare a single point (the national popular vote) to other surveys. We note that we conduct non-probability surveys, based on existing online market research panels. We apply very plain vanilla methods to the data; the point here is to produce reasonable estimates, not the very best possible estimates. As is standard, we reweight the sample so that demographics that are underrepresented (as compared to the census) are weighted more than overrepresented demographics. For 2020 we did not adjust for partisanship or prior Presidential vote. We also do not adjust for likelihood of voting, as many pollsters do, beyond excluding from the analysis the respondents who state that they will not vote.

Figure 1
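The demographic reweighting step described above is the standard post-stratification idea: each demographic cell is weighted by its census share divided by its sample share. A minimal sketch in Python, with entirely made-up cells and shares (the actual CHIP50 weighting scheme is more involved):

```python
# Sketch of plain-vanilla post-stratification reweighting.
# All cells and numbers here are hypothetical; real targets come from the census.
from collections import Counter

# Hypothetical respondents, each tagged with one demographic cell.
respondents = ["18-29"] * 100 + ["30-64"] * 150 + ["65+"] * 50

# Hypothetical census population shares for the same cells.
census_share = {"18-29": 0.20, "30-64": 0.60, "65+": 0.20}

n = len(respondents)
sample_share = {cell: count / n for cell, count in Counter(respondents).items()}

# Post-stratification weight per cell: population share / sample share.
# Cells underrepresented relative to the census get weights above 1.
weights = {cell: census_share[cell] / sample_share[cell] for cell in census_share}
```

Here the 18-29 cell is oversampled and gets a weight below 1, while the other cells are weighted up; weighted estimates then use these cell weights per respondent.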

A few observations: (1) we systematically overshot Biden’s vote share, by about 3 points; (2) except for a few crazy outliers (especially South Dakota), our predictions were strongly correlated with actual outcomes. On the first observation, we note that our systematic miss was similar to that of the 538 aggregation of polls in October 2020 (see figure 2 for a plot of our predictions vs. 538, and figure 3 for 538 predictions vs. outcomes).

Figure 2

Figure 3

One would have done far better predicting the 2020 outcomes simply by using the 2016 outcomes (see figure 4). The average (and median) miss from using 2016 outcomes would have been about 2 points; for 538 it would have been about 3 points (median: 2.4), and for CHIP50 about 4 (median: 2.8). Note that there was still incremental signal in the surveys: if you regress 2020 outcomes on 2016 outcomes plus either the CHIP50 or 538 predictions, the survey predictions are significantly related to outcomes (though 2016 outcomes are far more significant). That is, surveys were moderately predictive of shifts at the state level.

Figure 4
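The regression comparison above can be sketched as follows. Everything here is simulated placeholder data, not the actual CHIP50, 538, or state-level figures; the point is only the shape of the exercise (absolute misses of each naive predictor, then OLS of 2020 outcomes on 2016 outcomes plus a survey prediction):

```python
# Sketch: how much incremental signal a survey adds once 2016 outcomes are in
# the model. All state-level numbers below are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_states = 50
dem_2016 = rng.uniform(30.0, 70.0, n_states)              # hypothetical Dem % by state
dem_2020 = dem_2016 + rng.normal(2.0, 1.0, n_states)      # small uniform-ish shift
survey = dem_2020 + 3.0 + rng.normal(0.0, 2.0, n_states)  # biased, noisy survey

# Average and median absolute miss of each naive predictor.
miss_2016 = np.abs(dem_2016 - dem_2020)
miss_survey = np.abs(survey - dem_2020)

# OLS of 2020 outcomes on an intercept, 2016 outcomes, and the survey.
X = np.column_stack([np.ones(n_states), dem_2016, survey])
coef, *_ = np.linalg.lstsq(X, dem_2020, rcond=None)
```

In this setup one would compare `miss_2016.mean()` against `miss_survey.mean()`, and inspect the survey coefficient in `coef` for its incremental contribution.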

As Nate Cohn noted in a column about how pollsters have adjusted to having substantially underestimated Trump in two elections, many (most?) pollsters now also reweight their samples to reflect the voters in 2020 (also see a related Nate Silver column). This means that if the sample over- or under-represents 2020 Trump supporters, responses are reweighted so that the reweighted sample (or at least the subsample that reports having voted in 2020) precisely reflects the 2020 outcomes. There are some issues with that, to be discussed below, but let’s see what happens to our 2024 predictions when we shift from reweighting on demographics only (figure 5) to reweighting on demographics plus prior vote (figure 6). In both cases we compare with the 2020 vote outcomes (since we shouldn’t expect large changes from 2020, though the small changes that do occur might be quite consequential). We do this simply by reweighting the people who say they voted in 2020, so that this subset of respondents perfectly reflects the 2020 outcome in a given state. People who say they didn’t vote in 2020 are not reweighted relative to those who say they did.

Figure 5

Figure 6

There are two effects from the vote reweighting. First, our sample, even after all demographic reweighting, is slightly tilted towards 2020 Biden voters; the reweighting thus tilts the prediction away from Harris by about 2 points. Second, it greatly reduces the variation around the 45-degree line, with only two states beyond the +/- 5 point range: Alabama and Kansas. This is simply because: (1) prior vote is enormously predictive of future vote; and (2) if we happen to draw a really lousy sample with too many 2020 Biden or Trump voters in a given state, reweighting by prior vote will substantially “fix” that (e.g. Kansas).
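The prior-vote reweighting step itself is simple to sketch. In this illustration all respondents, weights, and the state outcome are made up: respondents who report a 2020 vote are rescaled so that their weighted Biden/Trump split matches the actual state outcome, while the subgroup's total weight, and the weights of self-reported non-voters, stay fixed:

```python
# Sketch of prior-vote reweighting, with entirely made-up data.
# Respondents who report a 2020 vote are rescaled so their weighted
# Biden/Trump split matches the actual 2020 outcome in the state;
# self-reported non-voters (vote == None) keep their original weights.
actual_2020 = {"biden": 0.52, "trump": 0.48}   # hypothetical two-party outcome

# (reported 2020 vote, demographic weight) pairs.
sample = [("biden", 1.0)] * 60 + [("trump", 0.9)] * 35 + [(None, 1.1)] * 20

voter_weight = sum(w for v, w in sample if v is not None)
current = {c: sum(w for v, w in sample if v == c) for c in actual_2020}

# Per-group scale factor: (target share * total voter weight) / current weight.
scale = {c: actual_2020[c] * voter_weight / current[c] for c in actual_2020}

reweighted = [(v, w if v is None else w * scale[v]) for v, w in sample]
```

After this step the weighted 2020-voter subsample reproduces the 52/48 outcome exactly, which is why a lousy draw of too many 2020 Biden or Trump voters gets "fixed."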

We also note that weighting on prior vote somewhat changes the partisan composition of the sample as compared to demographic weighting alone. Unsurprisingly, it drops the representation of Democrats from 37% to 35%; Republicans stay at 30% of the sample, while Independents bump up from 33% to 35%.

All of this suggests that reweighting by prior vote does reduce the odds of Trump significantly overshooting expectations a third time. That said, there are still many reasons why surveys might miss, most prominently with respect to who actually turns out to vote (certainly, this is a reason to expect CHIP50 to miss, since we did not model turnout at all).

There are few solutions in life, however, that don’t create other problems; and so it is with reweighting on prior vote. There are a few issues here, some of which we can provide insight into, and others not. First, people who say they voted in 2020 may be lying; that is a systematic bias in surveys, and if the people who lie have systematically different vote preferences than those who don’t, this will affect estimates. One could try to validate 2020 voting against voter records, which some pollsters are hopefully doing; but we are not (and most pollsters likely do not either). Second, even if everyone were truthful, the eligible 2024 voters who voted in 2020 will not be quite representative of those who voted in 2020, in part because turnout patterns may differ in 2024, and in part because some 2020 voters have since died. As a back-of-the-envelope calculation, we might expect 5% or so of the 2020 electorate to have passed away. Third, a new set of voters has come of age to vote.

Thus, in evaluating changes from 2020 to 2024, there are four key questions:

  1. What is the defection rate from 2020 Trump voters to Harris, and from 2020 Biden voters to Trump?
  2. How did the 2020 voters who have since died vote?
  3. How are new voters going to vote?
  4. Will turnout patterns in 2024 differ from 2020?

We consider each question in turn, except the last, which is beyond our capacity to evaluate. In table 1 we show the predicted 2024 vote shares for Harris and Trump, broken down by respondents’ self-reports of their 2020 vote.

First, the vast majority of respondents voted for Biden or Trump in 2020, and defection rates to the other party are very small (though small asymmetries here could be very consequential). The patterns of defection are, in any case, nearly identical: about 5% for each candidate, with another 5-6% wavering. Among the small number who supported another candidate or are not sure whether they voted in 2020, Harris has a small, statistically marginal edge.
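The arithmetic behind "symmetric defection rates roughly cancel" is worth making explicit. Using the 2020 national vote shares and the roughly 5% defection rates quoted above (treated here as point estimates, ignoring the wavering group):

```python
# Why symmetric defection rates roughly cancel: with equal rates, only the
# slight size difference between the two 2020 camps moves the net vote.
biden_2020, trump_2020 = 0.513, 0.468   # 2020 national vote shares
defect_to_trump = 0.05                  # share of 2020 Biden voters now for Trump
defect_to_harris = 0.05                 # share of 2020 Trump voters now for Harris

# Net movement toward Trump, as a share of the electorate.
net_to_trump = biden_2020 * defect_to_trump - trump_2020 * defect_to_harris
```

With equal 5% rates this nets out to roughly a fifth of a percentage point; a one-point asymmetry in the rates would move several times as much, which is why small asymmetries here could be very consequential.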

Second, among self-reported “new” voters (who make up 15% of the total sample, though we would guess these individuals are substantially less likely to vote than repeat voters), how do Harris and Trump compare? Here we find a marginal edge for Harris (43% to 40%).

Finally, we obviously have no survey data on deceased voters; however, if we examine the preferences of older (65+) voters in our 2020 survey, they were somewhat more supportive of Trump than the general electorate, by about 6 points. Their exit from the electorate would thus be a fairly trivial drag on Trump: a few tenths of a percentage point. Combining this and the last point, one would guess that voters entering the electorate are slightly more Democratic than those exiting it.
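The back-of-the-envelope arithmetic for the deceased-voter effect, using the rough figures from the text (both inputs are estimates, not measurements):

```python
# Effect of removing deceased 2020 voters from the electorate.
died_share = 0.05    # ~5% of the 2020 electorate assumed to have passed away
trump_lean = 0.06    # deceased voters ~6 points more Trump-leaning (margin)

# Removing this slightly Trump-leaning slice shifts the margin of the
# remaining electorate away from Trump by roughly:
margin_shift = died_share * trump_lean / (1.0 - died_share)
# i.e. on the order of three tenths of a percentage point
```

This is the "few tenths of a percentage point" figure used in the summary below.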

In any case, the answers to all three of these questions suggest that the outcome of the 2024 election will be very close to that of 2020; indeed, it is likely that the state-level popular vote percentages in 2024 will be closer to the 2020 outcomes than to any given survey and, perhaps, to any aggregation of surveys. Repeat voters will be, net, nearly identical in their voting patterns (with Harris and Trump swapping tiny numbers of defectors); new voters, best we can tell, will split roughly evenly between the two candidates; and the 5% or so of voters who have exited the electorate may be slightly more supportive of Trump, but only enough to shift the net vote by a few tenths of a percentage point or less.

And, of course, the patterns of turnover in the electorate and the defection rates may differ by state. Very tiny differences in these patterns in the swing states could determine who wins on election day; Trump could well improve his popular vote share nationally but do worse in the swing states, for example. But all of this does suggest: 2024 will be very close to the outcomes in 2020 and 2016, with a difference in effective margin too tiny for survey science to discern.
