As Democrats’ prospects for the midterms have improved — they’re now up to a 71 percent chance of keeping the Senate and a 29 percent chance of retaining the House, according to the 2022 FiveThirtyEight midterm election forecast — I’ve observed a corresponding increase in concern among liberals that the polls might overestimate Democrats’ position again, as they did in 2016 and 2020. Even among commenters who are analyzing the race from an arm’s-length distance, there sometimes seems to be a presumption that the polls will be biased toward Democrats.
The best version of this argument comes from Other Nate (Nate Cohn, of The New York Times). He pointed out in a piece on Monday that states such as Wisconsin and Ohio where Democratic Senate candidates are outperforming FiveThirtyEight’s fundamentals index — which accounts for factors like how the state has voted in other recent elections — were also prone to significant polling errors in 2020. Cohn’s analysis is worth reading in full.
Here, I’m going to present something of a rebuttal. Not necessarily to Cohn’s specific claims, but rather to the presumption I often see in discussion about polling that polling bias is predictable and necessarily favors Democrats. My contention is that while the polls could have another bad year, it’s hard to know right now whether that bias will benefit Democrats or Republicans. People’s guesses about this are often wrong. In 2014, for example, there was a lot of discussion about whether the polls would have a pro-Republican bias, as they did in 2012. But they turned out to have a pro-Democratic bias instead.
There’s one important complication to this, however. Our model actually assumes that current polling probably does overstate the case for Democrats. It’s just not necessarily for the reasons people assume.
As I mentioned, the Deluxe version of our forecast gives Democrats a 71 percent and 29 percent chance of keeping the Senate and House, respectively. But the Deluxe forecast isn’t just based on polls: It incorporates the fundamentals I mentioned earlier, along with expert ratings about these races. Furthermore, it accounts for the historical tendency of the president’s party to perform poorly at the midterms, President Biden’s mediocre (although improving) approval rating and the fact that Democrats may not perform as well in polls of likely voters as among registered voters. As the election approaches, it tends to put more weight on the polls and less on these other factors, but it never zeros them out completely. (In this respect, it differs from our presidential forecast.)
By contrast, the Lite version of our forecast, which is more or less a “polls-only” view of the race, gives Democrats an 81 percent chance of keeping the Senate and a 41 percent chance of keeping the House. It also suggests that they’ll win somewhat more seats: There are 52.4 Democratic Senate seats in an average Lite simulation as compared with 50.8 in a Deluxe simulation, or 212 Democratic House seats in an average Lite simulation versus 209 in a Deluxe simulation. Notably, this corresponds to current polls overstating Democrats’ position by the equivalent of 1.5 or 2 percentage points. Put another way, we should think of a race in which the polling average shows Democrats 2 points ahead as being tied.
That’s not quite the same thing as saying that the polls are systematically biased, though. Polls reflect a snapshot of what is happening today, and Democrats might indeed do very well if the election were held now instead of in November. In states like Ohio, for instance, they’ve enjoyed a significant advertising advantage thanks to superior fundraising, but that will probably even out to some extent by Election Day.
Meanwhile, Biden and Democrats have also been on something of a winning streak lately, between a series of policy accomplishments, inflation trending downward and the political backlash to the Supreme Court’s unpopular decision to overturn Roe v. Wade. But a worse-than-expected inflation report this week and a narrowly averted rail workers’ strike, which could have caused substantial supply chain disruptions, are reminders that uncertain real-world events won’t necessarily continue to play out in Democrats’ favor.
It’s also the case that in individual races, information besides the polls can help make a more accurate prediction, even when you have a lot of polls. For example, the partisan lean of a state still tells you something. Let’s say the polling average has the Democrat ahead by 10 points in a state where the fundamentals put the Republican up by 2. Empirically, the best forecast in a race like this uses a blend of mostly polls and some fundamentals (exactly how much weight is given to the polls depends on how many polls there are and how close it is to the election). And you might end up with a forecast that has the Democrat winning by 7 or 8 points rather than 10 points, for instance. In that sense, in races such as Wisconsin and Ohio where there is a significant divergence between polls and fundamentals, Democrats probably should have concerns.
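The blend described above can be sketched as a simple weighted average. The 80/20 weighting below is an illustrative assumption for this example, not FiveThirtyEight's actual model parameter, which varies with how many polls there are and how close it is to the election.

```python
# Hypothetical sketch of blending a polling average with a fundamentals
# estimate. Margins are in percentage points, positive = Democrat ahead.
# The 0.8 poll weight is an assumption for illustration only.

def blended_forecast(poll_margin: float, fundamentals_margin: float,
                     poll_weight: float = 0.8) -> float:
    """Weighted average of the polling and fundamentals margins."""
    return poll_weight * poll_margin + (1 - poll_weight) * fundamentals_margin

# Democrat up 10 in the polls, fundamentals say Republican up 2:
print(blended_forecast(10.0, -2.0))  # 7.6 -- closer to D+7 or 8 than D+10
```

With these assumed weights, the D+10 poll lead shrinks to roughly the D+7 or 8 forecast mentioned above; a heavier fundamentals weight would pull it further still.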
What I resist, though, is the implication that it can be presumed that the polls have a predictable, persistent, systematic bias toward Democrats. Is Rep. Tim Ryan going to underperform his current polls in Ohio’s Senate race? We’ll see, but more likely than not, the answer is yes. But is it just a thing now that polls always overrate Democrats?
I’m skeptical. Here are seven reasons why:
1. Polling bias hasn’t been predictable historically
Our historical database of polls shows that there’s not much in the way of consistent polling bias. Two cycles of a pro-Republican bias in 1998 and 2000 were followed by a Democratic bias in 2002. A fairly sharp Republican bias in 2012 reversed itself, and the polls were biased toward Democrats in both 2014 and 2016.
Historically, the correlation between the polling bias in a given cycle and the bias in the previous cycle is either essentially zero or slightly negative, depending on whether you define “previous cycle” as “two years ago” or “four years ago.”
2. Pollsters have a strong incentive to be unbiased
Pollsters get a lot of crap from people, but one nice thing about their job is that they regularly get to compare their results against reality. Sure, it’s possible for a pollster to get unlucky because of sampling error — if you survey 500 people, sometimes you’ll draw a sample showing the Republican winning even if the Democrat is really up by 5 points. For the most part, though, pollsters can and do consider changes to their methodology based on errors in past elections.
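The sampling-error point above can be made concrete with a quick normal-approximation calculation: if the Democrat truly leads 52.5 to 47.5 (a 5-point margin), a 500-person sample will show the Republican ahead a nontrivial share of the time purely by chance. This is a back-of-the-envelope sketch, not a model of any particular poll's methodology.

```python
# Normal approximation to the chance a poll sample "flips" the leader.
# true_dem_share is the Democrat's true two-party vote share.

from math import erf, sqrt

def prob_sample_flips(true_dem_share: float, n: int) -> float:
    """P(sample Democratic share falls below 0.5), normal approximation."""
    se = sqrt(true_dem_share * (1 - true_dem_share) / n)
    z = (0.5 - true_dem_share) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Democrat truly up 5 points (52.5 vs. 47.5), sample of 500 voters:
print(round(prob_sample_flips(0.525, 500), 2))  # about 0.13
```

In other words, roughly one poll in eight would show the trailing Republican ahead even with a perfectly unbiased methodology, which is why a single stray result proves little about a pollster's quality.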
And precisely because pollsters are subject to public scrutiny and there are relatively objective ways to measure their performance, they have strong financial and professional incentives to scrutinize their methods for potential sources of error and fix them if they can. It’s the same incentive that a professional golfer has to fix his swing: If he’s consistently hitting every shot to the left side of the fairway, for instance, at some point he’ll make adjustments. Maybe he’ll even overcompensate and start hitting everything to the right side instead.
3. The way polling averages weight pollsters changes
Even if pollsters don’t change their methods, the market will change the polling landscape on its own, at least to some degree. Pollsters who performed well in previous elections will get more business, and those who performed poorly will lose it.
For instance, we’ve seen relatively few traditional “gold standard” polls sponsored by major media organizations this cycle, perhaps because those polls tended to have a Democratic bias in 2020. That’s a shame, because most of these polling organizations have good long-term track records despite some recent problems. But it does mean that polling averages are more weighted toward Republican-leaning firms that have done comparatively well in recent election cycles, such as Rasmussen Reports and Trafalgar Group. This is especially true for FiveThirtyEight’s polling averages, which weight polls in part based on their historical accuracy. Groups like Rasmussen, for instance, get more say in the polling average than they did in 2020 because their rating is now higher.
4. Polls haven’t had a Democratic bias in elections without Trump on the ballot
As you can see in the table in the first point, polls did not have a systematic Democratic bias in 2018. That seems relevant, considering that was the most recent midterm.
Polls have also generally not had a Democratic bias in other elections in the Trump era when Trump himself was not on the ballot. They didn’t have one in the Alabama Senate special election in 2017, for instance, or the Georgia Senate runoffs in January 2021, or in last year’s Virginia gubernatorial race.
There have also been some races where Democrats have overperformed their polls, such as in last year’s California gubernatorial recall election and in the 2017 governor’s race in Virginia. But these errors don’t tend to get as much attention from the media as those that underestimated Republicans.
It may be that Republicans benefit from higher turnout only when Trump himself is on the ballot. A certain number of voters were willing to walk over glass to vote for Trump: Would they do the same for J.D. Vance, Mehmet Oz, Ron Johnson or Blake Masters? Evidence from non-Trump elections in the Trump era suggests maybe not. I tend not to buy the so-called “shy Trump” theory, or that voters are reluctant to state their preference for Trump. But it may nonetheless be hard to reach Trump voters, who may be more socially isolated, or who may be irregular voters who are screened out by likely voter models.
5. Polls have been unbiased or have underestimated Democrats in 2022’s elections so far
Democrats have had a lot of success in elections since the Supreme Court’s Dobbs decision — and importantly for our purposes, they’ve done as well or better than polls predicted in these races:
- In the Kansas abortion referendum, the only poll showed the “yes” side — which would have enabled the Kansas Legislature to implement additional abortion restrictions — winning by 4 percentage points, but it lost by 18 points (!) instead.
- Meanwhile, the only nonpartisan poll of the special election in Minnesota’s 1st Congressional District showed Republican Brad Finstad winning by 8 points, but he actually won by 4, although an internal poll released by his opponent did underestimate Finstad’s margin.
- That said, polls of the special election in Alaska’s at-large House seat were quite accurate in predicting both the first-round vote and that Democrat Mary Peltola would narrowly defeat former Alaska Gov. Sarah Palin if they were the final two candidates using Alaska’s ranked-choice system.
- Finally, of the five polls of the special election in New York’s 19th Congressional District, which were from a mix of Democratic and Republican firms, all showed Republican Marc Molinaro winning by margins ranging from 3 points to 14 points. But Democrat Pat Ryan won by 2 points instead.
I couldn’t find any polls for the special elections in New York’s 23rd Congressional District or Nebraska’s 1st Congressional District, also held since the Dobbs decision.
6. Pollsters have a semi-decent excuse for 2020: COVID-19
Ironically, polls conducted before large parts of the country were shut down in March 2020 in response to the COVID-19 pandemic were more accurate than those conducted immediately before Election Day in 2020. Take the FiveThirtyEight polling average on March 1, 2020. It showed Biden up by 4.1 percentage points nationally, very close to his eventual 4.5-point popular vote margin. Our polling averages also correctly showed a very close race in states such as Pennsylvania and Wisconsin.
This may be because the pandemic profoundly affected who answered the polls. Specifically, Democrats were more likely to be in jurisdictions that implemented stay-at-home orders, and liberals were otherwise more likely to voluntarily limit their social interactions. With more time on their hands at home, they may have been more likely to respond to polls. That’s less of a concern this year, with few voters treating COVID-19 as a high priority and few government restrictions in place.
7. People are too hung up on polls from 2016 and 2020
Elections have consequences, and they’re relatively infrequent events. So the second-guessing and recriminations tend to linger for a while.
But that doesn’t change the fact that people’s concerns about the polls stem mostly from a sample of exactly two elections, 2020 and 2016. You can point out that polls also had a Democratic bias in 2014. But, of course, they had a Republican bias in 2012, were largely unbiased in 2018, and have either tended to be unbiased or had a Republican bias in recent special elections.
True, in 2020 and 2016, polls were off the mark in a large number of races and states. But the whole notion of a systematic polling error is that it’s, well, systematic: It affects nearly all races, or at least the large majority of them. There just isn’t a meaningful sample size to work with here, or anything close to it.
Again, that doesn’t mean you should expect the polls to be spot-on. It may be that we’re living in a universe with larger polling errors than before because of declining response rates. And there are some decent reasons to suspect that Democrats won’t perform as well in November as they would in an election held right now. Still, I’ll stick to my usual advice: Prepare for the polls to be wrong — in either direction.