Mysteries of Public Opinion Polling

Insiders May Know … But the Rest of Us?

This November’s election for President looks really, really close! For months now, despite felony convictions, unhinged rhetoric, and polarized media coverage, the polls haven’t budged much. Joe Biden and Donald Trump are each a coin toss away from a January 2025 inauguration. I, for one, can’t imagine that the upcoming debate will move those polls, either.

What accounts for these tight polling numbers? To many of us, the choice seems like it should be clear. Why the split when respondents are asked who would do a better job addressing their concerns? We hear a lot about “double-haters” among voters – people who simply don’t like either of their choices. Will that depress turnout? 2020 saw unusually high turnout, and 2024 is a rematch. Have things really changed that much in four years?

We know about the tightness of the race and about those double-haters, and we speculate about causes and trends, largely because a handful of national public opinion polling organizations tell us what they find. They all have good non-partisan reputations and strong standing in the news media (as well as in the respective political campaigns!). Yet they share little detail with the public about their methods or sampling strategies. As an ordinary citizen, I have not been able to ascertain what might differentiate one sample from another. Is Pew Research more or less likely to contact “people like me” than Gallup, New York Times/Siena, or Marist? (Full disclosure: none has ever contacted me about any presidential race in recent memory.) Who constructs these scientifically determined random samples, anyway? And who devises the wording of the questions – especially regarding policy preferences?

I can imagine vast gulfs among responses to a question like “Who do you think would be better at handling the economy?” depending on the nuances of the wording – the definitions of “handling” or even “economy” come to mind. We know that our hyper-polarized world features many trigger words that will elicit one kind of response or another. Or the wording could produce non-response bias if it is aimed at the wrong level of understanding – or at the wrong sample audience. Do the big polling organizations (Pew, Gallup, NYT/Siena, Marist, Monmouth, Quinnipiac) ever try to defend their poll design against their competitors’? The answer seems to be NO! Do we know, for instance, whether Pew or Marist or Quinnipiac has a higher incidence of public sector workers in its samples? Or engineering degrees? Or young people with skin conditions or weight issues? There must be differences among the samples that could skew results.
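To make that skew concrete, here is a toy calculation of my own – every number in it is a hypothetical assumption, not anything measured by a real pollster. If two firms reach different shares of, say, public sector workers, and that group leans differently from everyone else, the toplines diverge even though neither firm did anything “wrong”:

```python
# Toy illustration of how sample composition alone can move a topline.
# Every number here is a hypothetical assumption, not real polling data.

def topline(group_share: float, support_in_group: float,
            support_outside: float) -> float:
    """Weighted topline support given one subgroup's share of the sample."""
    return group_share * support_in_group + (1 - group_share) * support_outside

# Suppose public sector workers back Candidate A at 60%, everyone else at 47%.
SUPPORT_IN, SUPPORT_OUT = 0.60, 0.47

# Pollster X's sample is 12% public sector; Pollster Y's is 22%.
for name, share in [("Pollster X", 0.12), ("Pollster Y", 0.22)]:
    print(f"{name}: {topline(share, SUPPORT_IN, SUPPORT_OUT):.1%} for A")
# Pollster X: 48.6% ... Pollster Y: 49.9% -- more than a point apart,
# purely from who ended up in each sample.
```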

Non-response bias should be subject to the same design scrutiny as the selection criteria for the sample: which groups are more or less likely to respond at all? But, in addition to participation bias, there are always questions about whether respondents are telling the TRUTH … and then, whether they will change their minds. All these variables introduce great complexity into the media-saturated environment we inhabit. There has also been talk lately about the division between “high trust” and “low trust” cohorts of the population. High trusters tend to defend the status quo and may tailor their responses to what they perceive as the mainstream. Low trusters might be more inclined to lie in their responses, intentionally undermining the efforts of the pollster. And if they perceive the poll as a game – nothing terribly important – they might be more likely to change their minds later. So, what to make of these big national polls, anyway?
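Here is an equally crude sketch of non-response bias alone, with response rates invented purely for illustration: even if the population splits exactly 50/50, the raw numbers go sideways when one camp simply answers the phone less often.

```python
# Sketch of pure non-response bias: the population is exactly 50/50, but
# one camp answers the phone less often. Response rates are invented.

POP_A, POP_B = 0.50, 0.50        # true population split
RESP_A, RESP_B = 0.10, 0.06     # assumed response rates for each camp

# Among completed interviews, each camp's share is proportional to
# (population share) x (response rate).
weight_a = POP_A * RESP_A
weight_b = POP_B * RESP_B
observed_a = weight_a / (weight_a + weight_b)

print(f"True support for A:     {POP_A:.0%}")
print(f"Observed support for A: {observed_a:.1%}")  # 62.5% -- a 12.5-point miss
```

Pollsters weight their completed interviews precisely to correct this kind of distortion – which is why the weighting scheme matters as much as the raw sample, and why its invisibility to the public is so frustrating.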

It’s clear that the ultimate customer of election polling is not the general public (like me) but the political campaigns themselves. Those campaigns have the most invested in every possible statistical interpretation of even marginal results (“within the margin of error”) – they would love to dig down into the bowels of the sample design and the wording of questions. But do they have the expertise for this? Or are they hopelessly dependent on outsiders who are ostensibly non-partisan – but how trustworthy? The stakes in state-level polling for House and Senate seats are just as great as in presidential campaigns. Many political campaigns therefore hire their own in-house polling teams. It’s probably fair to say that losing campaigns did not pay for the best polling! But the same trust problem remains as with the national polls.

What does it mean, really, when poll after poll indicates people don’t feel good about the economy – despite all conventional economic indicators pointing to a true boom cycle in 2024 America? Why do all national polls (not just those in border states) show immigration as voters’ next greatest concern, after the “economy”? Perhaps we’ll see a rise in concern over “democracy” as Trump’s campaign cranks into high gear – but that remains to be seen. Biden’s campaign ought to be able to influence these poll results … better than it has thus far, in my opinion.
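About that parenthetical “margin of error”: the published figure is just textbook sampling math for a simple random sample. Real polls use weighted designs, so their effective error runs somewhat larger, but the baseline calculation is this:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error on a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents, candidate near 50%:
print(f"+/- {margin_of_error(0.50, 1000):.1%}")  # about +/- 3.1 points
```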

Right now, according to the latest Marist poll (mid-June), Biden and Trump are in a dead heat – 49/49 (Fox News released its poll the next day showing Biden up two over Trump – horrors!). My guess is the June 27 debate won’t have much of an effect, and the remaining Trump criminal indictments probably won’t produce any definitive movement for the rest of the campaign. So, this is where we are. Party affiliations in Congress ought to matter more than the flawed personalities of the presidential candidates. But, despite partisanship arguably being far more critical for the course of American politics over the next four years (at least), most voters see themselves as merely “leaning” Republican or Democratic, if not truly “independent.” Hard-core partisans have traditionally been the exception rather than the rule in American politics. We’re not a multi-party parliamentary system where voters choose a party slate rather than an individual representative. So, personalities rule, both for President and down-ballot. It seems those polls ought to ask more questions about personality than about public policy issues – since few voters understand the latter, anyway.
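Back to that Marist/Fox “disagreement” for a moment. Assuming roughly a thousand respondents per poll (my assumption – check each poll’s reported sample size), the margin of error on a candidate’s lead is about double the headline figure, so a dead heat and a two-point lead are statistically the same result:

```python
import math

def lead_margin_of_error(n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error on the LEAD (A minus B) in one poll.

    For two candidates near 50% whose shares sum to ~100%, the lead's
    standard error is about twice that of a single candidate's share.
    """
    return 2 * z * math.sqrt(0.5 * 0.5 / n)

# Assuming ~1,000 respondents (my guess -- check each poll's reported n):
print(f"+/- {lead_margin_of_error(1000):.1%} on the lead")  # about +/- 6.2
# A 49/49 tie and a two-point lead sit comfortably inside that band.
```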

Does this mean that public opinion polling is on the wrong track? Or are the major national polling organizations merely trying to raise our consciousness about what should count in their polls, not what does count? They claim they are constantly improving – that the 2022 polls were generally better than those of 2016 or 2020 – and we’ll see how the 2024 polls look in the aggregate come 2025. Meanwhile, I’m sweating …
