Is it me, or do more people than ever want to know what I think?
– about my last doctor visit;
– my experience ordering dog food;
– my latest Amazon delivery (including pictures taken of the boxes on my front porch); and
– whether Doris answered all my questions when I called.
After awarding 5 stars for “ease of purchasing” something I bought (because, really, nothing is easier than purchasing in America), I’m invariably dropped into a multipart questionnaire that wants to know all kinds of other things, like whether I’d buy it again or tell friends about it: “only a couple of minutes of your time, but it will mean so much to us.”
I’ve noticed that I’m deleting more and more of these feedback requests, and that even when I want to say “Good job,” I now exit after that first screen and before the multi-question follow-ups, hoping that the sender gets my message–which is:
Yes, I had a good experience, but you’re asking more than you should be asking by thrusting 12 more questions (along with text boxes to elaborate) in my direction. Besides, your time is being paid for by a customer service department (if you’re not just “a bot”), while I’m getting nothing more than a Thank You in advance for my feedback. (We just don’t have that great of a relationship.) So if you really want to know all these things, how about 5% off on my next purchase or an expedited helpline when I need to talk to somebody at your place of business who’s actually alive?
I suspect you’re somewhere in this same back-and-forth with companies and service providers these days. And, of course, it’s not just with them.
For years now as voters, more of us are being approached by (but declining to tell) pollsters “whether we’re likely to vote,” “whom we might support,” and “why.” It’s the slammed door or the unanswered phone that has made national elections in the U.S. “a prediction nightmare” since at least 2016. Among other things, that’s because a sizable percentage of Trump-leaning voters that year was unwilling to tell outside data-gatherers what they were planning to do on election day or whether they’d be voting at all. Given the growing reluctance by these and other voters to cooperate, it’s fair to ask: Will any of the presidential election forecasts between now and November 5th be less unreliable than in recent election cycles—or will they merely usher in our next election-night (-week or months-long) nail-biter?
Maybe it’s time to ask: are today’s mostly wrong election polls playing us, even harming us with their up-today-down-tomorrow speculations, making it better for the blood pressure to ignore them altogether?
Of course, people shutting up like clams about their views or preferences doesn’t end with shopping and electing. At a time when the world seems awash in data—including excruciatingly personal information that we’ve “exchanged” with marketers for our use of social media platforms and online search engines—governments are finding it increasingly difficult to gather the statistical data needed for their most basic operations and planning. Think of information like “who lives here?” and how difficult it’s become to gather basic census data. More of us simply “never get around to getting back” or feel “it’s none of the government’s business,” while other non-responders may have concluded: “I’m being vandalized enough when it comes to my personal information and I’m just not providing you with any more of it.”
This defiance or disregard actually matters a lot because “reliable information” (like how many of us are out here and the key concerns that we have) is needed for sound decision-making in the communities where we live and work. Without it, our political leaders and civil servants are left to set policy based on their hunches, feelings or who’s been screaming the loudest, all of which may have little to do with the actual majorities they’re supposed to be representing.
More and more, we’ve been turning off the spigots of input that help any democratic society to run smoothly. The blare of voices/opinions/diatribes on social media masks the fact that too many of the rest of us are no longer “speaking up” at all with any regularity. As a result, we’ve become a new kind of silent majority and increasingly disenfranchised by our silence.
Unlike the feedback loop after I buy something, when our information flows in the public sector slow or stop altogether, their systems cannot respond in the ways that they need to—even when we’re lucky enough to have elected or appointed leaders who actually want them to.
It’s one more clog in the arteries of democracy and we don’t seem to want to unclog the pipes anytime soon. Instead, it appears that tens of millions of us would rather send “a Swollen/Senescent Middle-Finger” to the White House as a kind of resounding “No,” than to identify and entrust a future leader with the real nitty-gritty about their hopes, dreams and daily lives.
So why do I care so much about this?
As it happens, there are several reasons, including a job I once had as a good-government advocate.
Along with a “steering committee” of local stakeholders and a growing roster of “voter-members,” Philadelphians for Good Government (or PGG, for short) aimed to leverage information it gathered from polling our neighbors about their priorities and concerns and to use both “the clout of that knowledge” and its activist membership to hold Philadelphia’s elected representatives accountable to the folks who had put them in office. Those were years when the City’s elected leaders seemed particularly oblivious.
Back then, I remember wordsmithing questions with our pollster (which was also polling for ABC News at the time) so that we got the information we wanted without influencing the answers. (There’s an art to it.) I remember defining groups of City residents that we wanted to hear from and then developing outreach—like focus groups and press-driven conversations—that enabled us to muster a “random sample of them” to call and to query. The effort revealed a groundswell of interest in running the City more like a business and less like a patronage swamp, and that too many of the respondents would have lived someplace else if they could have.
PGG aimed to produce reliable data to enable better governance through the involvement of voter-members who had an ownership stake in their information. We were convinced that as our neighbors became more invested in the democratic process—by leveraging their own “preferences”—more of them would want to build a more responsive City instead of wishing to escape an unresponsive one.
Among other things, PGG played a role in electing a reform-minded mayor while unlocking the citizen engagement that made his most lasting reforms possible. On a more personal level, doing my poll-driven job made me feel more like a stakeholder in the place where I was living and raising a family at a time when I too had considered leaving.
So it was with more than a little interest that I read a “Numbers” column about polls in the Wall Street Journal a few months ago, listened to a recent interview with historian and journalist Rick Perlstein about the 2024 polls, and waded into the weeds of election polling with New York Times polling guru Nate Cohn this past week. Each of them described a breakdown in polling that I’d wanted to believe had been serving good governance for decades.
With a flair for assembling his arguments before making them, Josh Zumbrun regularly explores stories about the numbers that suffuse our lives—“where they originate, what they mean, what they omit, how they’re used and how they’re abused.” For me, it’s been a must-read, especially this one: “Data Quality Is Getting Worse When We Might Need the Numbers Most.” In it, he writes:
Our overarching problem is that so much data is based on surveys to which people no longer respond. One example is the Current Population Survey, from the Census Bureau and Bureau of Labor Statistics. The survey underpins the monthly jobs report and is very good, but its response rate has fallen to 71% this year from 90% a decade ago.
Nearly every other major survey has fared worse. The White House Office of Management and Budget once articulated a standard that survey response rates should be above 80%. Today, nearly no surveys remain above that standard.
In the relatively recent era of cellphones (ubiquitous for only about a decade or so in the U.S., and somewhat longer in places like South Korea or parts of Europe), people in general no longer answer their phones, either screening their calls or ignoring them altogether, so polling over the phone has increasingly become a dead-end. By comparison, in the 1990s when I polled at PGG, nearly everyone we called eventually answered, and Zumbrun tells us that as recently as 2000, “over 90% of national polls relied on randomly calling people on the phone.”
By contrast, today’s approach to polling seems almost jury-rigged. It tries to find “a random sample” of opinion in two stages: by repeatedly quizzing panels of willing respondents that the pollsters have assembled, and then attempting to weight their responses so the resulting data is as close to what used to be obtained by random surveys as possible.
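The mechanics of that second stage can be sketched in a few lines. This is a minimal illustration of panel weighting, with entirely hypothetical subgroup names and numbers chosen only to show how under-represented groups get scaled up—not how any actual pollster builds its models:

```python
# Minimal sketch of panel weighting (post-stratification).
# All shares and support figures below are hypothetical.

# Known population shares for two illustrative subgroups
population_share = {"college": 0.40, "non_college": 0.60}

# Shares actually present in an opt-in panel (skewed toward college grads)
panel_share = {"college": 0.65, "non_college": 0.35}

# Raw support for a candidate within each panel subgroup
panel_support = {"college": 0.55, "non_college": 0.42}

# Weight each subgroup by how under- or over-represented it is in the panel
weights = {g: population_share[g] / panel_share[g] for g in population_share}

# Unweighted estimate: average over panelists exactly as sampled
unweighted = sum(panel_share[g] * panel_support[g] for g in panel_share)

# Weighted estimate: re-scale each subgroup back to its population share
weighted = sum(panel_share[g] * weights[g] * panel_support[g] for g in panel_share)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

Even in this toy version, the weighted number moves meaningfully away from the raw panel average—and everything turns on whether the assumed population shares (and, in election polling, the assumed past voting behavior) are right.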
Of course “the margin of error” always discloses how much a random sample of a certain size might differ from statistical accuracy, but the asterisk almost never reveals the differences that result when large numbers (or whole categories) of people decline to participate at all. As a result, Zumbrun correctly says: “we’re kidding ourselves to believe that the quality of data remains the same” as in the good ole days when folks actually took random phone calls on occasion.
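The margin-of-error footnote itself comes from simple sampling arithmetic. A quick sketch, with a hypothetical 1,000-person poll, shows what the asterisk does cover—and, by omission, what it doesn’t, since nonresponse bias appears nowhere in the formula:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    Covers only sampling variance; says nothing about who refused to answer.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll: roughly plus-or-minus 3 points
print(f"{margin_of_error(1000):.3f}")
```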
In a recent podcast sponsored by The Nation, historian and journalist Rick Perlstein is even more judgmental. He says polls are “always wrong” because of the weights that pollsters assign to various sub-groups in the electorate (like suburban women or white men without a college degree). Despite every poll’s forward look, the weighting process relies almost entirely on “subjective decisions based on the past”—in particular, how pollsters believed that these groups voted in, say, 2016, 2020 or the 2022 mid-terms. But both common sense and Kierkegaard suggest to Perlstein that past performance is a poor guarantee of future results. (You’ll have to listen to the interview to hear how he enlists the Danish existentialist to support his conclusion.)
“Always wrong” is a viewpoint that Nate Cohn, the polling guru for the New York Times, comes close to sharing–although he doesn’t want to put himself out of a job. Instead, Cohn wrote last week that “over-reliance” on how individual voters say they voted in the past is distorting the polling estimates more than our most respected pollsters would like to admit. For one thing:
A surprising number of respondents don’t remember how they voted; they seem likelier to remember voting for the winner; and they sometimes report voting when voting records show they did not.
In addition, while these recollections are “being used to help address the tendency for polls to understate Mr. Trump’s strength over the last eight years,” even when memories are accurate they’re hardly predictive of how his former supporters will vote (or not vote) this time around. For example, over the past few weeks I’ve heard several self-identified 2020 Trump voters talking about how the January 6th assault on the Capitol, his felony conviction or some other outrage has made it impossible to support him again. So when even the most vaunted election polls “look back” to weight likely voting groups, it’s hardly a sure-fire way to make their predictions more reliable.
But as the perceptive Perlstein reminds us: what we think we’re getting from these “always-wrong” polls is even more problematic.
According to Perlstein, at least since 2016 (but likely as long as we’ve had to pick between the-lesser-of-two-evils for president), election polls have become a part of our “psychological apparatus,” and depending on the amount of “politics” that’s coursing through your veins, not a small part.
(He’s talking here about the past 70 or so years of election polling. I’m talking about those who have come to follow politics with increasingly religious fervor over that same time span.)
Perlstein rightly argues that even as the quality of election polling has declined, the daily/weekly/monthly election polls have increasingly become “a substitute for civic discussion”—as if breathlessly following the numbers of one candidate over the other somehow satisfies our obligations to be informed citizens. In other words, compulsive poll-watching and analyzing gives us the illusion of knowledge, that we’re civically “in the know,” when it actually provides us with no more meaning as citizens than the latest cat video on TikTok.
Even more troublesome is the illusion of participation that poll-watching provides, making it worse than useless in Perlstein’s view. Because my team’s “fluctuations in the polls” feel immediate and trigger my emotions, it’s easy to think that the newest update gives me a pass from actual engagement in the political process: learning about candidates and meeting them, knocking on doors for them, talking to my neighbors, “being the change that I want.” It could be different than this.
We could begin by seeing our election-poll fixations as the shallow and meaningless encounters that they are.
We could remind ourselves that the feedback loop of our elections is a far different animal than those annoying customer-service surveys, and worthy of much more of our time and effort.
We could use the tech tools we have at our disposal (like social networks, but increasingly AI and other digital innovations) to enable more robust democratic exchange and collaboration, as I recently discussed in “Making Technology Serve Democracy.”
When I worked at PGG several years ago, I sensed that I was building a conduit between my priorities and the office-holders who needed to take them more seriously.
I felt more like an owner of possibilities and less like a victim of circumstances when I found my voice and my feet in the community again.
There was nothing illusory about that at all.
This post was adapted from my January 21, 2024 newsletter. Newsletters are delivered to subscribers’ inboxes every Sunday morning, and sometimes I post the content from one of them here in lightly edited form. You can subscribe by leaving your email address in the column to the right.