Political pollsters sitting out the holidays in Georgia

The Senate run-offs in Georgia are attracting a lot of attention but pollsters are largely not participating:


After a disastrous November election for the polling industry, when the polls again underestimated President Donald Trump (who lost regardless) as well as GOP candidates down the ballot, pollsters are mostly sidelined in the run-up to the Jan. 5 Georgia elections, which most observers regard as toss-ups.

The public polls that drove so much of the news coverage ahead of November — and generated tremendous distrust afterward — have all but disappeared in Georgia, and they are set to stay that way: Some of the most prolific, best-regarded media and academic pollsters told POLITICO they have no plans to conduct pre-election surveys in Georgia…

Part of the reason public pollsters are staying away from Georgia is the awkward timing of the races. With the elections being held on Jan. 5, the final two weeks of the race are coinciding with the Christmas and New Year’s holidays — typically a time when pollsters refrain from calling Americans on the phone. The voters who would answer a telephone poll or participate in an internet survey over the holidays might be meaningfully different from those who wouldn’t, which would skew the results.

Most major public pollsters are choosing not to field surveys over that time period, but the four campaigns don’t have a choice in the matter. The closing stretch of the races represents their final chances to shift resources or make changes to the television and digital advertising — decisions that will be made using multiple data streams, including polling.

Trying to reach members of the public via telephone or text or web is already hard enough. Response rates have been dropping for years. New devices have new norms. Figuring out who will actually vote is not easy.

Imagine trying to get a good sample during the holidays. On one hand, more people are likely not working and at home. On the other hand, this is a time for family, getting away from the daily grind, and relaxing. How many people will want to respond to questions about politics? Add in the letdown after a national election and COVID-19 worries, and polling could be an extra challenging task in December 2020.

I know answering the door was not in vogue even before COVID-19, but I wonder how well a door-to-door polling strategy might work in Georgia. Such an approach would require more labor, but the races are confined to a single state. Given that people are likely to be at home, it could reach some of the voters that phone and web surveys miss.

Combating abysmally low response rates for political polling

One pollster describes the difficulty today in reaching potential voters:


As the years drifted by, it took more and more voters per cluster for us to get a single voter to agree to an interview. Between 1984 and 1989, when caller ID was rolled out, more voters began to ignore our calls. The advent of answering machines and then voicemail further reduced responses. Voters screen their calls more aggressively, so cooperation with pollsters has steadily declined year-by-year. Whereas once I could extract one complete interview from five voters, it can now take calls to as many as 100 voters to complete a single interview, even more in some segments of the electorate…

I offer my own experience from Florida in the 2020 election to illustrate the problem. I conducted tracking polls in the weeks leading up to the presidential election. To complete 1,510 interviews over several weeks, we had to call 136,688 voters. In hard-to-interview Florida, only 1 in 90-odd voters would speak with our interviewers. Most calls to voters went unanswered or rolled over to answering machines or voicemail, never to be interviewed despite multiple attempts.

The final wave of polling, conducted Oct. 25-27 to complete 500 interviews, was the worst for cooperation. We could finish interviews with only four-tenths of one percent from our pool of potential respondents. As a result, this supposed “random sample survey” seemingly yielded, as did most all Florida polls, lower support for President Trump than he earned on Election Day.

After the election, I noted wide variations in completion rates across different categories of voters, but nearly all were still too low for any actual randomness to be assumed or implied.

This is a basic Research Methods class issue: if you cannot collect a good sample, you are going to have a hard time reflecting reality for the population.
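
As a back-of-the-envelope check, the numbers quoted above can be turned into response rates directly (the size of the final-wave pool is implied rather than stated in the excerpt):

```python
# Back-of-the-envelope response rates from the Florida figures quoted above.
completed = 1_510        # completed interviews over several weeks
dialed = 136_688         # voters called to obtain them

rate = completed / dialed
print(f"overall response rate: {rate:.2%}")   # ~1.10%, i.e. roughly 1 in 90 voters

# Final wave: 500 interviews at "four-tenths of one percent" cooperation,
# which implies a contact pool of roughly 500 / 0.004 voters.
print(f"implied final-wave pool: ~{500 / 0.004:,.0f} voters")
```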

Here is the part I understand less: this is not a new issue. As noted above, response rates have been falling for decades. Part of it is new technology. Some of it involves new behavior, such as ignoring phone calls or distrusting political polling. The sheer amount of polling and data collection that takes place now can also lead to survey fatigue.

But it is interesting that the techniques used to collect these data are roughly the same. Of course, polling has moved from landlines to cell phones, and perhaps to texting or recruited online panels of potential voters. The technology has changed some, but the idea is similar: reach out to a broad set of people and hope a representative enough sample responds.

Perhaps it is time for new techniques. The old ones have some advantages, including the ability to reach a large number of people relatively quickly, and researchers and consultants are used to them. And I do not have the answers for what might work better. Researchers embedded in different communities who could collect data over time? Finding public spaces frequented by diverse populations and approaching people there? Working more closely with bellwether or representative places or populations to track what is going on there?

Even with these low response rates, polling can still tell us something; it is not as bad as picking randomly or flipping a coin. Yet it has not been accurate enough in recent years. If researchers want to collect valid and reliable polling data in the future, new approaches may be in order.

Font sizes, randomly ordered names, and an uncertain Iowa poll

Just ahead of yesterday’s Iowa caucuses, the Des Moines Register had to cancel its final poll due to problems with administering the survey:

Sources told several news outlets that they figured out the whole problem was due to an issue with font size. Specifically, one operator working at the call center used for the poll enlarged the font size on their computer screen of the script that included candidates’ names and it appears Buttigieg’s name was cut out from the list of options. After every call the list of candidates’ names is reordered randomly so it isn’t clear whether other candidates may have been affected as well but the organizers were not able to figure out whether it was an isolated incident. “We are unable to know how many times this might have happened, because we don’t know how long that monitor was in that setting,” a source told Politico. “Because we do not know for certain—and may not ever be able to know for certain—we don’t have confidence to release the poll.”…

In their official statements announcing the decision to nix the poll, the organizers did not mention the font issue, focusing instead on the need to maintain the integrity of the survey. “Today, a respondent raised an issue with the way the survey was administered, which could have compromised the results of the poll. It appears a candidate’s name was omitted in at least one interview in which the respondent was asked to name their preferred candidate,” Register executive editor Carol Hunter said in a statement. “While this appears to be isolated to one surveyor, we cannot confirm that with certainty. Therefore, the partners made the difficult decision to not move forward with releasing the Iowa Poll.” CNN also issued a statement saying that the decision was made as part of their “aim to uphold the highest standards of survey research.”

This provides some insight into how these polls are conducted. The process can include call centers, randomly ordered names, and systems in place so that the administrators of the poll can feel confident in the results (even as there is always a margin of error). If there is a problem in that system, the reported data may not match the opinions of those polled. Will future processes prevent individual callers from changing the font size?
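
For illustration only, here is a minimal sketch of the kind of per-call randomization described in the reporting and of how a truncated display could silently drop a name. The function and candidate list are hypothetical stand-ins, not the actual system used for the Iowa Poll:

```python
import random

# Hypothetical candidate list, for illustration only.
CANDIDATES = ["Biden", "Buttigieg", "Klobuchar", "Sanders", "Warren"]

def candidate_order_for_call(candidates):
    """Return a fresh random ordering for one interview; the reporting says
    the list of names is reordered randomly after every call."""
    order = candidates[:]        # copy so the master list is untouched
    random.shuffle(order)
    return order

# The reported failure mode: if a display setting cuts the list short, whichever
# name lands at the end of that call's ordering is silently omitted from the
# options read to the respondent.
shown_to_interviewer = candidate_order_for_call(CANDIDATES)[:4]   # one name cut off
print(shown_to_interviewer)
```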

More broadly, a move like this could provide more transparency and, ultimately, more trust regarding political polling. The industry faces a number of challenges. Would revealing this particular issue cause people to wonder how often such problems happen, or would it reassure them that pollsters are concerned about good data?

At the same time, it appears that the unreported numbers still had an influence:

Indeed, the numbers widely circulating aren’t that different from last month’s edition of the same poll, or some other recent polls. But to other people, both journalists and operatives, milling around the lobby of the Des Moines Marriott Sunday night, the impact had been obvious.

Here are what some reporters told me about how the poll affected their work:

• One reporter for a major newspaper told me they inserted a few paragraphs into a story to anticipate results predicted by the poll.

• A reporter for another major national outlet said they covered an Elizabeth Warren event in part because she looked strong in the secret poll.

• Another outlet had been trying to figure out whether Amy Klobuchar was surging; the poll, which looked similar to other recent polling, steered coverage away from that conclusion.

• “You can’t help it affecting how you’re thinking,” said another reporter.


“Pollsters defend craft amid string of high-profile misses”

Researchers and polling organizations continue to defend their efforts:

Pollsters widely acknowledge the challenges and limitations taxing their craft. The universality of cellphones, the prevalence of the Internet and a growing reluctance among voters to respond to questions are “huge issues” confronting the field, said Ashley Koning, assistant director at Rutgers University’s Eagleton Center for Public Interest Polling…

“Not every poll,” Koning added, “is a poll worth reading.”

Scott Keeter, director of survey research at the Pew Research Center, agreed. Placing too much trust in early surveys, when few voters are paying close attention and the candidate pools are at their largest, “is asking more of a poll than what it can really do.”…

Kathryn Bowman, a public opinion specialist at the American Enterprise Institute, also downplayed the importance of early primary polls, saying they have “very little predictive value at this stage of the campaign.” Still, she said, the blame is widespread, lamenting the rise of pollsters who prioritize close races to gain coverage, journalists too eager to cover those results and news consumers who flock to those types of stories.

Given the reliance on data in today’s world, particularly in political campaigns, polls are unlikely to go away. But there will likely be changes in the future that might include:

  1. More consumers of polls, both the media and potential voters, learning what exactly polls are saying and what they are not. Since the media seems to love polls and horse races, I’m not sure much will change in that realm. But we need greater numeracy among Americans to sort through all of these numbers.
  2. Continued efforts to improve methodology as it becomes harder to reach people, obtain representative samples, and predict who will vote.
  3. A consolidation of efforts by researchers and polling organizations as (a) some are knocked out by a string of bad results or high-profile wrong predictions and (b) groups try to pool their resources (money, knowledge, data) to improve their accuracy. Or, perhaps (c) polling will just become a partisan effort as more objective observers realize their efforts won’t be used correctly (see #1 above).

Growing troubles in surveying Americans

International difficulties in polling are also present in the United States with fewer responses to telephone queries:

With sample sizes often small, fluctuations in polling numbers can be caused by less than a handful of people. A new NBC News/Wall Street Journal national survey of the Republican race out this week, for instance, represents the preferences of only 230 likely GOP voters. Analysis of certain subgroups, like evangelicals, could be shaped by the response of a single voter.

Shifting demographics are also playing a role. In the U.S., non-whites, who have historically voted at a lower rate than whites, are likely to comprise a majority of the population by mid-century. As their share of the electorate grows, so might their tendency to vote. No one knows by how much, making turnout estimates hard…

To save money, more polling is done using robocalls, Internet-based surveys, and other non-standard methods. Such alternatives may prove useful but they come with real risks. Robocalls, for example, are forbidden by law from dialing mobile phones. Online polling may oversample young people or Democratic Party voters. While such methods don’t necessarily produce inaccurate results, Franklin and others note, their newness makes it harder to predict reliability…

As response rates have declined, the need to rely on risky mathematical maneuvers has increased. To compensate for under-represented groups, like younger voters, some pollsters adjust their results to better reflect the population — or their assessment of who will vote. Different firms have different models that factor in things like voter age, education, income, and historical election data to make up for all the voters they couldn’t query.

The telephone provided a new means of communication in society and also helped make national mass surveys possible once a majority of Americans had one. Yet, even with cell phone adoption increasing to over 90% in 2013 and cell phones spreading as fast as any technology (comparable to the television in the early 1950s), the era of the telephone as an instrument for survey data may be coming to an end.
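
To put the small-sample point from the excerpt above in rough perspective, here is a quick sketch of how the margin of error grows as samples shrink, using the standard 95% interval for a proportion and ignoring design effects and weighting (the 30-person subgroup is an invented example):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 500, 230, 30):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")

# n=1000 -> ~3.1%, n=500 -> ~4.4%, n=230 -> ~6.5%, n=30 -> ~17.9%.
# In a 30-person subgroup (say, evangelicals within those 230 likely voters),
# a single respondent moves a percentage by more than 3 points.
```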

Three other thoughts:

  1. Another issue at this point of the election cycle: there are so many candidates involved that it is difficult to get good data on all of them.
  2. If the costs of telephone surveys keep going up, might we see more door-to-door surveys? Given the increase in contractor/part-time work, couldn’t polling organizations gather good data from all over the country?
  3. If polls aren’t quite as accurate as they might have been in the past, does this mean election outcomes will be more exciting for the public? If so, would voter turnout increase?

SurveyMonkey made good 2014 election predictions based on experimental web polls

Here is an overview of some experimental work at SurveyMonkey in doing political polls ahead of the 2014 elections:

For this project, SurveyMonkey took a somewhat different approach. They did not draw participants from a pre-recruited panel. Instead, they solicited respondents from the millions of people that complete SurveyMonkey’s “do it yourself” surveys every day run by their customers for companies, schools and community organizations. At the very end of these customer surveys, they asked respondents if they could answer additional questions to “help us predict the 2014 elections.” That process yielded over 130,000 completed interviews across the 45 states with contested races for Senate or governor.

SurveyMonkey tabulated the results for all adult respondents in each state after weighting to match Census estimates for gender, age, education and race for adults — a relatively simple approach analogous to the way most pollsters weight random sample telephone polls. SurveyMonkey provided HuffPollster with results for each contest tabulated among all respondents as well as among subgroups of self-identified registered voters and among “likely voters” — those who said they had either already voted or were absolutely certain or very likely to vote (full results are published here).

“We sliced the data by these traditional cuts so we could easily compare them with other surveys,” explains Jon Cohen, SurveyMonkey’s vice president of survey research, “but there’s growing evidence that we shouldn’t necessarily use voters’ own assessments of whether or not they’ll vote.” In future elections, Cohen adds, they plan “to dig in and build more sophisticated models that leverage the particular attributes of the data we collect.” (In a blog post published separately on Thursday, Cohen adds more detail about how the surveys were conducted).

The results are relatively straightforward. The full SurveyMonkey samples did very well in forecasting winners, showing the ultimate victor ahead in all 36 Senate races and missing in just three contests for Governor (Connecticut, Florida and Maryland)…

The more impressive finding is the way the SurveyMonkey samples outperformed the estimates produced by HuffPost Pollster’s poll tracking model. Our models, which are essentially averages of public polls, were based on all available surveys and calibrated to correspond to results from the non-partisan polls that had performed well in previous elections. SurveyMonkey’s full samples in each state showed virtually no bias, on average. By comparison, the Pollster models overstated the Democrats’ margins against Republican candidates by an average 4 percent. And while SurveyMonkey’s margins were off in individual contests, the spread of those errors was slightly smaller than the spread of those for the Pollster averages (as indicated by the total error, the average of the absolute values of the error on the Democrat vs Republican margins).
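
The two summary measures in that last sentence, the average signed error (bias) and the average of the absolute errors, are simple to compute. A minimal sketch with invented errors on the Democratic-minus-Republican margin:

```python
# Invented signed errors (poll margin minus actual margin, Dem minus Rep,
# in percentage points) for a handful of contests -- illustrative only.
errors = [4.0, 5.5, 3.0, -1.0, 6.5, 2.0]

bias = sum(errors) / len(errors)                           # average signed error
avg_abs_error = sum(abs(e) for e in errors) / len(errors)  # "total error" in the excerpt's sense

print(f"bias: {bias:+.1f} points, average absolute error: {avg_abs_error:.1f} points")
# A set of polls can show a modest average absolute error and still be badly
# biased when nearly all of the misses fall on the same side.
```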

The general concerns with web surveys involve obtaining a representative sample, either because it is difficult to identify the particular respondents who would meet the appropriate demographics or because the survey is open to everyone. But SurveyMonkey was able to produce good predictions for this past election cycle. Was it because (a) they had large enough samples that their data better approximated the general population (they were able to reach a large number of people who use their services), or (b) their weighting was particularly good?
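
For readers unfamiliar with the weighting step described above, here is a minimal sketch of simple cell weighting toward Census-style targets. The shares and support numbers are invented, and real pollsters typically weight on several variables at once, often via raking:

```python
# Minimal post-stratification sketch: weight respondents so the sample's age mix
# matches population targets. All numbers below are invented.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.15, "35-64": 0.55, "65+": 0.30}  # young adults under-represented

weights = {g: round(population_share[g] / sample_share[g], 2) for g in population_share}
print(weights)   # {'18-34': 2.0, '35-64': 0.91, '65+': 0.67}: under-represented groups count extra

support = {"18-34": 0.60, "35-64": 0.48, "65+": 0.40}  # invented candidate support by age group
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(population_share[g] * support[g] for g in support)
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")  # 47.4% vs. 50.0%
```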

The real test will come when a major organization, particularly a media outlet, relies solely on web polls ahead of a major election. Given these positive results, perhaps we will see this in 2016. Yet I imagine there may be some kinks to work out of the system, or some organizations will only be willing to do it if they pair the web data with more traditional forms of polling.

The bias toward one party in 2014 election polls is a common problem

Nate Silver writes that 2014 election polls were generally skewed toward Democrats. However, this isn’t an unusual problem in election years:

This type of error is not unprecedented — instead it’s rather common. As I mentioned, a similar error occurred in 1994, 1998, 2002, 2006 and 2012. It’s been about as likely as not, historically. That the polls had relatively little bias in a number of recent election years — including 2004, 2008 and 2010 — may have lulled some analysts into a false sense of security about the polls.

Interestingly, this year’s polls were not especially inaccurate. Between gubernatorial and Senate races, the average poll missed the final result by an average of about 5 percentage points — well in line with the recent average. The problem is that almost all of the misses were in the same direction. That reduces the benefit of aggregating or averaging different polls together. It’s crucially important for psephologists to recognize that the error in polls is often correlated. It’s correlated both within states (literally every nonpartisan poll called the Maryland governor’s race wrong, for example) and amongst them (misses often do come in the same direction in most or all close races across the country).

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.

It’s equally important for polling analysts to recognize that this bias can just as easily run in either direction. It probably isn’t predictable ahead of time.

The key to the issue here seems to be the assumptions that pollsters make before the election: who is going to turn out? Who is most energized? How do we predict who exactly is a likely voter? What percentage of a voting district identifies as Republican, Democrat, or Independent?

One thing that Silver doesn’t address is how this affects both perceptions of and reliance on such political polls. To have a large number of these polls lean in one direction (or lean in the Republican direction in previous election cycles) suggests there is more work to do in perfecting such polls. All of this isn’t an exact science, yet the numbers seem to matter more than ever; both parties jump on the results either to trumpet their coming success or to try to get their base out to reverse the tide. I’ll be curious to see what innovations are introduced heading into 2016, when the polls matter even more for a presidential race.
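
Silver’s correlated-error point is easy to demonstrate with a toy simulation: averaging many polls shrinks each poll’s independent noise but does nothing to an error they all share. A sketch with arbitrary numbers:

```python
import random

def avg_miss_of_poll_average(n_polls=10, shared_bias_sd=3.0, poll_noise_sd=4.0, trials=10_000):
    """Average absolute error of an n-poll average when every poll in a cycle
    shares one common bias plus its own independent noise (in points)."""
    total = 0.0
    for _ in range(trials):
        shared = random.gauss(0, shared_bias_sd)   # the same miss hits every poll that cycle
        polls = [shared + random.gauss(0, poll_noise_sd) for _ in range(n_polls)]
        total += abs(sum(polls) / n_polls)
    return total / trials

print(f"correlated errors:  average miss ~{avg_miss_of_poll_average():.1f} points")
print(f"independent errors: average miss ~{avg_miss_of_poll_average(shared_bias_sd=0.0):.1f} points")
# The shared component never averages out, so the first figure stays well above
# the second no matter how many polls go into the average.
```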

2014 Democrats echo 2012 Republicans in arguing political polls are skewed

Apparently, this is a strategy common to both political parties: when the poll numbers aren’t in your favor on the national stage, argue that the numbers are flawed.

The [Democratic] party is stoking skepticism in the final stretch of the midterm campaign, providing a mirror image of conservative complaints in 2012 about “skewed” polls in the presidential race between President Obama and Republican Mitt Romney.

Democrats who do not want their party faithful to lose hope — particularly in a midterm election that will be largely decided on voter turnout — are taking aim at the pollsters, arguing that they are underestimating the party’s chances in November.

At the center of the storm, just as he was in 2012, is Nate Silver of fivethirtyeight.com…

This year, Democrats have been upset with Silver’s predictions that Republicans are likely to retake the Senate. Sen. Heidi Heitkamp (D-N.D.) mocked Silver at a fundraising luncheon in Seattle that was also addressed by Vice President Biden, according to a White House pool report on Thursday.

“Pollsters and polling have sort of elbowed their way to the table in terms of coverage,” Berkovitz said. “Pollsters have become high profile: They are showing up on cable TV all the time.”

This phenomenon, in turn, has led to greatly increased media coverage of the differences between polling analyses. In recent days, a public spat played out between Silver and the Princeton Election Consortium’s Sam Wang, which in turn elicited headlines such as The Daily Beast’s “Why is Nate Silver so afraid of Sam Wang?”

There are lots of good questions to ask about political polls, including looking at their sampling, the questions they ask, and how they make their projections. Yet, that doesn’t automatically mean that everything has been manipulated to lead to a certain outcome.

One way around this? Try to aggregate across various polls and projections. RealClearPolitics has a variety of polls in many races for the 2014 elections. Aggregation also helps get around the issue of celebrity, where people like Nate Silver build careers on being right – until they are wrong.

At the most basic level, the argument about flawed polls is probably about turning out the base to vote. If some people won’t vote because they think their vote won’t overturn the majority, then you have to find ways to convince them that their vote still matters.

Odd poll: Rahm Emanuel more negatively rated than Eisenhower traffic

One challenger to Chicago Mayor Rahm Emanuel used some dubious questions to find how the mayor ranks compared to other disliked things:

The poll, with questions tailor-made to grab headlines, was paid for by Ald. Bob Fioretti (2nd) and conducted Sept. 26-29 by Washington D.C.-based Hamilton Campaigns…

Fioretti’s pollster was apparently looking to put a new twist on the issue by testing the mayor’s unfavorable ratings against some high-profile enemies, including the Bears’ archrival Green Bay Packers.

Of the 500 likely Chicago voters surveyed, 23 percent had a “somewhat unfavorable” opinion of Emanuel and 28 percent had a “very unfavorable” view of the mayor.

That’s an overall negative rating of 51 percent, compared to 49 percent overall for morning traffic on the Eisenhower. Conservative-leaning Fox News Channel had a slightly higher unfavorable rating in Democratic-dominated Chicago while the Packers stood at 59 percent.

Odd comparisons of apples to oranges. As the article notes, it sounds like a publicity stunt – which appears to have worked, because the article then goes on to give Fioretti more space. Giving space to bad statistics is not a good thing in the long run with a public (and media) that suffers from innumeracy.

Two thoughts:

1. I could imagine where this might go if Emanuel or others commission similar polls. How about: “Chicago’s Mayor is more favorably rated than Ebola”?

2. How did the Packers only get a negative rating of 59% in Chicago? Are there that many transplanted Wisconsin residents or are Chicago residents not that adamant about their primary football rival?

Poll figures on how the Rapture would have affected the Republican presidential field

Even as the news cycle winds down on Harold Camping and his prediction about the Rapture, Public Policy Polling (PPP) digs through some data to determine how the Rapture would have affected the field of Republican presidential candidates:

First off- no one really believed the Rapture was going to happen last weekend, or at least they won’t admit it. Just 2% of voters say they thought that was coming on Saturday to 98% who say they did not. It’s really close to impossible to ask a question on a poll that only 2% of people say yes to. A national poll we did in September 2009 found that 10% of voters thought Barack Obama was the Anti-Christ, or at least said they thought so. That 2% number is remarkably low.

11% of voters though think the Rapture will occur in their lifetimes, even if it didn’t happen last weekend. 66% think it will not happen and 23% are unsure. If the true believers who think the Rapture will happen in their lifetime are correct- and they’re the ones who had strong enough faith to get taken up into heaven- then that’s going to be worth a 2-5 point boost to Obama’s reelection prospects. That’s because while only 6% of independents and 10% of Democrats think the Rapture will happen during their lifetime, 16% of Republicans do. We always talk about demographic change helping Democrats with the rise of the Hispanic vote, but if the Rapture occurs it would be an even more immediate boost to Democratic electoral prospects.

Obama’s lead over Romney is 7 points with all voters, but if you take out the ones who think the Rapture will occur in their lifetime his advantage increases to 9 points. That’s because the Rapture voters support Romney by a 49-35 margin. Against Gingrich Obama’s 14 point lead overall becomes a 17 point one if you take out the ‘Rapturers’ because they support Gingrich 50-37. And Obama’s 17 point lead over Palin becomes a 22 point spread without those voters because they support Palin 54-37.

Palin is the only person we tested on this poll who is actually popular with people who think the Rapture is going to happen. She has a 53/38 favorability with them, compared to 33/41 for Romney, 26/48 for Gingrich, and a 31/58 approval for Obama. Palin’s problem is that her favorability with everyone who doesn’t think the Rapture will happen is 27/66.

What a great way to combine two of the media’s recent fascinations. I would guess PPP put this poll together solely to take advantage of this news cycle. Should we conclude that Democrats should have wished the Rapture to actually happen to improve their political chances?

Of course, all of this data should be taken with a grain of salt, as only 2% of the voters believed the Rapture was going to happen this past weekend and 11% believe it will happen in their lifetimes. These small numbers are out of a total sample of 600 people, meaning that about 12 people thought the Rapture would happen on Saturday and about 66 thought it would happen in their lifetimes. And this is all with a margin of error of plus or minus 4 percent, suggesting all of these numbers could be really, really small and not generalizable.
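
A quick sketch of the arithmetic behind that caveat (the plus or minus 4 percent applies to the full sample of 600; estimates about the small believer subgroup carry far more uncertainty):

```python
import math

n = 600  # total PPP sample

# How many actual respondents the headline percentages represent:
for share, label in [(0.02, "thought the Rapture would come Saturday"),
                     (0.11, "think it will come in their lifetimes")]:
    print(f"{share:.0%} of {n} = ~{round(n * share)} respondents who {label}")

# The reported plus/minus 4 points is the 95% margin of error for the full sample;
# claims about the ~66 lifetime believers (e.g. their 49-35 Romney margin) rest on
# a far smaller subsample with a much larger margin of error.
print(f"full sample:       +/- {1.96 * math.sqrt(0.25 / 600):.1%}")
print(f"believer subgroup: +/- {1.96 * math.sqrt(0.25 / 66):.0%}")
```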

Do polls/surveys like these help contribute to giving all polls/surveys a bad reputation?