What do those post-debate snap polls tell us?

Ed Driscoll comments on the results of some of the post-debate snap polls:

TRUMP WINS MOST IMMEDIATE POLLS: “The newspaper collected screen shots of 19 ‘snap’ polls conducted immediately after the debate, and in 17 of them, most respondents said Trump won the debate, often by a wide margin. It isn’t just Drudge and Breitbart; Trump also got more votes than Clinton in instant polls at Time, Slate, Variety and other liberal outlets. I can’t explain it, other than to say that perhaps it tells us more about how people view Hillary Clinton than about how Donald Trump actually performed.”

Well, certainly one explanation is a repeat of the “Ron Paul Revolution” days of early 2008 – but as with Paul’s quixotic presidential bid, having a large enough group of dedicated zealots to tilt Internet polls does not necessarily translate into sufficient votes at the ballot box where it counts.

It seems safe to say that Trump’s core followers are much more passionate than Hillary’s. We’ll know soon enough whether they add up to a majority.

The larger issue with these snap polls is that they are unrepresentative: we don’t know who answered them or in what numbers. As Driscoll suggests, perhaps Donald Trump simply has more active followers who take such polls.

At the same time, if there are consistent patterns in otherwise unhelpful polls like these, perhaps they can provide insight into concerted online efforts. They may not reveal much about the electorate at large but they could help us understand the behavior of partisans. Why is it important to “win” such snap polls? Are there dedicated efforts to win them, and how are these efforts organized?

Ultimately, does this suggest that snap polls are even worse than being unrepresentative: they are regularly used by particular groups to push a message? Winning in any arena is simply too important to be left to real survey methods…

Nate Silver: “The World May Have A Polling Problem”

In looking at the disparities between polls and recent election results in the United States and UK, Nate Silver suggests the polling industry may be in some trouble:

Consider what are probably the four highest-profile elections of the past year, at least from the standpoint of the U.S. and U.K. media:

  • The final polls showed a close result in the Scottish independence referendum, with the “no” side projected to win by just 2 to 3 percentage points. In fact, “no” won by almost 11 percentage points.
  • Pre-election polls in this month’s U.K. general election implied a photo finish between the Conservatives and Labour. Instead, the Conservatives won the popular vote by more than 6 percentage points and captured an outright parliamentary majority.
  • Although polls correctly implied that Republicans were favored to win the Senate in the 2014 U.S. midterms, they nevertheless significantly underestimated the GOP’s performance. Republicans’ margins over Democrats were about 4 points better than the polls in the average Senate race.
  • Pre-election polls badly underestimated Likud’s performance in the Israeli legislative elections earlier this year, projecting the party to win about 22 seats in the Knesset when it in fact won 30. (Exit polls on election night weren’t very good either.)

At least the polls got the 2012 U.S. presidential election right? Well, sort of. They correctly predicted President Obama to be re-elected. But Obama beat the final polling averages by about 3 points nationwide. Had the error run in the other direction, Mitt Romney would have won the popular vote and perhaps the Electoral College.

Perhaps it’s just been a run of bad luck. But there are lots of reasons to worry about the state of the polling industry. Voters are becoming harder to contact, especially on landline telephones. Online polls have become commonplace, but some eschew probability sampling, historically the bedrock of polling methodology. And in the U.S., some pollsters have been caught withholding results when they differ from other surveys, “herding” toward a false consensus about a race instead of behaving independently. There may be more difficult times ahead for the polling industry.

It sounds like there are multiple areas for improvement:

1. Methodology. How can polls reach the average citizen two decades into the 21st century? How can they collect representative samples?

2. Behavior across the pollsters, the media, and political operatives. How are these polls reported? Is the media more interested in political horse races than accurate poll results? Who can be viewed as an objective polling organization? Who can be viewed as an objective source for reporting and interpreting polling figures?

3. A decision for academics as well as pollsters: how accurate should polls be (what are the upper bounds for margins of error)? Should there be penalties for work that doesn’t accurately reflect public opinion?

Gans says “public opinion polls do not always report public opinion”

Sociologist Herbert Gans suggests public opinion polls tell us something but may not really uncover public opinion:

The pollsters typically ask people whether they favor or oppose, agree or disagree, approve or disapprove of an issue, and their wording generally follows the centrist bias of the mainstream news media. They offer respondents only two sides (along with the opportunity to say “don’t know” or “unsure”), thus leaving out alternatives proposed by people with minority political views. Occasionally, one side is presented in stronger or more approving language — but by and large, poll questions maintain the balanced neutrality of the mainstream news media.

The pollsters’ reports and press releases usually begin with the question asked and then present tables with the statistical proportions of poll respondents giving each of the possible answers. However, the news media stories about the polls usually report only the results, and by leaving out the questions and the don’t knows, transform answers into opinions. When these opinions are shared by a majority, the news stories turn poll respondents into the public, thus giving birth to public opinion…

To be sure, poll respondents favor what they tell the pollsters they favor. But still, poll answers are not quite the same as their opinions. While their answers may reflect their already determined opinions, they may also express what they feel, or believe they ought to feel, at the moment. Pollsters should therefore distinguish between respondents with previously determined opinions and those with spur-of-the-moment answers to pollster questions.

However, only rarely do pollsters ask whether the respondents have thought about the question before the pollsters called, or whether they will ever do so again. In addition, polls usually do not tell us whether respondents have talked about the issue with family or friends, or whether they have expressed their answer cum opinion in other, more directly political ways.

Interesting thoughts. As far as surveys and polls go, they are only as good as the questions asked. But, I wonder if Gans’ suggestions might backfire: what if a majority of Americans don’t have intense feelings about an issue or haven’t thought about the issue before? What then should be done with the data? Polls today may suggest a majority of Americans care about an issue but the reverse might really be true: a lower percentage of Americans actually follow all of the issues. Gans seems to suggest it is the active opinions that matter more but this seems like it could lead to all sorts of legislation and other action based on a minority of public opinion. Of course, this may be how it really works now through the actions and lobbying of influential people…

It sounds like the real issue here is how much public opinion, however it is measured, should factor into the decisions of politicians.

Pollster provides concise defense of polls

The chief pollster for Fox News defends polls succinctly here. The conclusion:

Likewise, we don’t need to contact every American — more than 230 million adults — to find out what the public is thinking. Suffice it to say that with proper sampling and random selection of respondents so that every person has an equal chance of being contacted, a poll of 800-1,000 people provides an incredibly accurate representation of the country as a whole. It’s a pretty amazing process if you think about it.

Still, many people seem to have a love-hate relationship with polls. Even if they enjoy reading the polls, some people can turn into skeptics if they personally don’t feel the same as the majority. Maybe they don’t even know anyone who feels the same as the majority.  Yet assuming everyone shares your views and those of your friends and neighbors would be like the cook skimming a taste from just the top of the pot without stirring the soup first.

Basic, but a staple of many a statistics and research methods course. Unfortunately, this kind of education reaches too few people in a world where statistics are becoming more and more common.
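The sampling claim in the quote can be checked with the standard margin-of-error formula for a proportion. A minimal sketch, assuming simple random sampling (which real polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# The margin shrinks with the square root of the sample size:
for n in (100, 400, 1000, 4000):
    print(f"n={n}: ±{100 * margin_of_error(n):.1f} points")
```

The key point is that precision depends on the sample size, not the population size, which is why roughly 1,000 respondents (about a ±3-point margin) can represent a country of 230 million adults.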

Why Public Policy Polling (PPP) should not conduct “goofy polls”

Here is an explanation of why the polling firm Public Policy Polling (PPP) conducts “goofy polls”:

But over the past year, PPP has been regularly releasing goofy, sometimes pointless polls about every other month. In early January, one such survey showed that Congress was less popular than traffic jams, France and used-car salesmen. According to their food-centric surveys released this week, Americans clearly prefer Ronald McDonald over Burger King for President; Democrats are more likely to get their chicken at KFC than Chick-fil-A, and Republicans are more apt to order pancakes than waffles. “We’re obviously doing a lot of polling on the key 2014 races,” says Jensen. “That kind of polling is important. We also like to do some fun polls.”

PPP, which has a left-leaning reputation, releases fun polls in part because they’re entertaining but mostly in an attempt to set themselves apart as an approachable polling company. Questions for polls are sometimes crowd-sourced via Twitter. The outfit does informal on-site surveys about what state they should survey next. And when the results of offbeat polls come out, the tidbits have potential to go viral. “We’re not trying to be the next Gallup or trying to be the next Pew,” Jensen says. “We’re really following a completely different model where we’re known for being willing to poll on stuff other people aren’t willing to poll on.” Like whether Republicans are willing to eat sushi (a solid 64% are certainly not).

Which means polls about “Mexican food favorability” are a publicity stunt on some level. Jensen says PPP, which has about 150 clients, gets more business from silly surveys and the ethos it implies than they do cold-calling. One such client was outspoken liberal Bill Maher, who hired PPP to poll for numbers he could use on his HBO show Real Time. That survey, released during the 2012 Republican primaries, found that Republicans were more likely to vote for a gay candidate than an atheist candidate—and that conservative virgins preferred Mitt Romney, while Republicans with 20 or more sexual partners strongly favored Ron Paul.

Jensen argues that the offbeat polls do provide some useful information. One query from the food survey, for instance, asks respondents whether they consider themselves obese: about 20% of men and women said yes, well under the actual American obesity rate of 35.7%.  Information like that could give health crusaders some fodder for, say, crafting public education PSAs. Still, the vast majority of people are only going to use these polls to procrastinate at work: goodness knows it’s hard to resist a “scientific” analysis of partisans’ favorite pizza toppings (Republicans like olives twice as much!).

Here is my problem with this strategy: it is short-sighted and privileges PPP. While polling firms do need to market themselves, as there are a number of organizations that conduct national polls, this strategy can harm the whole field. When the average American sees the results of “goofy polls,” is it likely to improve their view of polling in general? I argue there is already enough suspicion in America about polls and their validity without throwing in polls that tell us little. This suspicion contributes to lower response rates across the board, a problem for all survey researchers.

In the end, the scientific nature of polling takes a hit when any firm is willing to reduce polling to marketing.

Republicans (and Democrats) need to pay attention to data rather than just spinning a story

Conor Friedersdorf suggests conservatives clearly had their own misinformed echo chambers ahead of this week’s elections:

Before rank-and-file conservatives ask, “What went wrong?”, they should ask themselves a question every bit as important: “Why were we the last to realize that things were going wrong for us?”

Barack Obama just trounced a Republican opponent for the second time. But unlike four years ago, when most conservatives saw it coming, Tuesday’s result was, for them, an unpleasant surprise. So many on the right had predicted a Mitt Romney victory, or even a blowout — Dick Morris, George Will, and Michael Barone all predicted the GOP would break 300 electoral votes. Joe Scarborough scoffed at the notion that the election was anything other than a toss-up. Peggy Noonan insisted that those predicting an Obama victory were ignoring the world around them. Even Karl Rove, supposed political genius, missed the bulls-eye. These voices drove the coverage on Fox News, talk radio, the Drudge Report, and conservative blogs.

Those audiences were misinformed.

Outside the conservative media, the narrative was completely different. Its driving force was Nate Silver, whose performance forecasting Election ’08 gave him credibility as he daily explained why his model showed that President Obama enjoyed a very good chance of being reelected. Other experts echoed his findings. Readers of The New York Times, The Atlantic, and other “mainstream media” sites besides knew the expert predictions, which have been largely borne out. The conclusions of experts are not sacrosanct. But Silver’s expertise was always a better bet than relying on ideological hacks like Morris or the anecdotal impressions of Noonan.

But I think Friedersdorf misses the most important point here in the rest of his piece: it isn’t just about Republicans veering off into ideological territory that many Americans did not want to follow them into, or wasting time on inconsequential issues that did not affect many voters. The misinformation was the result of ignoring or downplaying the data that showed President Obama had a lead in the months leading up to the election. The data predictions from “The Poll Quants” were not wrong, no matter how many conservative pundits wanted to suggest otherwise.

This could lead to bigger questions about what political parties and candidates should do if the data is not in their favor in the days and weeks leading up to an election. Change course and bring up new ideas and positions? That raises questions about political expediency and flip-flopping. Double down on core issues? This might ignore the key things voters care about or reinforce negative impressions. Ignore the data and try to spin the story? It didn’t work this time. Push even harder in the get-out-the-vote ground game? This sounds like the most reasonable option…

Three changes that come with “The Rise of Poll Quants”

Nate Silver isn’t the only one making election predictions based on poll data; there are now a number of “poll quants” who are using similar techniques.

So what exactly do these guys do? Basically, they take polls, aggregate the results, and make predictions. They each do it somewhat differently. Silver factors in state polls and national polls, along with other indicators, like monthly job numbers. Wang focuses on state polls exclusively. Linzer’s model looks at historical factors several months before the election but, as voting draws nearer, weights polls more heavily.

At the heart of all their models, though, are the state polls. That makes sense because, thanks to the Electoral College system, it’s the state outcomes that matter. It’s possible to win the national vote and still end up as the head of a cable-television channel rather than the leader of the free world. But also, as Wang explains, it’s easier for pollsters to find representative samples in a particular state. Figuring out which way Arizona or even Florida might go isn’t as tough as sizing up a country as big and diverse as the United States. “The race is so close that, at a national level, it’s easy to make a small error and be a little off,” Wang says. “So it’s easier to call states. They give us a sharper, more accurate picture.”

But the forecasters don’t just look at one state poll. While most news organizations trot out the latest, freshest poll and discuss it in isolation, these guys plug it into their models. One poll might be an outlier; a whole bunch of polls are likely to get closer to the truth. Or so the idea goes. Wang uses all the state polls, but gives more weight to those that survey likely voters, as opposed to those who are just registered to vote. Silver has his own special sauce that he doesn’t entirely divulge.

Both Wang and Linzer find it annoying that individual polls are hyped to make it seem as if the race is closer than it is, or to create the illusion that Romney and Obama are trading the lead from day to day. They’re not. According to the state polls, when taken together, the race has been fairly stable for weeks, and Obama has remained well ahead and, going into Election Day, is a strong favorite. “The best information comes from combining all the polls together,” says Linzer, who projects that Obama will get 326 electoral votes, well over the 270 required to win. “I want to give readers the right information, even if it’s more boring.”

While it may not seem likely, poll aggregation is a threat to the supremacy of the punditocracy. In the past week, you could sense that some high-profile media types were being made slightly uncomfortable by the bespectacled quants, with their confusing mathematical models and zippy computer programs. The New York Times columnist David Brooks said pollsters who offered projections were citizens of “sillyland.”
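The aggregation logic described in the article can be sketched in a few lines. The weighting scheme below (sample-size weighting plus a likely-voter bonus, echoing Wang’s preference for likely-voter surveys) is an illustrative assumption, not any forecaster’s actual formula:

```python
def aggregate_polls(polls):
    """Weighted average of poll margins (candidate A minus candidate B,
    in percentage points). Weights are illustrative: larger samples
    count more (with diminishing returns), and likely-voter polls
    get a modest bonus over registered-voter polls."""
    weighted_sum = 0.0
    total_weight = 0.0
    for poll in polls:
        weight = poll["sample_size"] ** 0.5
        if poll["likely_voters"]:
            weight *= 1.5
        weighted_sum += weight * poll["margin"]
        total_weight += weight
    return weighted_sum / total_weight

# Three hypothetical state polls: one outlier shows the race reversed,
# but the aggregate stays stable, which is the forecasters' point.
state_polls = [
    {"margin": 2.0, "sample_size": 900, "likely_voters": True},
    {"margin": -1.0, "sample_size": 400, "likely_voters": False},
    {"margin": 3.0, "sample_size": 1600, "likely_voters": True},
]
print(round(aggregate_polls(state_polls), 2))
```

The design choice is the one Wang describes: no single poll decides the estimate, so an outlier moves the aggregate only a little.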

Three things strike me from reading these “poll quants” leading up to the election:

1. This is what is possible when data is widely available: these pundits use different methods for their models but it wouldn’t be possible without accessible data, consistent and regular polling (at the state and national level), and relatively easy to use statistical programs. In other words, could this scenario have taken place even 20 years ago?

2. It will be fascinating to watch how the media deals with these predictive models. Can they incorporate these predictions into their typical entertainment presentation? Will we have a new kind of pundit in the next few years? The article also noted the need for these quantitative pundits to have personality and style so that their results are not too dry for the larger public. Could we end up in a world where CNN has the exclusive rights to Silver’s model, Fox News has rights to another model, and so on?

3. All of this conversation about statistics, predictions, and modeling has the potential to really show where the American public and elites stand in terms of statistical knowledge. Can people understand the basics of these models? Do they simply blindly trust the models because they are “scientific proof” or do they automatically reject them because all numbers can be manipulated? Do some pundits know just enough to be dangerous and ask endless numbers of questions about the assumptions of different models? There is a lot of potential here to push quantitative literacy as a key part of living in the 21st century world. And it is only going to get more statistical as more organizations collect more data and new research and prediction opportunities arise.

Sociologist defends statistical predictions for elections and other important information

Political polling has come under a lot of recent fire but a sociologist defends these predictions and reminds us that we rely on many such predictions:

We rely on statistical models for many decisions every single day, including, crucially: weather, medicine, and pretty much any complex system in which there’s an element of uncertainty to the outcome. In fact, these are the same methods by which scientists could tell Hurricane Sandy was about to hit the United States many days in advance…

This isn’t wizardry, this is the sound science of complex systems. Uncertainty is an integral part of it. But that uncertainty shouldn’t suggest that we don’t know anything, that we’re completely in the dark, that everything’s a toss-up.

Polls tell you the likely outcome with some uncertainty and some sources of (both known and unknown) error. Statistical models take a bunch of factors and run lots of simulations of elections by varying those outcomes according to what we know (such as other polls, structural factors like the economy, what we know about turnout, demographics, etc.) and what we can reasonably infer about the range of uncertainty (given historical precedents and our logical models). These models then produce probability distributions…

Refusing to run statistical models simply because they produce probability distributions rather than absolute certainty is irresponsible. For many important issues (climate change!), statistical models are all we have and all we can have. We still need to take them seriously and act on them (well, if you care about life on Earth as we know it, blah, blah, blah).
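The “run lots of simulations” approach the quote describes can be sketched as a toy Monte Carlo model. All numbers below (state margins, electoral votes, error sizes) are invented for illustration:

```python
import random

# Hypothetical poll-average margins (candidate A's lead, in points)
# and electoral votes for a handful of battleground states.
STATES = {
    "State 1": (2.0, 18),
    "State 2": (-1.5, 29),
    "State 3": (0.5, 10),
    "State 4": (4.0, 13),
}
SAFE_VOTES_A = 220    # assumed electoral votes from uncontested states
NEEDED = 270
POLL_ERROR = 3.0      # assumed std. dev. of state polling error, in points

def simulate_once(rng):
    """One simulated election: perturb each state's polled margin with
    a shared national error plus independent state-level noise."""
    votes = SAFE_VOTES_A
    national_error = rng.gauss(0, POLL_ERROR / 2)
    for margin, electoral_votes in STATES.values():
        if margin + national_error + rng.gauss(0, POLL_ERROR) > 0:
            votes += electoral_votes
    return votes

def win_probability(trials=20_000, seed=42):
    rng = random.Random(seed)
    wins = sum(simulate_once(rng) >= NEEDED for _ in range(trials))
    return wins / trials

print(f"P(candidate A wins): {win_probability():.2f}")
```

Repeating the simulation thousands of times turns poll averages plus assumed error into a probability distribution over outcomes, which is exactly the kind of output, with uncertainty built in, that the quote defends.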

A key point here: statistical models have uncertainty (we are making inferences about larger populations or systems from samples that we can collect) but that doesn’t necessarily mean they are flawed.

A second key point: because of what I stated above, we should expect that some statistical predictions will be wrong. But this is how science works: you tweak models, take in more information, perhaps change your data collection, perhaps use different methods of analysis, and hope to get better. While it may not be exciting, confirming what we don’t know does help us get to an outcome.

I’ve become more convinced in recent years that one of the reasons polls are not used effectively in reporting is that many in the media don’t know exactly how they work. Journalists need to be trained in how to read, interpret, and report on data. This could also be a time issue: how much time do those in the media have to pore over the details of research findings, or do they simply have to scan for new findings? Scientists can pump out study after study but part of the dissemination of this information to the public requires a media who understands how scientific research and the scientific process work. This includes understanding how models are consistently refined, collecting the right data to answer the questions we want to answer, and looking at the accumulated scientific research rather than just grabbing the latest attention-getting finding.

An alternative to this idea about media statistical illiteracy is presented in the article: perhaps the media knows how polls work but likes a political horse race. This may also be true, but there is a lot of reporting on statistics and data outside of political elections that also needs work.

Cell phone users now comprise half of Gallup’s polling contacts

Even as Americans are less interested in participating in telephone surveys, polling firms are trying to keep up. Gallup has responded by making sure 50% of people contacted for polling samples are cell phone users:

Polling works only when it is truly representative of the population it seeks to understand. So, naturally, Gallup’s daily tracking political surveys include cellphone numbers, given how many Americans have given up on land lines altogether. But what’s kind of amazing is that it now makes sure that 50 percent of respondents in each poll are contacted via mobile numbers.

Gallup’s editor in chief, Frank Newport, wrote yesterday about the evolution of Gallup’s methods to remain “consistent with changes in the communication behavior and habits of those we are interviewing.” In the 1980s the company moved from door-to-door polling to phone calls. In 2008 it added cellphones. To reflect the growing number of Americans who have gone mobile-only, it has steadily increased the percentage of those numbers it contacts.

“If we were starting from scratch today,” Newport told Wired, “we would start with cellphones.”…

Although it may be a better reflection of society, mobile-phone polling is more expensive, says Newport. They have to call more numbers because the response rate is lower due to the nature of mobile communication.
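Gallup’s 50 percent cell quota can be thought of as stratified dialing: keep calling within each frame until its quota of completed interviews is filled. A toy sketch with invented response rates:

```python
import random

def fill_quota(targets, response_rates, rng):
    """Simulate dialing each frame (landline, cell) until its quota
    of completed interviews is reached; return calls attempted."""
    attempts = {}
    for group, target in targets.items():
        calls, completed = 0, 0
        while completed < target:
            calls += 1
            if rng.random() < response_rates[group]:
                completed += 1
        attempts[group] = calls
    return attempts

rng = random.Random(0)
# A 1,000-interview poll split 50/50; the response rates are invented.
attempts = fill_quota(
    {"landline": 500, "cell": 500},
    {"landline": 0.09, "cell": 0.05},
    rng,
)
print(attempts)  # the cell frame needs far more dialing for the same quota
```

Because the assumed cell response rate is lower, hitting the same 500-interview quota requires far more dialing, which illustrates why Newport says mobile polling costs more.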

As technology and social conventions change, researchers have to try to keep up. This is a difficult task, particularly if fewer people want to participate and technologies offer more and more options to screen out unknown requests. Where are we going next: polling by text? Utilizing well-used platforms like Facebook (where we know many people turn every day)?

Political operative discusses which polls he thought were reliable and unreliable while working for the Edwards 2008 campaign

Amidst discussions of whether current polls are accurately weighting their samples for Democrats and Republicans, a former political operative for Al Gore and John Edwards talks about how the Edwards campaign used polls:

However, under cross-examination by lead prosecutor David Harbach, Hickman acknowledged sending a series of emails in November and December, and even into January, endorsing or promoting polls that made Edwards look good. Asked about what appeared to be a New York Times/CBS poll released in mid-November showing an effective “three-way tie” in Iowa with Hillary Clinton at 25 percent, Edwards at 23 percent and Obama at 22 percent, Hickman acknowledged he circulated it but insisted he didn’t think it was correct.

“The business I’m in is a business any fool can get into, and a lot can happen. I’m sure there was a poll like that,” the folksy Hickman told jurors when first asked about a poll showing the race tied. “I kept up with every poll that was done, including our own, and there may have been a few that showed them a tie, but… that’s not really what my analysis is. Campaigns are about trajectory, and… there could have been a point at which it was a tie in the sense that we were coming down, and Obama was going up, and Clinton was going up.”

Hickman also indicated that senior campaign staffers knew many of the polls were poorly done and of little value. “We didn’t take these dog and cat and baby-sitter polls seriously,” he said.

Hickman acknowledged that on January 2, 2008, a day before the Iowa caucuses, he sent out a summary of nine post-Christmas Iowa polls showing Edwards in contention in the Hawkeye State. However, he testified two-thirds of them were from firms he considered “ones we typically would not put a lot of credence in.” Hickman put Mason-Dixon, Strategic Vision, Insider Advantage, Zogby and Research 2000 in the “less reputable” group. He also told the court that ARG polls “have a miserable track record.”

Hickman said he considered the Des Moines Register, CNN, and Los Angeles Times polls more accurate.

This seems like typical politics: an operative is supposed to spin the best news they can about their candidate, even if they don’t think this is the whole story. However, it is fascinating to see his opinion of different polling organizations. I wish he had gone on to describe why some of these polls were better than others: better samples, more reliable or predictive results, alignment with other reputable polls? At the same time, I think the Drudge Report’s headline for this story, “Under oath, Edwards pollster admits polls were ‘propaganda,’” is a bit misleading. Hickman wasn’t disparaging all polls; he was admitting to using some polls that he thought were inaccurate to tell a particular political story.

If we got a bunch of current political operatives in a room, here are questions we could ask that would be revealing:

1. Are there certain polls that you all consider to be reliable? (I hope the answer is yes. But I would also guess that each political party thinks certain polls tend to lean in their direction.)

2. What information do you all work with regularly that helps give you a better picture of what is going on beyond the polls? In other words, the American public doesn’t get much of an inside view while the campaign is happening beyond a stream of polls reported by the media, but the campaigns themselves have more information that matters. How much should the public pay attention to these polls, and can they pick up clues about what is really going on elsewhere? (The media seems to like polls but there are other ways to get information.)

3. In the long run, who is helped or harmed by having a lot of polling organizations? Hickman suggests some polls aren’t that worthwhile so if this is the case, should they not be reported to the American public? (Americans can look at a variety of polls; should there be that many to choose from?)

Unfortunately, this story feeds a growing mistrust of polls. Generally, it is not good for social science if 42% of Americans think polls are biased for one candidate or another. On one hand, these 42% may simply not like what the polls are reporting, have little idea how polls work, and simply want their candidate to win (and won’t like the polls until this happens). On the other hand, perceptions matter and decisions about polls should be made on scientific grounds, not on ideological or partisan affiliations. And, surely this has to play into the finding that only 9% of Americans are willing to respond to telephone surveys.