Cell phone users now comprise half of Gallup’s polling contacts

Even as Americans grow less interested in participating in telephone surveys, polling firms are trying to keep up. Gallup has responded by making sure that 50% of the people contacted for its polling samples are cell phone users:

Polling works only when it is truly representative of the population it seeks to understand. So, naturally, Gallup’s daily tracking political surveys include cellphone numbers, given how many Americans have given up on land lines altogether. But what’s kind of amazing is that it now makes sure that 50 percent of respondents in each poll are contacted via mobile numbers.

Gallup’s editor in chief, Frank Newport, wrote yesterday about the evolution of Gallup’s methods to remain “consistent with changes in the communication behavior and habits of those we are interviewing.” In the 1980s the company moved from door-to-door polling to phone calls. In 2008 it added cellphones. To reflect the growing number of Americans who have gone mobile-only, it has steadily increased the percentage of those numbers it contacts.

“If we were starting from scratch today,” Newport told Wired, “we would start with cellphones.”…

Although it may be a better reflection of society, mobile-phone polling is more expensive, says Newport. They have to call more numbers because the response rate is lower due to the nature of mobile communication.

As technology and social conventions change, researchers have to try to keep up. This is a difficult task, particularly if fewer people want to participate and technologies offer more and more options for screening out unknown requests. Where are we going next: polling by text? Using well-trafficked platforms like Facebook, which we know many people turn to every day?

Political operative discusses which polls he thought were reliable, unreliable while working for Edwards 2008 campaign

Amidst discussions of whether current polls are accurately weighting their samples for Democrats and Republicans, a former political operative for Al Gore and John Edwards talks about how the Edwards campaign used polls:

However, under cross-examination by lead prosecutor David Harbach, Hickman acknowledged sending a series of emails in November and December, and even into January, endorsing or promoting polls that made Edwards look good. Asked about what appeared to be a New York Times/CBS poll released in mid-November showing an effective “three-way tie” in Iowa with Hillary Clinton at 25 percent, Edwards at 23 percent and Obama at 22 percent, Hickman acknowledged he circulated it but insisted he didn’t think it was correct.

“The business I’m in is a business any fool can get into, and a lot can happen. I’m sure there was a poll like that,” the folksy Hickman told jurors when first asked about a poll showing the race tied. “I kept up with every poll that was done, including our own, and there may have been a few that showed them a tie, but… that’s not really what my analysis is. Campaigns are about trajectory, and… there could have been a point at which it was a tie in the sense that we were coming down, and Obama was going up, and Clinton was going up.”

Hickman also indicated that senior campaign staffers knew many of the polls were poorly done and of little value. “We didn’t take these dog and cat and baby-sitter polls seriously,” he said.

Hickman acknowledged that on January 2, 2008, a day before the Iowa caucuses, he sent out a summary of nine post-Christmas Iowa polls showing Edwards in contention in the Hawkeye State. However, he testified two-thirds of them were from firms he considered “ones we typically would not put a lot of credence in.” Hickman put Mason-Dixon, Strategic Vision, Insider Advantage, Zogby and Research 2000 in the “less reputable” group. He also told the court that ARG polls “have a miserable track record.”

Hickman said he considered the Des Moines Register polls, CNN and Los Angeles Times polls more accurate.

This seems like typical politics: an operative is supposed to spin the best news they can about their candidate, even if they don’t think it is the whole story. However, it is fascinating to see his opinion of different polling organizations. I wish he had gone on to describe why some of these polls were better than others: did they have better samples, more reliable or predictive results, or results that lined up with other reputable polls? At the same time, I think the Drudge Report’s headline for this story, “Under oath, Edwards pollster admits polls were ‘propaganda,’” is a bit misleading. Hickman wasn’t disparaging all polls; he was admitting to using some polls that he thought were inaccurate to tell a particular political story.

If we got a bunch of current political operatives in a room, here are some questions we could ask that would be revealing:

1. Are there certain polls that you all consider to be reliable? (I hope the answer is yes. But I would also guess that each political party thinks certain polls tend to lean in their direction.)

2. What information do you all work with regularly that gives you a better picture of what is going on beyond the polls? In other words, while a campaign is happening, the American public doesn’t get much of an inside view beyond the stream of polls reported by the media, but the campaigns themselves have more information that matters. How much should the public pay attention to these polls, or can they pick up clues elsewhere about what is really going on? (The media seem to like polls, but there are other ways to get information.)

3. In the long run, who is helped or harmed by having so many polling organizations? Hickman suggests some polls aren’t that worthwhile; if that is the case, should they not be reported to the American public? (Americans can look at a variety of polls; should there be that many to choose from?)

Unfortunately, this story feeds a growing mistrust of polls. Generally, it is not good for social science if 42% of Americans think polls are biased toward one candidate or another. On one hand, these 42% may simply not like what the polls are reporting, have little idea how polls work, and simply want their candidate to win (and won’t like the polls until this happens). On the other hand, perceptions matter, and decisions about polls should be made on scientific grounds, not on ideological or partisan preferences. And surely this has to play into the finding that only 9% of Americans are willing to respond to telephone surveys.

Pew Research: the response rate for a typical phone survey is now 9% and response rates are down across the board

Earlier this year, Pew Research described a growing problem for pollsters: over 90% of the public doesn’t want to participate in telephone surveys.

It has become increasingly difficult to contact potential respondents and to persuade them to participate. The percentage of households in a sample that are successfully interviewed – the response rate – has fallen dramatically. At Pew Research, the response rate of a typical telephone survey was 36% in 1997 and is just 9% today.

The general decline in response rates is evident across nearly all types of surveys, in the United States and abroad. At the same time, greater effort and expense are required to achieve even the diminished response rates of today. These challenges have led many to question whether surveys are still providing accurate and unbiased information. Although response rates have decreased in landline surveys, the inclusion of cell phones – necessitated by the rapid rise of households with cell phones but no landline – has further contributed to the overall decline in response rates for telephone surveys.

A new study by the Pew Research Center for the People & the Press finds that, despite declining response rates, telephone surveys that include landlines and cell phones and are weighted to match the demographic composition of the population continue to provide accurate data on most political, social and economic measures. This comports with the consistent record of accuracy achieved by major polls when it comes to estimating election outcomes, among other things.

This is not to say that declining response rates are without consequence. One significant area of potential non-response bias identified in the study is that survey participants tend to be significantly more engaged in civic activity than those who do not participate, confirming what previous research has shown. People who volunteer are more likely to agree to take part in surveys than those who do not do these things. This has serious implications for a survey’s ability to accurately gauge behaviors related to volunteerism and civic activity. For example, telephone surveys may overestimate such behaviors as church attendance, contacting elected officials, or attending campaign events.

Read on for more comparisons between those who do tend to participate in telephone surveys and those who do not.

This has been a growing problem for years now: more people don’t want to be contacted and it is more difficult to contact cell phone users. One way this might be combated is to offer participants small incentives, something already done with some online panels and more commonly used in mail surveys. These incentives wouldn’t be large enough to sway opinion or to attract only people who want the reward, but they would be enough to raise response rates; think of them as just enough to acknowledge and thank people for their time. I don’t know what the profit margins of firms like Gallup or Pew are, but I imagine they could offer these small incentives quite easily.

This does suggest that the science of weighting is increasingly important. Having government benchmarks really matters, hence the need for updated Census figures. However, it is not inconceivable that the Census could be scaled back: this is often a conservative proposal, based either on the money spent on the Census Bureau or on the “invasive” questions asked. It may also make the Census even more political, as years of polling might depend on getting the figures “right,” depending on which side of the political aisle one is on.
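To make the weighting idea concrete, here is a minimal sketch of post-stratification: each respondent gets a weight equal to the ratio of their group’s population share to its share of the sample. The age groups and benchmark shares below are hypothetical placeholders, not actual Census figures or any pollster’s real methodology.

```python
# Minimal sketch of post-stratification weighting: weight each respondent by
# (population share of their demographic group) / (sample share of that group).
# The age groups and benchmark shares are hypothetical, not real Census figures.
from collections import Counter

respondents = [
    {"age_group": "18-29", "approves": True},
    {"age_group": "18-29", "approves": False},
    {"age_group": "30-64", "approves": True},
    {"age_group": "30-64", "approves": False},
    {"age_group": "30-64", "approves": False},
    {"age_group": "65+",   "approves": False},
]

population_share = {"18-29": 0.22, "30-64": 0.58, "65+": 0.20}  # hypothetical benchmarks

sample_counts = Counter(r["age_group"] for r in respondents)
n = len(respondents)

def weight(r):
    # Respondents from groups underrepresented in the sample count for more.
    return population_share[r["age_group"]] / (sample_counts[r["age_group"]] / n)

weighted_approval = (sum(weight(r) for r in respondents if r["approves"])
                     / sum(weight(r) for r in respondents))
print(f"Unweighted approval: {sum(r['approves'] for r in respondents) / n:.1%}")
print(f"Weighted approval:   {weighted_approval:.1%}")
```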

Analyst looks at “racial breakdown of [presidential election] polls”

An analyst for RealClearPolitics takes a look at possible issues with the racial breakdown in the samples of presidential election polls. A few of the issues:

First, as Chait repeatedly concedes, we don’t know what the ultimate electorate will look like this November. That really should be the end of the argument — if we don’t know what the racial breakdown is going to be, it’s hard to criticize the pollsters for under-sampling minorities. After all, almost all pollsters weight their base sample of adults to CPS (current population survey) estimates to ensure the base sample reflects the actual population; after that, the data simply are what they are.

It’s true that the minority share of the electorate increased every year from 1996 through 2008. But there’s a reason that 1996 is always used as a start date: After declining every election from 1980 through 1988, the white share of the vote suddenly ticked up two points in 1992. In other words, these things aren’t one-way ratchets (and while there is no H. Ross Perot this year, the underlying white working-class angst that propelled his candidacy is very much present, as writers on the left repeatedly have observed)…

“The U.S. Census Bureau allows for multiple responses when it asks respondents what race they are, and Gallup attempts to replicate the Census in that respect. While most pollsters ask two separate questions about race and Hispanic ancestry, Gallup goes a step further, asking five separate questions about race. They ask respondents to answer whether or not they consider themselves White; Black or African American; Asian; Native American or Alaska Native; and Native Hawaiian or Pacific Islander.”

In other words, how you ask the question could impact how people self-identify with regard to race and ethnicity, which could in turn affect how your weighted data look. This is a polling issue that will likely become more significant as the nation grows more diverse, and more multi-racial.

Trying to figure out who exactly is going to vote is a tricky proposition and it is little surprise that different polling organizations have slightly different figures.

I hope people don’t see stories like this and conclude that polls can’t be trusted after all. Polling is not an exact science; all polls contain small margins of error. However, polling is so widely used because it is incredibly difficult to capture information about whole populations. Even one of the most comprehensive surveys we have, the US Census, was only able to get about 70-75% cooperation, and that was with a large amount of money and a large workforce. Websites like RealClearPolitics are helpful here because you can see averages of the major polls, which can help smooth out some of their differences.
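As a toy illustration of that smoothing, here is a short sketch of averaging several polls of the same race; the pollster names and numbers are invented for the example, not real results.

```python
# Toy sketch of averaging several polls of the same race to smooth out
# house-to-house differences. All numbers below are invented for illustration.
polls = [
    {"pollster": "Poll A", "candidate_x": 48, "candidate_y": 45},
    {"pollster": "Poll B", "candidate_x": 46, "candidate_y": 47},
    {"pollster": "Poll C", "candidate_x": 49, "candidate_y": 44},
]

avg_x = sum(p["candidate_x"] for p in polls) / len(polls)
avg_y = sum(p["candidate_y"] for p in polls) / len(polls)
print(f"Candidate X: {avg_x:.1f}%  Candidate Y: {avg_y:.1f}%  (average of {len(polls)} polls)")
```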

A final note: this is another reminder that measuring race and ethnicity is difficult. As noted above, the Census Bureau and some of these polling organizations use different measures and therefore get different results. Of course, because race and ethnicity are fluid, the measures have to change over time.

Do politicians understand how polls work?

A recent CBS News/New York Times poll showed 80% of Americans do not think their family’s financial situation is better today than it was four years ago:

Just 20 percent of Americans feel their family’s financial situation is better today than it was four years ago. Another 37 percent say it is worse, and 43 percent say it is about the same.

When asked about these specific results, Harry Reid had this to say about polls in general:

“I’m not much of a pollster guy. As everyone knows, there isn’t a poll in America that had me having any chance of being re-elected, but I got re-elected,” he told TheDC.

“I think this poll is so meaningless. It is trying to give the American people an idea of what 300 million people feel by testing several hundred people. I think the poll is flawed in so many different ways including a way that questions were asked. I don’t believe in polls generally and specifically not in this one.”

The cynical take on this is that Reid and politicians in general like polls when they are supportive of their positions and don’t like them when they do not favor them. If this is true, then you might expect politicians to cite polls when they are good but to ignore them or even try to discredit them if they are bad.

But I would like to ask a more fundamental question: are politicians any better than average Americans at understanding polls? Reid seems to suggest that this poll has two major problems: it doesn’t ask questions of enough people to really understand all Americans (a sampling issue), and the questions are poorly worded, which leads to biased answers (a question-wording issue). Is Reid right? From the information at the bottom of the CBS story about the poll, it seems pretty standard:

This poll was conducted by telephone from March 7-11, 2012 among 1009 adults nationwide.

878 interviews were conducted with registered voters, including 301 with voters who said they plan to vote in a Republican primary. Phone numbers were dialed from samples of both standard land-line and cell phones. The error due to sampling for results based on the entire sample could be plus or minus three percentage points. The margin of error for the sample of registered voters could be plus or minus three points and six points for the sample of Republican primary voters. The error for subgroups may be higher. This poll release conforms to the Standards of Disclosure of the National Council on Public Polls.

Yes, the number of respondents seems low for talking about all Americans, but this is how all major polls work: you select a representative sample based on standard demographic factors (gender, race, age, etc.) and then you estimate how close the survey results are to what you would get if you asked all American adults these questions. This is why all polls have a margin of error: if you ask fewer people, you are less confident in the generalizability of the results (which is why there is a larger six-point margin for the smaller subgroup of Republican primary voters), and if you ask more people, you can be more confident (though the payoff of adding respondents usually diminishes beyond 1,200-1,500, so at some point it is not worth asking more).
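For a rough sense of where those published margins come from, here is a quick sketch using the textbook 95% margin of error for a proportion, treating each group as a simple random sample (real weighted polls report slightly different figures):

```python
# Rough sketch of the conventional 95% margin of error for a proportion,
# using the worst case p = 0.5 and assuming a simple random sample.
# Real polls apply weights and design effects, so published margins differ a bit.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes from the CBS write-up: full sample, registered voters, GOP primary voters
for n in (1009, 878, 301):
    print(f"n = {n:4d}: about +/- {margin_of_error(n):.1%}")
```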

I don’t think Reid sounds very good in this soundbite: he attacks the scientific basis of polls with common objections. While polls may not “feel right” and may contradict anecdotal or personal evidence, they can be done well, and with a good sample of around 1,000 people you can be confident that the results are generalizable to the American people. If Reid does understand how polls work, he could raise other issues. For example, he could insist that this is a one-time poll and that you would want to measure this again and again to see how it changes (perhaps this is an unusual trough?), or that you would want other polling organizations to ask the same question and triangulate the results across surveys (like what RealClearPolitics does by taking averages of polls). Or he could suggest that this question doesn’t matter much because four years ago is a rather arbitrary reference point, and philosophically, does life always have to get better over time?

Pew again asks for one-word survey responses regarding budget negotiations

I highlighted this survey technique in April but here it is again: Pew asked Americans to provide a one-word response to Congress’ debt negotiations.

Asked for single-word characterizations of the budget negotiations, the top words in the poll — conducted in the days before an apparent deal was struck — were “ridiculous,” “disgusting” and “stupid.” Overall, nearly three-quarters of Americans offered a negative word; just 2 percent had anything nice to say.

“Ridiculous” was the most frequently mentioned word among Democrats, Republicans and independents alike. It was also No. 1 in an April poll about the just-averted government shutdown. In the new poll, the top 27 words are negative ones, with “frustrating,” “poor,” “terrible,” “disappointing,” “childish,” “messy” and “joke” rounding out the top 10.

And then we are presented with a word cloud.

On the whole, I think this technique can suggest that Americans have generally unfavorable responses. But the reliance on particular terms is better for headlines than it is for collecting data. What would happen if public responses were split more evenly: which words/responses would then be used to summarize the data? The Washington Post headline (and Pew Research as well) can now use forceful and emotional words like “ridiculous” and “disgusting” rather than the more accurate numerical finding that about “three-quarters of Americans offered a negative word.” Why not also include an ordinal question (strongly disapprove to strongly approve) about Americans’ general opinion of the debt negotiations in order to corroborate this open-ended question?

This is a potentially interesting technique for taking advantage of open-ended questions without letting respondents give lengthy answers. Open-ended questions can produce a lot of data: there were over 330 responses in this survey alone. I’ll be interested to see if other organizations adopt this approach.
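For what it’s worth, tallying such one-word answers is straightforward; here is a rough sketch (the responses below are invented placeholders, not Pew’s actual data):

```python
# Sketch of tallying one-word open-ended responses, as in the Pew question.
# The responses below are invented placeholders, not actual survey data.
from collections import Counter

responses = ["ridiculous", "Disgusting", "stupid", "ridiculous", "fine",
             "frustrating", "Ridiculous", "necessary", "stupid"]

counts = Counter(word.strip().lower() for word in responses)
total = len(responses)

for word, count in counts.most_common(5):
    print(f"{word}: {count} mentions ({count / total:.1%} of respondents)")
```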

Claim of social desirability bias in immigration polls

Social desirability bias is the idea that people responding to surveys or other forms of data collection will give the socially acceptable answer rather than say what they really think. A sociologist argues that this is the case for immigration polls:

A Gallup survey taken last year found 45 percent believe immigration should be decreased, compared to 17 percent saying it should be increased and 34 percent saying it should be kept at present levels. But should such figures be taken at face value? University of California, Berkeley, sociologist Alexander Janus argues not. Using a polling technique designed to uncover hidden bias, he concluded about 61 percent of Americans support a cutoff of immigration. Janus, who published his findings in the journal Social Science Quarterly, argues that “social desirability pressures” lead many on the left to lie about their true feelings on immigration — even when asked in an anonymous poll. In an interview, he discussed the survey he conducted in late 2005 and early 2006:

THE SURVEY: “The survey participants were first split into two similar groups. Individuals in one of the groups were presented with three concepts — ‘The federal government increasing assistance to the poor,’ ‘Professional athletes making millions of dollars per year,’ and ‘Large corporations polluting the environment’ — and asked how many of the three they opposed. Individuals in the second group were given the same three items as individuals in the first group, plus an immigration item: ‘Cutting off immigration to the United States.’ They were asked how many of the four they opposed. The difference in the average number of items named between the two groups can be attributed to opposition to the immigration item. The list experiment is superior to traditional questioning techniques in the sense that survey participants are never required to reveal to the interviewer their true attitudes or feelings.”…

“I estimated that about 6 in 10 college graduates and more than 6 in 10 liberals hide their opposition to immigration when asked directly, using traditional survey measures.”

This sounds like an interesting technique because as he mentions, the respondents never have to say exactly which ideas they are opposed to.
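Here is a minimal sketch of the list-experiment arithmetic he describes: respondents report only how many items they oppose, and the difference in average counts between the two groups estimates the share whose count included the added immigration item. The response counts below are invented for illustration.

```python
# Sketch of the list-experiment (item count) estimator described above.
# Each respondent reports only HOW MANY items they oppose, never which ones.
# The response counts below are invented for illustration.

control_counts   = [1, 2, 0, 1, 2, 1, 0, 2]  # asked about the 3 baseline items
treatment_counts = [2, 3, 1, 1, 3, 2, 1, 2]  # same 3 items plus the immigration item

mean_control   = sum(control_counts) / len(control_counts)
mean_treatment = sum(treatment_counts) / len(treatment_counts)

# The difference in means is attributed to the added (sensitive) item.
estimated_share = mean_treatment - mean_control
print(f"Estimated share counting the immigration item: {estimated_share:.1%}")
```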

In the long run for immigration policy, does it matter that much for liberals if people are secretly against immigration as long as they are willing to support it publicly? Of course, it could influence individual or small-group interactions and how willing people are to participate in rallies and public events. But if people are still willing to vote in a socially desirable way, is this good enough?

I wonder if there are other numbers out there that are influenced by social desirability bias…

Pew using word frequencies to describe public’s opinion of budget negotiations

In the wake of the standoff over a federal government shutdown last week, Pew conducted a poll of Americans regarding their opinions on this event. One of the key pieces of data that Pew is reporting is a one-word opinion of the proceedings:

The public has an overwhelmingly negative reaction to the budget negotiations that narrowly avoided a government shutdown. A weekend survey by the Pew Research Center for the People & the Press and the Washington Post finds that “ridiculous” is the word used most frequently to describe the budget negotiations [29 respondents], followed by “disgusting,” [22 respondents] “frustrating,” [14 respondents] “messy,” [14 respondents] “disappointing” [13 respondents] and “stupid.” [13 respondents]

Overall, 69% of respondents use negative terms to describe the budget talks, while just 3% use positive words; 16% use neutral words to characterize their impressions of the negotiations. Large majorities of independents (74%), Democrats (69%) and Republicans (65%) offer negative terms to describe the negotiations.

The full survey was conducted April 7-10 among 1,004 adults; people were asked their impressions of the budget talks in interviews conducted April 9-10, following the April 8 agreement that averted a government shutdown.

I would be hesitant to lead off an article or headline (“Budget Negotiations in a Word – ‘Ridiculous’”) with these word frequencies since they were generally used by few respondents: the most common response, “ridiculous,” was given by only 2.9% of the survey respondents (based on the figure here of 1,004 total respondents). I think the better figures to use would be the broader ones: 69% used negative terms, and a majority of respondents of all political stripes used a negative descriptor.

You also have to dig into the complete report for some more information. Here is the exact wording of the question:

PEW.2A If you had to use one single word to describe your impression of the budget negotiations in Washington, what would that one word be? [IF “DON’T KNOW” PROBE ONCE: It can be anything, just the first word that comes to mind…] [OPEN END: ENTER VERBATIM RESPONSE]

Additionally, the full report says that this descriptor question was only asked of 427 respondents on April 9-10 (so my percentage above should be revised: 29/427 = 6.8%). So this is a smaller sample answering this particular question; how generalizable are the results? And the most common response to this question is the “other” category, with 202 respondents. Presumably, the “others” are mostly negative since we are told 69% used negative terms. (As a side note, why not separate out the “don’t knows” and “refused”? There are 45 people in this category, but these seem like different answers.)

One additional thought I have: at least this wasn’t put into a word cloud in order to display the data.

Pew finds that landline-only surveys are biased toward Republicans

Polling techniques have become more complicated in recent years with the introduction of cell phones. In the past, researchers could reasonably assume most US residents could be reached through a landline. However, Pew now suggests there may be a political bias in surveys that only reach people through landlines:

Across three Pew Research polls conducted in fall 2010 — conducted among 5,216 likely voters, including 1,712 interviewed on cell phones — the GOP held a lead that was on average 5.1 percentage points larger in the landline sample than in the combined landline and cell phone sample…

The difference in estimates produced by landline and dual frame samples is a consequence not only of the inclusion of the cell phone-only voters who are missed by landline surveys, but also of those with both landline and cell phones — so called dual users — who are reached by cell phone. Dual users reached on their cell phone differ demographically and attitudinally from those reached on their landline phone. They are younger, more likely to be black or Hispanic, less likely to be college graduates, less conservative and more Democratic in their vote preference than dual users reached by landline…

Cell phones pose a particular challenge for getting accurate estimates of young people’s vote preferences and related political opinions and behavior. Young people are difficult to reach by landline phone, both because many have no landline and because of their lifestyles. In Pew Research Center surveys this year about twice as many interviews with people younger than age 30 are conducted by cell phone than by landline, despite the fact that Pew Research samples include twice as many landlines as cell phones.

This seems to make sense: those who have cell phones and don’t have landlines are likely to be different than those who are reached by landlines.

A few questions that I have: does this issue exist in all phone surveys today (this article suggests there were sizable differences between landline respondents and cell phone respondents in five of six surveys)? Have other polling firms had similar findings? If Pew now has some idea of the extent of this issue, is the proper long-term response to call more cell phones or to weight the results more heavily toward cell phone users?

One possible response would be to include multiple methods for more surveys. This might include samples of landline respondents, cell phone respondents, and web respondents. While this is more costly and time-consuming, research firms could then triangulate results.
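One simple way to triangulate, sketched below with invented numbers, would be to blend each mode’s estimate in proportion to its sample size; real dual-frame designs also have to adjust for people reachable by more than one mode (the “dual users” discussed above).

```python
# Sketch of blending estimates from several survey modes by sample size.
# Real dual-frame designs must also adjust for overlap between modes;
# all numbers below are invented for illustration.
modes = [
    {"mode": "landline", "n": 600, "gop_share": 0.52},
    {"mode": "cell",     "n": 300, "gop_share": 0.45},
    {"mode": "web",      "n": 400, "gop_share": 0.47},
]

total_n = sum(m["n"] for m in modes)
blended = sum(m["n"] * m["gop_share"] for m in modes) / total_n
print(f"Blended GOP share across modes: {blended:.1%}")
```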

Considering “polls gone wild”

The Associated Press released a story yesterday with this headline: “Polls gone wild: Political gripes in Internet age.” It is an interesting read about the role polls have played in the 2010 election season and I have a few interpretations regarding the story.

1. The griping of politicians about polls does not often seem to be based on the methodology of the polls. Rather, I think the politicians are trying to curry favor with supporters and voters who are also suspicious of polls. I would guess many Americans are suspicious of polls because they think polls can be manipulated (which is true) and then throw out all poll results (even though there are methods that make polls better or worse). Some of this could be addressed by tackling innumeracy and educating citizens about how good polls are done.

2. There is a claim that earlier polls affect later polls and elections and that overall, polls help determine election outcomes. Are there studies that prove this? Or is this just more smoke and mirrors from politicians?

3. If there are charges to be made about manipulation, it sounds like the political campaigns are manipulating the figures more than the reputable polling firms which are aiming to be statistically sound.

4. Stories like this remind me of the genius of RealClearPolitics.com where multiple polls about the same races are put side by side. If one doesn’t trust polls as much, just look at how polls compare over time. The more reputable companies show generally similar results over time. Basing news stories and campaign literature on just one poll may look silly in a few years with all of these companies producing numerous polls on almost a daily basis.