A short overview of recent survey questions about Holocaust knowledge in the US

Although this article leads with recent survey results about what Americans know and think about the Holocaust, I’ll start with a summary of earlier surveys and then move forward in time to the recent results:

Whether or not the assumptions in the Claims Conference survey are fair, and how to tell, is at the core of a decades long debate over Holocaust knowledge surveys, which are notoriously difficult to design. In 1994, Roper Starch Worldwide, which conducted a poll for the American Jewish Committee, admitted that its widely publicized Holocaust denial question was “flawed.” Initially, it appeared that 1 in 5, or 22 percent, of Americans thought it was possible the Holocaust never happened. But pollsters later determined that the question—“Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?”—was confusing and biased the sample. In a subsequent Gallup poll, when asked to explain their views on the Holocaust in their own words, “only about 4 percent [of Americans] have real doubts about the Holocaust; the others are just insecure about their historical knowledge or won’t believe anything they have not experienced themselves,” according to an Associated Press report at the time. More recently, the Anti-Defamation League was criticized for a 2014 worldwide study that asked respondents to rate 11 statements—“People hate Jews because of the way they behave, for example”—as “probably true” or “probably false.” If respondents said “probably true” to six or more of the statements, they were considered to harbor anti-Semitic views, a line that many experts said could not adequately represent real beliefs…
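The criticism of that six-statement cutoff is easier to see with a quick sketch. The score distribution below is entirely made up; it is only meant to show how sensitive the headline number is to where the line gets drawn.

```python
# Hypothetical distribution of how many of the 11 statements respondents
# rated "probably true" (0-11). These shares are invented purely to
# illustrate how sensitive the headline figure is to the cutoff.
score_shares = {0: 0.20, 1: 0.15, 2: 0.12, 3: 0.10, 4: 0.09, 5: 0.08,
                6: 0.07, 7: 0.06, 8: 0.05, 9: 0.04, 10: 0.03, 11: 0.01}
assert abs(sum(score_shares.values()) - 1.0) < 1e-9

for cutoff in (5, 6, 7):
    share = sum(v for k, v in score_shares.items() if k >= cutoff)
    print(f"Cutoff at {cutoff}+ statements: {share:.0%} classified as harboring anti-Semitic views")
# Moving the line by a single statement shifts the headline figure by several
# percentage points, which is the experts' point: the cutoff alone cannot
# adequately represent real beliefs.
```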

Just two years ago, the Claims Conference released another survey of Americans that found “Two-Thirds of Millennials Don’t Know What Auschwitz Is,” as a Washington Post headline summarized it. The New York Times reported on the numbers at the time as proof that the “Holocaust is fading from memory.” Lest it appear the group is singling out Americans, the Claims Conference also released surveys with “stunning” results from Canada, France, and Austria.

But a deeper look at the Claims Conference data, which was collected by the firm Schoen Cooperman Research, reveals methodological choices that conflate specific terms (the ability to ID Auschwitz) and figures (that 6 million Jews were murdered) about the Holocaust with general knowledge of it, and knowledge with attitudes or beliefs toward Jews and Judaism. This is not to discount the real issues of anti-Semitism in the United States. But it is an important reminder that the Claims Conference, which seeks restitution for the victims of Nazi persecution and also to “ensure that future generations learn the lessons of the Holocaust,” is doing its job: generating data and headlines that it hopes will support its worthy cause.

The new Claims Conference survey is actually divided into two, with one set of data from a 1,000-person national survey and another set from 50 state-by-state surveys of 200 people each. In both iterations, the pollsters aimed to assess Holocaust knowledge according to three foundational criteria: the ability to recognize the term the Holocaust, name a concentration camp, and state the number of Jews murdered. The results weren’t great—fully 12 percent of national survey respondents had not or did not think they had heard the term Holocaust—but some of the questions weren’t necessarily written to help respondents succeed. Only 44 percent were “familiar with Auschwitz,” according to the executive summary of the data, but that statistic was determined by an open-ended question: “Can you name any concentration camps, death camps, or ghettos you have heard of?” This type of active, as opposed to passive, recall is not necessarily indicative of real knowledge. The Claims Conference also emphasized that 36 percent of respondents “believe” 2 million or fewer Jews were killed in the Holocaust (the correct answer is 6 million), but respondents were actually given a multiple-choice question with seven options—25,000, 100,000, 1 million, 2 million, 6 million, 20 million, and “not sure”—four of which were lowball figures. (Six million was by far the most common answer, at 37 percent, followed by “not sure.”)
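The multiple-choice point is worth making concrete. Here is a minimal sketch with a hypothetical breakdown of answers (the published summary only reports the 37 percent and 36 percent aggregates, not per-option figures): the same responses can be summarized as “the single most common answer was correct” or as “over a third chose a lowball figure,” and one of those makes a much more alarming headline.

```python
# Hypothetical response shares for a seven-option question like the one described.
# These per-option numbers are illustrative only; the published report gives
# aggregates (6 million: 37%, "2 million or fewer" combined: 36%), not this breakdown.
responses = {
    "25,000": 0.05,
    "100,000": 0.07,
    "1 million": 0.12,
    "2 million": 0.12,
    "6 million": 0.37,   # the correct answer, and the single most common choice
    "20 million": 0.06,
    "not sure": 0.21,
}

modal_answer = max(responses, key=responses.get)
share_low = sum(v for k, v in responses.items()
                if k in {"25,000", "100,000", "1 million", "2 million"})

print(f"Most common answer: {modal_answer} ({responses[modal_answer]:.0%})")
print(f"Share choosing 2 million or fewer: {share_low:.0%}")
# Both statements describe the same data, but one headline sounds far more alarming:
# "37% got it right" vs. "36% believe 2 million or fewer Jews were killed."
```

Neither summary is false; the point is that offering four lowball options and then reporting their combined share invites one particular framing.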

The first example above has made it into research methods textbooks as an illustration of how much the wording of survey questions matters. The ongoing discussion in this article could serve the same purpose: how questions are asked, and how researchers interpret the results, both matter a great deal.

There are other actors in this process who can help or hinder how the data is interpreted:

  1. Funders/organizations behind the data. What do they do with the results?
  2. How the media reports the information. Do they accurately represent the data? Do they report on how the data was collected and analyzed?
  3. Does the public understand what the data means? Or, do they solely take their cues from the researchers and/or the media reports?
  4. Other researchers who look at the data. Would they measure the topics in the same way and, if not, what might be gained by alternatives?

These may seem like boring details to many, but the path from choosing research topics and developing questions to sharing results with the public and having others interpret them is a long process. The hope is that all of the actors involved can get us as close as possible to what is actually happening – in this case, accurately measuring and reporting attitudes and beliefs.

If one survey option receives the most votes (18%), can the item with the fewest votes (2%) be declared the least favorite?

The media can have difficulty interpreting survey results. Here is one recent example involving a YouGov survey that asked about the most attractive regional accents in the United States:

Internet-based data analytics and market research firm YouGov released a study earlier this month that asked 1,216 Americans over the age of 18 about their accent preferences. The firm provided nine options, ranging from regions to well-known dialects in cities. Among other questions, YouGov asked, “Which American region/city do you think has the most attractive accent?”

The winner was clear. The Southeastern accent, bless its heart, took the winning spot, with the dialect receiving 18 percent of the vote from the study’s participants. Texas wasn’t too far behind, nabbing the second-most attractive accent at 12 percent of the vote…

The least attractive? Chicago rolls in dead last, with just 2 percent of “da” vote.

John Kass did not like the results and consulted a linguist:

I called on an expert: the eminent theoretical linguist Jerry Sadock, professor emeritus of linguistics from the University of Chicago…

“The YouGov survey that CBS based this slander on does not support the conclusion. The survey asked only what the most attractive dialect was, the winner being — get this — Texan,” Sadock wrote in an email.

“Louie Gohmert? Really? The fact that very few respondents found the Chicago accent the most attractive, does not mean that it is the least attractive,” said Sadock. “I prefer to think that would have been rated as the second most attractive accent, if the survey had asked for rankings.”

In the original YouGov survey, respondents were asked: “Which American region/city do you think has the most attractive accent?” Respondents could select one option. The Chicago accent did receive the fewest selections.

However, Sadock has a point. Respondents could select only one option. If they had the opportunity to rank the accents, would the Chicago accent move up as a non-favorite but still-liked accent? It could happen.
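A minimal simulation shows how this could happen. Everything below is hypothetical: I assume each respondent has a full ranking of the nine options, that the Chicago accent is nearly everyone’s second choice but almost no one’s first, and that first choices are scattered across the rest (the accent labels beyond the few named in the article are placeholders).

```python
import random
from collections import Counter

random.seed(42)
accents = ["Southeastern", "Texas", "Boston", "New York", "Midwest",
           "California", "Minnesota", "Mid-Atlantic", "Chicago"]

rankings = []
for _ in range(1216):  # same sample size as the YouGov survey
    others = [a for a in accents if a != "Chicago"]
    random.shuffle(others)
    if random.random() < 0.02:          # Chicago is the favorite for a few people...
        ranking = ["Chicago"] + others
    else:                               # ...but the second choice for everyone else
        ranking = [others[0], "Chicago"] + others[1:]
    rankings.append(ranking)

first_choices = Counter(r[0] for r in rankings)
avg_rank = {a: sum(r.index(a) + 1 for r in rankings) / len(rankings) for a in accents}

print("First-choice tallies:", first_choices.most_common())
print("Chicago average rank:", round(avg_rank["Chicago"], 2))
print("Best average rank:   ", min(avg_rank, key=avg_rank.get))
# Chicago gets the fewest first-choice votes yet has the best (lowest) average
# rank of any accent: "fewest first-place votes" is not "least attractive."
```

In other words, the 2 percent figure tells us how rarely Chicago was the single favorite, not where it sits in anyone’s overall ranking.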

Additionally, the responses were spread fairly evenly across the options. The original “winner,” the Southeastern accent, was selected by only 18% of those surveyed. This means that over 80% of the respondents did not select the leading response. Is it fair to call this the favorite accent of Americans when fewer than one-fifth of respondents selected it?

Communicating the nuances of survey results can be difficult. Yet journalists and others should resist the urge to immediately identify “favorites” and “losers” in situations where the data does not show an overwhelming favorite and respondents did not have the opportunity to rate all of the possible responses.

Measuring attitudes by search results rather than surveys?

An author suggests Google search data gives us better indicators of attitudes toward insecurity, race, and sex than surveys do:

I think there’s two. One is depressing and kind of horrifying. The book is called Everybody Lies, and I start the book with racism and how people were saying to surveys that they didn’t care that Barack Obama was black. But at the same time they were making horrible racist searches, and very clearly the data shows that many Americans were not voting for Obama precisely because he was black.

I started the book with that, because that is the ultimate lie. You might be saying that you don’t care that [someone is black or a woman], but that really is driving your behavior. People can say one thing and do something totally different. You see the darkness that is often hidden from polite society. That made me feel kind of worse about the world a little bit. It was a little bit frightening and horrifying.

But, I think the second thing that you see is a widespread insecurity, and that made me feel a little bit better. I think people put on a front, whether it’s to friends or on social media, of having things together and being sure of themselves and confident and polished. But we’re all anxious. We’re all neurotic.

That made me feel less alone, and it also made me more compassionate to people. I now assume that people are going through some sort of struggle, even if you wouldn’t know that from their Facebook posts.

We know surveys have flaws and that there are multiple ways they can be skewed – sampling problems, bad questions, nonresponse, and social desirability bias (the issue at hand here) among them.

But these flaws would not lead me to any of the following conclusions:

  1. Thinking that search data provides better information. Who is doing the searching? Are they a representative population? How clear are the patterns? (It is common to see stories based on such data that provide no actual numbers. “Illinois” might be the most misspelled word in the state, for example, but by a margin of one search, 486 to 485 – see the sketch after this list for why a margin that small means little.)
  2. Thinking that surveys are worthless on the whole. They still tell us something, particularly if we know the responses to some questions might be skewed. In the example above, why would Americans tell pollsters they have more progressive racial attitudes than they do? They have indeed internalized something about race.
  3. Thinking that a single, perfectly accurate measure of attitudes is the goal. People’s attitudes often don’t line up with their actions. Perhaps we need more measures of attitudes and behaviors rather than one ideal measure. The search data cited above could supplement survey data and voting data to better inform us about how Americans think about race.
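On the “one search margin” point above, here is a minimal sketch using those same hypothetical 486-to-485 figures: a simple normal-approximation confidence interval shows that a lead that small is statistically indistinguishable from a tie.

```python
import math

# The hypothetical example from the list above: the "most misspelled word"
# wins 486 searches to the runner-up's 485. How sure can we be the ordering is real?
a, b = 486, 485
n = a + b
p_hat = a / n                      # observed share for the "winner"
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Winner's share: {p_hat:.4f}")
print(f"95% CI: ({ci_low:.4f}, {ci_high:.4f})")
# The interval comfortably includes 0.5, so the one-search "lead" is
# indistinguishable from a tie -- a ranking headline built on it says nothing.
```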

Biggest European youth survey to conclude with two documentaries

Several researchers are embarking on what they say will be the biggest survey of young adults in Europe, but the planned end product is an unusual one:

RTÉ is seeking 18-34-year-olds to take part in a pan-European online survey that it hopes will produce the “most comprehensive sociological study” of the age group ever presented…

Processing the data will involve a three pronged approach. There will be a quantitative side based on the questionnaire results, a qualitative approach based on documentary videos of groups of people and individuals filling out the survey, and a comparative approach looking at how the answers compare with other European societies…

“We’re ultimately going to produce two one-hour documentaries later this year, which will be the sociological analysis of the survey, and which we hope will provide a very valuable window into contemporary Ireland today.

If there is so much good data to be collected here, the choice to conclude with a documentary is an interesting one. On one hand, the typical sociology approach would be to publish at least a journal article if not a book. On the other hand, if the researchers are trying to reach a broader audience, a documentary that is widely available might get a lot more attention. At least in the United States, documentaries might be seen as nice efforts at public sociology but they are unlikely to win many points toward good research (perhaps even if they are used regularly in sociology courses).

Why doesn’t the American Sociological Association have an arm that puts together documentaries based on sociological work? Or, is there some money to be made here for a production company that regularly puts out sociological material? Imagine Gang Leader for a Day or Unequal Childhoods as an 80-minute documentary.

Census 2020 to go digital and online

The Census Bureau is developing plans to go digital in 2020:

The bureau’s goal is that 55% of the U.S. population will respond online using computers, mobile phones or other devices. It will mark the first time (apart from a small share of households in 2000) that any Americans will file their own census responses online. This shift toward online response is one of a number of technological innovations planned for the 2020 census, according to the agency’s recently released operational plan. The plan reflects the results of testing so far, but it could be changed based on future research, congressional reaction or other developments…

The Census Bureau innovations are driven by the same forces afflicting all organizations that do survey research. People are increasingly reluctant to answer surveys, and the cost of collecting their data is rising. From 1970 to 2010, the bureau’s cost to count each household quintupled, to $98 per household in 2010 dollars, according to the GAO. The Census Bureau estimates that its innovations would save $5.2 billion compared with repeating the 2010 census design, so the 2020 census would cost a total of $12.5 billion, close to 2010’s $12.3 billion price tag (both in projected 2020 dollars)…

The only households receiving paper forms under the bureau’s plan would be those in neighborhoods with low internet usage and large older-adult populations, as well as those that do not respond online.

To maximize online participation, the Census Bureau is promoting the idea that answering the census is quick and easy. The 2010 census was advertised as “10 questions, 10 minutes.” In 2020, bureau officials will encourage Americans to respond anytime and anywhere – for example, on a mobile device while watching TV or waiting for a bus. Respondents wouldn’t even need their unique security codes at hand, just their addresses and personal data. The bureau would then match most addresses to valid security codes while the respondent is online and match the rest later, though it has left the door open to restrict use of this option or require follow-up contact with a census taker if concerns of fraud arise.

Perhaps the marketing slogan could be: “Do the Census online to save your own taxpayer dollars!”

It will be interesting to see how this plays out. I’m sure there will be plenty of tests to make sure that (1) the people responding are matched correctly to their addresses (and that fraud can’t be committed); (2) the data collected is as accurate as data gathered by going door to door and mailing out forms; and (3) the technological infrastructure is there to handle all the traffic. Even after going digital, the costs will be high, and I’m guessing more people will ask why all the expense is necessary. Internet response rates to surveys are notoriously low, so it may take a lot of marketing and reminders to get a significant percentage of people to respond online.

But, if the Census Bureau can pull this off, it could represent a significant change for the Census as well as other survey organizations.

(The full 192-page PDF file of the plan is here.)

A need to better measure financial support and wealth passed to Millennials

A look at how race affects the financial support given by parents to Millennials includes this bit about measurement:

Shapiro said the numbers of Millennials receiving support from family are “absolutely underestimated” because many survey questions are not as methodical and specific as those a sociologist might ask. “As much as 90 percent of what you’ll hear isn’t picked up in the survey,” he said.

Shapiro’s more careful research found this:

Shapiro’s work pays special attention to the role of intergenerational family support in wealth building. He coined the term “transformative assets” to refer to any money acquired through family that facilitates social mobility beyond what one’s current income level would allow for. And it’s not that parents and other family members are exceptionally altruistic, either. “It’s how we all operate,” Shapiro said. “Resources tend to flow to people who are more needy.”

Racial disparity in transformative assets became especially striking to Shapiro during interviews with middle-class black Americans. “They almost always talk about financial help they give family members. People come to them,” Shapiro said. But when he asked white interviewees if they were lending financial support to family members, he said, “I almost always get laughter. They’re still getting subsidized.”…

To many Millennials, the small influxes of cash from parents are a lifeline, a financial relief they’re hard pressed to find elsewhere. To researchers, however, it’s both a symptom and an exacerbating factor of wealth inequality. In a 2004 CommonWealth magazine interview, Shapiro explained that gifts like this are “often not a lot of money, but it’s really important money. It’s a kind of money that allows families to obtain something for themselves and for their children that they couldn’t do on their own.”

Two quick thoughts:

  1. Americans tend not to like to talk about passing down wealth, but decades of sociological research (as well as research from other fields) shows that it happens frequently and is quite advantageous for those who receive it. I recommend looking at Oliver and Shapiro’s book Black Wealth/White Wealth.
  2. Polls like those cited here from USA Today can create problems simply because the measurement is not great. Why not ask better poll questions in the first place? I understand there are likely limits to how many questions can be asked (it is costly to ask more and longer questions), but I’d rather have sociologists and other social scientists handle this than the media.

Evangelicals recommend four beliefs that should identify them on surveys

The National Association of Evangelicals and LifeWay Research suggest evangelicals should be identified by agreeing with four beliefs:

  • The Bible is the highest authority for what I believe.
  • It is very important for me personally to encourage non-Christians to trust Jesus Christ as their Savior.
  • Jesus Christ’s death on the cross is the only sacrifice that could remove the penalty of my sin.
  • Only those who trust in Jesus Christ alone as their Savior receive God’s free gift of eternal salvation.

More on the reasons for these four:

The statements closely mirror historian David Bebbington’s classic four-point definition of evangelicalism: conversionism, activism, biblicism, and crucicentrism. But this list emphasizes belief rather than behavior, said Ed Stetzer, executive director of LifeWay Research.

“Affiliation and behavior can be measured in addition to evangelical beliefs, but this is a tool for researchers measuring the beliefs that evangelicals—as determined by the NAE—believe best define the movement,” he said.
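As a rough illustration of how a researcher might operationalize these four statements, here is a minimal sketch. The field names, the Likert coding, and the requirement of strong agreement with all four items are my assumptions for the example, not necessarily the exact rule NAE and LifeWay settled on.

```python
# Hypothetical Likert coding: 1 = strongly disagree ... 5 = strongly agree.
# Field names, coding, and the "strongly agree with all four" rule are
# illustrative assumptions, not necessarily the NAE/LifeWay coding rule.
BELIEF_ITEMS = [
    "bible_highest_authority",
    "important_to_evangelize",
    "christ_death_only_sacrifice",
    "salvation_through_christ_alone",
]

def evangelical_by_belief(respondent: dict, threshold: int = 5) -> bool:
    """True if the respondent rates all four statements at or above the threshold."""
    return all(respondent.get(item, 0) >= threshold for item in BELIEF_ITEMS)

sample = [
    {"bible_highest_authority": 5, "important_to_evangelize": 5,
     "christ_death_only_sacrifice": 5, "salvation_through_christ_alone": 5,
     "self_identifies_evangelical": False},
    {"bible_highest_authority": 4, "important_to_evangelize": 5,
     "christ_death_only_sacrifice": 5, "salvation_through_christ_alone": 5,
     "self_identifies_evangelical": True},
]

for r in sample:
    print(f"by belief: {evangelical_by_belief(r)}, "
          f"self-identifies: {r['self_identifies_evangelical']}")
# Output: the first respondent qualifies by belief but does not claim the label;
# the second claims the label but misses the belief threshold on one item.
```

It also shows how belief-based classification and self-identification can diverge, which is the “identity, belief, and behavior” distinction McConnell draws below.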

A few quick thoughts on this:

  1. On one hand, it can be helpful for religious groups to identify what they see as unique to them; outsiders may not pick up on these things. On the other hand, outsiders might notice beliefs or other characteristics that mark evangelicals which insiders would not think to name.
  2. Measuring religiosity involves a lot more than just beliefs. From later in the article:

    “Identity, belief, and behavior are three different things when it comes to being an evangelical,” McConnell said. “Some people are living out the evangelical school of thought but may not embrace the label. And the opposite is also true.”

    So this is just one piece of the puzzle. And I think sociologists (and other social scientists) have contributed quite a bit here in looking at how these particular theological views relate to other social behavior, from race relations to voting to charitable activity and more.

  3. The suggestion here is that research shows these four statements identify roughly the “correct” number of evangelicals – identifying evangelicals in other ways yields similar percentages to working with these four beliefs. Yet, I wonder how many evangelicals would name these four statements themselves if asked what they believe. How exactly are these statements taught and passed on within evangelicalism?

Can religion not be fully studied with surveys or do we not use survey results well?

In a new book (which I have not read), sociologist Robert Wuthnow critiques the use of survey data to explain American religion:

Bad stats are easy targets, though. Setting these aside, it’s much more difficult to wage a sustained critique of polling. Enter Robert Wuthnow, a Princeton professor whose new book, Inventing American Religion, takes on the entire industry with the kind of telegraphed crankiness only academics can achieve. He argues that even gold-standard contemporary polling relies on flawed methodologies and biased questions. Polls about religion claim to show what Americans believe as a society, but actually, Wuthnow says, they say very little…

Even polling that wasn’t bought by evangelical Christians tended to focus on white, evangelical Protestants, Wuthnow writes. This trend continues today, especially in poll questions that treat the public practice of religion as separate from private belief. As the University of North Carolina professor Molly Worthen wrote in a 2012 column for The New York Times, “The very idea that it is possible to cordon off personal religious beliefs from a secular town square depends on Protestant assumptions about what counts as ‘religion,’ even if we now mask these sectarian foundations with labels like ‘Judeo-Christian.’”…

These standards are largely what Wuthnow’s book is concerned with: specifically, declining rates of responses to almost all polls; the short amount of time pollsters spend administering questionnaires; the racial and denominational biases embedded in the way most religion polls are framed; and the inundation of polls and polling information in public life. To him, there’s a lot more depth to be drawn from qualitative interviews than quantitative studies. “Talking to people at length in their own words, we learn that [religion] is quite personal and quite variable and rooted in the narratives of personal experience,” he said in an interview…

In interviews, people rarely frame their own religious experiences in terms of statistics and how they compare to trends around the country, Wuthnow said. They speak “more about the demarcations in their own personal biographies. It was something they were raised with, or something that affected who they married, or something that’s affecting how they’re raising their children.”

I suspect such critiques could be leveled at much of survey research: the questions can be simplistic, the people asking the questions can have a variety of motives and varying skill in developing useful survey questions, and the data gets bandied about by the media and the public. Can surveys alone adequately address race, cultural values, political views and behaviors, and more? That said, I’m sure there are specific issues with surveys about religion that should be addressed.

I wonder, though, if another important issue here is whether the public and the media know what to do with survey results. This book review suggests people take survey findings as gospel. They don’t know about the nuances of surveys or how to look at multiple survey questions or surveys that get at similar topics. Media reports on this data are often simplistic and lead with a “shocking” piece of information or some important trend (even if the data suggests continuity). While more social science projects on religion could benefit from mixed methods or from incorporating data from the other side (whether quantitative or qualitative), the public knows even less about these options or how to compare data. In other words, surveys always have issues, but people are generally innumerate when it comes to knowing what to do with the findings.

Call for changing sex and gender questions on major surveys

Two sociologists argue that survey questions about sex and gender don’t actually tell us much:

Traditional understandings of sex and gender found in social surveys – such as only allowing people to check one box when asked “male” or “female” – reflect neither academic theories about the difference between sex and gender nor how a growing number of people prefer to identify, Saperstein argues in a study she coauthored with Grand Valley State University sociology professor Laurel Westbrook.

In their analysis of four of the largest and longest-running social surveys in the United States, the sociologists found that the surveys not only used answer options that were binary and static, but also conflated sex and gender. These practices changed very little over the 60 years of surveys they examined.

“Beliefs about the world shape how surveys are designed and data are collected,” they wrote. “Survey research findings, in turn, shape beliefs about the world, and the cycle repeats.”…

“Characteristics from race to political affiliation are no longer counted as binary distinctions, and possible responses often include the category ‘other’ to acknowledge the difficulty of creating a preset list of survey responses,” they wrote…The researchers suggest the following changes to social surveys:

  • Surveys must consistently distinguish between sex and gender.
  • Surveys should rethink binary categories.
  • Surveys need to incorporate self-identified gender and acknowledge it can change over time.

Surveys have to change as social understandings change. Measurement of race and ethnicity has changed quite a bit in recent decades, with the Census Bureau considering further changes for 2020.

It sounds like the next step would be to do a pilot study of alternatives – have a major survey include standard questions as well as new options – and then (1) compare results and (2) see how the new information is related to other information collected by the survey.
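For what that pilot comparison might look like in practice, here is a minimal sketch under my own assumptions: respondents answer both the traditional binary item and a revised self-identification item, and a simple crosstab shows how many people the binary question describes poorly. The category labels and the made-up responses are placeholders, not the authors’ proposed wording.

```python
from collections import Counter

# Entirely hypothetical pilot data: (traditional binary item, revised self-ID item).
pilot = [
    ("male", "man"), ("female", "woman"), ("female", "woman"),
    ("male", "man"), ("female", "non-binary"), ("male", "man"),
    ("female", "woman"), ("male", "woman"), ("female", "woman"),
    ("male", "man"),
]

crosstab = Counter(pilot)
for (binary_answer, revised_answer), n in sorted(crosstab.items()):
    print(f"{binary_answer:>6} -> {revised_answer:<10} {n}")

# Count respondents whose revised answer the binary item cannot represent,
# either because it falls outside the two categories or because it diverges
# from the binary response.
poorly_described = sum(
    n for (b, r), n in crosstab.items()
    if r not in {"man", "woman"} or (b == "male") != (r == "man")
)
print(f"Respondents the binary item describes poorly: {poorly_described} of {len(pilot)}")
```

Step (2) would then link the revised measure to the rest of the survey to see what the new categories add.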

The bias toward one party in 2014 election polls is a common problem

Nate Silver writes that 2014 election polls were generally skewed toward Democrats. However, this isn’t an unusual problem in election years:

This type of error is not unprecedented — instead it’s rather common. As I mentioned, a similar error occurred in 1994, 1998, 2002, 2006 and 2012. It’s been about as likely as not, historically. That the polls had relatively little bias in a number of recent election years — including 2004, 2008 and 2010 — may have lulled some analysts into a false sense of security about the polls.

Interestingly, this year’s polls were not especially inaccurate. Between gubernatorial and Senate races, the average poll missed the final result by an average of about 5 percentage points — well in line with the recent average. The problem is that almost all of the misses were in the same direction. That reduces the benefit of aggregating or averaging different polls together. It’s crucially important for psephologists to recognize that the error in polls is often correlated. It’s correlated both within states (literally every nonpartisan poll called the Maryland governor’s race wrong, for example) and amongst them (misses often do come in the same direction in most or all close races across the country).

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.

It’s equally important for polling analysts to recognize that this bias can just as easily run in either direction. It probably isn’t predictable ahead of time.
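The “correlated error” point is the statistically interesting one, and a minimal simulation (with arbitrary numbers, chosen only to show the mechanism) illustrates why it matters: averaging many polls helps enormously when their misses are independent and hardly at all when they share a common bias.

```python
import random
import statistics

random.seed(0)
TRUE_MARGIN = 2.0        # true margin in a race, in points (arbitrary)
POLLS = 20               # number of polls averaged in a given race
TRIALS = 10_000

def simulate(shared_bias_sd: float, idiosyncratic_sd: float) -> float:
    """Average absolute error of the poll average across simulated races."""
    errors = []
    for _ in range(TRIALS):
        bias = random.gauss(0, shared_bias_sd)          # hits every poll alike
        polls = [TRUE_MARGIN + bias + random.gauss(0, idiosyncratic_sd)
                 for _ in range(POLLS)]
        errors.append(abs(statistics.mean(polls) - TRUE_MARGIN))
    return statistics.mean(errors)

# Same total noise per poll (~5 points), split differently between
# shared bias and independent, poll-specific error.
print("Independent errors only:", round(simulate(0.0, 5.0), 2))
print("Mostly shared bias:     ", round(simulate(4.0, 3.0), 2))
# With independent errors, the average of 20 polls is off by less than a point
# on average; when most of the error is a shared bias, the average stays off
# by roughly 3 points no matter how many polls you add.
```

That is why a polling average can look precise while every poll in it misses in the same direction, as in the Maryland race Silver mentions.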

The key to the issue here seems to be the assumptions that pollsters make before the election: Who is going to turn out? Who is most energized? How do we predict who exactly is a likely voter? What percentage of a voting district identifies as Republican, Democrat, or Independent?

One thing that Silver doesn’t address is how this affects both perceptions of and reliance on such political polls. To have a large number of these polls lean in one direction (or toward Republicans, as in previous election cycles) suggests there is more work to do in perfecting such polls. All of this isn’t an exact science, yet the numbers seem to matter more than ever; both parties jump on the results either to trumpet their coming success or to try to get their base out to reverse the tide. I’ll be curious to see what innovations are introduced heading into 2016, when the polls matter even more for a presidential race.