Can religion not be fully studied with surveys, or do we not use survey results well?

In a new book (which I have not read), sociologist Robert Wuthnow critiques the use of survey data to explain American religion:

Bad stats are easy targets, though. Setting these aside, it’s much more difficult to wage a sustained critique of polling. Enter Robert Wuthnow, a Princeton professor whose new book, Inventing American Religion, takes on the entire industry with the kind of telegraphed crankiness only academics can achieve. He argues that even gold-standard contemporary polling relies on flawed methodologies and biased questions. Polls about religion claim to show what Americans believe as a society, but actually, Wuthnow says, they say very little…

Even polling that wasn’t bought by evangelical Christians tended to focus on white, evangelical Protestants, Wuthnow writes. This trend continues today, especially in poll questions that treat the public practice of religion as separate from private belief. As the University of North Carolina professor Molly Worthen wrote in a 2012 column for The New York Times, “The very idea that it is possible to cordon off personal religious beliefs from a secular town square depends on Protestant assumptions about what counts as ‘religion,’ even if we now mask these sectarian foundations with labels like ‘Judeo-Christian.’”…

These standards are largely what Wuthnow’s book is concerned with: specifically, declining rates of responses to almost all polls; the short amount of time pollsters spend administering questionnaires; the racial and denominational biases embedded in the way most religion polls are framed; and the inundation of polls and polling information in public life. To him, there’s a lot more depth to be drawn from qualitative interviews than quantitative studies. “Talking to people at length in their own words, we learn that [religion] is quite personal and quite variable and rooted in the narratives of personal experience,” he said in an interview…

In interviews, people rarely frame their own religious experiences in terms of statistics and how they compare to trends around the country, Wuthnow said. They speak “more about the demarcations in their own personal biographies. It was something they were raised with, or something that affected who they married, or something that’s affecting how they’re raising their children.”

I suspect such critiques could be leveled at much of survey research: the questions can be simplistic, those asking the questions vary widely in their motives and their skill at crafting useful items, and the data gets bandied about in the media and public. Can surveys alone adequately address race, cultural values, political views and behaviors, and more? That said, I’m sure there are specific issues with surveys regarding religion that should be addressed.

I wonder, though, if another important issue here is whether the public and the media know what to do with survey results. This book review suggests people take survey findings as gospel. They don’t know about the nuances of surveys or how to look at multiple survey questions or surveys that get at similar topics. Media reports on this data are often simplistic and lead with a “shocking” piece of information or some important trend (even if the data suggests continuity). While more social science projects on religion could benefit from mixed methods or from incorporating data from the other side (whether quantitative or qualitative), the public knows even less about these options or how to compare data. In other words, surveys will always have issues, but people are generally innumerate when it comes to interpreting the findings.

Call for changing sex and gender questions on major surveys

Two sociologists argue that survey questions about sex and gender don’t actually tell us much:

Traditional understandings of sex and gender found in social surveys – such as only allowing people to check one box when asked “male” or “female” – reflect neither academic theories about the difference between sex and gender nor how a growing number of people prefer to identify, Saperstein argues in a study she coauthored with Grand Valley State University sociology professor Laurel Westbrook.

In their analysis of four of the largest and longest-running social surveys in the United States, the sociologists found that the surveys not only used answer options that were binary and static, but also conflated sex and gender. These practices changed very little over the 60 years of surveys they examined.

“Beliefs about the world shape how surveys are designed and data are collected,” they wrote. “Survey research findings, in turn, shape beliefs about the world, and the cycle repeats.”…

“Characteristics from race to political affiliation are no longer counted as binary distinctions, and possible responses often include the category ‘other’ to acknowledge the difficulty of creating a preset list of survey responses,” they wrote…The researchers suggest the following changes to social surveys:

  • Surveys must consistently distinguish between sex and gender.
  • Surveys should rethink binary categories.
  • Surveys need to incorporate self-identified gender and acknowledge it can change over time.

Surveys have to change as social understandings change. Measurement of race and ethnicity has changed quite a bit in recent decades, with the Census Bureau considering further changes for 2020.

It sounds like the next step would be to do a pilot study of alternatives – have a major survey include standard questions as well as new options – and then (1) compare results and (2) see how the new information is related to other information collected by the survey.
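Here is a minimal sketch of step (1), assuming entirely hypothetical pilot data: split the sample, field the standard binary item to one half and an expanded item to the other, and test whether the two versions distribute responses differently. The counts, the three-category coding, and the choice of a chi-square test are all my own illustrative assumptions, not the researchers’ design.

```python
# Hypothetical pilot comparison: does an expanded gender item distribute
# responses differently than the standard binary item?
# All counts below are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: question version; columns: "male", "female", another option.
counts = np.array([
    [480, 520,  0],   # standard binary item (no third option offered)
    [465, 505, 30],   # expanded item with a self-identification option
])

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would suggest the answer options themselves shift how
# respondents are distributed, which is exactly what a pilot should detect.
```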

The bias toward one party in 2014 election polls is a common problem

Nate Silver writes that 2014 election polls were generally skewed toward Democrats. However, this isn’t an unusual problem in election years:

This type of error is not unprecedented — instead it’s rather common. As I mentioned, a similar error occurred in 1994, 1998, 2002, 2006 and 2012. It’s been about as likely as not, historically. That the polls had relatively little bias in a number of recent election years — including 2004, 2008 and 2010 — may have lulled some analysts into a false sense of security about the polls.

Interestingly, this year’s polls were not especially inaccurate. Between gubernatorial and Senate races, the average poll missed the final result by an average of about 5 percentage points — well in line with the recent average. The problem is that almost all of the misses were in the same direction. That reduces the benefit of aggregating or averaging different polls together. It’s crucially important for psephologists to recognize that the error in polls is often correlated. It’s correlated both within states (literally every nonpartisan poll called the Maryland governor’s race wrong, for example) and amongst them (misses often do come in the same direction in most or all close races across the country).

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.

It’s equally important for polling analysts to recognize that this bias can just as easily run in either direction. It probably isn’t predictable ahead of time.
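Silver’s point about correlated error can be made concrete with a quick simulation; all of the parameters below are invented for illustration. When every poll in a cycle shares a common bias, averaging more polls washes out the idiosyncratic noise but never touches the shared component.

```python
# Toy simulation of correlated polling error; the 3-point error
# components are invented, not estimates of any real cycle.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_polls = 10_000, 10
idio, shared = 3.0, 3.0   # per-poll noise and shared bias, in points

bias = rng.normal(0, shared, size=(n_sims, 1))   # same for every poll in a cycle
polls = bias + rng.normal(0, idio, size=(n_sims, n_polls))

print(f"mean abs error, single poll:     {np.abs(polls[:, 0]).mean():.2f} pts")
print(f"mean abs error, 10-poll average: {np.abs(polls.mean(axis=1)).mean():.2f} pts")
# Averaging helps, but the error floor stays near the shared bias no matter
# how many polls are added: the misses come in the same direction.
```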

The key issue here seems to be the assumptions that pollsters make before the election: Who is going to turn out? Who is most energized? How do we predict exactly who is a likely voter? What percentage of a voting district identifies as Republican, Democrat, or Independent?
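Those assumptions have real arithmetic consequences. Here is a toy sketch, with every share invented, of how the same raw sample produces different toplines depending on which turnout model is applied:

```python
# Same hypothetical sample, two hypothetical turnout models. Each group
# is (share of sample, assumed turnout probability, Dem share of its vote).
scenarios = {
    "high-turnout model": [(0.35, 0.85, 0.40), (0.35, 0.80, 0.60), (0.30, 0.70, 0.50)],
    "low-turnout model":  [(0.35, 0.90, 0.40), (0.35, 0.60, 0.60), (0.30, 0.50, 0.50)],
}

for name, groups in scenarios.items():
    voters = sum(share * turnout for share, turnout, _ in groups)
    dem = sum(share * turnout * d for share, turnout, d in groups) / voters
    print(f"{name}: Dem two-party share = {dem:.1%}")
# Nothing about the respondents changed; only the likely-voter screen did,
# yet the topline moves by more than a point.
```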

One thing that Silver doesn’t address is how this affects both perceptions of and reliance on such political polls. To have a large number of these polls lean in one direction (or in the Republican direction in previous election cycles) suggests there is more work to do in perfecting such polls. All of this isn’t an exact science, yet the numbers seem to matter more than ever; both parties jump on the results either to trumpet their coming success or to try to get their base out to reverse the tide. I’ll be curious to see what innovations are introduced heading into 2016, when the polls matter even more for a presidential race.

The need for “the endangered art of ethnography”

To highlight a new award for ethnography, a British sociologist explains what ethnography brings to the table:

Day after day, we are bombarded with survey evidence about the lives and the times of our fellow citizens. This, we are told, is how the unemployed regard benefit fraud, how the Scottish middle class react to the idea of independence, what black youths feel about the police’s use of stop and search. But much of this evidence is collected over a short period of time by professional pollsters who have little sense of the context in which they ask their tick-box questions.

Ethnography is a necessary supplement and often an important antidote to this form of research. It takes time: several of the researchers on our shortlist, for example, had spent two to three years studying, and often living within, a specific culture or subculture. It also allows questions to arise during the course of the research rather than being pre-programmed. So when Howard Parker embarked on his classic ethnographic study of delinquent youth in Liverpool (View from the Boys: A Sociology of Downtown Adolescents, 1974), he was faced by the official assumption that the young people in his sample were persistent offenders, hardened and even dangerous delinquents. Only after two years of hanging around with the boys was Parker able to conclude that this was far from the case. The boys’ offending was “mundane, trivial, petty, occasional, and very little of a threat to anyone except themselves”.

In a very similar manner, Heidi Hoefinger’s Sex, Love and Money in Cambodia: Professional Girlfriends and Transactional Relationships (2013), one of the studies shortlisted for the award, began from the common belief that encounters in the so-called “sex bars” of Cambodia would be entirely cash-based and essentially sleazy. Only after spending long periods of time talking to the women who worked in the bars and their male clients was she able to show that the relationships fashioned in the bars also had an important emotional component. Another stereotype had been exploded…

But the award is not only an affirmation of the significance of ethnography. What also prompted the five-year agreement between the BBC and the BSA was a wish to recognise the personal qualities that are needed in someone who is prepared to leave their family and friends to spend extended periods of time in a culture that will be uncomfortable, alien and, at times, downright dangerous. We all happily dip into different cultures: watch the skateboarders going through their paces under the Royal Festival Hall, check out the street style of the Rastas at the Notting Hill Carnival, wander through Chinatown during the New Year celebrations. But this is a far cry from suspending our own cherished values and embracing those of others for months and even years.

I wonder if ethnography gets less attention these days because we live in an era where:

1. We want research results more quickly. In comparison, surveys can be quickly administered and analyzed.

2. The big data of today allows for broad understandings and patterns. Ethnographies tend to be more particular.

3. We like “scientific” data that appears more readily available in surveys and experiments. Ethnographies appear more dependent on the researcher and subjective as opposed to “scientific.”

At the same time, there are other social forces that would promote ethnographies, including a desire for more humane and holistic understandings of the world (particularly compared to the sterility of multiple-choice questions and quick numbers) and a recognition that complex social phenomena take more time to study.

Putin claims actions in Crimea based on sociological polls

Did sociological surveys provide cover for Vladimir Putin to incorporate Crimea? Here is one source:

Russian President Vladimir Putin said the final decision on the inclusion of Crimea and Sevastopol into Russia was made in regards to a sociological poll conducted in Crimea.

And another source:

“Russia did not prepare to incorporate Crimea, the decision on the republic’s accession to Russia was made only after data were received about the mood of local residents”, President Vladimir Putin said at a meeting with activists of the All-Russian People’s Front on Thursday…

The Republic of Crimea and Sevastopol, a city with a special status on the Crimean Peninsula, where most residents are Russians, signed reunification deals with Russia on March 18 after a referendum two days earlier in which an overwhelming majority of Crimeans voted to secede from Ukraine and join the Russian Federation.

While the international community is not likely to accept this reasoning, it does highlight an interesting issue: what happens when surveys show that people in one country would prefer to be in another? What then happens to national boundaries if there is strong public opinion to leave the current country? Perhaps the big difference here is that the people of Crimea didn’t revolt against Ukraine and seek to join Russia; Putin stepped in and pushed for this. But, there are likely lots of people groups in the world who might prefer to have their own country or to leave their current nation.

Another question concerns how this survey was conducted. I vaguely remember hearing similar figures suggesting that many in eastern Ukraine consider themselves Russian rather than Ukrainian, while the figures for the western side of the country were nearly the opposite. How good are these sociological results?

The difficulty in wording survey questions about American education

Emily Richmond points out some of the difficulties in creating and interpreting surveys regarding public opinion on American education:

As for the PDK/Gallup poll, no one recognizes the importance of a question’s wording better than Bill Bushaw, executive director of PDK. He provided me with an interesting example from the September 2009 issue of Phi Delta Kappan magazine, explaining how the organization tested a question about teacher tenure:

“Americans’ opinions about teacher tenure have much to do with how the question is asked. In the 2009 poll, we asked half of respondents if they approved or disapproved of teacher tenure, equating it to receiving a “lifetime contract.” That group of Americans overwhelmingly disapproved of teacher tenure 73% to 26%. The other half of the sample received a similar question that equated tenure to providing a formal legal review before a teacher could be terminated. In this case, the response was reversed, 66% approving of teacher tenure, 34% disapproving.”

So what’s the message here? It’s one I’ve argued before: That polls, taken in context, can provide valuable information. At the same time, journalists have to be careful when comparing prior years’ results to make sure that methodological changes haven’t influenced the findings; you can see how that played out in last year’s MetLife teacher poll. And it’s a good idea to use caution when comparing findings among different polls, even when the questions, at least on the surface, seem similar.

Surveys don’t write themselves, nor is the interpretation of the results necessarily straightforward. Change the wording or the order of the questions and the results can change. I like the link to the list of “20 Questions A Journalist Should Ask About Poll Results” put out by the National Council on Public Polls. Our public life would be improved if journalists, pundits, and the average citizen paid attention to these questions.
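To see just how decisive the tenure wording experiment above is, here is a quick check using a standard two-proportion z-test. The excerpt does not report the half-sample sizes, so n = 500 per form is purely an assumption for illustration.

```python
# Two-proportion z-test on the PDK/Gallup split-ballot approval figures.
# Half-sample sizes are not reported in the excerpt; n = 500 is assumed.
from math import sqrt
from scipy.stats import norm

n1 = n2 = 500                 # assumed half-sample sizes
p1, p2 = 0.26, 0.66           # approval of tenure under each wording
x1, x2 = round(n1 * p1), round(n2 * p2)

p_pool = (x1 + x2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"z = {z:.1f}, p = {2 * norm.sf(abs(z)):.2g}")
# A 40-point gap between wordings is far beyond anything sampling error
# could plausibly produce; the wording itself drives the result.
```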

Sociologist who studies fear and collects stats at a haunted house

Here is one way to put sociological training into practice: working for a haunted house.

Ms. Kerr’s equivalent of a coffee break was the ScareHouse in Etna, which bills itself as “Pittsburgh’s ultimate haunted house” and has earned accolades from national publications, trade magazines, horror movie directors and other outlets to buttress the claim…

A part-time professor at Pitt and Robert Morris University, Ms. Kerr’s appreciation for the macabre also led to a job at ScareHouse, where she’s worked since 2008 as an administrator, statistician and resident sociologist…

Though the ScareHouse, which opened in 1999, had long taken customer surveys, Ms. Kerr added a new dimension, he says, polling not just on what aspects of the haunted house worked but what customers fear most deeply…

Ms. Kerr’s book, based on her haunted house experiences, deals with “the real benefits of experiencing thrilling or scary materials.” Those can range from the endorphin and adrenaline rush and confidence boost of surviving a dicey encounter to the stronger bonds formed in social groups that experience a scary situation together. Of course, there’s an important caveat.

“To really enjoy thrilling situations, you have to know that you’re safe,” she added.

“Thanks for experiencing our haunted house – now please take our exit survey.” Still, it sounds like an interesting place to collect data. It would be interesting to hear how generalizable the findings about fear at a haunted house might be to other situations.

I often tell my statistics and research methods students that all sorts of organizations, from NGOs to corporations to religious groups to governments, are looking to collect and analyze data. Here is another example I can use that might prove more interesting than some other options…

Asking residents of Burbank, CA about their thoughts on mansionization

A recent survey in Burbank, California asked residents about possible mansionization in the city:

A new survey of residents in Burbank, California, is trying to quantify some of this local frustration. Using images of seemingly out-of-place new houses within the city’s older neighborhoods, the online poll tries to get at both the “gut reactions” that city residents have to these “mansionized” houses and their overall willingness to create new laws to control the growth of house size.

Burbank last limited the size of new home construction in 2005, when it reduced the ratio of house square footage to total lot size, from 0.6 to 0.4. But even these new regulations allow for homes far larger than the average size across the city, according to Carol Barrett, the city’s assistant director for planning and transportation. She says the poll is designed to gauge the community’s interest in creating further size restrictions, as well as new guidelines for architectural style and building materials.

“It’s not just an issue that the houses are bigger,” Barrett says. Another important question, she explains, would be: “Is it just a giant box with some precast concrete stuck on for a little decorative design, or does it have a specific architectural character?”

All of this could be seen as largely a matter of taste. But the awkward images in the survey, of giant, Spanish-style mini-mansions dwarfing the decades-old bungalows and ranch houses next door are awfully convincing. Below are some of the most telling images from the survey, which Barrett culled from suggestions from local citizen groups like Preserve Burbank and coworkers in city hall.

I like the idea of a survey about mansionization. Here are a few thoughts on such a survey:

1. Having a decent survey response rate might be the biggest issue. Getting a representative sample from a city of just over 100,000 people is not necessarily easy. On one hand, survey fatigue is real; on the other, suburbanites tend to take threats to their neighborhoods and property values very seriously. (Some rough sample-size math is sketched after this list.)

2. Linking people’s “gut reactions” to particular policy changes is an important step. I suspect, based on the pictures shown, people would respond fairly negatively to mansionization. But, there are a number of ways this could be addressed. It sounds like the survey asks about several policy options to limit houses; I wonder if there are a few residents who would argue for property rights (and the ability to make lots of money when selling their property).

3. The pictures included in the survey are very helpful: people need to see exactly what such houses might look like rather than imagine what might be the case. However, the particular pictures might influence responses, as mansionization can take multiple forms.
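On the sample-size question in point 1, the standard margin-of-error formula with a finite population correction gives a rough sense of how many completed responses Burbank would need. The 100,000 population figure and the 95% confidence level are assumptions for illustration.

```python
# Rough margin-of-error math for a survey of a city of ~100,000.
from math import sqrt

N = 100_000          # approximate city population (assumed)
z = 1.96             # 95% confidence
p = 0.5              # worst-case response proportion

for n in (100, 400, 1_000, 2_500):
    fpc = sqrt((N - n) / (N - 1))            # finite population correction
    moe = z * sqrt(p * (1 - p) / n) * fpc
    print(f"n = {n:>5}: margin of error = ±{moe:.1%}")
# A few hundred completes already yields a tolerable margin of error;
# the harder problem is whether those completes are representative.
```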

I would be really curious to see how residents respond.

Chicago Tribune editorial against “survey mania”

The Chicago Tribune takes a strong stance against “survey mania.”

Question 1: Do you find that being pelted by survey requests from your bank, cable company, doctor, insurance agent, landlord, airline, phone company — and so on — is annoying and intrusive?

Question 2: Do you ignore all online and phone requests for survey responses because, well, your brief encounter with a bank teller doesn’t really warrant a 15-minute exegesis on the endearing time you spent together?

Question 3: Don’t you wish that virtually every company in America hadn’t succumbed to survey mania at the same time, so that you’d feel, well, a little more special when each request for your precious thoughts pings into your email?

Question 4: Do you wish that companies would spend a little less on surveys and a little more on customer service staff, so that callers would not be held captive by soul-sucking, brain-scorching, automated answering systems in which a chirpy-voiced robot only grudgingly ushers your call — “which is very important to us, which is still very important to us” — to a human being?

Question 5: Do you agree that blogger Greg Reinacker laid out some reasonable guidelines for companies that send surveys to customers: “Tell me how long it’s going to take. Even better, tell me exactly how many questions there will be. … Don’t ask me the same question three different ways just to see if I’m consistent. … If you really, really want me to take the survey, offer me something. I’m a sucker for free stuff. And a drawing probably won’t do it.”

Question 6: Do you think companies should be aware that a pleasant experience — a flight, a hotel stay, a cruise — can be retroactively tainted by an exhausting survey and all those nagging email reminders that you haven’t yet filled it out?

Question 7: Do you find it irritating when a salesperson tries to game the system by reminding you over and over that only an excellent rating for his or her service will suffice … before said service has been rendered to you?

Question 8: Do you agree that there are ample opportunities to put in a good word for, say, an excellent waiter or sales clerk or customer service agent (just ask to speak to his or her supervisor!), which is much more sincere than you unhappily trudging through a long multiple-choice online questionnaire?

Question 9: Are you aware that marketing professors tell us that these surveys can be vitally important for companies to improve their service and that employee bonuses and other incentives hinge on whether you rate their service highly or not? We’re dubious, too, but just in case it’s true … would you please tell our boss how great you think this editorial is? Use all the space you need.

We get it – some people think they are being asked to do too many surveys. At the same time, this hints at some larger issues with surveys:

1. Companies and organizations would love to have more data. This reminds me of part of the genius of Facebook – people voluntarily give up their data because they get something out of it (the chance to maintain relationships with people they know).

2. Some of the problems listed above could be fixed easily. Take #7: salespeople can be too pushy in trying to secure high ratings.

3. Some things in #5 could be done easily, while others listed there are harder. It should be common practice to tell survey takers how long the survey might take. But, asking about a topic multiple times is often important to see if people are consistent; this is called testing the reliability (internal consistency) of the responses. A standard check, Cronbach’s alpha, is sketched after this list.

4. I think more consumers would like to receive more for participating in surveys. This could come in the form of incentives, everything from free or discounted products to special opportunities. At the least, they don’t want to feel used or to feel like just another data point.

5. Survey fatigue is a growing problem. This makes collecting data more difficult for everyone, including academic researchers.
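Here is a minimal sketch of the consistency check mentioned in #3, using Cronbach’s alpha over several wordings of the same question. The 5-point ratings below are invented for illustration.

```python
# Cronbach's alpha: internal consistency across items meant to tap the
# same attitude. Rows = respondents, columns = three wordings of the
# question; the ratings are invented for illustration.
import numpy as np

items = np.array([
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
    [5, 5, 4],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
# Values above roughly 0.7 are conventionally read as acceptable consistency.
```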

Altogether, I don’t think the quest for survey data is going to end soon because customer or consumer info is so valuable for businesses and organizations. But, approaching consumers for data can be done in better or worse ways. To get good data – not just some data – organizations need to offer consumers something worthwhile in return.

Congressional town halls not necessarily indicative of public opinion

I heard two news reports yesterday from two respected media sources about Congressional members holding town halls in their districts about possible military action in Syria. Both reports featured residents speaking up against military action. Both hinted that constituents weren’t happy with the idea of military action. However, how much do town halls like these really tell us?

I would suggest not much. While they give constituents an opportunity to directly address a member of Congress, these events are great for the media. There are plenty of opportunities for heated speeches, soundbites, and disagreement amongst the crowd. One report featured a soundbite of a constituent suggesting that if he were in power, he would charge both the president and his congressman with treason. The other report featured some people speaking for military action in Syria – some Syrian Americans asking for the United States to stand up to a dictator – and facing boos from others in the crowd.

Instead of focusing on town halls, which provide some political theater, we should look to national surveys to gauge American public opinion. Focus on the big picture, not on town halls, which provide small samples.