Disconnect between how much Americans say they give to church and charity versus what they actually give

Research drawing on recent data on charitable and religious giving suggests an interesting disconnect: some people say they give more than they actually do.

A quarter of respondents in a new national study said they tithed 10 percent of their income to charity. But when their donations were checked against income figures, only 3 percent of the group gave more than 5 percent to charity…

But other figures from the Science of Generosity Survey and the 2010 General Social Survey indicate how little large numbers of people actually give to charity.

The generosity survey found just 57 percent of respondents gave more than $25 in the past year to charity; the General Social Survey found 77 percent donated more than $25, Price and Smith reported in their presentation on “Religion and Monetary Donations: We All Give Less Than We Think.”

In one indication of the gap between perception and reality, 10 percent of the respondents to the generosity survey reported tithing 10 percent of their income to charity although their records showed they gave $200 or less.

Two thoughts, more about methodological issues than the subject at hand:

1. What people say on surveys or in interviews doesn’t always match what they actually do. There are a variety of reasons for this, not all malicious or intentional. But, this leads me to thought #2…

2. I like the way some of these studies make use of multiple sources of data to find the disconnect between what people say and what they do. When looking at an important area of social life, like altruism, having multiple sources of data goes a long way. Measuring attitudes is often important in and of itself, but we also need data on practices and behaviors. A rough sketch of what this kind of record matching might look like follows this list.
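To make thought #2 concrete, here is a minimal sketch of how self-reported tithing could be lined up against giving records; the column names, incomes, and donation figures are made up for illustration and are not from the studies above.

```python
# Hypothetical sketch: flag respondents whose self-reported tithing does not
# match their recorded donations. All values below are illustrative.
import pandas as pd

survey = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "claims_tithe_10pct": [True, True, False],    # self-report
    "household_income": [40_000, 85_000, 60_000],
})
records = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "recorded_donations": [180, 9_000, 500],      # from giving records
})

merged = survey.merge(records, on="respondent_id")
# A 10% tithe implies donations of roughly 10% of household income.
merged["implied_tithe"] = 0.10 * merged["household_income"]
merged["overstated"] = merged["claims_tithe_10pct"] & (
    merged["recorded_donations"] < merged["implied_tithe"]
)
print(merged[["respondent_id", "claims_tithe_10pct", "recorded_donations", "overstated"]])
```

Matching the same people across a self-report source and a behavioral source is what makes the disconnect visible at all.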


Analyst looks at “racial breakdown of [presidential election] polls”

An analyst for RealClearPolitics takes a look at possible issues with the racial breakdown in the samples of presidential election polls. A few of the issues:

First, as Chait repeatedly concedes, we don’t know what the ultimate electorate will look like this November. That really should be the end of the argument — if we don’t know what the racial breakdown is going to be, it’s hard to criticize the pollsters for under-sampling minorities. After all, almost all pollsters weight their base sample of adults to CPS (current population survey) estimates to ensure the base sample reflects the actual population; after that, the data simply are what they are.

It’s true that the minority share of the electorate increased every year from 1996 through 2008. But there’s a reason that 1996 is always used as a start date: After declining every election from 1980 through 1988, the white share of the vote suddenly ticked up two points in 1992. In other words, these things aren’t one-way ratchets (and while there is no H. Ross Perot this year, the underlying white working-class angst that propelled his candidacy is very much present, as writers on the left repeatedly have observed)…

“The U.S. Census Bureau allows for multiple responses when it asks respondents what race they are, and Gallup attempts to replicate the Census in that respect. While most pollsters ask two separate questions about race and Hispanic ancestry, Gallup goes a step further, asking five separate questions about race. They ask respondents to answer whether or not they consider themselves White; Black or African American; Asian; Native American or Alaska Native; and Native Hawaiian or Pacific Islander.”

In other words, how you ask the question could impact how people self-identify with regard to race and ethnicity, which could in turn affect how your weighted data look. This is a polling issue that will likely become more significant as the nation grows more diverse, and more multi-racial.
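To make the weighting step concrete, here is a minimal sketch of post-stratification weighting of a sample to target racial/ethnic shares; the categories and target shares are illustrative placeholders, not actual CPS estimates.

```python
# Minimal sketch of post-stratification weighting: adjust a sample so its
# racial/ethnic composition matches target population shares. The target
# shares below are placeholders, not actual CPS figures.
from collections import Counter

sample = ["White", "White", "White", "Black", "Hispanic", "White", "Asian", "Hispanic"]
target_shares = {"White": 0.64, "Black": 0.12, "Hispanic": 0.16, "Asian": 0.08}

counts = Counter(sample)
n = len(sample)
# Each respondent in group g gets weight = (target share) / (sample share).
weights = {g: target_shares[g] / (counts[g] / n) for g in counts}
print(weights)
```

If a respondent self-identifies differently depending on how the race question is asked, they land in a different cell and receive a different weight, which is exactly how question wording can move the weighted results.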

Trying to figure out who exactly is going to vote is a tricky proposition and it is little surprise that different polling organizations have slightly different figures.

I hope people don’t see stories like this and conclude that polls can’t be trusted. Polling is not an exact science; all polls contain margins of error. However, polling is so widely used because it is incredibly difficult to capture information about whole populations. Even one of the most comprehensive surveys we have, the US Census, was only able to get about 70-75% cooperation, and that was with a huge amount of money and a large workforce. Websites like RealClearPolitics are helpful here because you can see averages of the major polls, which can help smooth out some of their differences.

A final note: this is another reminder that measuring race and ethnicity is difficult. As noted above, the Census Bureau and some of these polling organizations use different measures and therefore get different results. Of course, because race and ethnicity are fluid, the measures have to change over time.

Atheists rally but still have a long way to go in the field of public opinion

Atheists may have held the “Reason Rally” last Saturday in Washington, D.C., but data suggest they have a long way to go in countering negative public perceptions:

Atheists remain an enigma for many people. In fact, a study released last year found that religious people distrust atheists almost as much as rapists (National Post):

Researchers at the University of British Columbia and the University of Oregon conducted a series of studies that found a deep level of distrust toward those who don’t believe in God, deeming them to be among the least trusted people in the world — despite their growing ranks to an estimated half billion globally. “There’s this persistent belief that people behave better if they feel like God is watching them,” said Will Gervais, lead study author and doctoral candidate in the social psychology department at UBC. “So if you’re playing by those rules, you’re going to see other people’s religious beliefs as signals of how trustworthy they might be.” … “It’s pretty shocking that we get the same magnitude of distrust towards atheists simply because they don’t believe [in God],” said the researcher, who is himself an atheist. “With rapists, they’re distrusted because they rape people. Atheists are viewed as sort of a moral wild card.”…

So, one rally didn’t change perceptions too much … that’s not hard to believe. Gregory Paul, an independent researcher in sociology and evolution, and Phil Zuckerman, a professor of sociology, are puzzled by the dislike of atheists, but they see some positive signs for the nonbelievers’ future:

More than 2,000 years ago, whoever wrote Psalm 14 claimed that atheists were foolish and corrupt, incapable of doing any good. These put-downs have had sticking power. Negative stereotypes of atheists are alive and well. Yet like all stereotypes, they aren’t true — and perhaps they tell us more about those who harbor them than those who are maligned by them. … As with other national minority groups, atheism is enjoying rapid growth. Despite the bigotry, the number of American nontheists has tripled as a proportion of the general population since the 1960s. Younger generations’ tolerance for the endless disputes of religion is waning fast. Surveys designed to overcome the understandable reluctance to admit atheism have found that as many as 60 million Americans — a fifth of the population — are not believers. Our nonreligious compatriots should be accorded the same respect as other minorities.

It looks like there will be plenty of material to study in this area in the coming years.

I’d be interested to hear Paul and Zuckerman’s argument about surveys being tilted toward religiosity. Do questions about religion suggest that the socially desirable answer is to be religious, or does this pressure come mainly from outside the survey? Which question wordings produce higher estimates of nonbelief?

Do politicians understand how polls work?

A recent CBS News/New York Times poll showed 80% of Americans do not think their family is financially better off than it was four years ago:

Just 20 percent of Americans feel their family’s financial situation is better today than it was four years ago. Another 37 percent say it is worse, and 43 percent say it is about the same.

When asked about these specific results, Harry Reid had this to say about polls in general:

“I’m not much of a pollster guy. As everyone knows, there isn’t a poll in America that had me having any chance of being re-elected, but I got re-elected,” he told TheDC.

“I think this poll is so meaningless. It is trying to give the American people an idea of what 300 million people feel by testing several hundred people. I think the poll is flawed in so many different ways including a way that questions were asked. I don’t believe in polls generally and specifically not in this one.”

The cynical take on this is that Reid and politicians in general like polls when they are supportive of their positions and don’t like them when they do not favor them. If this is true, then you might expect politicians to cite polls when they are good but to ignore them or even try to discredit them if they are bad.

But, I would like to ask a more fundamental question: are politicians any better than average Americans at understanding polls? Reid seems to suggest that this poll has two major problems: it doesn’t ask questions of enough people to really understand all Americans (a sampling issue) and the questions are poorly worded, which leads to biased answers (a question-wording issue). Is Reid right? From the information at the bottom of the CBS story about the poll, it seems pretty standard:

This poll was conducted by telephone from March 7-11, 2012 among 1009 adults nationwide.

878 interviews were conducted with registered voters, including 301 with voters who said they plan to vote in a Republican primary. Phone numbers were dialed from samples of both standard land-line and cell phones. The error due to sampling for results based on the entire sample could be plus or minus three percentage points. The margin of error for the sample of registered voters could be plus or minus three points and six points for the sample of Republican primary voters. The error for subgroups may be higher. This poll release conforms to the Standards of Disclosure of the National Council on Public Polls.

Yes, the number of respondents seems low to be able to talk about all Americans, but this is how all major polls work: you select a representative sample based on standard demographic factors (gender, race, age, etc.) and then you estimate how close the survey results would be to the actual results if all American adults were asked these questions. This is why all polls have a margin of error: if you ask fewer people, you are less confident in the generalizability of the results (which is why there is a larger six-point margin for the smaller subgroup of Republican primary voters), and if you ask more people, you can be more confident (though the payoff of asking more people usually diminishes beyond roughly 1,200-1,500 respondents, so at some point it is not worth asking more).
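As a back-of-the-envelope check on the CBS methodology note, the standard 95% margin-of-error formula reproduces the reported figures:

```python
# Approximate 95% margin of error for a proportion: 1.96 * sqrt(p*(1-p)/n),
# evaluated at p = 0.5 (the worst case), the convention pollsters typically use.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1009) * 100, 1))  # full sample: about 3.1 points
print(round(margin_of_error(878) * 100, 1))   # registered voters: about 3.3 points
print(round(margin_of_error(301) * 100, 1))   # GOP primary voters: about 5.6 points
```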

I don’t think Reid sounds very good in this soundbite: he attacks the scientific basis of polls with common objections. While polls may not “feel right” and may contradict anecdotal or personal evidence, they can be done well, and with a good sample of around 1,000 people you can be confident that the results are generalizable to the American people. If Reid does understand how polls work, he could raise other issues. For example, he could insist that this is a one-time poll and that you would want to measure this again and again to see how it changes (perhaps this is an unusual trough?), or that you would want other polling organizations to ask the same question so you could triangulate the results across surveys (like what RealClearPolitics does by taking averages of polls). Or he could suggest that this question doesn’t matter much because four years ago is a rather arbitrary reference point and, philosophically, does life always have to get better over time?

US mosques increased from 1,209 to 2,106 between 2000 and 2011

A new study shows that the number of mosques in the United States increased 74% between 2000 and 2011:

Researchers conducting the national count found a total of 2,106 Islamic centers, compared to 1,209 in 2000 and 962 in 1994. About one-quarter of the centers were built between 2000-2011, as the community faced intense scrutiny by government officials and a suspicious public. In 2010, protest against an Islamic center near ground zero erupted into a national debate over Islam, extremism and religious freedom. Anti-mosque demonstrations spread to Tennessee, California and other states.
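As a quick arithmetic check, the 74% increase matches the raw counts:

```python
# Percentage increase in the national mosque count between 2000 and 2011.
count_2000, count_2011 = 1209, 2106
increase = (count_2011 - count_2000) / count_2000
print(f"{increase:.1%}")  # about 74.2%
```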

While some are pleased as this suggests Muslims feel comfortable enough in the United States to establish religious congregations, I think there are two other interesting things about these findings:

1. The methodology for counting mosques:

The report released Wednesday, “The American Mosque 2011,” is a tally based on mailing lists, websites and interviews with community leaders, and a survey and interviews with 524 mosque leaders. The research is of special interest given the limited scholarship so far on Muslim houses of worship, which include a wide range of religious traditions, nationalities and languages.

Researchers defined a mosque as a Muslim organization that holds Friday congregational prayers called jumah, conducts other Islamic activities and has operational control of its building. Buildings such as hospitals and schools that have space for Friday prayer were not included. Chapters of the Muslim Student Association at colleges and universities were included only if they had space off-campus or had oversight of the building where prayer was held…

The 2011 mosque study is part of the Faith Communities Today partnership, which researches the more than 300,000 houses of worship in the United States. Among the report’s sponsors are the Council on American-Islamic Relations, the Hartford Institute for Religion Research, the Islamic Society of North America and Islamic Circle of North America.

I wonder if other researchers might disagree with this methodology, particularly with how a mosque was defined. This is a reminder that it can be difficult to track or count religious groups because there are no master lists, not everyone is in the phone book, and not everyone has a web site. Additionally, religious congregations can quickly form and disband.

(I assume the researchers address this in their report, but could the increase in mosques be related to a more comprehensive search this time around?)

2. It is interesting to note where the mosques are located:

The overwhelming majority of mosques are in cities, but the number located in suburbs rose from 16 percent in 2000 to 28 percent in 2011. The Northeast once had the largest number of mosques, but Islamic centers are now concentrated in the South and West, the study found. New York still has the greatest number of Islamic centers — 257 — followed by 246 in California and 166 in Texas. Florida is fourth with 118. The shift follows the general pattern of population movement to the South and West.

I am most interested in the figures about the suburban growth as I have tracked several cases of proposals for mosques in the Chicago suburbs. This article doesn’t say, but I wonder if the greater number of suburban mosques is because existing mosques have moved from city to suburb (which would mirror the movement of Protestant churches out of the city in the post-World War II suburban boom) or because these are new suburban mosques built in response to a growing suburban Muslim population.


More evidence that Americans don’t like answering survey questions about income

While looking at data about the wage gap between men and women, two researchers discovered that respondents to the American Community Survey may not have been completely accurate in stating their incomes or the incomes of others in their households:

The authors, whose study will be published in the journal Social Science Research, identified these biases by examining data from the American Community Survey, which is also conducted by the Census Bureau. Respondents are interviewed multiple times, one year apart. When the researchers looked at how responses to these questions changed across the subsequent interviews (controlling for other factors), they found that people answered more generously for themselves than other people had for them.

About half of the data on this income question in the American Community Survey have long come from “proxy reporters” — people answering on behalf of others in their household. In the early ’80s, a majority of these proxy reporters were women. “They were simply around to answer the phone call,” Reynolds said, noting that women had not entered the work force full time back then to the extent that they have today.

On the whole, these female survey respondents likely under-reported the income of their husbands, and over-reported their own — creating the skewed impression that the gender gap in America was much smaller in the early ’80s than it really was…

Once Reynolds and Wenger had calculated the extent of these biases, they went back to the data we’ve long used to measure the wage gap and readjusted it. Over time, as more women have entered the labor force, men have also become more likely to answer these surveys for themselves. And that impacts the data, too. The existing analysis — based on what the authors call the “naïve approach” to this data — suggested that the wage gap in America between 1979 and 2009 closed by about 16 percent (or $1.19 per hour). Wenger and Reynolds put that number instead at 22 percent (or $1.76). And so we have been 50 percent off in this basic calculation.
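A quick check on the arithmetic in that last quoted paragraph, using only the two dollar figures it reports:

```python
# Naive vs. corrected estimates of how much the hourly wage gap closed, 1979-2009.
naive_closing = 1.19       # dollars per hour, the "naive approach"
corrected_closing = 1.76   # dollars per hour, after adjusting for proxy-report bias
relative_error = (corrected_closing - naive_closing) / naive_closing
print(f"{relative_error:.0%}")  # about 48%, i.e., roughly "50 percent off"
```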

Interesting finding. As I tell my students, how you collect the data matters a lot for your conclusions. How willing will other researchers be to change their data and conclusions based on this “quirk” in the data? Had no other researcher ever thought about this before, or have others considered the issue and moved forward anyway?

Researchers need to be particularly careful in dealing with questions about income. The researcher has to find some sort of compromise: collect the most fine-grained data possible while making sure that people are still willing and able to answer the question. If you ask about specific incomes, you are likely to get a lot of missing data as people are not comfortable answering. If you ask too broadly (say, by having really large categories), you may not be able to do much with the data.

Does this suggest that other surveys that ask a single person to report on their whole household may also be skewed?

Discovering Ella Fitzgerald while conducting a sociological survey

Surveys are conducted in order to find out information about a population. But, you can discover other things while doing a survey, including Ella Fitzgerald:

The first time I heard Ella singing I was in college, going from door to door conducting some survey for a sociology class. In one of the dorm rooms, music was playing: a woman’s voice that was so smooth, so smart, that I interrupted the sociological question-and-answer session to ask, “What is this?’’

It was Ella. The song was a dopey one: a coy ditty about gradually giving in to the pleading of an irresistibly seductive man – a lover, you think; but no, he turns out to be a guy selling magazine subscriptions. Ella made this unpromising material into something memorable: witty and delicious.

Sounds fortuitous. In addition to getting information about respondents, someone conducting a survey might learn things about themselves.

This also reminds me of the story Sudhir Venkatesh tells in Freakonomics about getting into researching gangs: while conducting a survey in a Chicago housing project on the advice of his graduate school adviser, Venkatesh was kidnapped and held for several days. While usually not what you would want as a researcher, this helped Venkatesh earn the trust of the gang, and he has gone on to write several books on the subject.

This makes me want to track down an article or a book where sociologists talk about the interesting and strange things that happened to them while conducting surveys…

More on limits of Census measures of race and ethnicity

Here is some more information about the limitations of measuring race with the current questions in the United States Census:

When the 2010 census asked people to classify themselves by race, more than 21.7 million — at least 1 in 14 — went beyond the standard labels and wrote in such terms as “Arab,” “Haitian,” “Mexican” and “multiracial.”

The unpublished data, the broadest tally to date of such write-in responses, are a sign of a diversifying America that’s wrestling with changing notions of race…

“It’s a continual problem to measure such a personal concept using a check box,” said Carolyn Liebler, a sociology professor at the University of Minnesota who specializes in demography, identity and race. “The world is changing, and more people today feel free to identify themselves however they want — whether it’s black-white, biracial, Scottish-Nigerian or American. It can create challenges whenever a set of people feel the boxes don’t fit them.”

In an interview, Census Bureau officials said they have been looking at ways to improve responses to the race question based on focus group discussions during the 2010 census. The research, some of which is scheduled to be released later this year, examines whether to include new write-in lines for whites and blacks who wish to specify ancestry or nationality; whether to drop use of the word “Negro” from the census form as antiquated; and whether to possibly treat Hispanics as a mutually exclusive group to the four main race categories.

This highlights some of the issues of social science research:

1. Social science categories change as people’s own understanding of the terms changes. Keeping up with these understandings can be difficult and there is always a lag. For example, a sizable group of respondents in the 2010 Census didn’t like the categories but the problem can’t be fixed until a future Census.

2. Adding write-in options or more questions means that the Census becomes longer, requiring more time to take and analyze. With all of the Census forms that are returned, this is no small matter.

3. Comparing results of repeated surveys like the Census can become quite difficult when the definitions change.

4. The Census is going to change things based on focus groups? I assume they will also test permutations of the questions and possible categories in smaller-scale surveys before settling on what they will do.

Two different methodologies to measure the US Jewish population

Measuring small populations within the United States can be difficult. Here is an example: even though two separate studies agree the US Jewish population is roughly 6.5 million, they used different methodologies to arrive at this number:

Many federations around the country commission scientific studies to better understand their local Jewish populations. These reports typically rely on random digit dialing, in which researchers come up with a percentage of Jews in the community based on the results of telephone surveys. In other instances, researchers will estimate the number of Jews based on the number of people with Jewish last names.

These reports provided the backbone for Sheskin and Dashefsky’s own annual estimate. But since not every federation studies its own population, the two conducted original research in some localities. In this, they were often aided by knowledgeable community members or by local estimates they found online. Lastly, they used data collected by the U.S. Census of three solidly Hasidic Jewish towns in New York state: Kiryas Joel, Kaser Village and New Square. (Aside from these exceptions, the U.S. Census does not count Jews.)

Adding these figures together, Sheskin and Dashefsky came up with a national estimate — albeit a patchwork one — that far exceeded previous figures. And in some ways exceeded their own expectations. Their national total of 6,588,000 is an overestimate, they contend, because some Jews — such as college students who live in one place and go to school elsewhere, or retirees who live part-time in one city and part-time in another — were likely counted twice…

Saxe came to his national estimate of 6.4 million through very different means.

Daunted by the steep expense and lengthy time required by random digit dialing, Saxe and his team ferreted out data that already existed to reach his conclusion. This included information from more than 150 government surveys on topics completely unrelated to Judaism, such as health care or education. Each study had a sample size of at least 1,000 people, and each study asked the question: What is your religion?

“From this, we are now absolutely confident — and it has been vetted by all sorts of groups and people — that about 1.8% of the adult American population says that their religion is Judaism,” he said.

Saxe adjusted his sample to account for children and came to a total of 6.4 million Jews in America.

In order to count and learn more about relatively small populations in the United States (say, Muslims, when asking questions about religion), survey researchers often try to oversample these groups so that they can draw conclusions from a larger N. But as this article notes, finding members of smaller groups through random-digit dialing can take a long time.

Both of these researchers worked with existing data in order to make generalizations: one worked with local figures and the other used a sample of large-scale surveys. In both cases, this is a clever use of existing data, because conducting a new large-scale survey would likely have been a lot more costly in terms of time and money.
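To show the structure of Saxe’s aggregation approach, here is a back-of-the-envelope sketch; the adult-population figure and the adjustment for children are placeholders of my own, not his actual inputs.

```python
# Sketch of the aggregation-style estimate: pool many surveys to get the share
# of adults identifying their religion as Judaism, scale to the adult
# population, then adjust for children. All inputs are placeholders meant to
# show the structure of the calculation, not the study's actual figures.
share_jewish_adults = 0.018          # pooled estimate across many surveys
us_adult_population = 235_000_000    # placeholder adult population
child_share = 0.33                   # placeholder: fraction of the total who are children

jewish_adults = share_jewish_adults * us_adult_population   # about 4.2 million
jewish_total = jewish_adults / (1 - child_share)
print(f"{jewish_total / 1e6:.1f} million")  # about 6.3 million with these placeholders
```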

I would guess both sets of researchers are happy that their figures are close to those of the other study as this enhances the validity of their numbers.

Tim Tebow is America’s favorite pro athlete…with 3% of the vote!

The fact that Tim Tebow is America’s favorite pro athlete may be a great headline but it covers up the fact that very few people actually selected him:

How big is Tebow-mania? According to the ESPN Sports Poll, Tim Tebow is now America’s favorite active pro athlete.

The poll, calculated monthly, had the Denver Broncos quarterback ranked atop the list for the month of December. In the 18 years of the ESPN Sports Poll only 11 different athletes — a list that includes Michael Jordan, Tiger Woods and LeBron James — have been No. 1 in the monthly polling.

In December’s poll, Tebow was picked by 3 percent of those surveyed as their favorite active pro athlete. That put him ahead of Kobe Bryant (2 percent), Aaron Rodgers (1.9 percent), Peyton Manning (1.8 percent) and Tom Brady (1.5 percent) in the top-five of the results.

The poll results were gathered from 1,502 interviews from a nationally representative sample of Americans ages 12 and older.

Tebow is the favorite and he was selected by 3% of the respondents? This is not a lot. That he was selected so early in his career says something, but we need some more data to think through this. What percent have previous favorite athletes gotten? Have previous iterations of this poll had larger gaps between the favorite and second place? Are responses to this poll more diverse now than in the past?

I wonder about the validity of questions that ask Americans to pick a single favorite, since the top choices can garner such low totals. Isn’t Tebow’s advantage over Bryant easily within the margin of error of the survey? The issues here are even greater than with a recent poll asking about favorite presidents. If you are a marketer, does this result clearly tell you that you should have Tebow sell your product?
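As a rough check on that margin-of-error question (my own calculation, treating the answers as mutually exclusive categories from a single sample of 1,502):

```python
# 95% margin of error on the gap between two shares from the same sample,
# treating "favorite athlete" answers as mutually exclusive categories.
import math

n = 1502
p_tebow, p_bryant = 0.03, 0.02
var_diff = (p_tebow * (1 - p_tebow) + p_bryant * (1 - p_bryant)
            + 2 * p_tebow * p_bryant) / n
moe_diff = 1.96 * math.sqrt(var_diff)
print(round(moe_diff * 100, 2))  # about 1.13 points
```

By this rough calculation, the one-point gap between Tebow and Bryant falls within the sampling error on the difference, so the ranking at the top is not statistically secure.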

Some quick history of the ESPN Sports Poll.