The retraction of a study provides a reminder of the importance of levels of measurement

Early in statistics courses, students learn about the different ways variables can be measured. This is often broken down into three categories: nominal variables (unordered, unranked), ordinal variables (ranked but with varied category widths), and interval-ratio variables (ranked and with consistent spacing between categories). Decisions about how to measure variables can significantly influence what can be done with the data later. For example, here is a study that received a lot of attention when published but in which the researchers miscoded a nominal variable:

In 2015, a paper by Jean Decety and co-authors reported that children who were brought up religiously were less generous. The paper received a great deal of attention, and was covered by over 80 media outlets including The Economist, the Boston Globe, the Los Angeles Times, and Scientific American. As it turned out, however, the paper by Decety was wrong. Another scholar, Azim Shariff, a leading expert on religion and pro-social behavior, was surprised by the results, as his own research and meta-analysis (combining evidence across studies from many authors) indicated that religious participation, in most settings, increased generosity. Shariff requested the data to try to understand more clearly what might explain the discrepancy.

To Decety’s credit, he released the data. And upon re-analysis, Shariff discovered that the results were due to a coding error. The data had been collected across numerous countries, e.g. United States, Canada, Turkey, etc., and the country information had been coded as “1, 2, 3…” Although Decety’s paper had reported that they had controlled for country, they had accidentally not controlled for each country but instead treated country as a single continuous variable, so that, for example, “Canada” (coded as 2) was treated as twice “the United States” (coded as 1). Regardless of what one might think about the relative merits and rankings of countries, this is obviously not the right way to analyze the data. When it was correctly analyzed, using separate indicators for each country, Decety’s “findings” disappeared. Shariff’s re-analysis and correction was published in the same journal, Current Biology, in 2016. The media, however, did not follow along. While it had covered the initial, incorrect results extensively, only four media outlets picked up the correction.

In fact, Decety’s paper has continued to be cited in media articles on religion. Just last month two such articles appeared (one on Buzzworthy and one on TruthTheory) citing Decety’s paper that religious children were less generous. The paper’s influence seems to continue even after it has been shown to be wrong.

Last month, however, the journal, Current Biology, at last formally retracted the paper. If one looks for the paper on the journal’s website, it gives notice of the retraction by the authors. Correction mechanisms in science can sometimes work slowly, but they did, in the end, seem to be effective here. More work still needs to be done on how this might translate into corrections in media reporting as well: the two articles above were both published after the formal retraction of the paper.

To reiterate, the researchers treated country – a nominal variable in this case, since the countries were not ranked or ordered in any particular way – as if it were continuous, which threw off the overall results. When country was used correctly – from the description above, it sounds like as a set of dummy variables coded 1 and 0, one per country – the findings that received all the attention disappeared.
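The coding mistake described above can be illustrated with a short sketch. The data here are invented; `codes` stands in for the country column in the study:

```python
import numpy as np

# Invented stand-in data: country coded 1, 2, 3 (e.g., United States,
# Canada, Turkey). The codes are labels, not quantities -- a nominal variable.
codes = np.array([1, 1, 2, 2, 3, 3])

# Wrong: feeding the codes into a model as one continuous column implies
# "Canada" (2) is twice "United States" (1).
X_wrong = codes.reshape(-1, 1)

# Right: one 0/1 indicator (dummy) column per country, dropping the first
# category as the reference to avoid collinearity with an intercept.
countries = np.unique(codes)
X_right = (codes[:, None] == countries[1:]).astype(int)

# Each row now just flags which country an observation belongs to; the
# arbitrary ordering and spacing of the original codes no longer matter.
print(X_right.tolist())
```

Controlling for country via the indicator columns estimates a separate shift for each country, which is what “controlling for country” should mean for a nominal variable.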

The other issue at play here is whether corrections to academic studies, or outright retractions, are treated as such. It is hard to notify readers that a previously published study had flaws and that the results have changed.

All that to say, paying attention to level of measurement earlier in the process helps avoid problems down the road.

A preview of my upcoming talk on social media, emerging adults, and religiosity

Ahead of my participation in the “Emerging Adults: Formation for Mission” conference, taking place soon on the Wheaton College campus, here is an interview regarding my talk:

I would describe the use of social media by emerging adults as a mix of excitement and resignation. A vast majority of emerging adults participate. They describe learning about relationships they already have, connecting with friends and family, seeing pictures, sharing jokes.

They find out about events and news through social media. They search for romantic partners through social media.

On the other side, they can articulate some of the downsides of this use including a lack of focus, not spending time with people (just their feeds), and the conflict that can arise in social media. Not participating means missing key connections and knowledge that others can access. Current emerging adults have known and participated in social media all of their lives and will continue to use social media as they age beyond this life stage.

The emerging adults of today are immersed in social media, technology, and other forms of media and they bring this with them as they consider faith and church.

While there is a lot of work looking at social media or social network site (SNS) use, relatively little of it in sociology and other disciplines addresses how this activity is influenced by or influences religion. This talk will help bring together several years of research projects with my sociology colleagues Peter Mundey and Jon Hill.

A need to better understand why more education doesn’t lead to less religiosity among American Christians

A new Pew report looks at the relationship between education and religiosity:

On one hand, among U.S. adults overall, higher levels of education are linked with lower levels of religious commitment by some measures, such as belief in God, how often people pray and how important they say religion is to them. On the other hand, Americans with college degrees report attending religious services as often as Americans with less education.

Moreover, the majority of American adults (71%) identify as Christians. And among Christians, those with higher levels of education appear to be just as religious as those with less schooling, on average. In fact, highly educated Christians are more likely than less-educated Christians to say they are weekly churchgoers.

There is a two-part process with this data. First, it has to be collected, analyzed, and reported. On its face, it seems to contradict some long-held ideas within sociology and other fields that increasing levels of education would reduce religiosity. Second, however, is perhaps the tougher task of interpretation. Why is this the case among Christians and not other groups? What about the differences between Christian traditions? How exactly is religion linked to education – does education reinforce religiosity, or are they separate spheres for Christians (among other possibilities)? Data is indeed helpful, but proper explanation can often take much longer.

Evangelicals recommend four beliefs that should identify them on surveys

The National Association of Evangelicals and LifeWay Research suggest evangelicals should be identified by agreeing with four beliefs:

  • The Bible is the highest authority for what I believe.
  • It is very important for me personally to encourage non-Christians to trust Jesus Christ as their Savior.
  • Jesus Christ’s death on the cross is the only sacrifice that could remove the penalty of my sin.
  • Only those who trust in Jesus Christ alone as their Savior receive God’s free gift of eternal salvation.

More on the reasons for these four:

The statements closely mirror historian David Bebbington’s classic four-point definition of evangelicalism: conversionism, activism, biblicism, and crucicentrism. But this list emphasizes belief rather than behavior, said Ed Stetzer, executive director of LifeWay Research.

“Affiliation and behavior can be measured in addition to evangelical beliefs, but this is a tool for researchers measuring the beliefs that evangelicals—as determined by the NAE—believe best define the movement,” he said.

A few quick thoughts on this:

  1. On one hand, it can be helpful for religious groups to identify what they see as unique to them. Outsiders may not pick up on these things. On the other hand, outsiders might see beliefs or other characteristics that mark evangelicals.
  2. Measuring religiosity involves a lot more than just beliefs. From later in the article:

    “Identity, belief, and behavior are three different things when it comes to being an evangelical,” McConnell said. “Some people are living out the evangelical school of thought but may not embrace the label. And the opposite is also true.”

    So this is just one piece of the puzzle. And I think sociologists (and other social scientists) have contributed quite a bit here in looking at how these particular theological views relate to other social behavior from race relations to voting to charitable activity and more.

  3. The suggestion here is that research shows the “correct” number of evangelicals identify with these four statements – identifying evangelicals in other ways yields similar percentages to working with these four beliefs. Yet, I wonder how many evangelicals would name these four statements if asked what they believe. How exactly are these statements taught and passed on within evangelicalism?

Can religion not be fully studied with surveys or do we not use survey results well?

In a new book (which I have not read), sociologist Robert Wuthnow critiques the use of survey data to explain American religion:

Bad stats are easy targets, though. Setting these aside, it’s much more difficult to wage a sustained critique of polling. Enter Robert Wuthnow, a Princeton professor whose new book, Inventing American Religion, takes on the entire industry with the kind of telegraphed crankiness only academics can achieve. He argues that even gold-standard contemporary polling relies on flawed methodologies and biased questions. Polls about religion claim to show what Americans believe as a society, but actually, Wuthnow says, they say very little…

Even polling that wasn’t bought by evangelical Christians tended to focus on white, evangelical Protestants, Wuthnow writes. This trend continues today, especially in poll questions that treat the public practice of religion as separate from private belief. As the University of North Carolina professor Molly Worthen wrote in a 2012 column for The New York Times, “The very idea that it is possible to cordon off personal religious beliefs from a secular town square depends on Protestant assumptions about what counts as ‘religion,’ even if we now mask these sectarian foundations with labels like ‘Judeo-Christian.’”…

These standards are largely what Wuthnow’s book is concerned with: specifically, declining rates of responses to almost all polls; the short amount of time pollsters spend administering questionnaires; the racial and denominational biases embedded in the way most religion polls are framed; and the inundation of polls and polling information in public life. To him, there’s a lot more depth to be drawn from qualitative interviews than quantitative studies. “Talking to people at length in their own words, we learn that [religion] is quite personal and quite variable and rooted in the narratives of personal experience,” he said in an interview…

In interviews, people rarely frame their own religious experiences in terms of statistics and how they compare to trends around the country, Wuthnow said. They speak “more about the demarcations in their own personal biographies. It was something they were raised with, or something that affected who they married, or something that’s affecting how they’re raising their children.”

I suspect such critiques could be leveled at much of survey research: the questions can be simplistic, the askers of the questions can have a variety of motives and varying skill in developing useful survey questions, and the data gets bandied about in the media and public. Can surveys alone adequately address race, cultural values, political views and behaviors, and more? That said, I’m sure there are specific issues with surveys regarding religion that should be addressed.

I wonder, though, whether another important issue here is what the public and the media know to do with survey results. This book review suggests people take survey findings as gospel. They don’t know about the nuances of surveys or how to look at multiple survey questions or multiple surveys that get at similar topics. Media reports on this data are often simplistic and lead with a “shocking” piece of information or some important trend (even if the data suggests continuity). While more social science projects on religion could benefit from mixed methods or from incorporating data from the other side (whether quantitative or qualitative), the public knows even less about these options or how to compare data. In other words, surveys always have issues, but people are generally innumerate when it comes to knowing what to do with the findings.

Is more Internet use correlated to a decline in religious affiliation?

A new study suggests using the Internet more is correlated with lower levels of religious affiliation:

Downey analyzed data from the General Social Survey, a well-respected annual research survey carried out by the University of Chicago, to make his findings.

Downey says the single biggest cause of religious affiliation is upbringing: those who are raised in religious households are much more likely to remain in their family’s religion as adults…

By far the largest factor, says Downey, is Internet use.

In the 1980s, Internet use was virtually non-existent, but in 2010, 53 per cent of people spent two hours online a week and 25 per cent spent more than seven hours…

Downey says that his research has controlled for ‘most of the obvious candidates, including income, education, socioeconomic status, and rural/urban environments’ to discount a third factor, one that is responsible both for the rise of Internet use and the drop in religiosity.

Since the full story is behind a subscriber wall, here are two speculations about the methodology of this study:

1. This sounds like a regression and/or ANOVA analysis based on R-squared changes. In other words, when one explanatory factor is in the model, how much more of the variation in the dependent variable (religiosity) is explained? You can then add or subtract different factors singly or in combination to see how that percent of variation explained changes.

2. Looking at religious affiliation is just one way to measure religiosity. Affiliation is based on self-identification (do you consider yourself a Catholic, mainline Protestant, conservative Protestant, etc.) or on what religious congregation you regularly attend or interact with. But levels of religious affiliation have been falling in recent years even as not all measures of religiosity are falling. Research on the rise of the “religious nones” shows that a number of these people are still spiritual or still engage in religious practices.
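If the first speculation is right, the R-squared-change logic can be sketched with ordinary least squares. Everything below – the variable names, effect sizes, and simulated data – is invented purely for illustration, not drawn from Downey’s actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Simulated data: religiosity driven partly by upbringing and partly by
# internet use (names and effect sizes are made up).
n = 500
upbringing = rng.integers(0, 2, n).astype(float)
internet_hours = rng.exponential(3.0, n)
religiosity = 2.0 + 1.5 * upbringing - 0.3 * internet_hours + rng.normal(0, 1, n)

# Fit the model with and without internet use; the increment in R^2 is the
# extra share of variation in the dependent variable "explained" once
# internet use enters the model.
r2_base = r_squared(upbringing.reshape(-1, 1), religiosity)
r2_full = r_squared(np.column_stack([upbringing, internet_hours]), religiosity)

print(f"R^2 gain from adding internet use: {r2_full - r2_base:.3f}")
```

Adding or subtracting factors singly or in combination, as described above, is just a matter of changing which columns go into the design matrix before comparing the R-squared values.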

If there is a strong causal relationship between increased Internet use and less religiosity, why might this be the case? A few ideas:

1. The Internet opens people up to a whole realm of information beyond themselves. Traditionally, people would look to those around them, whether individuals or institutions, within relatively close proximity. The Internet breaks a lot of these social boundaries and allows people to search for information way beyond themselves.

2. The Internet offers social interactions in a way that religion used to. Instead of going to a religious congregation to meet people, the Internet offers the possibility of finding like-minded people in all sorts of areas: shared hobbies and interests, the same career field, dating websites, and markets for selling goods. In other words, some of the social aspects of religion can now be replicated online.

3. The Internet in its medium and content tends to be individualistic. Anyone with an Internet connection can do all sorts of things without relying on others (outside of having a service provider). This simply feeds into individualistic attitudes that already existed in the United States.

It sounds like there is a lot more here for researchers to explore and unpack.

Measuring spirituality via smartphone app

A new app, SoulPulse, allows users to track their spirituality and researchers to get their hands on more real-time data:

It’s an “experiential” research survey inspired by pastor/author John Ortberg and conducted by a team led by Bradley Wright, an associate professor of sociology at the University of Connecticut and author of “Christians Are Hate-Filled Hypocrites … and Other Lies You’ve Been Told.”

Twice a day for two weeks, participants receive questions asking about their experiences of spirituality, their emotions, activities and more at the moment the text messages arrive.

Were they feeling satisfied, loved, happy, hostile, sleepy or stressed? Were they more or less aware of God when they were commuting or computing or hanging out with family and friends?…

SoulPulse participants will receive an individual report, reflecting their different temperaments and temptations. Ortberg said his personalized report has already changed his life.

See the website for the app here.

At the least, this could help researchers with more data. Many studies of religiosity rely on asking people about past events through surveys or interviews. The information given is not necessarily false, but it can be hard to remember too far back (thus researchers tend to ask about a short, more defined time period like the last week or month), and there is potential for social desirability bias (people want to give the response they think they should – this might happen some with church attendance). Additionally, time diaries require a lot of effort. Thus, utilizing a new technology that people check all the time could be a nice way to reduce the errors of other methods.

While the reports might be helpful for users, could they verge into the gamification of spirituality?