Determining how many churches are in the United States is not a simple task:
According to a recent paper published by sociologist Simon Brauer in the Journal for the Scientific Study of Religion, the number of religious congregations in the United States has increased by almost 50,000 since 1998. A key reason: growth in nondenominational churches.
Using the National Congregations Study (NCS) conducted in 2006 and 2012, he estimates the number of congregations in the US increased from 336,000 in 1998 to a peak of 414,000 in 2006, but then leveled off at 384,000 in 2012.
Brauer’s estimate is more reliable, statistically speaking, than previous estimates that used other methodologies; however, his model “relies on samples of individuals and not the organizations themselves,” so there is still a range of variation around the “best bets,” he told CT. Thus, the loss of 30,000 churches is not statistically significant (it falls within the model’s 95% confidence interval)…
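That logic can be illustrated with a quick sketch. The standard errors below are invented for illustration (the post does not report the NCS model’s actual uncertainty); the point is that a 30,000-congregation drop can easily sit inside the sampling error of two large estimates.

```python
import math

# Estimates from the post; standard errors are hypothetical, for illustration only.
est_2006, est_2012 = 414_000, 384_000
se_2006, se_2012 = 20_000, 20_000

diff = est_2006 - est_2012                    # 30,000 fewer congregations
se_diff = math.sqrt(se_2006**2 + se_2012**2)  # standard error of the difference
z = diff / se_diff

# A difference is significant at the 95% level only if |z| > 1.96.
print(round(z, 2), abs(z) > 1.96)
```

With these assumed errors, z comes out around 1.06, well short of 1.96, so the apparent decline is indistinguishable from sampling noise.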
Brauer’s study corroborates an earlier finding from a team of sociologists led by Shawna Anderson at Duke University, who estimated the average annual death rate of congregations between 1998 and 2005 to be only 1 percent, among the lowest of any type of organization.
Organizations come and organizations go, but the number of churches remains large.
The National Congregations Study made a breakthrough in studying congregations by sampling individuals about their congregations and finding that this was a reliable measure of religious organizations. In contrast, trying to find every church can be very difficult. For the 2011 book The Place of Religion in Chicago, the researchers spent years driving all over Cook County to locate all the religious congregations and discovered over 4,000. Other researchers have used public sources like websites and white pages/yellow pages to uncover all the churches (though such sources may miss short-lived congregations as well as small ethnic congregations).
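The insight behind sampling individuals can be sketched with a toy multiplicity-sampling estimator (a simplified illustration of the general idea, not the NCS’s actual procedure): a randomly sampled individual reaches a congregation with probability proportional to its size, so weighting each report by the inverse of congregation size recovers the number of organizations.

```python
import random

random.seed(42)

# Toy population: 1,000 congregations with memberships between 20 and 2,000.
sizes = [random.randint(20, 2000) for _ in range(1000)]
# Flatten into individuals, each tagged with their congregation's index.
members = [i for i, size in enumerate(sizes) for _ in range(size)]

# Survey 5,000 random individuals about their congregation. Larger
# congregations are reached more often (probability proportional to size),
# so weight each report by the inverse of the congregation's size.
sample = random.sample(members, 5000)
total_people = len(members)
estimate = sum(total_people / sizes[i] for i in sample) / len(sample)

print(round(estimate))  # typically lands within a few percent of 1,000
```

The estimator is unbiased because each congregation’s chance of being named exactly cancels its inverse-size weight.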
Tiny houses get a lot of attention – including this recent Parade story – but rarely are numbers provided about how big (or small) this trend really is. The Parade story did provide some data (though without any indication of how this was measured) on the number of tiny houses in the US. Ready for the figure?
The figure: 10,000 tiny houses in the US. Without much context, it is hard to know what to do with this figure or how accurate it might be. Assuming the figure’s veracity, is that a lot of tiny houses? Not that many? Some comparisons might help:
–Between February 2016 and March 2017, housing starts ran at a seasonally adjusted annual rate of over 1,000,000 in each month. (National Association of Home Builders) In data going back to 1959, the low point after the 2000s housing bubble burst was an annual rate of about 500,000 starts. (Census Bureau data at TradingEconomics.com)
–The RV industry shipped over 430,000 units in 2016. This follows a recent low point in 2009, when only about 165,000 units were shipped. (Recreation Vehicle Industry Association)
–The number of manufactured homes that have shipped in recent years – 2014 to 2016 – has surpassed 60,000 each year. (Census Bureau)
–The percent of new homes that are under 1,400 square feet has actually dropped since 1999 to 7% in 2016. (Census Bureau)
Based on these comparisons, 10,000 units is not much at all; it is barely a drop in the bucket within all housing.
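The comparisons above can be made concrete with a little arithmetic. All figures are the ones cited in this post; note that the tiny house number is a cumulative stock while the others are annual (or single-year) flows, which only strengthens the conclusion.

```python
# Figures cited in the post above.
tiny_houses = 10_000                # reported cumulative US figure
annual_starts_rate = 1_000_000      # recent annual rate of housing starts (low end)
rv_shipments_2016 = 430_000
manufactured_homes_yearly = 60_000  # recent annual shipments

print(f"{tiny_houses / annual_starts_rate:.1%} of one year of housing starts")
print(f"{tiny_houses / rv_shipments_2016:.1%} of 2016 RV shipments")
print(f"{tiny_houses / manufactured_homes_yearly:.1%} of one year of manufactured homes")
```

Even against the smallest comparison category, all tiny houses ever counted amount to a fraction of a single year’s shipments.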
Perhaps the trend is sharply on the rise? There is little evidence of this. I wrote my first post here on tiny houses back in 2010, and it discussed how to measure the tiny house trend. The article cited in that post included measures like the number of visitors to a tiny house blog and sales figures from tiny house builders. Would the number of tiny house shows on HGTV and similar networks provide some data? All trends have to start somewhere – with a small number of occurrences – but it doesn’t seem like the tiny house movement is taking off exponentially.
Ultimately, I would ask for more and better data on tiny houses. Clearly, there is some interest. Yet, calling this a major trend would be misleading.
Yesterday, I highlighted a sociological argument about who white evangelicals are. Recently, evangelical leaders came together to provide their own definition for evangelicals. This included input from sociologists, theologians, historians, and others. Here is the four-part definition:
- The Bible is the highest authority for what I believe.
- It is very important for me personally to encourage non-Christians to trust Jesus Christ as their Savior.
- Jesus Christ’s death on the cross is the only sacrifice that could remove the penalty of my sin.
- Only those who trust in Jesus Christ alone as their Savior receive God’s free gift of eternal salvation.
This is a theological definition. With a few well-worded survey questions, evangelicals can be separated from other religious and Protestant groups.
From a sociological perspective, what does this definition miss? At least a few things:
- Social/cultural context. Theological beliefs alone cannot capture the cultural dimensions of being evangelical. If we define culture as “patterns of meaning-making” (a definition preferred by sociologists of culture), then making sense of those four theological views and putting them into practice is a whole additional matter to consider. What is it like to worship in an evangelical setting? How are evangelicals encouraged to live their day-to-day lives? What kinds of media do they consume? What institutions do they celebrate and contribute to? And so on.
It is not enough to cite a particular religious history for the group, which could be traced back to 1600s American Protestants or 1700s-1800s British Protestants. Those theological paths were also significantly influenced by social events including the Enlightenment, evolution and the rise of science, industrialization, urbanization, and the rise of the western democratic state.
In other words, others can hold similar theological views – particularly black Protestants – but they do not share the same social dimensions with white evangelicals.
- Engagement with race. As has been explored in the last two decades, particularly in the still-relevant Divided By Faith, American evangelicalism has a sordid history with race. While some evangelicals have fought for the rights of non-whites, many have not. When white evangelicals today are asked about race, they tend to stick to color-blind approaches (“we don’t see race”), argue that talking about race makes things worse, and insist that evangelicals should be united in Christ. The argument in Divided By Faith is that evangelicals take an individualistic approach to all of life, including theology, and so cannot see structural issues like racism. When evangelicals do try to address race (or other less popular issues), some exercise their individual abilities to join new churches or groups.
- Politics. This has probably received the most public attention since the 1970s as evangelicals emerged as a recognizable group, had their first President (a Baptist and Democrat), and formed their own political groups (The Moral Majority, etc.). Evangelicals do tend to vote a certain way – with Republicans – and have coalesced around certain moral issues (like abortion) while saying little about others that are clearly Biblical concerns (like poverty and immigration, as just two examples).
A recent plenary session at a sociology of religion meeting I attended noted a more recent trend: evangelicals (and other religious groups) as a whole are not really voting with religious convictions in mind. It is all about party identification.
- Forming their own institutions. Once the modernist-fundamentalist split occurred around the turn of the 20th century, evangelicals created a whole new set of institutions: TV and radio stations, colleges, magazines, parachurch ministries (think Focus on the Family), publishing houses, celebrities (from Billy Graham to Tim Tebow), movies, and more. And perhaps the most notable institutions are non-denominational churches as well as the suburban megachurch.
- Limited interaction, engagement, and work with Christians around the world, let alone other Christian groups in the United States. The evangelical tendencies toward drawing boundaries based on theology (as well as cultural characteristics) can make it difficult to work with others.
- Where did the fundamentalists go? They were subsumed under the evangelical umbrella after World War II. Few Christian groups choose to use the name given its connotations today, and it can sometimes be hard to distinguish fundamentalists (who typically advocate more separation from the world) from evangelicals (who typically advocate more engagement with the world). Insiders can tell you clear differences between Bob Jones University and Wheaton College, but outsiders may not be able to (and may not care to).
All this said, defining a religious group solely by its theology is not enough. To their credit, LifeWay and others acknowledge that this four-point scale only gets at evangelical belief. As sociologists of religion often note, religiosity includes belief, belonging, and behavior. Perhaps evangelicals themselves want to primarily emphasize theological positions, but this does not fully capture who they are, nor is it the way that those outside the group will regard them.
Nielsen will change how they measure TV viewing as ratings continue to drop:
Despite the May axing of 19 first-year series and such surprise dumpings as ABC’s Castle and Nashville, cancellations are proving rarer, even as linear ratings shrink. That’s because, of the 60 returning scripted series to air on the five main broadcast networks this season, only one finished with improved ratings from the previous year. And that show premiered in the ’90s. Law & Order: SVU’s modest gain, up an incremental 4 percent during its 17th cycle, is a case study in how the industry standard week of DVR and on-demand views doesn’t provide the most complete narrative any longer — or at least not one that the networks are eager to tell.
“We have found that audiences continue to grow beyond seven days in every instance, some by 58 percent among adults 18-to-49,” says Nielsen audience insights senior vp Glenn Enoch. “Growth after seven days is consistent, but the rate of growth varies by genre. Some programs need to be viewed in the week they air, while consumers use on-demand libraries to view others over time, like animated comedies and episodic dramas.”
To that end, on Aug. 29, Nielsen will up the turnaround on live-plus-7-day reporting (no more 15-day wait time), offering daily rolling updates on time-shifting, and it will start extending the tail past the long-established extra week of views. The measurement giant announced in March that the window for regularly reported on-demand and DVR data now will extend to 35 days after the original airdate.
The extra draw between weeks two and five is not minor for many scripted series. Grey’s Anatomy, again ABC’s highest-rated drama in its 12th season, saw its live-plus-7 average in the key demographic drop 3 percent from the previous season. But the 35-day trail of VOD (with online streams) adds another 1.5 rating points among 18-to-49, making for a 6 percent improvement from the show’s 11th season. (Of note: 1.5 is the complete live-plus-7-day rating for Thursday neighbor and surprise renewal The Catch.)
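The Grey’s Anatomy example shows how the arithmetic works. The numbers below are invented to mirror the pattern in the quote (a 3 percent live-plus-7 decline flipped into a 6 percent gain by the extra viewing tail), not the show’s actual ratings:

```python
# Hypothetical ratings, for illustration only.
prev_season_rating = 3.00   # last season's live-plus-7 average in the demo
this_season_live7 = 2.91    # a 3% live-plus-7 decline
vod_tail = 0.27             # extra rating points accumulated in days 8-35

change_live7 = (this_season_live7 - prev_season_rating) / prev_season_rating
change_35day = (this_season_live7 + vod_tail - prev_season_rating) / prev_season_rating

print(f"{change_live7:+.0%}, {change_35day:+.0%}")
```

Same show, same audience; only the measurement window changed, and the year-over-year story flips from a decline to a gain.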
Certainly viewing habits have changed in recent years as viewing options proliferate. But it is also hard not to see this as an attempt to chase numbers to show advertisers (which leads to more money). If only one show improved on the past season (and a Law & Order in its 17th season, at that), change the system of measurement. Perhaps this is the true acknowledgment that television will never be the same: the best solution to declining ratings is not to put together better content or a new consolidated model but rather to chase viewers to all ends of the earth.
Big data may appear to be a recent phenomenon, but the big data of the 1800s allowed for new questions and discoveries:
Fortunately for Quetelet, his decision to study social behavior came during a propitious moment in history. Europe was awash in the first wave of “big data” in history. As nations started developing large-scale bureaucracies and militaries in the early 19th century, they began tabulating and publishing huge amounts of data about their citizenry, such as the number of births and deaths each month, the number of criminals incarcerated each year, and the number of incidences of disease in each city. This was the inception of modern data collection, but nobody knew how to usefully interpret this hodgepodge of numbers. Most scientists of the time believed that human data was far too messy to analyze—until Quetelet decided to apply the mathematics of astronomy…
In the early 1840s, Quetelet analyzed a data set published in an Edinburgh medical journal that listed the chest circumference, in inches, of 5,738 Scottish soldiers. This was one of the most important, if uncelebrated, studies of human beings in the annals of science. Quetelet added together each of the measurements, then divided the sum by the total number of soldiers. The result came out to just over 39 ¾ inches—the average chest circumference of a Scottish soldier. This number represented one of the very first times a scientist had calculated the average of any human feature. But it was not Quetelet’s arithmetic that was history-making—it was his answer to a rather simple-seeming question: What, precisely, did this average actually mean?…
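Quetelet’s procedure was simply the arithmetic mean. A toy version with invented measurements (the real Edinburgh data set listed 5,738 soldiers):

```python
# Hypothetical chest circumferences in inches; the actual data set had
# 5,738 Scottish soldiers averaging just over 39 3/4 inches.
chests = [38.0, 39.5, 40.0, 41.0, 39.75, 40.25]

# Quetelet's method: add the measurements together, divide by the count.
average = sum(chests) / len(chests)
print(average)  # 39.75
```

The arithmetic was trivial even then; what was new was applying it to a human feature and asking what the resulting average meant.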
Scholars and thinkers in every field hailed Quetelet as a genius for uncovering the hidden laws governing society. Florence Nightingale adopted his ideas in nursing, declaring that the Average Man embodied “God’s Will.” Karl Marx drew on Quetelet’s ideas to develop his theory of Communism, announcing that the Average Man proved the existence of historical determinism. The physicist James Maxwell was inspired by Quetelet’s mathematics to formulate the classical theory of gas mechanics. The physician John Snow used Quetelet’s ideas to fight cholera in London, marking the start of the field of public health. Wilhelm Wundt, the father of experimental psychology, read Quetelet and proclaimed, “It can be stated without exaggeration that more psychology can be learned from statistical averages than from all philosophers, except Aristotle.”
Is it a surprise, then, that sociology emerged in the same time period, with greater access to data on societies in Europe and around the globe? Many of us are so used to having data and information at our fingertips that it is hard to appreciate what a revolution this must have been: large-scale data within stable nation-states opened up all sorts of possibilities.
One company is using the microphone in smartphones to figure out what people are watching on TV:
TV news was abuzz Thursday morning after Variety reported on a presentation by Alan Wurtzel, a president at NBCUniversal, who said that streaming shows weren’t cutting into broadcast television viewership to the degree that much of the press seems to believe. Mr. Wurtzel used numbers that estimated viewership using data gathered by mobile devices that listened to what people were watching and extrapolating viewership across the country…
The company behind the technology is called Symphony Advanced Media. The Observer spoke to its CEO, Charles Buchwalter, via phone about how it works. “Our entire focus is to add insights and perspectives on an entire new paradigm around how consumers are consuming media across platforms,” he told the Observer…
Symphony asks those who opt in to load Symphony-branded apps onto their personal devices, apps that use microphones to listen to what’s going on in the background. With technology from Gracenote, the app can hear the show playing and identify it using its unique sound signature (the same way Shazam identifies a song playing over someone else’s speakers). Doing it that way allows the company to gather data on viewing of sites like Netflix and Hulu, whether the companies like it or not. (Netflix likes data)
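The identification step can be sketched in miniature. This toy matcher is not Gracenote’s actual algorithm (real systems hash time-frequency peak pairs from a spectrogram), but the idea of matching a captured clip’s signature against a library of known shows looks roughly like this; the show names and “peaks” are invented:

```python
# Hypothetical fingerprint library: each show is reduced to a set of
# dominant-frequency "peaks" extracted from its audio.
KNOWN_SHOWS = {
    "Show A": [440, 523, 660, 440, 880],
    "Show B": [220, 330, 220, 495, 110],
}

def identify(clip_peaks):
    """Return the known show whose fingerprint shares the most peaks
    with the captured clip (an order-insensitive toy match)."""
    def score(fingerprint):
        return len(set(clip_peaks) & set(fingerprint))
    return max(KNOWN_SHOWS, key=lambda show: score(KNOWN_SHOWS[show]))

print(identify([440, 660, 880]))  # matches "Show A"
```

Because the match happens against the audio itself, it works regardless of which service delivered the content, which is why Netflix and Hulu viewing can be captured without those companies’ cooperation.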
It uses specific marketing to recruit “media insiders” into its system, who then download its app (there’s no way for consumers to get it without going through this process). In exchange, it pays consumers $5 in gift cards (and up) per month, depending on the number of devices he or she authorizes.
The undertone of this reporting is that there are privacy concerns lurking around the corner. Like the video camera now built into most laptops, tablets, and smartphones that might be turned on by nefarious people, most of these devices also have microphones that could be utilized by others.
Yet, as noted here, there is potential to gather data through opt-in programs. Imagine a mix between survey and ethnographic data where an opt-in program gets an audio sense of where the user is. Or records conversations to examine both content and interaction patterns. Or looks at the noise levels people are surrounded by. Or simply captures voice responses to survey questions, which might allow respondents to provide more details (both because they can interact with the question more and because their voice patterns might provide additional insights).
The FBI released the 2014 Uniform Crime Report Monday but it doesn’t have every piece of information we might wish to have:
As I noted in May, much statistical information about the U.S. criminal-justice system simply isn’t collected. The number of people kept in solitary confinement in the U.S., for example, is unknown. (A recent estimate suggested that it might be between 80,000 and 100,000 people.) Basic data on prison conditions is rarely gathered; even federal statistics about prison rape are generally unreliable. Statistics from prosecutors’ offices on plea bargains, sentencing rates, or racial disparities, for example, are virtually nonexistent.
Without reliable data on crime and justice, anecdotal evidence dominates the conversation. There may be no better example than the so-called “Ferguson effect,” first proposed by the Manhattan Institute’s Heather MacDonald in May. She suggested a rise in urban violence in recent months could be attributed to the Black Lives Matter movement and police-reform advocates…
Gathering even this basic data on homicides—the least malleable crime statistic—in major U.S. cities was an uphill task. Bialik called police departments individually and combed local media reports to find the raw numbers because no reliable, centralized data was available. The UCR is released on a one-year delay, so official numbers on crime in 2015 won’t be available until most of 2016 is over.
These delays, gaps, and weaknesses seem exclusive to federal criminal-justice statistics. The U.S. Department of Labor produces monthly unemployment reports with relative ease. NASA has battalions of satellites devoted to tracking climate change and global temperature variations. The U.S. Department of Transportation even monitors how often airlines are on time. But if you want to know how many people were murdered in American cities last month, good luck.
There could be several issues at play including:
1. A lack of measurement ability. Perhaps we have some major disagreements about how to count certain things.
2. Local law enforcement jurisdictions want some flexibility in working with the data.
3. A lack of political will to get all this information.
My guess is that the most important issue is #3. If we wanted this data, we could get it. Yet, it may require concerted efforts by individuals or groups to make the issue enough of a social problem that we demand good data. This means that the government and/or public needs a compelling enough reason to require uniformity in measurement and consistency in reporting.
How about this reason: consistent and timely reporting on such data would help cut down on anecdotes and keep the American public accurately up to date. They could then make more informed political and civic choices. Right now, many Americans don’t quite know what is happening with crime rates, as their primary sources are anecdotes or mass media reports (which can be quite sensationalistic).