Did certain sitcoms change American society – and how would we know?

Did Norman Lear change American culture through the television shows he created? Here is one headline hinting at this:

From the linked article, here are some of the ways Lear was influential:

Lear had already established himself as a top comedy writer and captured a 1968 Oscar nomination for his screenplay for “Divorce American Style” when he concocted the idea for a new sitcom, based on a popular British show, about a conservative, outspokenly bigoted working-class man and his fractious Queens family. “All in the Family” became an immediate hit, seemingly with viewers of all political persuasions.

Lear’s shows were the first to address the serious political, cultural and social flashpoints of the day – racism, abortion, homosexuality, the Vietnam war — by working pointed new wrinkles into the standard domestic comedy formula. No subject was taboo: Two 1977 episodes of “All in the Family” revolved around the attempted rape of lead character Archie Bunker’s wife Edith.

Their fresh outrageousness turned them into huge ratings successes: For a time, “Family” and “Sanford,” based around a Los Angeles Black family, ranked No. 1 and No. 2 in the country. “All in the Family” itself accounted for no less than six spin-offs. “Family” was also honored with four Emmys in 1971-73 and a 1977 Peabody Award for Lear, “for giving us comedy with a social conscience.” (He received a second Peabody in 2016 for his career achievements.)

Some of Lear’s other creations played with TV conventions. “One Day at a Time” (1975-84) featured a single mother of two young girls as its protagonist, a new concept for a sitcom. Similarly, “Diff’rent Strokes” (1978-86) followed the growing pains of two Black kids adopted by a wealthy white businessman.

Other series developed by Lear were meta before the term ever existed. “Mary Hartman, Mary Hartman” (1976-77) spoofed the contorted drama of daytime soaps; while the show couldn’t land a network slot, it became a beloved off-the-wall entry in syndication. “Hartman” had its own oddball spinoff, “Fernwood 2 Night,” a parody talk show set in a small Ohio town; the show was later retooled as “America 2-Night,” with its setting relocated to Los Angeles…

One of Hollywood’s most outspoken liberals and progressive philanthropists, Lear founded the advocacy group People for the American Way in 1981 to counteract the activities of the conservative Moral Majority.

The emphasis here is on both television and politics. Lear created different kinds of shows that proved popular as they promoted particular ideas. He also was politically active for progressive causes.

How might we know that these TV shows created cultural change? Just a few ways this could be established:

-How influential were these shows on later shows and cultural products? How did television shows look before and after Lear’s work?

-Ratings: how many people watched?

-Critical acclaim: what did critics think? What did his peers within the industry think? How do these shows stand up over time?

But, the question I might want to ask is whether we know how the people who watched these shows – millions of Americans – were or were not changed by these minutes and hours spent in front of the television. Americans take in a lot of television and media over their lifetimes. This certainly has an influence in the aggregate. Do we have data and/or evidence that can link these shows to changed attitudes and actions? My sense is that it is easier to see broad changes over time but harder to show more directly that specific media products led to particular outcomes at the individual (and sometimes also at the social) level.

These are research methodology questions that could involve lots of cultural products. The headline above might be supportable, but establishing it could require putting together multiple pieces of evidence without having all the data we might want.

The possibilities of linking together sets of data

I saw multiple interesting presentations at ASA this year that linked together several datasets to develop robust analysis and interesting findings. These data sources included government data, data collected by the researchers, and other available data. Doing this unlocks a lot of possibilities for answering research questions.


But, how might this happen more regularly? Or, put differently, how might more researchers use multiple datasets in a single project? Here are some quick thoughts on what could help make this possible:

-More access to data. Some data is publicly available. Other data is restricted for a variety of reasons. Having more big datasets accessible opens up possibilities. Even knowing where to request data is a process in itself, on top of whatever applications and/or resources might be needed to access it.

-Having the know-how to put datasets together. It takes work to become familiar with a single dataset. To be able to merge data requires additional work. I do not know if it would be useful to offer more instruction in doing this or whether it matters which individual datasets are involved.

-Asking research questions gets more interesting and complicated with more variables and layers at play. Constructing sets of questions that build on the strengths of the combined data is a skill.

-Including more – but concise and understandable – explanations of how the data was merged in publications can help demystify the process.
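
As one illustration of the merging know-how described above, here is a minimal sketch (all column names and values are hypothetical) of combining a government dataset with researcher-collected survey data in pandas:

```python
import pandas as pd

# Hypothetical government data: one row per county
government = pd.DataFrame({
    "county_fips": ["17043", "17031", "17089"],
    "median_income": [104000, 72000, 95000],
})

# Hypothetical researcher-collected survey data: one row per respondent
survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "county_fips": ["17043", "17031", "17031", "17099"],
    "attends_services": [True, False, True, False],
})

# Merge on the shared geographic identifier; a left join keeps every
# respondent, and indicator=True flags rows with no government match
merged = survey.merge(government, on="county_fips", how="left", indicator=True)

print(merged[["respondent_id", "median_income", "_merge"]])
```

A useful habit is inspecting the `_merge` column produced by `indicator=True`, which flags respondents whose geographic units had no match in the other dataset before any analysis proceeds.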

And with all of this data innovation, it is interesting to consider how projects that link multiple datasets complement other projects that rely on a single source of data.

What counts as “good science,” happiness studies edition

Looking across studies that examined factors leading to happiness, several researchers concluded that only two of the five factors commonly discussed stood up to scrutiny:


But even these studies failed to confirm that three of the five activities the researchers analyzed reliably made people happy. Studies attempting to establish that spending time in nature, meditating and exercising boosted happiness had either weak or inconclusive results.

“The evidence just melts away when you actually look at it closely,” Dunn said.

There was better evidence for the two other tasks. The team found “reasonably solid evidence” that expressing gratitude made people happy, and “solid evidence” that talking to strangers improves mood.

How might researchers improve their studies and confidence in the results?

The new findings reflect a reform movement under way in psychology and other scientific disciplines with scientists setting higher standards for study design to ensure the validity of the results.

To that end, scientists are including more subjects in their studies because small sample sizes can miss a signal or indicate a trend where there isn’t one. They are openly sharing data so others can check or replicate their analyses. And they are committing to their hypotheses before running a study in a practice known as “pre-registering.” 

These seem like helpful steps for quantitative research. Four solutions are suggested above (one is more implicit):

  1. Analyzing dozens of previous studies. When researchers study similar questions, are their findings consistent? Do they use similar methods? Is there consensus across a field or across disciplines? This summary work is useful.
  2. Avoid small samples. This helps reduce the risk of a chance finding among a smaller group of participants.
  3. Share data so that others can look at procedures and results.
  4. Test certain hypotheses set at the beginning rather than fitting hypotheses to statistically significant findings.
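
Point 2 can be illustrated with a quick simulation (a hypothetical setup with a true effect of zero): small samples routinely produce sizable chance differences between groups, while large samples shrink them.

```python
import random
import statistics

random.seed(42)

def apparent_difference(n):
    """Draw two groups from the SAME distribution (true effect = 0)
    and return the difference in their means."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(a) - statistics.mean(b)

# Average size of the purely-chance "effect" at two sample sizes
results = {}
for n in (10, 1000):
    diffs = [abs(apparent_difference(n)) for _ in range(2000)]
    results[n] = statistics.mean(diffs)
    print(n, round(results[n], 3))
```

With 10 participants per group, the purely-chance gap between group means is typically around a third of a standard deviation – easily mistaken for a real effect – while with 1,000 per group it nearly vanishes.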

One thing I have not seen in discussions of these approaches intended to create better science: how much better will results be after following these steps? How much can a field improve its confidence in the results? 5-10%? 25%? More?

Changes in methodology behind Naperville’s move to #16 best place to live in 2022 from #45 in 2021?

Money recently released their 2022 Best Places to Live in the United States. The Chicago suburb of Naperville is #16 in the country. Last year, it was #45. How did it move so much in one year? Is Naperville that much better in one year, are other places that much worse, or is something else at work? I wonder if the methodology led to this. Here is what went into the 2022 rankings:


Chief among those changes included introducing new data related to national heritage, languages spoken at home and religious diversity — in addition to the metrics we already gather on racial diversity. We also weighted these factors highly. While seeking places that are diverse in this more traditional sense of the word, we also prioritized places that gave us more regional diversity and strove to include cities of all sizes by lifting the population limit that we often relied on in previous years. This opened up a new tier of larger (and often more diverse) candidates.

With these goals in mind, we first gathered data on places that:

  • Had a population of at least 20,000 people — and no population maximum
  • Had a population that was at least 85% as racially diverse as the state
  • Had a median household income of at least 85% of the state median

Here is what went into the 2021 rankings:

To create Money’s Best Places to Live ranking for 2021-2022, we considered cities and towns with populations ranging from 25,000 up to 500,000. This range allowed us to surface places large enough to have amenities like grocery stores and a nearby hospital, but kept the focus on somewhat lesser known spots around the United States. The largest place on our list this year has over 457,476 residents and the smallest has 25,260.

We also removed places where:

  • the crime risk is more than 1.5x the national average
  • the median income level is lower than its state’s median
  • the population is declining
  • there is effectively no ethnic diversity

In 2021, the top-ranked communities tended to be suburbs. In 2022, there is a mix of big cities and suburbs, with Atlanta at the top of the list and one neighborhood of Chicago, Rogers Park, at #5.

So how will this get reported? Did Naperville make a significant leap? Is it only worth highlighting the #16 ranking in 2022 and ignoring the previous year’s lower ranking? Even though Naperville has regularly featured in Money‘s list (and in additional rankings as well), #16 can be viewed as an impressive feat.

Why it can take months for rent prices to show up in official data

It will take time for current rent prices to contribute to measures of inflation:


To solve this conundrum, the best place to start is to understand that rents are different from almost any other price. When the price of oil or grain goes up, everybody pays more for that good, at the same time. But when listed rents for available apartments rise, only new renters pay those prices. At any given time, the majority of tenants surveyed by the government are paying rent at a price locked in earlier.

So when listed rents rise or fall, those changes can take months before they’re reflected in the national data. How long, exactly? “My gut feeling is that it takes six to eight months to work through the system,” Michael Simonsen, the founder of the housing research firm Altos, told me. That means we can predict two things for the next six months: first, that official measures of rent inflation are going to keep setting 21st-century records for several more months, and second, that rent CPI is likely to peak sometime this winter or early next year.

This creates a strange but important challenge for monetary policy. The Federal Reserve is supposed to be responding to real-time data in order to determine whether to keep raising interest rates to rein in demand. But a big part of rising core inflation in the next few months will be rental inflation, which is probably past its peak. The more the Fed raises rates, the more it discourages residential construction—which not only reduces overall growth but also takes new homes off the market. In the long run, scaled-back construction means fewer houses—which means higher rents for everybody.

To sum up: This is all quite confusing! The annual inflation rate for new rental listings has almost certainly peaked. But the official CPI rent-inflation rate is almost certainly going to keep going up for another quarter or more. This means that, several months from now, if you turn on the news or go online, somebody somewhere will be yelling that rental inflation is out of control. But this exclamation might be equivalent to that of a 17th-century citizen going crazy about something that happened six months earlier—the news simply took that long to cross land and sea.

This sounds like a research methods problem: how to get more up-to-date data into the current measures? A few quick ideas:

  1. Survey rent listings to see what landlords are asking for.
  2. Survey new renters to better track more recent rent prices.
  3. Survey landlords as to the prices of the recent units they rented.

Given how much rides on important economic measures such as the inflation rate, more up-to-date data would be helpful.
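
The lag itself can be sketched as a toy model (all numbers hypothetical): with 12-month leases, only one-twelfth of tenants reset to the current listed rent each month, so the average rent actually paid – which is closer to what the CPI tracks – catches up to a jump in listed rents only after about a year.

```python
# Toy model of the rent-CPI lag: leases last 12 months, so only
# 1/12 of tenants reset to the current listed rent each month.
listed_rent = [1000] * 6 + [1200] * 18   # listed rent jumps 20% in month 6

# One cohort per lease-renewal month; everyone starts at the old rent
cohorts = [1000.0] * 12

avg_paid = []
for month, listing in enumerate(listed_rent):
    cohorts[month % 12] = listing       # the renewing cohort pays today's listed rent
    avg_paid.append(sum(cohorts) / 12)  # average rent actually being paid

# Listed rent hits 1200 immediately; the average paid takes ~12 months
print(avg_paid[6], avg_paid[12], avg_paid[18])
```

In this sketch the listed rent finishes rising in month 6, but the average rent paid does not fully reflect the jump until month 17 – roughly the six-to-eight-month-plus lag described in the quoted passage.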

Recent quote on doing agendaless science

Vaclav Smil’s How The World Really Works offers an analysis of foundational materials and processes behind life in 2022 and what these portend for the near future. It also includes this as the second to last paragraph of the book:

Is it possible to have no agenda in carrying out analysis and writing such an overview?

Much of what Smil describes and then extrapolates from could be viewed as having an agenda. This agenda could be scientism. On one hand, he reveals some of the key forces at work in our world; on the other hand, he provides interpretation of what these mean now and at other times. The writing suggests he knows this; he makes points similar to the one quoted above throughout the book to address the issue.

I feel this tension when teaching Research Methods in Sociology. Sociology has a stream of positivism and scientific analysis from its early days, wanting to apply a more dispassionate method to the social world. It also has early strands of a different approach less beholden to the scientific method and allowing for additional forms of social analysis. These strands continue today and make for an interesting challenge and opportunity in teaching a plurality of methods within a single discipline.

I learned multiple things from the book. I also will need time to ponder the implications and the approach.

“Journalism is sociology on fast forward”

Listening to 670 The Score at 12:14 PM today, I heard Leila Rehimi say this about journalism:


Journalism is sociology on fast forward.

I can see the logic in this as journalists and sociologists are interested in finding out what is happening in society. They are interested in trends, institutions, patterns, people in different roles and with different levels of access to power and resources, and narratives.

There are also significant differences between the two fields. One is hinted at in the quote above: different timelines. A typical sociology project, from idea to publication in some form, could take 4-6 years (a rough average). Journalists usually work on shorter timelines and face stronger pressures to generate content quickly.

Related to this timing issue is the difference in methods for understanding and analyzing data and evidence. Sociologists use a large number of quantitative and qualitative methods, follow the scientific method, and take longer periods of time to analyze and write up conclusions. Sociologists see themselves more as social scientists, not just describers of social realities.

I am sure there are plenty of sociologists and journalists with thoughts on this. It would be interesting to see where they see convergence and divergence between the two fields.

The difficulty of collecting, interpreting, and acting on data quickly in today’s world

I do not think the issue is just limited to the problems with data during COVID-19:


If, after reading this, your reaction is to say, “Well, duh, predictions are difficult. I’d like to see you try it”—I agree. Predictions are difficult. Even experts are really bad at making them, and doing so in a fast-moving crisis is bound to lead to some monumental errors. But we can learn from past failures. And even if only some of these miscalculations were avoidable, all of them are instructive.

Here are four reasons I see for the failed economic forecasting of the pandemic era. Not all of these causes speak to every failure, but they do overlap…

In a crisis, credibility is extremely important to garnering policy change. And failed predictions may contribute to an unhealthy skepticism that much of the population has developed toward expertise. Panfil, the housing researcher, worries about exactly that: “We have this entire narrative from one side of the country that’s very anti-science and anti-data … These sorts of things play right into that narrative, and that is damaging long-term.”

My sense as a sociologist is that the world is in a weird position: people expect relatively quick solutions to complex problems, there is plenty of data to think about (even as the quality of the data varies widely), and there are a lot of actors interpreting and acting on data or evidence. Put this all together and it can be difficult to collect good data, make sound interpretations of data, and make good choices regarding acting on those interpretations.

In addition, making predictions about the future is already difficult even with good information, interpretation, and policy options.

So, what should social scientists take from this? I would hope we can continue to improve our abilities to respond quickly and well to changing conditions. Typical research cycles take years, but that pace is not possible in certain situations. There are newer methodological options that allow for quicker data collection and new kinds of data; all of this needs to be evaluated and tested. We need better processes for reaching consensus at quicker rates.

Will we ever be at a point where society is predictable? This might be the ultimate dream of social science if only we had enough data and the correct models. I am skeptical but certainly our methods and interpretation of data can always be improved.

Illinois lost residents 2010 to 2020; discrepancies in year to year estimates and decennial count

Illinois lost residents over the last decade. But, different Census estimates at different times created slightly different stories:


Those estimates showed Illinois experiencing a net loss of 9,972 residents between 2013 and 2014; 22,194 residents between 2014 and 2015; 37,508 residents between 2015 and 2016; about 33,700 residents between 2016 and 2017; 45,116 between 2017 and 2018; 51,250 between 2018 and 2019; and 79,487 between 2019 and 2020…

On April 26, the U.S. Census Bureau released its state-by-state population numbers based on last year’s census. These are the numbers that determine congressional apportionment. Those numbers, released every 10 years, show a different picture for Illinois: a loss of about 18,000 residents since 2010.

What’s the deal? For starters, the two counting methods for estimated annual population and the 10-year census for apportionment are separate. Apples and oranges. Resident population numbers and apportionment population numbers are arrived at differently, with one set counting Illinois families who live overseas, including in the military, and one not.

Additionally, the every-10-years number is gathered not from those county-by-county metrics but from the census forms we fill out and from door-to-door contacts made by census workers on the ground.

The overall story is the same but this is a good reminder of how different methods can produce different results. Here are several key factors to keep in mind:

  1. The time period is different. One estimate comes every year, one comes every ten years. The yearly estimates are helpful because people like data. That does not necessarily mean the yearly estimates can be trusted as much as the other ones.
  2. The method in each version – yearly versus every ten years – is different. The decennial data involves more responses and requires more effort.
  3. The confidence in the two kinds of figures differs because of #2. The ten-year count is more valid because it draws on more data.

Theoretically, the year-to-year estimates could lead to a different story compared to the decennial count. Imagine year-to-year data that told of a slight increase in population while the ten-year numbers showed a slight decrease. That would not necessarily mean the process went wrong, any more than it did here, where the yearly and ten-year figures agreed. With estimates, researchers are trying their best to measure the full population patterns. But, there is some room for error.
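
That room for error can be made concrete with a toy simulation (hypothetical numbers): when the true annual change is small relative to the noise in yearly estimates, individual years can easily point in the wrong direction, while the cumulative ten-year change is a much clearer signal.

```python
import random

random.seed(0)

true_annual_change = -2000    # population truly declines a bit each year
estimate_noise_sd = 15000     # hypothetical sampling error in yearly estimates

# How many of the ten yearly estimates would (wrongly) suggest growth?
years_signaling_growth = sum(
    1
    for _ in range(10)
    if true_annual_change + random.gauss(0, estimate_noise_sd) > 0
)

decennial_change = true_annual_change * 10   # the cumulative, full-count picture
print(years_signaling_growth, decennial_change)
```

Because each yearly estimate's noise is much larger than the true annual change, a given year has close to even odds of signaling growth, while the ten-year total is unambiguous.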

That said, now that Illinois is known as one of the three states that lost population over the last decade, it will be interesting to see how politicians and business leaders respond. I can predict some of the responses already, as different groups have practiced their talking points for years. Yet the same old rhetoric may not be enough, as these figures paint Illinois in a bad light in a country where population growth is seen as a good thing.

Researchers adjust as Americans say they are more religious when asked via phone versus responding online

Research findings suggest Americans answer questions about religiosity differently depending on the mode of the survey:


Researchers found the cause of the “noise” when they compared the cellphone results with the results of their online survey: social desirability bias. According to studies of polling methods, people answer questions differently when they’re speaking to another human. It turns out that sometimes people overstate their Bible reading if they suspect the people on the other end of the call will think more highly of them if they engaged the Scriptures more. Sometimes, they overstate it a lot…

Smith said that when Pew first launched the trend panel in 2014, there was no major difference between answers about religion online and over the telephone. But over time, he saw a growing split. Even when questions were worded exactly the same online and on the phone, Americans answered differently on the phone. When speaking to a human being, for example, they were much more likely to say they were religious. Online, more people were more comfortable saying they didn’t go to any kind of religious service or listing their religious affiliation as “none.”…

After re-weighting the online data set with better information about the American population from its National Public Opinion Reference Survey, Pew has decided to stop phone polling and rely completely on the online panels…

Pew’s analysis finds that, today, about 10 percent of Americans will say they go to church regularly if asked by a human but will say that they don’t if asked online. Social scientists and pollsters cannot say for sure whether that social desirability bias has increased, decreased, or stayed the same since Gallup first started asking religious questions 86 years ago.

This shift regarding studying religion highlights broader considerations about methodology that are always helpful to keep in mind:

  1. Both methods and people/social conditions change. More and more surveying (and other data collection) is done via the Internet and other technologies. This might change who responds, how people respond, and more. At the same time, actual religiosity changes and social scientists try to keep up. This is a dynamic process that should be expected to change over time to help researchers get better and better data.
  2. Social desirability bias is not the same as people lying to researchers or being dishonest with researchers. That implies an intentional false answer. This is more about context: the mode of the survey – phone or online – influences who the respondent is responding to. And in a human interaction, we might respond differently: we act with impression management in mind, wanting to be viewed in particular ways by the person with whom we are interacting.
  3. Studying any aspect of religiosity benefits from multiple methods and multiple approaches to the same phenomena under study. A single measure of church attendance can tell us something but getting multiple data points with multiple methods can help provide a more complete picture. Surveys have particular strengths but they are not great in other areas. Results from surveys should be put alongside other data drawn from interviews, ethnographies, focus groups, historical analysis, and more to see what consensus can be reached. All of this might be out of the reach of individual researchers or single research projects but the field as a whole can help find the broader patterns.
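
The re-weighting step mentioned above can be illustrated with a minimal post-stratification sketch (all shares hypothetical): respondents from groups underrepresented in an online panel receive weights above 1 so the weighted sample matches known population shares.

```python
# Minimal post-stratification sketch (hypothetical shares): weight each
# respondent so the sample's age mix matches the population's age mix.
population_share = {"18-39": 0.40, "40-64": 0.40, "65+": 0.20}
sample = ["18-39"] * 20 + ["40-64"] * 40 + ["65+"] * 40   # panel skews older

n = len(sample)
sample_share = {g: sample.count(g) / n for g in population_share}

# A group's weight is its population share divided by its sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# After weighting, each group's share of the total matches the population
weighted_total = sum(weights[g] for g in sample)
for g in population_share:
    weighted_share = sample.count(g) * weights[g] / weighted_total
    print(g, round(weights[g], 2), round(weighted_share, 2))
```

In practice pollsters typically weight on many variables at once (often via raking), but the principle is the same: a weight is the ratio of a group’s population share to its sample share.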