Assessing public arguments as an academic

Two recent encounters with public arguments – one on a podcast, one in a book meant for a broad reading audience – reminded me of the distinctive ways academics assess arguments. In both cases, the authors drew connections across different sources and sets of evidence to present a particular point of view. As I considered these arguments, two features of my own thought process stood out:

  1. A tendency to defer to those with expertise in a particular area rather than assemble broad arguments with multiple data sources. It is difficult to make big arguments with multiple moving pieces, as doing so might cover ground addressed by numerous scholars across different disciplines. In academia, scholars often have fairly narrow sets of expertise. Can one argument adequately represent all the important parts of knowledge? Why not assemble a larger argument from the established expertise of multiple scholars rather than try to do it as one person or a small team?
  2. An interest in assessing the methods and form of the argument from a disciplinary perspective. Different academic fields go about studying the world differently. They have different methods and think differently about what counts as evidence. They put their arguments together in different ways. The content and rhetorical force of an argument matter, but we often expect arguments to be presented in particular ways. Go outside these methodologies or formats and academics might struggle to get past it.

Based on this, I wonder how well academics can work with arguments made to the public when we have been trained in specific ways that work within the parameters of academia.

Studying both individual communities and patterns across communities

In considering places in the United States, is it better to study a community in depth and get at its uniqueness? Or is it better to look for patterns across places, focusing more on what communities of one type share in contrast to other types?

The last two posts have introduced this question through an unusual place in western Pennsylvania and the histories that communities across the United States have. And this is a common issue in urban sociology and among others who study cities and places: should we seek out model places that help us understand sets of places – think of the odd quote that “There are only three great cities in the US and everywhere else is just Cleveland” – or focus on all of the particularities of a particular place or region?

I have tried in my own work to do some of both when studying places and buildings. Two examples come to mind. In 2013, I published an article titled “Not All Suburbs are the Same: The Role of Character in Shaping Growth and Development in Three Chicago Suburbs.” I built on in-depth research on three suburbs to compare how internal understandings of character shaped the different ways they responded to changes in the Chicago region and to changes affecting suburbs more broadly. On the one hand, these suburbs shared important similarities yet had distinct characters; on the other hand, they still fit within the category of suburbs, which sets them apart from other kinds of places.

As a second example, take the book Building Faith, which I co-authored with Robert Brenneman. We provide case studies of particular religious congregations as they navigate constructing and altering buildings, since those physical structures shape their worship and community. These case studies, drawn from different religious traditions and locations, highlight patterns unique to these congregations and places. Yet we also look across places, considering patterns among religious buildings in suburbs, in Guatemala, and in a few other settings.

In both works, knowing the particulars and examining the broader patterns are both helpful. Different researchers might take other routes: why not investigate these particular cases even further? What else is there in archives, interviews, ethnographic observation, and so on that could reveal even more detail? Or go the other direction: look at patterns in hundreds or thousands of places to find commonalities and differences across more settings.

But I find that the particularities of a certain place make more sense in light of broader patterns, and those broader patterns make more sense when one knows the local or micro-level details. Having a sufficient number of cases, or a varied enough set of cases, to make these links can be tricky. Yet I enjoy approaching places this way: digging into the histories of particular communities and seeking broader patterns that hold across communities.

Did certain sitcoms change American society – and how would we know?

Did Norman Lear change American culture through the television shows he created? One recent headline hinted at this.

From the linked article, here are some of the ways Lear was influential:

Lear had already established himself as a top comedy writer and captured a 1968 Oscar nomination for his screenplay for “Divorce American Style” when he concocted the idea for a new sitcom, based on a popular British show, about a conservative, outspokenly bigoted working-class man and his fractious Queens family. “All in the Family” became an immediate hit, seemingly with viewers of all political persuasions.

Lear’s shows were the first to address the serious political, cultural and social flashpoints of the day – racism, abortion, homosexuality, the Vietnam war — by working pointed new wrinkles into the standard domestic comedy formula. No subject was taboo: Two 1977 episodes of “All in the Family” revolved around the attempted rape of lead character Archie Bunker’s wife Edith.

Their fresh outrageousness turned them into huge ratings successes: For a time, “Family” and “Sanford,” based around a Los Angeles Black family, ranked No. 1 and No. 2 in the country. “All in the Family” itself accounted for no less than six spin-offs. “Family” was also honored with four Emmys in 1971-73 and a 1977 Peabody Award for Lear, “for giving us comedy with a social conscience.” (He received a second Peabody in 2016 for his career achievements.)

Some of Lear’s other creations played with TV conventions. “One Day at a Time” (1975-84) featured a single mother of two young girls as its protagonist, a new concept for a sitcom. Similarly, “Diff’rent Strokes” (1978-86) followed the growing pains of two Black kids adopted by a wealthy white businessman.

Other series developed by Lear were meta before the term ever existed. “Mary Hartman, Mary Hartman” (1976-77) spoofed the contorted drama of daytime soaps; while the show couldn’t land a network slot, it became a beloved off-the-wall entry in syndication. “Hartman” had its own oddball spinoff, “Fernwood 2 Night,” a parody talk show set in a small Ohio town; the show was later retooled as “America 2-Night,” with its setting relocated to Los Angeles…

One of Hollywood’s most outspoken liberals and progressive philanthropists, Lear founded the advocacy group People for the American Way in 1981 to counteract the activities of the conservative Moral Majority.

The emphasis here is on both television and politics. Lear created different kinds of shows that proved popular even as they promoted particular ideas. He was also politically active for progressive causes.

How might we know that these TV shows created cultural change? Just a few ways this could be established:

-How influential were these shows on later shows and cultural products? How did television shows look before and after Lear’s work?

-Ratings: how many people watched?

-Critical acclaim: what did critics think? What did his peers within the industry think? How do these shows stand up over time?

But the question I might want to ask is whether we know how the people who watched these shows – millions of Americans – were or were not changed by the minutes and hours spent in front of the television. Americans take in a lot of television and media over their lifetimes. This certainly has an influence in the aggregate. Do we have data and/or evidence that can link these shows to changed attitudes and actions? My sense is that it is easier to see broad changes over time but harder to show more directly that specific media products led to particular outcomes at the individual (and sometimes also the social) level.

These are research methodology questions that could apply to many cultural products. The headline above might be supportable, but supporting it would require putting together multiple pieces of evidence and accepting that we will not have all the data we might want.

The possibilities of linking together sets of data

I saw multiple presentations at ASA this year that linked several datasets together to develop robust analyses and interesting findings. The data sources included government data, data collected by the researchers, and other available data. Doing this unlocks many possibilities for answering research questions.

But, how might this happen more regularly? Or, put differently, how might more researchers use multiple datasets in a single project? Here are some quick thoughts on what could help make this possible:

-More access to data. Some data is publicly available. Other data is restricted for a variety of reasons. Having more big datasets accessible opens up possibilities. Even knowing where to request data is a process in itself, on top of whatever applications and/or resources might be needed to access it.

-Having the know-how to put datasets together. It takes work to become familiar with a single dataset; merging datasets requires additional work (a brief sketch of what this can look like follows this list). I do not know whether it would be useful to offer more instruction in doing this or whether it matters which individual datasets are involved.

-Asking research questions gets more interesting and complicated with more variables and layers at play. Constructing sets of questions that build on the strengths of the combined data is a skill.

-Including more – but concise and understandable – explanations of how the data was merged in publications can help demystify the process.
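
To make the merging point above more concrete, here is a minimal Python/pandas sketch of joining a researcher-collected survey to a public dataset on a shared geographic identifier. The file names and column names are hypothetical, invented only for illustration; the general pattern (harmonize the key, merge, check the match rate) is the part that carries over to real projects.

```python
import pandas as pd

# Hypothetical inputs: a researcher-collected survey and a public county-level
# dataset, both keyed by a county FIPS code (file and column names are invented).
survey = pd.read_csv("survey_responses.csv", dtype={"county_fips": str})
counties = pd.read_csv("county_demographics.csv", dtype={"county_fips": str})

# Harmonize the key before merging: identifiers often differ across sources
# in formatting (leading zeros, stray whitespace).
survey["county_fips"] = survey["county_fips"].str.strip().str.zfill(5)
counties["county_fips"] = counties["county_fips"].str.strip().str.zfill(5)

# A left merge keeps every survey respondent and attaches county-level context;
# validate= catches accidental duplicate keys in the county file.
merged = survey.merge(counties, on="county_fips", how="left", validate="many_to_one")

# Always check how much of the data actually matched before analyzing it.
match_rate = merged["median_income"].notna().mean()  # assumes a median_income column
print(f"Share of survey rows with county data attached: {match_rate:.1%}")
```

The merge itself is a single line; most of the effort in linking datasets goes into harmonizing identifiers and verifying how well the sources actually line up.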

And with all of this data innovation, it is interesting to consider how projects that link multiple datasets complement and sit alongside projects with only one source of data.

What counts as “good science,” happiness studies edition

Looking across studies that examined factors leading to happiness, several researchers concluded only two of five factors commonly discussed stood up to scrutiny:

But even these studies failed to confirm that three of the five activities the researchers analyzed reliably made people happy. Studies attempting to establish that spending time in nature, meditating and exercising [made people happier] had either weak or inconclusive results.

“The evidence just melts away when you actually look at it closely,” Dunn said.

There was better evidence for the two other tasks. The team found “reasonably solid evidence” that expressing gratitude made people happy, and “solid evidence” that talking to strangers improves mood.

How might researchers improve their studies and confidence in the results?

The new findings reflect a reform movement under way in psychology and other scientific disciplines with scientists setting higher standards for study design to ensure the validity of the results.

To that end, scientists are including more subjects in their studies because small sample sizes can miss a signal or indicate a trend where there isn’t one. They are openly sharing data so others can check or replicate their analyses. And they are committing to their hypotheses before running a study in a practice known as “pre-registering.” 

These seem like helpful steps for quantitative research. Four solutions are suggested above (one is more implicit):

  1. Analyze dozens of previous studies. When researchers study similar questions, are their findings consistent? Do they use similar methods? Is there consensus across a field or across disciplines? This summary work is useful.
  2. Avoid small samples. This helps reduce the risk of a chance finding among a small group of participants; a power calculation (sketched after this list) shows how many participants are needed to detect a modest effect.
  3. Share data so that others can look at procedures and results.
  4. Test certain hypotheses set at the beginning rather than fitting hypotheses to statistically significant findings.
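
On the sample size point, a quick power calculation illustrates why small studies struggle. This is a minimal sketch, assuming a two-group comparison, a small standardized effect of d = 0.2, and the statsmodels library; the numbers are illustrative assumptions, not figures from the happiness studies discussed above.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions: small effect (d = 0.2), alpha = 0.05, target power = 0.80.
analysis = TTestIndPower()

# How many participants per group would be needed to reliably detect the effect?
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")

# For comparison, the power of a study with only 30 participants per group.
small_study_power = analysis.solve_power(effect_size=0.2, alpha=0.05, nobs1=30)
print(f"Power with 30 per group: {small_study_power:.2f}")
```

With a small true effect, a study of a few dozen participants has little chance of detecting it, which is one reason small-sample findings can be fragile.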

One thing I have not seen in discussions of these approaches intended to create better science: how much better will results be after following these steps? How much can a field improve its confidence in the results? 5-10%? 25%? More?

Changes in methodology behind Naperville’s move to #16 best place to live in 2022 from #45 in 2021?

Money recently released their 2022 Best Places to Live in the United States. The Chicago suburb of Naperville is #16 in the country. Last year, it was #45. How did it move so much in one year? Is Naperville that much better after one year, are other places that much worse, or is something else at work? I wonder if a change in methodology led to this. Here is what went into the 2022 rankings:

Chief among those changes included introducing new data related to national heritage, languages spoken at home and religious diversity — in addition to the metrics we already gather on racial diversity. We also weighted these factors highly. While seeking places that are diverse in this more traditional sense of the word, we also prioritized places that gave us more regional diversity and strove to include cities of all sizes by lifting the population limit that we often relied on in previous years. This opened up a new tier of larger (and often more diverse) candidates.

With these goals in mind, we first gathered data on places that:

  • Had a population of at least 20,000 people — and no population maximum
  • Had a population that was at least 85% as racially diverse as the state
  • Had a median household income of at least 85% of the state median

Here is what went into the 2021 rankings:

To create Money’s Best Places to Live ranking for 2021-2022, we considered cities and towns with populations ranging from 25,000 up to 500,000. This range allowed us to surface places large enough to have amenities like grocery stores and a nearby hospital, but kept the focus on somewhat lesser known spots around the United States. The largest place on our list this year has over 457,476 residents and the smallest has 25,260.

We also removed places where:

  • the crime risk is more than 1.5x the national average
  • the median income level is lower than its state’s median
  • the population is declining
  • there is effectively no ethnic diversity
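
To see how such screening changes can reshape the pool of candidates, here is a hypothetical sketch of the two sets of eligibility rules expressed as filters on a table of places. The column names and the simplified diversity and crime measures are stand-ins invented for illustration, not Money's actual variables.

```python
import pandas as pd

def eligible_2021(places: pd.DataFrame) -> pd.DataFrame:
    """Rough version of the 2021 screen: mid-sized places, several exclusions."""
    return places[
        places["population"].between(25_000, 500_000)
        & (places["crime_risk_vs_national"] <= 1.5)
        & (places["median_income"] >= places["state_median_income"])
        & (places["population_growth"] > 0)
        & (places["diversity_index"] > 0)
    ]

def eligible_2022(places: pd.DataFrame) -> pd.DataFrame:
    """Rough version of the 2022 screen: no population ceiling, relative cutoffs."""
    return places[
        (places["population"] >= 20_000)
        & (places["diversity_index"] >= 0.85 * places["state_diversity_index"])
        & (places["median_income"] >= 0.85 * places["state_median_income"])
    ]
```

Dropping the population ceiling and switching to relative income and diversity thresholds lets large cities into the 2022 pool, which by itself can shuffle where any given suburb lands.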

In 2021, the top-ranked communities tended to be suburbs. In 2022, there is a mix of big cities and suburbs, with Atlanta at the top of the list and one neighborhood of Chicago, Rogers Park, at #5.

So how will this get reported? Did Naperville make a significant leap? Is it only worth highlighting the #16 ranking in 2022 and ignoring the previous year’s lower ranking? Even though Naperville has regularly featured in Money’s list (and in other rankings as well), #16 can be viewed as an impressive feat.

Why it can take months for rent prices to show up in official data

It will take time for current rent prices to contribute to measures of inflation:

To solve this conundrum, the best place to start is to understand that rents are different from almost any other price. When the price of oil or grain goes up, everybody pays more for that good, at the same time. But when listed rents for available apartments rise, only new renters pay those prices. At any given time, the majority of tenants surveyed by the government are paying rent at a price locked in earlier.

So when listed rents rise or fall, those changes can take months before they’re reflected in the national data. How long, exactly? “My gut feeling is that it takes six to eight months to work through the system,” Michael Simonsen, the founder of the housing research firm Altos, told me. That means we can predict two things for the next six months: first, that official measures of rent inflation are going to keep setting 21st-century records for several more months, and second, that rent CPI is likely to peak sometime this winter or early next year.

This creates a strange but important challenge for monetary policy. The Federal Reserve is supposed to be responding to real-time data in order to determine whether to keep raising interest rates to rein in demand. But a big part of rising core inflation in the next few months will be rental inflation, which is probably past its peak. The more the Fed raises rates, the more it discourages residential construction—which not only reduces overall growth but also takes new homes off the market. In the long run, scaled-back construction means fewer houses—which means higher rents for everybody.

To sum up: This is all quite confusing! The annual inflation rate for new rental listings has almost certainly peaked. But the official CPI rent-inflation rate is almost certainly going to keep going up for another quarter or more. This means that, several months from now, if you turn on the news or go online, somebody somewhere will be yelling that rental inflation is out of control. But this exclamation might be equivalent to that of a 17th-century citizen going crazy about something that happened six months earlier—the news simply took that long to cross land and sea.
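
The lag described above is easy to see in a toy model. Here is a minimal Python sketch, under assumptions chosen only for illustration (12-month leases, a starting rent of $1,000, and listed rents rising 1 percent per month): only the cohort whose lease expires each month pays the new market rent, so the average rent actually paid, which is closer to what the CPI measures, trails the listed rent.

```python
# Toy model: 12 renter cohorts with staggered 12-month leases. Each month one
# cohort re-signs at the current listed (market) rent; everyone else keeps
# paying the rent locked in when their lease began.
market_rent = 1000.0          # assumed starting listed rent
cohort_rents = [1000.0] * 12  # rent each cohort is currently locked into

for month in range(1, 19):
    market_rent *= 1.01                      # assume listed rents rise 1% per month
    cohort_rents[month % 12] = market_rent   # only the expiring cohort re-signs
    average_paid = sum(cohort_rents) / len(cohort_rents)
    print(f"Month {month:2d}: listed rent {market_rent:7.2f}, "
          f"average rent paid {average_paid:7.2f}")
```

In this toy version the average paid keeps climbing for months after any change in listed rents, as older leases roll over one at a time, which is the same dynamic behind the prediction that official rent inflation will peak well after market rents do.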

This sounds like a research methods problem: how to get more up-to-date data into the current measures? A few quick ideas:

  1. Survey rent listings to see what landlords are asking for.
  2. Survey new renters to better track more recent rent prices.
  3. Survey landlords about the prices of the units they recently rented.

Given how much rides on important economic measures such as the inflation rate, more up-to-date data would be helpful.

Recent quote on doing agendaless science

Vaclav Smil’s How The World Really Works offers an analysis of foundational materials and processes behind life in 2022 and what these portend for the near future. It also includes this as the second to last paragraph of the book:

Is it possible to have no agenda in carrying out analysis and writing such an overview?

Much of what Smil describes and then extrapolates from could be viewed as having an agenda. This agenda could be scientism. On the one hand, he reveals some of the key forces at work in our world; on the other hand, he interprets what these forces mean now and at other times. The writing suggests he knows this: he makes points similar to the one quoted above throughout the book to address the issue.

I feel this tension when teaching Research Methods in Sociology. Sociology has a stream of positivism and scientific analysis from its early days, wanting to apply a more dispassionate method to the social world. It also has early strands of a different approach, less beholden to the scientific method and allowing for additional forms of social analysis. These strands continue today and make for an interesting challenge and opportunity in teaching a plurality of methods within a single discipline.

I learned multiple things from the book. I also will need time to ponder the implications and the approach.

“Journalism is sociology on fast forward”

Listening to 670 The Score at 12:14 PM today, I heard Leila Rehimi say this about journalism:

Journalism is sociology on fast forward.

I can see the logic in this, as journalists and sociologists both want to find out what is happening in society. They are interested in trends, institutions, patterns, narratives, and people in different roles with different levels of access to power and resources.

There are also significant differences between the two fields. One is hinted at in the quote above: different timelines. A typical sociology project, from idea to publication in some form, could take 4-6 years (a rough average). Journalists usually work on shorter timelines and face stronger pressures to generate content quickly.

Related to this timing issue is the difference in methods for understanding and analyzing data and evidence. Sociologists use a large number of quantitative and qualitative methods, follow the scientific method, and take longer periods of time to analyze and write up conclusions. They see themselves more as social scientists, not just describers of social realities.

I am sure there are plenty of sociologists and journalists with thoughts on this. It would be interesting to see where they see convergence and divergence between the two fields.

The difficulty of collecting, interpreting, and acting on data quickly in today’s world

I do not think the issue is just limited to the problems with data during COVID-19:

If, after reading this, your reaction is to say, “Well, duh, predictions are difficult. I’d like to see you try it”—I agree. Predictions are difficult. Even experts are really bad at making them, and doing so in a fast-moving crisis is bound to lead to some monumental errors. But we can learn from past failures. And even if only some of these miscalculations were avoidable, all of them are instructive.

Here are four reasons I see for the failed economic forecasting of the pandemic era. Not all of these causes speak to every failure, but they do overlap…

In a crisis, credibility is extremely important to garnering policy change. And failed predictions may contribute to an unhealthy skepticism that much of the population has developed toward expertise. Panfil, the housing researcher, worries about exactly that: “We have this entire narrative from one side of the country that’s very anti-science and anti-data … These sorts of things play right into that narrative, and that is damaging long-term.”

My sense as a sociologist is that the world is in a weird position: people expect relatively quick solutions to complex problems, there is plenty of data to think about (even as the quality of the data varies widely), and there are a lot of actors interpreting and acting on data or evidence. Put this all together and it can be difficult to collect good data, make sound interpretations of that data, and make good choices about acting on those interpretations.

In addition, making predictions about the future is already difficult even with good information, interpretation, and policy options.

So, what should social scientists take from this? I would hope we can continue to improve our ability to respond quickly and well to changing conditions. Typical research cycles take years, but that timeline is not workable in certain situations. There are newer methodological options that allow for quicker data collection and new kinds of data; all of this needs to be evaluated and tested. We also need better processes for reaching consensus more quickly.

Will we ever be at a point where society is predictable? This might be the ultimate dream of social science: if only we had enough data and the correct models. I am skeptical, but our methods and interpretation of data can certainly always be improved.