Use better social science categories than “generations”

Millennials, Boomers, the Silent Generation, Gen Y, etc. are all categories that people generally think describe real phenomena. But are they useful categories for describing patterns within American society?


This supposition requires leaps of faith. For one thing, there is no empirical basis for claiming that differences within a generation are smaller than differences between generations. (Do you have less in common with your parents than with people you have never met who happen to have been born a few years before or after you?) The theory also seems to require that a person born in 1965, the first year of Generation X, must have different values, tastes, and life experiences from a person born in 1964, the last year of the baby-boom generation (1946-64). And that someone born in the last birth year of Gen X, 1980, has more in common with someone born in 1965 or 1970 than with someone born in 1981 or 1990.

Everyone realizes that precision dating of this kind is silly, but although we know that chronological boundaries can blur a bit, we still imagine generational differences to be bright-line distinctions. People talk as though there were a unique DNA for Gen X—what in the nineteenth century was called a generational “entelechy”—even though the difference between a baby boomer and a Gen X-er is about as meaningful as the difference between a Leo and a Virgo…

In any case, “explaining” people by asking them what they think and then repeating their answers is not sociology. Contemporary college students did not invent new ways of thinking about identity and community. Those were already rooted in the institutional culture of higher education. From Day One, college students are instructed about the importance of diversity, inclusion, honesty, collaboration—all the virtuous things that the authors of “Gen Z, Explained” attribute to the new generation. Students can say (and some do say) to their teachers and their institutions, “You’re not living up to those values.” But the values are shared values…

In other words, if you are basing your characterization of a generation on what people say when they are young, you are doing astrology. You are ascribing to birth dates what is really the result of changing conditions.

As this piece notes, popular discourse often treats generations as monolithic blocks: everyone in a particular generation has similar experiences, outlooks, and values. Is this actually true? Or are other social forces at work, including changing conditions, life-course changes, social markers like race, class, and gender, and more?

I remember seeing, earlier this year, an open letter from social scientists asking Pew Research to discontinue using generational categories. This is one way change could occur: researchers working in this area can replace less helpful categories with more helpful ones. This would be scientific progress: as our understanding of social phenomena develops, we can better conceptualize and operationalize them. With sustained effort, and by keeping up with changes in society, we could see a shift in how we talk about differences between people born at different times.

Yet this also takes a lot of work. The generational labels are popular. They are a convenient shorthand. People in the United States are used to understanding themselves and others with these categories. Sociological categories are not always easy to bring to the public, nor do they always find acceptance.

At the least, perhaps we can hope for fewer articles and opinion pieces that broadly smear whole generations. Making hasty or less-than-accurate generalizations is not helpful.

52% of Americans say they live in a suburban neighborhood

A call for a more official definition of suburban areas starts with new data on the percent of Americans who say they live in a suburban neighborhood:

Much of America looks suburban, with neighborhoods of single-family homes connected by roads to retail centers and low-rise office buildings. For the first time, government data confirm this. According to the newly released 2017 American Housing Survey (of nearly 76,000 households nationwide), about 52 percent of people in the United States describe their neighborhood as suburban, while about 27 percent describe their neighborhood as urban, and 21 percent as rural.

This seems just about right based on data I have seen from the Census Bureau regarding the percentage of Americans who live in suburbs. The 2002 report “Demographic Trends in the 20th Century” put 50.0% of Americans in suburbs, 30.3% in central cities, and the rest in rural areas. More recent figures I have seen put the percentage of Americans in suburbs at just over 50%.

I would guess the above figures are off by a few percentage points for a few reasons:

1. Some urban neighborhoods feel suburban. If suburbs are marked by single-family homes and driving, plenty of urban neighborhoods in the United States would count. This is particularly true in more sprawling cities in the South and West.

2. Some rural neighborhoods marked by bigger lots and/or lower population densities might officially be considered suburban by the Census even if they have a more rural feel.

Poverty measure that goes beyond income or financial resources

How exactly to define poverty is an ongoing conversation (earlier posts here and here), and here is another proposal, which would include two additional dimensions:

If the point of measuring poverty is to capture well-being, we should reframe poverty as a form of social exclusion and deprivation. “Poverty has a wider meaning than lack of income. It’s not being able to participate in things we take for granted in terms of connection to society, but also crime, and life expectancy,” argues Rank. In an era when most deaths by guns are suicides, addiction rates are rising, and U.S. life expectancy is dropping and increasingly unequal by race and education level, capturing people’s well-being and designing solutions beyond material hardship is paramount.

Both of these dimensions have grounding in sociological discussions of poverty. The difference between absolute poverty and relative poverty covers similar ground to the idea of deprivation. There may be a minimum amount of resources someone needs to survive, but bare survival is different from normal or regular participation in a group or society. This is compounded in today’s world, where it is so easy for anyone, rich or poor, to at least see how others live (though this is certainly not a new issue).

Social exclusion can be very damaging: it limits opportunities for particular groups and often prevents them from shaping their own lives through political or collective action. This reminds me of William Julius Wilson’s work, in which economic troubles lead to the social exclusion of poor neighborhoods from broader society. Other researchers, such as Mario Small in Villa Victoria, have examined this idea more closely and found that some residents of poorer neighborhoods are able to develop social networks outside their neighborhood of residence, but these forays do not necessarily extend advantages to the whole community.

If researchers did decide that deprivation and social exclusion should be part of poverty measures, it would be interesting to see how such measures would be standardized for social science and government data.
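One plausible way to operationalize this, sketched below, is a counting-style multidimensional index in the spirit of the Alkire-Foster method used in international poverty work: flag each household as deprived or not on a set of indicators, weight the indicators, and count a household as poor when its weighted deprivation score crosses a cutoff. The indicators, weights, and cutoff here are entirely illustrative, not part of any official measure.

```python
# Illustrative sketch of a counting-style multidimensional poverty measure
# (in the spirit of Alkire-Foster). Indicators, weights, and the cutoff K
# are hypothetical, not an official specification.

INDICATORS = {                         # indicator -> weight (weights sum to 1)
    "income_below_threshold": 0.4,
    "material_deprivation": 0.2,       # e.g., cannot afford basic goods
    "social_exclusion": 0.2,           # e.g., no civic or political participation
    "poor_health_access": 0.2,
}
K = 0.4  # a household counts as "multidimensionally poor" if weighted score >= K


def deprivation_score(household: dict) -> float:
    """Sum the weights of the indicators on which the household is deprived."""
    return sum(w for name, w in INDICATORS.items() if household.get(name, False))


def is_multidimensionally_poor(household: dict) -> bool:
    return deprivation_score(household) >= K


# Example: a household deprived on exclusion and health access but not income.
example = {
    "income_below_threshold": False,
    "material_deprivation": False,
    "social_exclusion": True,
    "poor_health_access": True,
}
print(deprivation_score(example))           # 0.4
print(is_multidimensionally_poor(example))  # True
```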

Can sociologists be the ones who officially define the middle class?

Defining the middle class is a tricky business with lots of potential implications, as one sociologist notes:

“Middle class” has become a meaningless political term covering everyone who is not on food stamps and does not enjoy big capital gains. Like a sociological magician, I can make the middle class grow, shrink or disappear just by the way I choose to define it.

What is clear and incontestable is the growing inequality in this country over the last three decades. In a 180-degree reversal of the pattern in the decades after World War II, the gains of economic growth flow largely to the people at the top.

I like the idea of a sociological magician, but this is an important issue: many Americans may claim to be middle class, yet their life chances, experiences, and tastes can be quite different. Just look at the recent response to possible changes to the 529 college savings programs. A vast middle-class category may help political parties make broad appeals, but it doesn’t help in forming policies. (Just to note: those same political parties make bland and broad appeals even as they work harder than ever to microtarget specific groups for donations and votes.)
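To make the magician point concrete, here is a minimal sketch showing how two reasonable-sounding income definitions of the middle class yield noticeably different shares of the same set of households. The income figures and cutoffs are hypothetical; real analyses would use survey microdata.

```python
# Illustration of the "sociological magician" point: the size of the middle
# class depends heavily on the (hypothetical) definition chosen.
import statistics

# Hypothetical household incomes, in dollars.
incomes = [18_000, 26_000, 34_000, 45_000, 52_000, 61_000, 74_000,
           88_000, 105_000, 140_000, 220_000, 410_000]

median = statistics.median(incomes)

# Definition A: middle class = two-thirds to twice the median income
# (a common rule of thumb in income-based definitions).
middle_a = [x for x in incomes if (2 / 3) * median <= x <= 2 * median]

# Definition B: middle class = roughly the middle 60% of the distribution
# (a rough 20th-to-80th percentile cut, for illustration only).
ranked = sorted(incomes)
lo, hi = ranked[int(0.2 * len(ranked))], ranked[int(0.8 * len(ranked)) - 1]
middle_b = [x for x in incomes if lo <= x <= hi]

print(f"Median income: {median:,.0f}")
print(f"Definition A share: {len(middle_a) / len(incomes):.0%}")  # 50%
print(f"Definition B share: {len(middle_b) / len(incomes):.0%}")  # 58%
```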

Given some recent conversations about the relative lack of influence of sociologists, perhaps this is an important area where they can contribute. Class goes much further than income; you would want to think about income, wealth, educational attainment, the neighborhood in which one lives, cultural tastes and consumption patterns, and more. The categories should clearly differentiate groups while remaining flexible enough to account for combinations of factors as well as changes in American society.

New way of measuring poverty gives California highest rate

The Census Bureau tried changing the definition of poverty and it put California at the top of the list for poverty:

California continues to have – by far – the nation’s highest level of poverty under an alternative method devised by the Census Bureau that takes into account both broader measures of income and the cost of living.

Nearly a quarter of the state’s 38 million residents (8.9 million) live in poverty, a new Census Bureau report says, a level virtually unchanged since the agency first began reporting on the method’s effects.

Under the traditional method of gauging poverty, adopted a half-century ago, California’s rate is 16 percent (6.1 million residents), somewhat above the national rate of 14.9 percent but by no means the highest. That dubious honor goes to New Mexico at 21.5 percent.

But under the alternative method, California rises to the top at 23.4 percent while New Mexico drops to 16 percent and other states decline to as low as 8.7 percent in Iowa.

Not surprisingly, the new methodology has become political:

It’s now routinely cited in official reports and legislative documents, and Neel Kashkari, the Republican candidate for governor, has tried to make it an issue in his uphill challenge to Democratic Gov. Jerry Brown, even spending several days in Fresno posing as a homeless person to dramatize it.

The definition of poverty is an interesting methodological topic that certainly has social and political implications. I assume the Census Bureau argues the new definition is better because it accounts for more information and adjusts for regional variation. But “better” could also mean a definition that either reduces or increases the official number, which can then be used for different ends.
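As a toy illustration (not the Census Bureau’s actual methodology, and with made-up numbers), here is how scaling a single national threshold by a regional cost-of-living index can push a family below the poverty line in a high-cost state while leaving it above the line in a low-cost one:

```python
# Toy illustration (not the Census Bureau's actual formula) of why a
# cost-of-living-adjusted poverty measure can raise a high-cost state's rate.

BASE_THRESHOLD = 25_000  # hypothetical national poverty threshold for a family

# Hypothetical regional cost-of-living multipliers.
COST_INDEX = {"California": 1.25, "Iowa": 0.90}


def poor_official(resources: float) -> bool:
    """Official-style measure: one national threshold, no geographic adjustment."""
    return resources < BASE_THRESHOLD


def poor_adjusted(resources: float, state: str) -> bool:
    """Alternative-style measure: threshold scaled by local cost of living."""
    return resources < BASE_THRESHOLD * COST_INDEX[state]


family_resources = 28_000  # the same family budget in both states
for state in ("California", "Iowa"):
    print(state, poor_official(family_resources), poor_adjusted(family_resources, state))
# California: not poor officially, but poor under the adjusted measure (28,000 < 31,250).
# Iowa: not poor under either measure (the adjusted threshold falls to 22,500).
```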

Hard to measure school shootings

It is difficult to decide on how to measure school shootings and gun violence:

What constitutes a school shooting?

That five-word question has no simple answer, a fact underscored by the backlash to an advocacy group’s recent list of school shootings. The list, maintained by Everytown, a group that backs policies to limit gun violence, was updated last week to reflect what it identified as the 74 school shootings since the massacre in Newtown, Conn., a massacre that sparked a national debate over gun control.

Multiple news outlets, including this one, reported on Everytown’s data, prompting a backlash over the broad methodology used. As we wrote in our original post, the group considered any instance of a firearm discharging on school property as a shooting — thus casting a broad net that includes homicides, suicides, accidental discharges and, in a handful of cases, shootings that had no relation to the schools themselves and occurred with no students apparently present.

None of the incidents rise to the level of the massacre that left 27 victims, mostly children, dead in suburban Connecticut roughly 18 months ago, but multiple reviews of the list show how difficult quantifying gun violence can be. Researcher Charles C. Johnson posted a flurry of tweets taking issue with incidents on Everytown’s list. A Hartford Courant review found 52 incidents involving at least one student on a school campus. (We found the same, when considering students or staff.) CNN identified 15 shootings that were similar to the violence in Newtown — in which a minor or adult was actively shooting inside or near a school — while Politifact identified 10.

Clearly, there’s no clean-cut way to quantify gun violence in the nation’s schools, but in the interest of transparency, we’re throwing open our review of the list, based on multiple news reports per incident. For each, we’ve summarized the incident and included casualty data where available.

This is a good example of the problems of conceptualization and operationalization. The idea of a “school shooting” seems obvious until you start looking at a variety of incidents and have to decide whether they hang together as one definable phenomenon. It is interesting that the Washington Post goes on to provide more information about each case but doesn’t come down on any side.
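A small sketch of the operationalization problem: the same incident data can produce very different counts depending on the inclusion criteria, which is roughly what happened with the 74, 52, 15, and 10 figures above. The incident records and field names below are hypothetical stand-ins, not any organization’s actual dataset.

```python
# Sketch of how different operationalizations of "school shooting" yield
# different counts from the same incident data (hypothetical records).

incidents = [
    {"on_campus": True, "students_present": True,  "targeted_school": True,  "suicide": False},
    {"on_campus": True, "students_present": False, "targeted_school": False, "suicide": False},
    {"on_campus": True, "students_present": True,  "targeted_school": False, "suicide": True},
    {"on_campus": True, "students_present": False, "targeted_school": False, "suicide": True},
]

# Broad definition: any firearm discharge on school property.
broad = [i for i in incidents if i["on_campus"]]

# Narrower definition: students or staff present on campus.
narrower = [i for i in incidents if i["on_campus"] and i["students_present"]]

# Narrowest definition: an active shooting targeting the school itself.
narrowest = [i for i in incidents if i["targeted_school"]]

print(len(broad), len(narrower), len(narrowest))  # 4 2 1
```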

So how might this problem be solved? In the academic or scientific world, scholars would debate it through publications, conferences, and public discussions until some consensus (or at least some agreement about the contours of the argument) emerges. This takes time, a lot of thinking, and data analysis. It runs counter to more media- or politics-driven approaches that want quick, sound-bite answers to complex social problems.

Chicago crime stats: beware the “official” data in recent years

Chicago magazine has a fascinating look at some interesting choices made about how to classify homicides in Chicago, choices aimed at reducing the murder count.

The case of Tiara Groves is not an isolated one. Chicago conducted a 12-month examination of the Chicago Police Department’s crime statistics going back several years, poring through public and internal police records and interviewing crime victims, criminologists, and police sources of various ranks. We identified 10 people, including Groves, who were beaten, burned, suffocated, or shot to death in 2013 and whose cases were reclassified as death investigations, downgraded to more minor crimes, or even closed as noncriminal incidents—all for illogical or, at best, unclear reasons…

Many officers of different ranks and from different parts of the city recounted instances in which they were asked or pressured by their superiors to reclassify their incident reports or in which their reports were changed by some invisible hand. One detective refers to the “magic ink”: the power to make a case disappear. Says another: “The rank and file don’t agree with what’s going on. The powers that be are making the changes.”

Granted, a few dozen crimes constitute a tiny percentage of the more than 300,000 reported in Chicago last year. But sources describe a practice that has become widespread at the same time that top police brass have become fixated on demonstrating improvement in Chicago’s woeful crime statistics.

And has there ever been improvement. Aside from homicides, which soared in 2012, the drop in crime since Police Superintendent Garry McCarthy arrived in May 2011 is unprecedented—and, some of his detractors say, unbelievable. Crime hasn’t just fallen, it has freefallen: across the city and across all major categories.

Two quick thoughts:

1. “Official” statistics are often taken for granted; it is assumed that they measure what they say they measure. This is not necessarily the case. All statistics have to be operationalized, taken from a more conceptual form into something that can be measured. Murder seems fairly clear-cut, but as the article notes, there is room for different people to classify things differently.

2. Fiddling with the statistics is not right but, at the same time, we should consider the circumstances within which this takes place. Why exactly does the murder count – the number itself – matter so much? Are we more concerned about the numbers or the people and communities involved? How happy should we be that the number of murders was once over 500 and now is closer to 400? Numerous parties mentioned in this article want to see progress: aldermen, the mayor, the police chief, the media, the general public. Is progress simply reducing the crime rate or rebuilding neighborhoods? In other words, we might consider whether the absence of major crimes is the best end goal here.

Can you have a food desert in a rural area?

A number of commentators on this new map of food deserts in the United States suggest that the reason there are rural food deserts out West is that few people live in these areas. Here is a sample of the comments (from three different people):

surprise surprise……grocery stores don’t exist where people don’t live. It doesn’t take a statistician to figure that out. (Only to spend the time to show people that in a brightly-colored graph.)

Stupid. Stupid. Stupid. Commenters here have it correct. This graph is an absolute waste of time given that there SHOULDN’T be fresh food available where there are no people to eat it. “You can see the number of grocery stores multiply as you start in Nevada and enter into California’s urban areas.” Duh.

Crisis creation. Move along folks. Nothing to see here but smoke and mirrors.

The creator of the map explains how he measured food deserts in more rural areas:

Using the Google Places API, Yau searched for the nearest grocery store every 20 miles (this included smaller stores–not just the major chains he plotted in his last visualization). “I chose those increments, because there’s some rough agreement that a food desert is a place where there isn’t a grocery store within 10 miles,” he explains, adding that in pedestrian cities the standard is closer to a mile. “And if you consider searches every 20 with a 10-mile radius you’ve got a fairly comprehensive view.”
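Mechanically, the grid-search approach described here is straightforward to approximate. The sketch below assumes the legacy Places API Nearby Search endpoint and the grocery_or_supermarket place type; it is a rough reconstruction of the idea, not Yau’s actual code, and current API versions may use different endpoints and types.

```python
# Rough sketch of the grid-search idea: sample points at regular intervals
# and ask whether any grocery store lies within a 10-mile radius. Endpoint,
# parameters, and place type are assumptions based on the legacy Places API.
import requests

API_KEY = "YOUR_KEY"  # placeholder
NEARBY_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
TEN_MILES_METERS = 16_093


def has_grocery_within_ten_miles(lat: float, lng: float) -> bool:
    """Return True if the API reports at least one grocery store within ~10 miles."""
    params = {
        "location": f"{lat},{lng}",
        "radius": TEN_MILES_METERS,
        "type": "grocery_or_supermarket",
        "key": API_KEY,
    }
    results = requests.get(NEARBY_URL, params=params, timeout=10).json().get("results", [])
    return len(results) > 0


def candidate_food_deserts(lat_min, lat_max, lng_min, lng_max, step=0.29):
    """Walk a coarse grid (about 20 miles per 0.29 degrees of latitude; longitude
    spacing is treated the same here for simplicity) and collect points with no
    grocery store nearby as candidate food-desert locations."""
    deserts = []
    lat = lat_min
    while lat <= lat_max:
        lng = lng_min
        while lng <= lng_max:
            if not has_grocery_within_ten_miles(lat, lng):
                deserts.append((lat, lng))
            lng += step
        lat += step
    return deserts
```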

There are two issues at work. One is how exactly to define a food desert: one mile might make sense in a city, but is 10 miles a good measure in a more rural area? The second issue is behind the scenes and concerns more than just grocery stores: how exactly should services and businesses be distributed in rural areas? How many health care facilities should there be? What about social services? Businesses and organizations could make a case that it is difficult to make money or cover their costs in such a rural environment.

One way around this would be to distinguish between urban food deserts and rural food deserts.

Adding creative endeavors to GDP

The federal government is set to change how it measures GDP and the new measure will include creative work:

The change is relatively simple: The BEA will incorporate into GDP all the creative, innovative work that is the backbone of much of what the United States now produces. Research and development has long been recognized as a core economic asset, yet spending on it has not been included in national accounts. So, as the Wall Street Journal noted, a Lady Gaga concert and album are included in GDP, but the money spent writing the songs and recording the album are not. Factories buying new robots counted; Pfizer’s expenditures on inventing drugs were not.

As the BEA explains, it will now count “creative work undertaken on a systematic basis to increase the stock of knowledge, and use of this stock of knowledge for the purpose of discovering or developing new products, including improved versions or qualities of existing products, or discovering or developing new or more efficient processes of production.” That is a formal way of saying, “This stuff is a really big deal, and an increasingly important part of the modern economy.”

The BEA estimates that in 2007, for example, adding in business R&D would have added 2 percent to U.S. GDP, or about $300 billion. Adding in the various inputs into creative endeavors such as movies, television and music will mean an additional $70 billion. A few other categories bring the total addition to over $400 billion. That is larger than the GDP of more than 160 countries…

The new framework will not stop the needless and often harmful fetishizing of these numbers. GDP is such a simple round number that it is catnip to commentators and politicians. It will still be used, incorrectly, as a proxy for our economic lives, and it will still frame our spending decisions more than it should. Whether GDP is up 2 percent or down 2 percent affects most people minimally (down a lot, quickly, is a different story). The wealth created by R&D that was statistically less visible until now benefited its owners even though the figures didn’t reflect that, and faster GDP growth today doesn’t help a welder when the next factory will use a robot. How wealth is used, who benefits from it and whether it is being deployed for sustainable future growth, that is consequential. GDP figures, even restated, don’t tell us that.

On one hand, changing a measure so that it more accurately reflects the economy is a good thing; this could help increase the validity of the measure. On the other hand, measures can still be used well or poorly, the change may not be a complete improvement over previous measures, and it may be difficult to reconcile new figures with past figures. It is not quite as easy as simply “improving” a measure; a lot of other factors are involved. It will be interesting to see how this measurement change sorts out in the coming years and how the information is utilized.

Defining what makes for a luxury home

Here is how one data firm defines what it means to be a luxury housing unit:

Although upscale housing is selling better in some cities than in others, a monthly analysis by the Altos Research data firm for the Institute for Luxury Home Marketing says that overall, that segment of the market is gaining momentum and prices are rising…

Q: “Luxury home” is probably one of the most abused phrases in real estate-ese. How do you define it?

A: A price range that’s considered the high end of the market in one place might be something that’s average in another. So, “luxury” is local: Our organization generally defines it as the top 10 percent of an area’s sales in the past 12 months. But for the purposes of the research that we do with Altos for our monthly Luxury Market Report, we’ve taken the ZIP codes within each of 31 markets that have the highest median prices, and for about five years we’ve tracked the sales of homes in those (areas) that are $500,000 and above.

There are two techniques proposed here:

1. The highest 10 percent of a local housing market. Thus, the prices are all relative and the data is based on the highest end in each place. So, there could be some major differences in luxury prices across zip codes or metropolitan regions.

2. Breaking it down first by geography to the wealthiest places (so this is based on geographic clustering) and then setting a clear cut point at $500,000. In these wealthiest zip codes, wouldn’t most of the units be over $500,000? Why the 31 wealthiest markets and not 20 or 40?

Each of these approaches has strengths and weaknesses, but I imagine the data could change quite a bit based on which operationalization is used.
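For the first approach, here is a minimal sketch of computing a local top-10-percent threshold from sales records; the column names and sample prices are hypothetical.

```python
# Sketch of the first operationalization: "luxury" as the top 10 percent of an
# area's sale prices over the past 12 months (hypothetical data and columns).
import pandas as pd

sales = pd.DataFrame({
    "zip_code":   ["60510", "60510", "60510", "90210", "90210", "90210"],
    "sale_price": [310_000, 425_000, 690_000, 1_800_000, 2_400_000, 5_100_000],
})

# Price threshold for "luxury" within each local market: the 90th percentile.
luxury_threshold = sales.groupby("zip_code")["sale_price"].quantile(0.9)
print(luxury_threshold)

# Flag each sale as luxury relative to its own market, so the cutoff is local.
sales["is_luxury"] = sales["sale_price"] >= sales["zip_code"].map(luxury_threshold)
print(sales)
```

Because the threshold is computed within each zip code, the same dollar price can be luxury in one market and ordinary in another, which is exactly the relativity the first definition builds in.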

Interestingly, the firm found that luxury sales rebounded quicker than the rest of the market:

The interesting thing about this recovery is that the luxury segment, that group of affluent households, was able to recover fairly quickly. They shifted their assets around, and a lot of them were able to see opportunities in the down market. By 2010, there were almost as many high-end households as before the downturn, not just in the United States, but internationally, as well. This group focused on residential real estate as a pretty desirable asset — for them, a second or third home turned out to be a portfolio play.

This shouldn’t be too surprising: when an economic crisis hits, the wealthier members of society have more of a cushion. While the upper end is doing all right, others have argued that buyers at the bottom end, those looking for starter homes, are having a tougher time.