Change how album sales are measured, change perceptions of popular music

The music industry changed in 1991 when the method for measuring album sales changed:

On May 25, 1991—30 years ago Tuesday—Billboard started using Nielsen SoundScan data to build its album chart, with all of its charts, including singles hub The Hot 100, eventually following suit. Meaning, the magazine started counting album sales with scanners and computers and whatnot, and not just calling up record stores one at a time and asking them for their individual counts, often a manual and semi-accurate and flagrantly corrupt process. This is the record industry’s Moneyball moment, its Eureka moment, its B.C.-to-A.D. moment. A light bulb flipping on. The sun rising. We still call this the SoundScan Era because by comparison the previous era might as well have been the Dark Ages.

First SoundScan revelation: Albums opened like movies, so for anything with an established fan base, that first week is usually, by far, the biggest. First beneficiary: Skid Row. And why not? “Is Skid Row at the height of their imperial period?” Molanphy asks of this ’91 moment. “For Skid Row, yes. But Skid Row is not Michael Jackson, Whitney Houston, Bruce Springsteen, or Stevie Wonder. Skid Row is a middle-of-the-road hair-metal band at the peak of their powers, relatively speaking. So it’s not as if they are commanding the field. It’s just the fans all showed up in week no. 1, and it debuts at no. 1. And then we discover, ‘Oh, this is going to happen every week. This is not special anymore.’”

Next SoundScan revelation: Hard rock and heavy metal were way more popular than anybody thought. Same deal with alternative rock, R&B, and most vitally, rap and country. In June 1991, N.W.A’s second album, Efil4zaggin, hit no. 1 after debuting at no. 2 the previous week. That September, Garth Brooks’s third album, the eventually 14-times-platinum Ropin’ the Wind, debuted at no. 1, the week after Metallica’s eventually 16-times-platinum self-titled Black Album debuted there. In early January 1992, Nirvana’s Nevermind, released in September ’91, replaced MJ’s Dangerous in the no. 1 spot, a generational bellwether described at the time by Billboard itself as an “astonishing palace coup.”

Virtually overnight, SoundScan changed the rules on who got to be a mega, mega superstar, and the domino effect—in terms of magazine covers, TV bookings, arena tours, and the other spoils of media attention and music-industry adulation—was tremendous, if sometimes maddeningly slow in coming. Garth, Metallica, N.W.A, Nirvana, and Skid Row were already hugely popular, of course. But SoundScan revealed exactly how popular, which of course made all those imperial artists exponentially more popular.

This is all about measurement – boring measurement! – but it is a fascinating story. Thinking from a cultural production perspective, here are three things that stand out to me:

  1. This was prompted in part by a technology change involving computers, scanners, and inventory systems. The prior system of calling some record stores and asking for their sales figures clearly had problems. But how do you capture all of the music being sold? That requires coordination and shared technology across many settings.
  2. The change in measurement led to changes in how people understood the music industry. Which genres are popular? Which artists are hot? How often do artists’ albums debut at #1, as opposed to being discovered by the public and climbing the charts? Better data changed how people perceived music.
  3. The change in measurement not only changed perceptions; it had cascading effects. The Matthew Effect suggests that small initial differences can lead to widening outcomes when actors are treated differently in those early stages (see the sketch after this list). When the new measurement system highlighted different artists, they got more attention.
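
To make the Matthew Effect mechanism concrete, here is a minimal sketch of a rich-get-richer (Pólya urn) process in Python. The artist counts and attention units are invented for illustration, not estimates of real chart dynamics.

```python
import random

# Minimal Matthew Effect sketch: each unit of new attention goes to one
# artist, chosen with probability proportional to the attention that
# artist already has. Identical starting points still end up far apart.
# All quantities here are illustrative.

def simulate(num_artists=5, new_units=1000, seed=7):
    random.seed(seed)
    attention = [1.0] * num_artists  # everyone starts equal
    for _ in range(new_units):
        winner = random.choices(range(num_artists), weights=attention)[0]
        attention[winner] += 1
    return sorted(attention, reverse=True)

print(simulate())  # typically one or two artists pull far ahead
```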

Summary: some might say that good music is good music, but how we obtain data and information about music, and then act upon that information, influences what music we promote and listen to.

The difficulty of measuring TV watching (COVID-19 and otherwise)

Nielsen and TV networks are sparring over Nielsen data that suggests fewer people are watching television during COVID-19:

Through the trade group Video Advertising Bureau, the networks are perplexed by Nielsen statistics that show the percentage of Americans who watched their televisions at least some time during the week declined from 92% in 2019 to 87% so far this year.

Besides being counter-intuitive in the pandemic era, the VAB says that finding runs counter to other evidence, including viewing measurements from set-top cable boxes, the increased amount of streaming options that have become available and a jump in sales for television sets…

The number of families, particularly large families, participating in Nielsen measurements has dropped over the past year in percentages similar to the decrease in viewership, Cunningham said. Nielsen acknowledges that its sample size is smaller — the company is not sending personnel into homes because of COVID-19 — but said statistics are being weighted to account for the change…

More people are spending time on tablets and smartphones, which aren’t measured by Nielsen. The podcast market is soaring. Sports on television was interrupted. Due to production shutdowns, television networks were airing far more reruns, Nielsen said.

This sounds like a coming together of long-term trends and short-term realities. The long-term trends include people engaging with media across a wider range of devices, the work it takes to measure all of that viewing and to find people willing to participate in data collection, and the many entertainment choices competing with television. In the short term, COVID-19 pushed people home but disrupted their typical patterns.
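
For what “weighting to account for the change” can look like in practice, here is a minimal sketch of post-stratification, one standard panel adjustment. The strata, respondent counts, and population shares are invented; Nielsen’s actual methodology is more involved.

```python
# A minimal sketch of post-stratification weighting, one standard way a
# shrinking panel can be adjusted to resemble the population again.
# The strata, respondent counts, and population shares are invented.

panel = {"1-2 person": 600, "3-4 person": 300, "5+ person": 100}          # respondents
population = {"1-2 person": 0.45, "3-4 person": 0.40, "5+ person": 0.15}  # target shares

total = sum(panel.values())
weights = {
    stratum: population[stratum] / (count / total)
    for stratum, count in panel.items()
}

# Underrepresented large households count extra; overrepresented small
# households count less, so weighted totals match the targets.
for stratum, w in weights.items():
    print(f"{stratum}: weight {w:.2f}")
```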

Will this affect the long-running place television has in the everyday lives of Americans? Even as of 2018, Nielsen reported that the average American watched more than 4 hours of television a day. TV might be conveyed through different formats – streaming, handheld devices, etc. – but it is still a powerful force and a significant use of time.

At the same time, how TV is consumed and how this affects what television means could be quite different moving forward. Watching streaming television on a smartphone while commuting is a very different experience than sitting on the couch after dinner for an hour or two and watching a big-screen TV. Teasing out these differences takes some work, but a new and/or younger generation of TV viewers might have quite a different relationship with television.

Facebook continues to claim it is about “meaningful social interactions”

Members of Congress questioned leaders of social media companies this week. In contrast to what legislators suggested, Mark Zuckerberg said Facebook has one particular goal:

Focusing on the attention-driven business model seems to have been a coordinated strategy among the committee’s Democrats, but they were not alone. Bill Johnson, a Republican from Ohio, compared the addictiveness of social platforms to cigarettes. “You profit from hooking users on your platforms by capitalizing off their time,” he said, addressing Dorsey and Zuckerberg. “So yes or no: Do you agree that you make money off of creating an addiction to your platforms?”

Both executives said no. As they did over and over again, along with Pichai, when asked straightforwardly whether their platforms’ algorithms are optimized to show users material that will keep them engaged. Rather than defend their companies’ business model, they denied it.

Zuckerberg, in particular, suggested that maximizing the amount of time users spend on the platform is the furthest thing from his engineers’ minds. “It’s a common misconception that our teams even have goals of trying to increase the amount of time that people spend,” he said. The company’s true goal, he insisted, is to foster “meaningful social interactions.” Misinformation and inflammatory content actually thwarts that goal. If users are spending time on the platform, it simply proves that the experience is so meaningful to them. “Engagement,” he said, “is only a sign that if we deliver that value, then it will be natural that people use our services more.”

Zuckerberg has said this for years; see this earlier post. Facebook and other social media platforms have the opportunity to bring people together, whether that is through building upon existing relationships or interacting with new people based on common interests and causes.

Has Facebook delivered on this promise? Do social media users find “meaningful social interactions”? The research I have done with Peter Mundey suggests emerging adult users are aware of the downsides of social media interactions but many still participate because they find meaning, or at least enough meaning, in them.

I suppose it might come down to defining and measuring “meaningful social interaction.” Social interaction can take many forms: carrying on relationships mediated by social media simply by viewing someone’s images and text over time; less personal interaction, such as commenting on or registering a reaction to something alongside hundreds of others; and direct interaction with particular people through various means. Is a negative response meaningful? Does a positive direct interaction count more? Can the interaction be episodic, or must it be sustained over a certain period of time?

One possible path: ask for evidence that Facebook, Twitter, Instagram, and Snapchat users (among others) are having meaningful interactions, alongside evidence of how these platforms count and measure captured attention. Another: ask whether these companies think they have succeeded in creating “meaningful social interactions” and what they would cite as markers of this.
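
As a toy illustration of how much the definition matters, here is a sketch in which the same interaction log scores very differently under two made-up rules. The event types and the “direct interaction” criterion are hypothetical, not Facebook’s actual metrics.

```python
# A toy illustration of why the definition matters: the same interaction
# log scores very differently under two made-up rules. Event types and
# fields are hypothetical, not Facebook's actual metrics.

events = [
    {"type": "reaction", "direct": False},
    {"type": "comment",  "direct": False},
    {"type": "message",  "direct": True},
    {"type": "reaction", "direct": False},
]

def score_engagement(log):
    # Rule 1: every interaction counts equally (pure attention).
    return len(log)

def score_meaningful(log):
    # Rule 2: only direct, person-to-person interactions count -- one
    # guess at "meaningful"; many other definitions are possible.
    return sum(1 for event in log if event["direct"])

print(score_engagement(events))  # 4 under the attention rule
print(score_meaningful(events))  # 1 under the stricter rule
```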

Defining and measuring boredom

Learn more about “boredom studies” here. On the definition of boredom:

Contemporary boredom researchers, for all their scales and graphs, do engage some of the same existential questions that had occupied philosophers and social critics. One camp contends that boredom stems from a deficit in meaning: we can’t sustain interest in what we’re doing when we don’t fundamentally care about what we’re doing. Another school of thought maintains that it’s a problem of attention: if a task is either too hard for us or too easy, concentration dissipates and the mind stalls. Danckert and Eastwood argue that “boredom occurs when we are caught in a desire conundrum, wanting to do something but not wanting to do anything,” and “when our mental capacities, our skills and talents, lay idle—when we are mentally unoccupied.”

Erin Westgate, a social psychologist at the University of Florida, told me that her work suggests that both factors—a dearth of meaning and a breakdown in attention—play independent and roughly equal roles in boring us. I thought of it this way: An activity might be monotonous—the sixth time you’re reading “Knuffle Bunny” to your sleep-resistant toddler, the second hour of addressing envelopes for a political campaign you really care about—but, because these things are, in different ways, meaningful to you, they’re not necessarily boring. Or an activity might be engaging but not meaningful—the jigsaw puzzle you’re doing during quarantine time, or the seventh episode of some random Netflix series you’ve been sucked into. If an activity is both meaningful and engaging, you’re golden, and if it’s neither you’ve got a one-way ticket to dullsville.

On measuring boredom:

The interpretation of boredom is one thing; its measurement is quite another. In the nineteen-eighties, Norman Sundberg and Richard Farmer, two psychology researchers at the University of Oregon, developed a Boredom Proneness Scale, to assess how easily a person gets bored in general. Seven years ago, John Eastwood helped come up with a scale for measuring how bored a person was in the moment. In recent years, boredom researchers have done field surveys in which, for example, they ask people to keep diaries as they go about daily life, recording instances of naturally occurring lethargy. (The result of these new methods was a boon to boredom studies—Mann refers to colleagues she runs into on “the ‘boredom’ circuit.”) But many of the studies involve researchers inducing boredom in a lab setting, usually with college students, in order to study how that clogged, gray lint screen of a feeling affects people.

A few quick thoughts:

  1. Boredom often arises in solitary conditions. In addition to studying social interactions and collective behavior, looking at what people do on their own is worthwhile – and it is connected to broader social interaction.
  2. The article mentions various dimensions of boredom as well as its persistence across time periods. I would be interested to hear more about how boredom has changed over time.
  3. In terms of measurement, why not more observational studies? If parked in a public space or granted access to living spaces, I would think researchers would have ample opportunities to see boredom. And the smartphone would seem to be a great device for tracking boredom, given its ability to sense movement, track particular uses, ask survey questions when boredom is sensed, etc. (see the sketch after this list).
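
Here is a minimal sketch of the smartphone idea in point 3: an experience-sampling trigger that prompts a survey question when assumed boredom signals (low movement, idle apps) co-occur. The thresholds and field names are hypothetical.

```python
from dataclasses import dataclass

# A minimal sketch of smartphone experience sampling for boredom, under
# assumed signals: low movement plus idle apps triggers a short survey
# prompt. The thresholds and field names are hypothetical.

@dataclass
class SensorSnapshot:
    steps_last_30min: int          # from the phone's motion sensors
    minutes_since_last_app: float  # idle time across tracked apps

def should_prompt(snapshot: SensorSnapshot) -> bool:
    # Guess: someone barely moving and not using apps *might* be bored;
    # the prompt asks rather than assumes.
    return (snapshot.steps_last_30min < 50
            and snapshot.minutes_since_last_app > 20)

if should_prompt(SensorSnapshot(steps_last_30min=12, minutes_since_last_app=35)):
    print("Quick survey: how bored are you right now, on a 1-7 scale?")
```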

The study of human behavior continues!

When growing rural communities are reclassified as urban communities

James Fallows points to a Washington Post piece that discusses the reclassification issue facing numerous rural communities:

A few years after every census, counties like Bracken are reclassified, and rural or “nonmetropolitan” America shrinks and metropolitan America grows. At least on paper. The character of a place doesn’t necessarily change the moment a city crosses the 50,000-resident mark…

The sprawling, diverse segment of the United States that has changed from rural to urban since 1950 is the fastest-growing segment of the country. Culturally, newly urban areas often have more in common with persistently rural places than with the biggest cities. Most notably, in 2016, Hillary Clinton would have won only the counties defined as urban when the metropolitan classification began in 1950, while Donald Trump would have won every group of counties added to metropolitan after the initial round….

About 6 in 10 U.S. adults who consider themselves “rural” live in an area classified as metropolitan by standards similar to those used above, according to a Washington Post-Kaiser Family Foundation poll conducted in 2017. And 3 in 4 of the adults who say they live in a “small town”? They’re also in a metro area…

If rural Americans complain of being left behind, it might be because they literally are. In government statistics, and in popular conception, rural is defined as what’s left after you have staked out all the cities and their satellites.

This is a measurement issue. What exactly counts as an urban, suburban, or rural area? This is a question I frequently field from students but it is more complicated than it looks.

My short answer: everything in between larger central cities and rural areas is a suburb.

My longer answer: metropolitan regions (encompassing the suburban areas around central cities) are drawn with county boundaries, not municipal boundaries. This means an entire county might be part of a metropolitan region but significant portions of the county are still rural.
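
As a simplified sketch of why county-based definitions sweep rural territory into metropolitan America, the rule below roughly paraphrases the OMB approach (a county counts as metropolitan if it contains an urbanized area of 50,000+ residents or is strongly tied to one by commuting); the sample counties and the exact commuting cutoff are illustrative.

```python
# A simplified sketch of why county-based metro definitions sweep rural
# territory into "urban" America. The rule roughly paraphrases the OMB
# approach; the sample counties and cutoffs are illustrative.

def is_metropolitan(county):
    return (county["largest_urban_area"] >= 50_000
            or county["commute_share_to_metro"] >= 0.25)

counties = [
    {"name": "Core County", "largest_urban_area": 120_000, "commute_share_to_metro": 0.00},
    {"name": "Farm County", "largest_urban_area": 4_000, "commute_share_to_metro": 0.30},
]

for county in counties:
    label = "metropolitan" if is_metropolitan(county) else "nonmetropolitan"
    print(f"{county['name']}: {label}")

# Farm County is mostly rural, but enough residents commute to the metro
# core, so the whole county -- farms and all -- counts as metropolitan.
```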

My longer longer answer: the official boundaries do not truly capture a suburban way of life. That way of life can also be found in numerous urban neighborhoods that contain single-family homes, yards, and families, as well as in more rural communities.

All of this may help explain why Americans tend to say they like or live in small towns even when these communities are, by certain measures, not small towns.

The last quoted paragraph above is also intriguing: is rural truly whatever is left over outside of metropolitan areas? At the start of the twentieth century, the majority of Americans lived outside cities and suburbs. As urban and suburban populations swelled, so did their geographic area. It is hard not to think that we still have not quite caught up with these major changes in spaces and communities a little over one hundred years later.

The difficulty of measuring a splintered pop culture

What are common pop culture products or experiences in today’s world? It is hard to tell:

Does Eilish, for instance, enjoy the same level of celebrity someone of equal accomplishment would have had 15 years ago? I don’t know. There are Soundcloud stars, Instagram stars, YouTube stars, Twitter stars, TikTok stars. Some artists, like Eilish and Lil Nas X, transition from success on a platform like Soundcloud to broader reach. But like Vine, TikTok has its own celebrities. So do YouTube and Instagram. We’ll always have George Clooneys and Lady Gagas, but it almost feels like we might be lurching towards a future with fewer superstars and more stars.

We can still read the market a little more clearly in music, and on social media, but the enigma of streaming services illustrates these challenges well. The problem is this: At a time when we’re both more able and more willing to concentrate in niches, we also have fewer metrics to understand what’s actually happening in our culture…

We have absolutely no idea. People could be watching “Santa Clarita Diet” in similar numbers to something like “It’s Always Sunny in Philadelphia,” or they could be completely ignoring it. The same goes for every show on Amazon, Hulu, and Netflix. We can see what people are chattering about on social media (hardly a representative sample), and where the companies are putting their money. But that’s it. Not only do true “cultural touchstones” seem to be fewer and farther in between in the streaming era, we also have fewer tools to determine what they actually are…

But it also means there’s more incentive for the creators of pop culture to carve us up by our differences rather than find ways to bring us together. It means we’re sharing fewer cultural experiences beyond the process of logging onto Netflix or Spotify, after which the home screen is already customized. On top of all this, it means we’re more and more in the dark about what’s entertaining us, and why that matters. What does cultural impact look like in an era of proliferating niches, where the metrics are murky?

I wonder if this could open up possibilities for new kinds of measurement beyond the producers of such products. For example, if Netflix is unwilling to report its numbers or does so only in certain circumstances, what is stopping new firms or researchers from broadly surveying Americans about their cultural consumption?

There may be additional unique opportunities and challenges for researchers. There are so many niche cultural products that could be considered hits that there is almost an endless supply of phenomena to study and analyze. On the other hand, this will make it more difficult to talk about “popular culture” or “American culture” as a whole. What unites all of these niches of different sizes and tastes?

It will also be interesting to see in 10-20 years what is actually remembered from this era of splintered cultural consumption. What cultural phenomena cross enough boundaries or niches to register with a large portion of the American population? Will the primary touchstones be viral Internet videos or stories rather than songs, movies, TV shows, books, etc.?

The changing concept of TV ratings

A recent report from Netflix about the number of viewers for certain movies and TV shows raises questions about what ratings actually mean in today’s world:

These numbers were presumably the flashiest numbers that Netflix had to offer, but, hot damn, they are flashy—even if they should be treated with much skepticism. For one thing, of Netflix’s 139 million global subscribers, only about 59 million are American, something to bear in mind when comparing Netflix’s figures with the strictly domestic ratings of most linear channels. Another sticking point: What constitutes “watching”? According to Netflix, the numbers reflect households where someone watched at least 70 percent of one episode—given the Netflix model, it seems likely that most people started with Episode 1—but this doesn’t tell us how many people stuck with it, or what the average rating for the season was, which is, again, an important metric for linear channels…

Ratings are not just a reflection of how many people are watching a TV show. They are not just a piece of data about something that has already happened. They are also a piece of information that changes what happens, by defining whether we think of something as a hit, which has a knock-on effect on how much attention gets paid to that show, not just by other prospective viewers, but by the media. (Think how much more has been written on You now that we know 40 million people may have watched it.)

Consider, for example, how something like last year’s reboot of Roseanne might have played out if it had been a Netflix series. It would have been covered like crazy before its premiere and then, in the absence of any information about its ratings at all, would have become, like, what? The Ranch? So much of the early frenzy surrounding Roseanne had to do with its enormous-for-our-era ratings, and what those ratings meant. By the same token, years ago I heard—and this is pure rumor and scuttlebutt I am sharing because it’s a fun thought exercise—that at that time Narcos was Netflix’s most popular series. Where is Narcos in the cultural conversation? How would that position have changed if it was widely known that, say, 15 million people watch its every season?

Multiple factors are at play here, including the decline of network television, the rise of cable television and streaming services, Netflix’s general secrecy about its ratings, and how we define cultural hits today. The last one seems the most interesting to me as a cultural sociologist: in a fragmented media world, how do we know what is a genuine cultural moment or touchstone compared to a small fad or a trend isolated to a small group? Ratings were once a way to do this, as we could assume big numbers meant it mattered to a lot of people.

Additionally, today we want quicker news about new trends and patterns. A rating can only tell us so much. It depends on how it was measured. How does the rating compare to other ratings? Perhaps most importantly, a rating cannot tell us much about the lasting cultural contribution of a show or movie. Some products with big ratings will not stand the test of time while others will. Do we think people will be discussing You and its impact on society in 30 years? We need time to discuss, analyze, and process what each cultural product is about. Cultural narratives involving cultural products need time to develop.
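
To see how much the choice of metric matters, here is a toy comparison of the “watched at least 70 percent of one episode” count described in the excerpt against a linear-style average per-episode audience. The viewing records are invented for illustration.

```python
# A toy comparison of the two metrics discussed above: a Netflix-style
# "view" (a household that watched at least 70 percent of one episode)
# versus a linear-TV-style average audience across the season.
# The viewing records are invented.

# Each household maps to the fraction of each of 3 episodes it watched.
households = {
    "A": [1.0, 1.0, 1.0],  # finished the season
    "B": [0.8, 0.0, 0.0],  # sampled one episode, then quit
    "C": [0.9, 0.2, 0.0],  # quit partway through
    "D": [0.5, 0.0, 0.0],  # never crossed the 70 percent line
}

netflix_views = sum(
    1 for eps in households.values() if any(frac >= 0.7 for frac in eps)
)

num_episodes = 3
avg_audience = sum(
    sum(1 for eps in households.values() if eps[i] >= 0.7)
    for i in range(num_episodes)
) / num_episodes

print(netflix_views)  # 3 households "watched" the show...
print(avg_audience)   # ...but the average episode audience is ~1.7
```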

Defining middle class in an era of economic uncertainty

Understanding the middle class requires looking not just at resources but also how the middle-class life is lived:

By the 1990s, the world that Mills had documented was coming apart as corporate downsizing and disinvestment upended the neat equation of secure work and praiseworthy home life. Social thinkers writing in that decade, including the sociologist Katherine Newman and the journalist Barbara Ehrenreich, followed Mills in charting the social and psychological shape of that in-between class. But they found that loss had replaced dependency as the most conspicuous feeling associated with middling workers’ place in the hierarchy.

Today anguish over lost social standing has, in turn, been replaced by a pervasive sense of insecurity…

Aspiring to stability and respectability today means not only navigating the landscape of eroded and contingent work, but of managing debts. Trying to give children a shot, parents take on financial burdens that can destabilize their own future security.

Class has always been partly about income, but debt is now an equal component of the middle-class story, leading to a central paradox of aspirational lives: Striving for stability and respectability means inhabiting insecurity both socially and psychologically. Economic metrics alone can only tell a shallow story, but at the very least, debt should join income in any attempt at definition.

If this is true, perhaps social class should be accompanied by a different sort of measure. Here are a few options:

  1. Economic security or economic insecurity. Perhaps there would be a certain bar to meet – having a certain amount of savings, the ability to find another job, or something else.
  2. Some measure of anxiety or well-being about current economic conditions.

Two households with similar sets of resources could be quite different on these measures based on the particulars of certain jobs, family situations, debt, etc.
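
A minimal sketch of the kind of bar option 1 describes, showing how two households with the same income can land on opposite sides of an economic-security line. The thresholds (three months of expenses saved, a 40 percent debt-to-income cap) are hypothetical, not an established standard.

```python
# A minimal sketch of an economic-security flag of the kind option 1
# describes. The thresholds are hypothetical, not an established standard.

def economically_secure(savings, monthly_expenses, debt, annual_income):
    months_of_cushion = savings / monthly_expenses
    debt_to_income = debt / annual_income
    return months_of_cushion >= 3 and debt_to_income <= 0.4

# Two households with the same income land on opposite sides:
print(economically_secure(savings=15_000, monthly_expenses=4_000,
                          debt=20_000, annual_income=80_000))  # True
print(economically_secure(savings=2_000, monthly_expenses=4_000,
                          debt=60_000, annual_income=80_000))  # False
```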

The biggest downsides to such measures could be that they remove the baselines that social class measures often have and complicate the value judgments made about social class. We know that less income or lower wealth matters; a household with $20,000 of income is going to be different from one with $100,000. (Yes, this could be contextual based on cost of living.) But if we start including measures of the lived experience of class, is there a baseline? Similarly, what if financial measures were similar for two groups but one group had a higher level of anxiety or insecurity; would researchers and pundits be quick to judge whether that anxiety is justified?

Of course, if the insecurity/anxiety questions are asked alongside more traditional measures of social class, researchers can look at the relationships and determine what a consistent and valid measure of social class should be.

Archetypal American cities and “America has only three cities: New York, San Francisco, and New Orleans. Everywhere else is Cleveland.”

A story about the decline of retail establishments in Manhattan and the consequences for street life ends with this saying from Tennessee Williams:

“America has only three cities,” Tennessee Williams purportedly said. “New York, San Francisco, and New Orleans. Everywhere else is Cleveland.” That may have been true once. But New York’s evolution suggests that the future of cities is an experiment in mass commodification—the Clevelandification of urban America, where the city becomes the very uniform species that Williams abhorred. Paying seven figures to buy a place in Manhattan or San Francisco might have always been dubious. But what’s the point of paying New York prices to live in a neighborhood that’s just biding its time to become “everywhere else”?

These three cities are indeed unique with distinct cultures and geographies. But, I could imagine there would be some howls in response from a number of other big cities. What about Chicago and its distinct Midwest rise in the middle of a commodity empire? What about Los Angeles and its sprawling suburbs and highways between and across mountains and the ocean? What about Miami serving as a Caribbean capital? What about Portland’s unusual climate and approach to social issues? And the list could go on.

Perhaps a more basic question is this: how many archetypal American cities are there? One of the books I have used in urban sociology, The City, Revisited, argues for three main schools of urban theory: New York, Chicago, and Los Angeles. These happen to be the three largest cities in the United States and also have the advantage of having collections of urban scholars present in each. New York is marked by a strong core (Manhattan) and a unique colonial history (Dutch and then English) that helped kickstart a thriving economy and religious and cultural pluralism. Chicago is the American boom city of the 1800s and was home to the influential Chicago School at the University of Chicago in the 1920s and 1930s. Los Angeles is the prototypical twentieth-century American city built around highways and Hollywood with a rise of urban theorists in the late 1900s dubbing themselves the Los Angeles School. If these are the three main cities on which to compare and contrast, a place like Cleveland is more like Chicago (as is much of the Rust Belt), Houston is more like Los Angeles (as is much of the Sunbelt), and San Francisco is more like New York (and some other coastal cities might fit here).

But, these three biggest cities cannot cover all possible kinds of American cities. How many archetypal cities are too many before the categories become less helpful? Should the emphasis be on cultural feel or on how cities develop (New Orleans might simply be a unique outlier in all of this data)? These ideal type cities are only helpful insofar as they describe and embody broad patterns across groups of cities.

Researchers say half the world is middle class or higher

A new report suggests a majority of humans are middle class or above:

For the first time since agriculture-based civilization began 10,000 years ago, the majority of humankind is no longer poor or vulnerable to falling into poverty. By our calculations, as of this month, just over 50 percent of the world’s population, or some 3.8 billion people, live in households with enough discretionary expenditure to be considered “middle class” or “rich.” About the same number of people are living in households that are poor or vulnerable to poverty. So September 2018 marks a global tipping point. After this, for the first time ever, the poor and vulnerable will no longer be a majority in the world. Barring some unfortunate global economic setback, this marks the start of a new era of a middle-class majority.

We make these claims based on a classification of households into those in extreme poverty (households spending below $1.90 per person per day) and those in the middle class (households spending $11-110 per day per person in 2011 purchasing power parity, or PPP). Two other groups round out our classification: vulnerable households fall between those in poverty and the middle class; and those who are at the top of the distribution who are classified as “rich.”
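
A direct sketch of the classification in the excerpt: households are sorted by spending per person per day in 2011 PPP dollars, with the $1.90 and $11–110 cut points taken from the quote and “vulnerable” filling the gap between them.

```python
# A direct sketch of the classification described in the excerpt above:
# households are sorted by spending per person per day (2011 PPP dollars).
# The $1.90 and $11-110 cut points come from the report; "vulnerable"
# fills the gap between poverty and the middle class.

def classify(spend_per_person_per_day):
    if spend_per_person_per_day < 1.90:
        return "extreme poverty"
    elif spend_per_person_per_day < 11:
        return "vulnerable"
    elif spend_per_person_per_day <= 110:
        return "middle class"
    else:
        return "rich"

for spend in [1.50, 5.00, 25.00, 150.00]:
    print(f"${spend:.2f}/day -> {classify(spend)}")
```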

The consequences could be interesting:

Why does it matter that a middle-class tipping point has been reached and that the middle class is the most rapidly growing segment of the global income distribution? Because the middle class drive demand in the global economy and because the middle class are far more demanding of their governments…

In most countries, there is a clear relationship between the fate of the middle class and the happiness of the population. According to the Gallup World Poll, new entrants into the middle class are noticeably happier than those stuck in poverty or in vulnerable households. Conversely, individuals in countries where the middle class is shrinking report greater degrees of personal stress. The middle class also puts pressure on governments to perform better. They look to their governments to provide affordable housing, education, and universal health care. They rely on public safety nets to help them in sickness, unemployment or old age. But they resist efforts of governments to impose taxes to pay the bills. This complicates the politics of middle-class societies, so they range from autocratic to liberal democracies. Many advanced and middle-income countries today are struggling to find a set of politics that can satisfy a broad middle-class majority.

There are multiple issues to consider here: how all of this is measured, whether the majority is relatively evenly spread across countries or is concentrated in certain areas, and what this might bring.

But, I will point to another feature of this study: it suggests relatively good news. For much of human history, larger-scale collectives – from kingdoms to empires to countries – have consisted of some elites, perhaps a limited middle class, and a larger poor and working-class population. If these figures are accurate, more people than ever have access to resources and opportunities.

This fits nicely with arguments I have encountered in recent years that there is a good amount of good news about the global system. On one hand, there are still major problems and sizable poor and vulnerable populations (the less well-off half in this study). On the other hand, global health is improving, economic conditions on the whole are improving, violence is down (in relative terms), and people around the world may be paying attention to the plight of others like never before.

Perhaps this is why even Google has ways of surfacing some good news. Even if much news revolves around problems, there is plenty of good news to find.