Considering the environmental and material costs of Internet music

A new book considers what it takes to record, produce, sell, and consume music in today’s world:


Listening to music on the Internet feels clean, efficient, environmentally virtuous. Instead of accumulating heaps of vinyl or plastic, we unpocket our sleek devices and pluck tunes from the ether. Music has, it seems, been freed from the grubby realm of things. Kyle Devine, in his recent book, “Decomposed: The Political Ecology of Music,” thoroughly dismantles that seductive illusion. Like everything we do on the Internet, streaming and downloading music requires a steady surge of energy. Devine writes, “The environmental cost of music is now greater than at any time during recorded music’s previous eras.” He supports that claim with a chart of his own devising, using data culled from various sources, which suggests that, in 2016, streaming and downloading music generated around a hundred and ninety-four million kilograms of greenhouse-gas emissions—some forty million more than the emissions associated with all music formats in 2000. Given the unprecedented reliance on streaming media during the coronavirus pandemic, the figure for 2020 will probably be even greater.

The ostensibly frictionless nature of online listening has other hidden or overlooked costs. Exploitative regimes of labor enable the production of smartphone and computer components. Conditions at Foxconn factories in China have long been notorious; recent reports suggest that the brutally abused Uighur minority has been pressed into the production of Apple devices. Child laborers are involved in the mining of cobalt, which is used in iPhone batteries. Spotify, the dominant streaming service, needs huge quantities of energy to power its servers. No less problematic are the streaming services’ own exploitative practices, including their notoriously stingy royalty payments to working musicians. Not long ago, Daniel Ek, Spotify’s C.E.O., announced, “The artists today that are making it realize that it’s about creating a continuous engagement with their fans.” In other words, to make a living as a musician, you need to claw desperately for attention at every waking hour…

Devine holds out hope for a shift in consciousness, similar to the one that has taken place in our relationship with food. When we listen to music, we may ask ourselves: Under what conditions was a particular recording made? How equitable is the process by which it has reached us? Who is being paid? How are they being treated? And—most pressing—how much music do we really need? Perhaps, if we have less of it, it may matter to us more.

A full consideration of the ethics of music production and sales could raise a number of concerns. In addition to the environmental issues, what about how musical acts are treated? Who profits from streaming? How many people in the music industry come out in the end as better people?

In a non-COVID-19 world, one answer would seem to be supporting local live music. Live shows take up space and energy, but if the musicians do not have to travel far, if the audience takes the music in without recordings or playback equipment standing in the way, and if a positive collective spirit emerges, this might be the ideal. It also shifts the attention from music as a commodity – something I can own or stream in tremendous quantities – to music as an experience. Alas, this might be hard to achieve even without a pandemic, given the propensity toward large tours (particularly the mega-tours of the most famous acts) and lots of travel.

Thinking beyond music, this line of argument highlights how the direct outcomes of consumption and action become even further removed from people when information, products, and experiences come through the Internet. If I am streaming, I may know the data comes from somewhere. But how many people have ever seen a data center, let alone have some idea of what running one involves?

“So what are the rules of ethnography, and who enforces them?”

A journalist looking into the Goffman affair discusses the ethics of ethnography:

To find out, I called several sociologists and anthropologists who had either done ethnographic research of their own or had thought about the methodology from an outside perspective. Ethnography, they explained, is a way of doing research on groups of people that typically involves an extended immersion in their world. If you’re an ethnographer, they said, standard operating procedure requires you to take whatever steps you need to in order to conceal the identities of everyone in your sample population. Unless you formally agree to fulfill this obligation, I was told, your research proposal will likely be blocked by the institutional review board at your university…

The frustration is not merely a matter of academics resenting oversight out of principle. Many researchers think the uncompromising demand for total privacy has a detrimental effect on the quality of scholarship that comes out of the social sciences—in part because anonymization makes it impossible to fact-check the work…

According to Goffman, her book is no less true than Leovy’s or LeBlanc’s. That’s because, as she sees it, what sociologists set out to capture in their research isn’t truths about specific individuals but general truths that tell us how the world works. In her view, On the Run is a true account because the general picture it paints of what it’s like to live in a poor, overpoliced community in America is accurate.

“Sociology is trying to document and make sense of the major changes afoot in society—that’s long been the goal,” Goffman told me. Her job, she said, as a sociologist who is interested in the conditions of life in poor black urban America, is to identify “things that recur”—to observe systemic realities that are replicated in similar neighborhoods all over the country. “If something only happens once, [sociologists are] less interested in it than if it repeats,” she wrote to me in an email. “Or we’re interested in that one time thing because of what it reveals about what usually happens.” This philosophy goes back to the so-called Chicago school of sociology, Goffman added, which represented an attempt by observers of human behavior to make their work into a science “by finding general patterns in social life, principles that hold across many cases or across time.”…

Goffman herself is the first to admit that she wasn’t treating her “study subjects” as a mere sample population—she was getting to know them as human beings and rendering the conditions of their lives from up close. Her book makes for great reading precisely because it is concerned with specifics—it is vivid, tense, and evocative. At times, it reads less like an academic study of an urban environment and more like a memoir, a personal account of six years living under extraordinary circumstances. Memoirists often take certain liberties in reconstructing their lives, relying on memory more than field notes and privileging compelling narrative over strict adherence to the facts. Indeed, in a memoir I’m publishing next month, there are several moments I chose to present out of order in order to achieve a less convoluted timeline, a fact I flag for the reader in a disclaimer at the front of the book.

Not surprisingly, there is disagreement within the discipline of sociology, as well as across disciplines, about how ethnography could and should work. It is a research method that requires so much time and personal effort that it is easy to tie a study to a particular researcher and their laudable steps or mistakes. This might miss the forest for the trees; I’ve thought for a while that we need more discussion across ethnographies rather than treating each as the singular work on its subject. In other words, does Goffman’s data line up with what others have found in studying race, poor neighborhoods, and the criminal justice system? And if there are no comparisons to make with Goffman’s work, why aren’t more researchers wrestling with the same topic?

Additionally, this particular discussion highlights longstanding tensions in sociology: qualitative versus quantitative data (with one often assumed to be closer to “fact”); “facts” versus “interpretation”; writing academic texts versus books for more general audiences; emphasizing individual stories (which often appeal to the public) versus the big picture; dealing with outside regulations such as IRBs that may or may not be accustomed to ethnographic methods in sociology; and how best to do research that helps disadvantaged communities. Some might see these tensions as more evidence that sociology (and other social sciences) simply can’t tell us much of anything. I would suggest the opposite: the realities of the social world are so complex that these tensions are necessary for gathering and interpreting comprehensive data.

Teaching student tech designers to treat users more humanely

Here are a few college classes intended to help future tech designers keep the well-being of users in mind:

The class, which she taught at the Rhode Island School of Design and the MIT Media Lab, attempted to teach a sense of responsibility to technology inventors through science fiction, a genre in which writers have been thinking deeply about the impact of today’s technologies for decades. “It encourages people to have that long-term vision that I think is missing in the world of innovation right now,” she says. “What happens when your idea scales to millions of people? What happens when people are using your product hundreds of times a day? I think the people who are developing new technologies need to be thinking about that.”

Students in Brueckner’s class built functional prototypes of technologies depicted by science fiction texts. One group created a “sensory fiction” book and wearable gadget that, in addition to adding lights and sounds to a story, constricts the body through air pressure bags, changing temperature and vibrating “to influence the heart” depending on how the narrative’s protagonist feels. Another group was inspired by a dating technology in Dave Eggers’s The Circle that uses information scraped from the Internet about a date to give suggestions about how to impress him or her. They created an interactive website about a friend using his public information to see how he would react to the idea. A third group imagined how a material that could transition from liquid to solid on command like the killing material “ice-nine” from Kurt Vonnegut’s Cat’s Cradle could be used as a prototyping tool…

Neema Moraveji, the founding director of Stanford’s Calming Technology Lab and a cofounder of breath-tracking company Spire, has a different approach to teaching students to consider the human impact of what they are designing. His classes teach students to create technology that actively promotes a calm or focused state of mind, and he co-authored a paper that laid out several suggestions for technology designers, including:

  • Letting users control or temporarily disable interruptions, the way that TweetDeck allows users to control from whom to receive notifications on Twitter (a rough sketch of this idea in code follows the list).
  • Avoiding overload through the number of features available and the way information is presented. For instance, a Twitter app that opens to the least-recent tweet “gives users the sense that they must read through all the tweets before they are done.”
  • Using a human tone or humor.
  • Providing positive feedback such as “Thanks for filling out the form” and “You successfully updated the application” in addition to error alerts.
  • Including easy ways to interact socially, such as Likes and Retweets, which allow people to interact without worrying about how they appear to others.
  • Avoiding time pressure when not necessary.
  • Incorporating natural elements like “soothing error tones, naturalistic animations, and desktop wallpapers taken from the natural world.”
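
To make a couple of these suggestions concrete, here is a minimal sketch of what user-controlled interruptions (the first item) and positive feedback (the fourth item) might look like in code. It is purely my own illustration under assumed details – the NotificationPreferences class, the quiet-hours window, and the wording of the messages are hypothetical – and not anything taken from Moraveji’s paper or from an actual product.

```python
from dataclasses import dataclass, field
from datetime import time

# Hypothetical illustration (not from the paper): respect user-controlled
# interruptions (suggestion 1) and give positive feedback (suggestion 4).

@dataclass
class NotificationPreferences:
    muted_sources: set = field(default_factory=set)  # sources the user has silenced
    quiet_start: time = time(22, 0)                  # do-not-disturb window,
    quiet_end: time = time(7, 0)                     # assumed to wrap past midnight

    def allows(self, source: str, now: time) -> bool:
        """Deliver a notification only if the source is not muted and the
        current time falls outside the user's quiet hours."""
        if source in self.muted_sources:
            return False
        in_quiet_hours = now >= self.quiet_start or now < self.quiet_end
        return not in_quiet_hours

def confirmation_message(action: str) -> str:
    """Positive feedback in a human tone instead of silence or an error code."""
    return f"Thanks! You successfully {action}."

# Example usage
prefs = NotificationPreferences(muted_sources={"@noisy_account"})
print(prefs.allows("@friend", now=time(12, 30)))         # True: deliver it
print(prefs.allows("@noisy_account", now=time(12, 30)))  # False: user muted this source
print(prefs.allows("@friend", now=time(23, 15)))         # False: quiet hours
print(confirmation_message("updated the application"))
```

None of this is hard to build; the harder question is whether product teams choose to prioritize it.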

These sound like interesting ideas that may help designers think not just about the end goals of a product but also about the user experience. Yet I still wonder about the ability of tech designers to resist the pressure their employers might put on them. For example, putting these more humane options into practice could be easier when working for your own startup but more difficult when a big corporation is breathing down your neck to boost the bottom line or ship the product. Think of the Milgram experiment: can individual designers follow the ethical path? Perhaps some of this training also needs to happen at the executive and managerial levels so that the emphasis on protecting the user is pervasive throughout organizations.

Facebook not going to run voting experiments in 2014

Facebook is taking an increasing role in curating your news but has decided not to conduct experiments with the 2014 elections:

Election Day is coming up, and if you use Facebook, you’ll see an option to tell everyone you voted. This isn’t new; Facebook introduced the “I Voted” button in 2008. What is new is that, according to Facebook, this year the company isn’t conducting any experiments related to election season.

That’d be the first time in a long time. Facebook has experimented with the voting button in several elections since 2008, and the company’s researchers have presented evidence that the button actually influences voter behavior…

Facebook’s experiments in 2012 are also believed to have influenced voter behavior. Of course, everything is user-reported, so there’s no way of knowing how many people are being honest and who is lying; the social network’s influence could be larger or smaller than reported.

Facebook has not been very forthright about these experiments. It didn’t tell people at the time that they were being conducted. This lack of transparency is troubling, but not surprising. Facebook can introduce and change features that influence elections, and that means it is an enormously powerful political tool. And that means the company’s ability to sway voters will be of great interest to politicians and other powerful figures.

Facebook will still have the “I Voted” button this week:

On Tuesday, the company will again deploy its voting tool. But Facebook’s Buckley insists that the firm will not this time be conducting any research experiments with the voter megaphone. That day, he says, almost every Facebook user in the United States over the age of 18 will see the “I Voted” button. And if the friends they typically interact with on Facebook click on it, users will see that too. The message: Facebook wants its users to vote, and the social-networking firm will not be manipulating its voter promotion effort for research purposes. How do we know this? Only because Facebook says so.

It seems like there are two related issues here:

1. Should Facebook promote voting? I would guess many experts would welcome popular efforts to get people to vote. After all, how good is a democracy if many people don’t exercise their right to vote? Facebook is a popular tool, and if it can help boost political and civic engagement, what could be wrong with that?

2. However, Facebook is also a corporation that is collecting data. Its efforts to promote voting might be part of experiments. Users aren’t immediately aware that they are participating in an experiment when they see an “I Voted” button. Or the company may decide to try to influence elections.

Facebook is not alone in promoting election participation. Hundreds of media outlets promote election news. Don’t they encourage voting? Aren’t they major corporations? The key here appears to be the experimental angle: people might be manipulated. Might this be okay if (1) they know they are taking part (voluntary participation is key to social science experiments) and (2) it promotes the public good? This sort of critique implies that the first part is necessary because fulfilling a public good is not enough to justify the potential manipulation.

What if Facebook could consistently improve users’ moods?

There has been a lot of hubbub about the ethics of a mood experiment Facebook ran several years ago. But, what if Facebook could consistently alter what it presents users to improve their mood and well-being? Positive psychology guru Marty Seligman hints at this in Flourish:

It is not only measuring well-being that Facebook and its cousins can do, but increasing well-being as well. “We have a new application: goals.com,” Mark continued. “In this app, people record their goals and their progress toward their goals.”

I commented on Facebook’s possibilities for instilling well-being: “As it stands now, Facebook may actually be building four of the elements of well-being: positive emotion, engagement (sharing all those photos of good events), positive relationships (The heart of what ‘friends’ are all about), and now accomplishment. All to the good. The fifth element of well-being, however, needs work, and in the narcissistic environment of Facebook, this work is urgent, and that is belonging to and serving something that you believe is bigger than the self – the element of meaning. Facebook could indeed help to build meaning in the lives of the five hundred million users. Think about it, Mark.” (page 98)

This might still be a question of ethics and of letting users know what is happening. And I’m sure some critics would argue that it is too artificial, that relationships sustained online are of a different kind than face-to-face relationships (though we know most users interact online with people they already know offline), and that this puts too much power in the hands of Facebook. Yet what if Facebook could help improve well-being? What if a lot of good could be done by altering the online experience?

Facebook ran a mood-altering experiment. What are the ethics of doing research with online subjects?

In 2012, Facebook ran a one-week experiment, changing news feeds and looking at how people’s moods changed. The major complaint about this seems to be the lack of consent and/or the deception involved:

The backlash, in this case, seems tied directly to the sense that Facebook manipulated people—used them as guinea pigs—without their knowledge, and in a setting where that kind of manipulation feels intimate. There’s also a contextual question. People may understand by now that their News Feed appears differently based on what they click—this is how targeted advertising works—but the idea that Facebook is altering what you see to find out if it can make you feel happy or sad seems in some ways cruel.

This raises important questions about how online research intersects with traditional scientific ethics. In sociology, we tend to sum up our ethics in two rules: don’t harm people, and participants have to volunteer or give consent to be part of studies. The burden falls on the researcher to ensure that the subject is protected. How explicit should this be online? Participants on Facebook were likely not seriously harmed, though it could be quite interesting if someone could directly link their news feed from that week to negative offline consequences. And how well do the terms of service line up with conducting online research? Given the public relations issues, it would behoove companies to be more explicit about this in their terms of service or elsewhere, though they might argue that informing people of experiments as they happen can influence results. This issue will be one to watch, as the sheer number of people online will drive more and more online research.

Let’s be honest about the way this Internet stuff works. There is a trade-off involved: users get access to all sorts of information, other people, products, and the latest viral videos and celebrity news that everyone has to know. In exchange, users give up something, whether that is their personal information, the tracking of their online behaviors, or exposure to advertisements intended to part them from their money. Maybe it doesn’t have to be set up with such bargaining, and where exactly the line is drawn is a major discussion point at this time. For now, you should assume that websites, companies, and advertisers are trying to get as much from you as possible and plan accordingly. Facebook is not a pleasant entity that just wants to make your life better by connecting you to people; it has its own aims, which may or may not line up with your own. Google, Facebook, Amazon, and the rest are mega-corporations whether they want to be known as such or not.

Wait, What’s Your Problem: does the Census require people to participate or not?

Sunday’s What’s Your Problem? column in the Chicago Tribune featured a woman irritated by some Census workers who did sound like creepers. Yet a Census official’s answer about whether U.S. residents have to participate in Census surveys is still unclear:

He said census interviewers are trained to be professional, courteous, and to never use the possibility of a fine to coerce people into participating.

Olson said the American Community Survey is mandatory and there is a potential fine for people who fail to participate, but the Census Bureau relies on public cooperation to encourage responses.

The survey is important because its data guide nearly 70 percent of federal grants, Olson said.

This is a common response from the Census Bureau, but it is still vague. Is participating in the Census and the American Community Survey mandatory or not? Is there a fine for not participating? The answer seems to be yes and yes – it is mandatory and a fine is possible, yet no one really has to worry about incurring a penalty.

Typical social science research, which is akin to what the Census Bureau is doing (and the organization has been led by sociologists), has several basic ethical rules for collecting information from people. Don’t harm people. (See the above story about peeking in people’s windows.) And participation has to be voluntary, even if that allows contacting people multiple times. So is participation really voluntary if there is even the implicit threat of a fine? This is where the effort becomes less like social science research and more like government action, a fine line the Census is walking here. Clearing this up might help improve relations with people who are suspicious of why the Census wants basic information about their lives.


Sociologist on how studying an extremist group led to a loss of objectivity

A retired sociologist who studied the Aryan Nation discusses how his research led to a loss of objectivity and a change of research topics:

Aho began his research in the mid-1980s with a focus on the most notorious group in Idaho, the Aryan Nation Church near Coeur d’Alene and Hayden Lake. Annual conferences were held there with people from all around the world to fight what they called the “race war.” The group, originally formed in California, was forced to relocate to Idaho due to pressure from authorities. Aho was able to interview members of the group face to face, conduct phone interviews and correspond with prison inmates who were part of the organization.

“These individuals were genuinely good, congenial folks,” said Aho. “They were very independent, married, church-going people with deep beliefs. It was only when they gathered in groups and reaffirmed each other’s prejudices that things became dangerous…

In his research, Aho tried to place himself in his subjects’ shoes. He expressed how it is important to see yourself in the other person to find mutual ground and truths that can only be obtained by using this research methodology. However, after nearly a decade of research, he felt that he was losing objectivity and only adding to the problem.

“I spent years trying to understand the people who are attracted to violence, but I began to feel like my fascination with violence made me partly responsible for it,” Aho said. “I think I lost my sociological objectivity, and thought it was time to end my efforts of trying to understand it, and move on to other scholarly activities.”

Some candor about researching a difficult topic. Given recent statements by some that we should not “commit sociology” and should refrain from looking for explanations for violence, we could just ignore such groups. But looking for explanations is not the same as excusing or condoning behavior, and it may help limit violence in the future. At the same time, spending lots of time with people, whether they are good or bad, can lead to relationships and a humanizing of the research subject. This may provide better data for a while as well as dignity for the research subject, but it can lead to the “going native” issue that anthropologists sometimes discuss. A sociologist wants to be able to remain an observer and analyst, even while trying to put themselves in the shoes of others.

It would be interesting to hear the opinions of sociologists regarding studying clearly unpopular groups like white supremacists/terrorists. Sociologists are often interested in studying disadvantaged or voiceless groups but what about groups with which they profoundly disagree?

Problems in Detroit include “dysfunctional American sociology” and lack of regional governance

One commentator focuses on the lack of metropolitan governance in Detroit and also mentions “dysfunctional American sociology.” Here is the bit on sociology:

And without widespread racism, there would have been fewer ghettoized African-Americans.

Hard to ignore this. See the work of scholar Thomas Sugrue in The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit.

Here is more of the argument for regional governance:

In a European-style metro Detroit, unified regional planning would favor reconstruction of the old city centre over new buildings and new highways in ever more distant locations. Some of the tax revenue raised in what are today separate affluent suburban jurisdictions would be spent in the centre of the city. With better roads, schools, police and services, Detroit’s slums would be less slummy and the culture of crime and despair would probably be less entrenched.

There’s actually no need to go to Europe to find better ways to arrange urban jurisdictions. As David Rusk points out in his book “Cities without Suburbs”, the American cities that have expanded their city limits along with their populations generally have stronger economies, less racial segregation and more equal income distribution than the mostly older cities with rigid borders.

The ethical issue can be reduced to an old question: who is my neighbor? Everyone, even economists who believe people should be selfish, recognizes that it is helpful to work together as a community. Almost everyone, perhaps excluding a few cold-hearted economists, would agree that the strong in a community have some obligation to help the weak. But how large is the relevant community?…

David Rusk, Myron Orfield, and others have made the argument for regional governance for decades, but it has had difficulty gaining traction, particularly in wealthier suburbs that do not see this as such a clear-cut ethical issue. Opposition to regional governance is rooted in long-standing tensions between cities and less urban areas, where cities are viewed as bad places full of crime, racial and ethnic minorities, immigrants, densities that are too high, uncleanliness, and other “urban problems.” Why should people who made the choice to move to the suburbs be held responsible for the problems of people in other communities? Ultimately, perhaps this is rooted in an American individualism that views every move to the suburbs as the result of individual merit and that tends to favor keeping government and control as local as possible.

Social psychologist on quest to find researchers who falsify data

The latest Atlantic magazine includes a short piece about a social psychologist who is out to catch other researchers who falsify data. Here is part of the story:

Simonsohn initially targeted not flagrant dishonesty, but loose methodology. In a paper called “False-Positive Psychology,” published in the prestigious journal Psychological Science, he and two colleagues—Leif Nelson, a professor at the University of California at Berkeley, and Wharton’s Joseph Simmons—showed that psychologists could all but guarantee an interesting research finding if they were creative enough with their statistics and procedures.

The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds.

Just as Simonsohn was thinking about how to follow up on the paper, he came across an article that seemed too good to be true. In it, Lawrence Sanna, a professor who’d recently moved from the University of North Carolina to the University of Michigan, claimed to have found that people with a physically high vantage point—a concert stage instead of an orchestra pit—feel and act more “pro-socially.” (He measured sociability partly by, of all things, someone’s willingness to force fellow research subjects to consume painfully spicy hot sauce.) The size of the effect Sanna reported was “out-of-this-world strong, gravity strong—just super-strong,” Simonsohn told me over Chinese food (heavy on the hot sauce) at a restaurant around the corner from his office. As he read the paper, something else struck him, too: the data didn’t seem to vary as widely as you’d expect real-world results to. Imagine a study that calculated male height: if the average man were 5-foot-10, you wouldn’t expect that in every group of male subjects, the average man would always be precisely 5-foot-10. Yet this was exactly the sort of unlikely pattern Simonsohn detected in Sanna’s data…

Simonsohn stressed that there’s a world of difference between data techniques that generate false positives, and fraud, but he said some academic psychologists have, until recently, been dangerously indifferent to both. Outright fraud is probably rare. Data manipulation is undoubtedly more common—and surely extends to other subjects dependent on statistical study, including biomedicine. Worse, sloppy statistics are “like steroids in baseball”: Throughout the affected fields, researchers who are too intellectually honest to use these tricks will publish less, and may perish. Meanwhile, the less fastidious flourish.
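
To get a feel for how much these flexible practices matter, here is a quick simulation of my own – an illustration of the general idea rather than the authors’ actual code or numbers. Both groups are drawn from the same distribution, so there is no real effect, yet the “study” peeks at the data repeatedly, adds subjects until something turns up, and tests several outcome variables while reporting only the lucky one. Even this modest amount of flexibility pushes the false-positive rate well above the nominal 5 percent.

```python
import numpy as np
from scipy import stats

# My own illustration of the practices described above (not the authors' code):
# there is no true effect, but the analysis uses two "researcher degrees of
# freedom" -- optional stopping (peek, then add subjects if not yet significant)
# and testing several outcomes while reporting only the one that "works."

rng = np.random.default_rng(0)

def one_flexible_study(n_start=10, n_max=50, n_step=5, n_outcomes=3, alpha=0.05):
    """Return True if the study finds p < alpha on any outcome at any peek,
    even though both groups are drawn from the same normal distribution."""
    n = n_start
    treat = rng.normal(size=(n, n_outcomes))
    control = rng.normal(size=(n, n_outcomes))
    while True:
        for k in range(n_outcomes):
            _, p = stats.ttest_ind(treat[:, k], control[:, k])
            if p < alpha:
                return True   # stop and "publish" the lucky outcome
        if n >= n_max:
            return False      # give up: a correctly null result
        # not significant yet, so collect a few more subjects and peek again
        treat = np.vstack([treat, rng.normal(size=(n_step, n_outcomes))])
        control = np.vstack([control, rng.normal(size=(n_step, n_outcomes))])
        n += n_step

runs = 2000
false_positives = sum(one_flexible_study() for _ in range(runs))
print("Nominal false-positive rate: 5%")
print(f"Rate with flexible analysis: {false_positives / runs:.0%}")
```

The height example works the same way in reverse: group averages should still bounce around by roughly the individual standard deviation divided by the square root of the group size, so means that barely vary at all are a statistical red flag.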

The current research environment may provide incentives for researchers to cut corners and end up with false results. Publishing is incredibly important for the career of an academic, and there is little systematic oversight of a researcher’s data. I’ve written before about ways that data could be made more open, but it would take some work to put these ideas into practice.

What I wouldn’t want to happen is for people to read a story like this and conclude that fields like social psychology have nothing to offer because who knows how many of the studies might be flawed. I also wonder about the vigilante edge to this story – a lone social psychologist battling his own field makes for a compelling journalistic piece, but this isn’t how science should work. Simonsohn should be joined by others who are also concerned about these potential issues. Of course, there may not be many incentives to pursue this work, as it might invite criticism from inside and outside the discipline.