An expanding sociological concept: emotional labor

As society changes, sociological concepts can be used in new ways or their definitions can change. As one example, sociologist Arlie Hochschild was asked about the expanding use of “emotional labor”:

Beck: Since the time you coined it, have you noticed the term becoming more popular? How is its use expanding?

Hochschild: It is being used to apply to a wider and wider range of experiences and acts. It’s being used, for example, to refer to the enacting of to-do lists in daily life—pick up the laundry, shop for potatoes, that kind of thing. Which I think is an overextension. It’s also being applied to perfectionism: You’ve absolutely got to do the perfect Christmas holiday. And that can be a confusion and an overextension. I do think that managing anxiety associated with obligatory chores is emotional labor. I would say that. But I don’t think that common examples I could give are necessarily emotional labor. It’s very blurry and over-applied…

We’re trying to have an important conversation but having it in a very hazy way, working with a blunt concept. I think the answer is to be more precise and careful in our ideas and to bring this conversation into families and to the office in a helpful way.

If you have an important conversation using muddy ideas, you cannot accomplish your purpose. You won’t be understood by others. And you won’t be clear to yourself. That’s what’s going on. It’d be like going to a bad therapist—“Well, just try to have a better day tomorrow.” You’re doing the right thing, you’re seeking help, but you’re not getting clarification and communicating clearly. It can defeat the purpose; it can backfire.

Sociologists and other scholars can spend a lot of time developing precise definitions for particular social phenomena. While this may seem like arcane or unnecessary work, it is a critical task: a clear definition often leads to more precise measurement, which in turn leads to more productive use of data.

At the same time, sociologists need to be nimble in updating concepts to changing conditions. A great concept from several decades ago might no longer fit – or it could still be highly relevant. The originator of the concept could adjust the idea (though it is easy to see why this might be difficult to do given the amount of time one invests in the original concept) or the academic community could come to a consensus. Some concepts from the early days of sociology are still regularly discussed and taught while others were abandoned long ago. As in the case above, concepts might be adapted by others in unique ways. This could lead to disagreement or to an acknowledgement that the concept now means something different in broader circles.

It would be interesting to analyze the changing conceptualization of key ideas within sociology. The concept of emotional labor is now 35 years old. Is that a normal lifespan for a concept to retain its original definition?

Teaching how science and research actually work

As a regular instructor of Statistics and Social Research classes, I took note of this paragraph in a recent profile of Bruno Latour:

Latour believes that if scientists were transparent about how science really functions — as a process in which people, politics, institutions, peer review and so forth all play their parts — they would be in a stronger position to convince people of their claims. Climatologists, he says, must recognize that, as nature’s designated representatives, they have always been political actors, and that they are now combatants in a war whose outcome will have planetary ramifications. We would be in a much better situation, he has told scientists, if they stopped pretending that “the others” — the climate-change deniers — “are the ones engaged in politics and that you are engaged ‘only in science.’ ” In certain respects, new efforts like the March for Science, which has sought to underscore the indispensable role that science plays (or ought to play) in policy decisions, and groups like 314 Action, which are supporting the campaigns of scientists and engineers running for public office, represent an important if belated acknowledgment from today’s scientists that they need, as one of the March’s slogans put it, to step out of the lab and into the streets. (To this Latour might add that the lab has never been truly separate from the streets; that it seems to be is merely a result of scientific culture’s attempt to pass itself off as above the fray.)

Textbooks on Statistics and Social Research say there are right ways and wrong ways to do the work. There are steps to follow, guidelines to adhere to, and clear-cut answers on how to do the work right. It is all presented in a logical and consistent format.

There are hints that this ideal may not hold all the time. Certain known factors as well as unknown issues can push a researcher off track a bit. But, to do a good job, to do work that is scientifically interesting and acceptable to the scientific community, you would want to stick to the guidelines as much as possible.

This provides a Weberian ideal type of how science should operate. Or perhaps the opposite ideal type occasionally provides a contrast: the researcher who committed outright fraud, the scholar who stepped way over ethical boundaries.

I see one of my jobs in teaching these classes as showing how these steps work out in practice. You want to follow those guidelines, but here is what can often happen. I regularly talk about the constraints of time and money: researchers often want to answer big questions with ideal data and that does not always happen. You make mistakes, such as in collecting data or analyzing results. You send the manuscript off for review and people offer all sorts of suggestions for how to fix it. The focus of the project and the hypothesis change, perhaps even multiple times. It takes years to see everything through to publication.

On one hand, students often want the black-and-white presentation because it offers clear guidelines: if this happens, do this. On the other hand, presenting the cleaner version is an incomplete education in how research works. Students need to know how to respond when the process does not go as planned and to know that this does not necessarily mean their work is doomed.

Scientific research is neither easy nor always clear-cut. Coming back to the ideal type concept, perhaps we should present it this way: we aspire to certain standards, and particular matters may be non-negotiable, but there are parts of the process, sometimes small and sometimes large, that are more flexible depending on circumstances.

Study suggests cities and farming began more than 40,000 years ago

A recent study suggests cities and farming may have started much earlier than previously thought:

For centuries, archaeologists believed that ancient people couldn’t live in tropical jungles. The environment was simply too harsh and challenging, they thought. As a result, scientists simply didn’t look for clues of ancient civilizations in the tropics. Instead, they turned their attention to the Middle East, where we have ample evidence that hunter-gatherers settled down in farming villages 9,000 years ago during a period dubbed the “Neolithic revolution.” Eventually, these farmers’ offspring built the ziggurats of Mesopotamia and the great pyramids of Egypt. It seemed certain that city life came from these places and spread from there around the world.

But now that story seems increasingly uncertain. In an article published in Nature Plants, Max Planck Institute archaeologist Patrick Roberts and his colleagues explain that cities and farms are far older than we think. Using techniques ranging from genetic sampling of forest ecosystems and isotope analysis of human teeth, to soil analysis and lidar, the researchers have found ample evidence that people at the equator were actively changing the natural world to make it more human-centric.

It all started about 45,000 years ago. At that point, people began burning down vegetation to make room for plant resources and homes. Over the next 35,000 years, the simple practice of burning back forest evolved. People mixed specialized soils for growing plants; they drained swamps for agriculture; they domesticated animals like chickens; and they farmed yam, taro, sweet potato, chili pepper, black pepper, mango, and bananas…

“The tropics demonstrate that where we draw the lines of agriculture and urbanism can be very difficult to determine. Humans were clearly modifying environments and moving even small animals around as early as 20,000 years ago in Melanesia, they were performing the extensive drainage of landscapes at Kuk Swamp to farm yams [and] bananas… From a Middle East/European perspective, there has always been a revolutionary difference (“Neolithic revolution”) between hunter gatherers and farmers, [but] the tropics belie this somewhat.”

Two things strike me:

  1. The article suggests that this finding only emerged now because scholars assumed it wasn’t worth examining the tropics. This happens more often than researchers want to admit: we explore certain phenomena for certain reasons, and this may blind us to other phenomena or explanations. In a perfect world, there would be enough researchers that everything could be covered, and research that rules out explanations or shows the absence of a phenomenon would be valued more highly.
  2. That cities and agriculture took a longer time to develop does not seem too surprising. The shift to more anchored lives – tied to farming and larger population centers – would have been quite a change. Arguably, the world is still going through this process with the pace of urbanization increasing tremendously in the last century and nations and cities desperately trying to catch up.

Now that scientists are looking into this matter, hopefully we get a more complete understanding soon.

Good data is foundational to doing good sociological work

I’ve had conversations in recent months with a few colleagues outside the discipline about debates within sociology over the work of ethnographers like Alice Goffman, Matt Desmond, and Sudhir Venkatesh. It is enlightening to hear how outsiders see the disagreements and this has pushed me to consider more fully how I would explain the issues at hand. What follows is my one paragraph response to what is at stake:

In the end, what separates the work of sociologists from that of perceptive non-academics or journalists? (An aside: many of my favorite journalists often operate like pop sociologists as they try to explain and not just describe social phenomena.) To me, it comes down to data and methods. This is why I enjoy teaching both our Statistics course and our Social Research course: undergraduates rarely come into them excited, but these courses are foundational to who sociologists are. What we want is data that is (1) scientific – reliable and valid – and (2) generalizable – allowing us to see patterns across individuals and cases or settings. I don’t think it is a surprise that the three sociologists under fire above wrote ethnographies, where it is perhaps more difficult to fit the method under a scientific rubric. (I do think it can be done but it doesn’t always appear that way to outsiders or even some sociologists.) Sociology is unique in both its methodological pluralism – we do everything from ethnography to historical analysis to statistical models to lab or natural experiments to mass surveys – and its aim to find causal explanations for phenomena rather than just describe what is happening. Ultimately, if you can’t trust a sociologist’s data, why bother considering their conclusions, or why prioritize their explanations over those of an astute person on the street?

Caveats: I know no data is perfect and sociologists are not in the business of “proving” things but rather we look for patterns. There is also plenty of disagreement within sociology about these issues. In a perfect world, we would have researchers using different methods to examine the same phenomena and develop a more holistic approach. I also don’t mean to exclude the role of theory in my description above; data has to be interpreted. But, if you don’t have good data to start with, the theories are abstractions.

When software – like Excel – hampers scientific research

Statistical software can be very helpful but it does not automatically guarantee correct analyses:

A team of Australian researchers analyzed nearly 3,600 genetics papers published in a number of leading scientific journals — like Nature, Science and PLoS One. As is common practice in the field, these papers all came with supplementary files containing lists of genes used in the research.

The Australian researchers found that roughly 1 in 5 of these papers included errors in their gene lists that were due to Excel automatically converting gene names to things like calendar dates or random numbers…

Genetics isn’t the only field where a life’s work can potentially be undermined by a spreadsheet error. Harvard economists Carmen Reinhart and Kenneth Rogoff famously made an Excel goof — omitting a few rows of data from a calculation — that caused them to drastically overstate the negative GDP impact of high debt burdens. Researchers in other fields occasionally have to issue retractions after finding Excel errors as well…

For the time being, the only fix for the issue is for researchers and journal editors to remain vigilant when working with their data files. Even better, they could abandon Excel completely in favor of programs and languages that were built for statistical research, like R and Python.

Excel has particular autoformatting issues, but all statistical programs have their own ways of handling data. Spreadsheets of data – often formatted with cases in the rows and variables in the columns – do not always read in correctly by default.

Additionally, user error can lead to issues with any sort of statistical software. Different programs may have different quirks, but researchers can do all sorts of weird things, from recoding incorrectly to misreading missing data to misinterpreting results. Data doesn’t analyze itself, and statistical software is just a tool that needs to be used correctly.
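To make this concrete, here is a minimal sketch in Python (using the pandas library) of the kind of defensive data loading described above. The file name and column names are hypothetical, invented only for illustration; the point is to state assumptions about types and missing values explicitly rather than letting the software’s defaults decide.

    import pandas as pd

    # Hypothetical file and column names, used only for illustration.
    # Read every column as text first so nothing is silently converted
    # (the Excel problem: gene names like "SEPT2" turning into dates).
    raw = pd.read_csv(
        "gene_list.csv",
        dtype=str,                     # no automatic type guessing
        na_values=["", "NA", "-999"],  # say explicitly what counts as missing
        keep_default_na=False,         # don't let the library guess other codes
    )

    # Convert only the columns known to be numeric, and fail loudly if they aren't.
    raw["expression_level"] = pd.to_numeric(raw["expression_level"], errors="raise")

    # Quick sanity checks before any analysis: dimensions, missing values, ranges.
    print(raw.shape)
    print(raw.isna().sum())
    print(raw["expression_level"].describe())

None of this guarantees a clean analysis, but it moves the quirks of the software from silent defaults into visible, reviewable choices.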

A number of researchers have in recent years called for open data once a paper is published, and this could help those in an academic field spot mistakes. Of course, the best solution is to double-check data (at the very least) before review and publication. Yet, when you are buried in a quantitative project with dozens of steps of data work and analysis, it can be hard to (1) keep track of everything and (2) closely watch for errors. Perhaps we need independent data review even before publication.

Quick Review: League of Denial

I had a chance this past week to read the book League of Denial and see the PBS documentary by the same name. Some thoughts about the story of the NFL and concussion research (focusing mostly on the book which provides a more detailed narrative):

1. I know some fans are already complaining of “concussion fatigue” but it is hard to think of football the same way after hearing this story. For decades, we have held up players for their toughness and yet it may be ruining their brains.

2. The human story in all of this is quite interesting. This includes some of the former football players who have been driven to the edge by their football-related brain injuries. At the same time, the story among the doctors is also pretty fascinating: the chase for fame, publishing articles, and acquiring brains. Running through the whole book is this question of “who is really doing this research for the right reasons?” Even if the NFL’s research appears to be irrevocably tainted, are the researchers on the other side completely neutral or pure of heart?

3. The whole scientific process is laid out in the book (glossed over more in the documentary)…and I’m not sure how it fares. You have scientists fighting each other to acquire brains. You have peer-reviewed research – supposed to help prevent erroneous findings – that is viewed by many as erroneous from the start. You have scientists fighting for funding, an ongoing battle for all researchers as they must support their work and have their own livelihoods. In the end, consensus seems to be emerging but the book and documentary highlight the messy process it takes to get there.

4. The comparisons of the NFL to Big Tobacco seem compelling: the NFL tried to bury concussion research for a few decades and still doesn’t admit to a long-term impact of concussions on its players. One place where the comparison might break down for the general public (and scientific research could change this in the near future): the worst problems seem to be in long-time NFL players. When exactly does CTE start in the brains of football players? There is some evidence that younger players, in college or high school, might already have CTE, but we need more evidence to be sure. If it is established that kids as young as junior high already have CTE and that CTE derives from regular hits at a young age (not just the big knock-out blows), the comparison to Big Tobacco might be complete.

5. It is not really part of this story, but I was struck again by how relatively little we know about the brain. Concussion research didn’t really take off until the 1990s, even though concussions had affected football players for decades. (One sports area where it had been studied: boxing.) Much of this research is quite new and is a reminder that we humans don’t know as much as we might think.

6. This also provides a big reminder that the NFL is big business. Players seem the most aware of this: they can be cut at any time and an injury outside of their control could end their careers. The league and owners do not come off well here as they try to protect their holdings. The employees – the players – are generally treated badly: paid well if they perform but thrown aside otherwise. This may lead to a “better product” on the field but the human toll is staggering.

7. How people’s opinions regarding concussions change, among both fans and players, will be fascinating to watch. It will take quite a shift among players from the tough-guy image to being willing to consider their futures more carefully. Fans may become more understanding as their favorite players consider what concussions might do to their lives. Will the NFL remain as popular? Hard to say, though I imagine most football fans this past weekend had little problem watching lots of gridiron action Saturday and Sunday.

Sociologists = people who look at “boring data compiled during endless research”

If this is how a good portion of the public views what sociologists do, sociologists may be in trouble:

Anthony Campolo is a sociologist by trade, used to looking at boring data compiled during endless research.

Data collection and analysis may not be glamorous, but a statement like this suggests sociologists may have some PR issues. Data collection and analysis are often time-consuming and even tedious. But there are reasons for working so hard to get data and do research: so sociologists can make substantiated claims about how the social world works. Without rigorous methods, sociologists would just be settling for interpretation, opinion, or anecdotal evidence. For example, we might be left with stories like that of a homeless man in Austin, Texas who was “testing” which religious groups contributed more money to him. Of course, his one case tells us little to nothing.

Perhaps this opening sentence should look something like this: time spent collecting and analyzing data will pay off in stronger arguments.


Social psychology can move forward by pursuing more replication

Here is an argument that a renewed emphasis on replicating studies will help the field of social psychology move beyond some public issues:

Things aren’t quite as bad as they seem, though. Although Nature’s report was headlined “Disputed results a fresh blow for social psychology,” it scarcely noted that there have been some replications of experiments modelled on Dijksterhuis’s phenomenon. His finding could still turn out to be right, if weaker than first thought. More broadly, social priming is just one thread in the very rich fabric of social psychology. The field will survive, even if social priming turns out to have been overrated or an unfortunate detour.

Even if this one particular line of work is under a shroud, it is important not to lose sight of the fact that many of the old standbys from social psychology have been endlessly replicated, like the Milgram effect—the old study of obedience in which subjects turned up electrical shocks (or what they thought were electrical shocks) all the way to four hundred and fifty volts, apparently causing great pain to their subjects, simply because they’d been asked to do it. Milgram himself replicated the experiment numerous times, in many different populations, with groups of differing backgrounds. It is still robust (in the hands of other researchers) nearly fifty years later. And even today, people are still extending that result; just last week I read about a study in which intrepid experimenters asked whether people might administer electric shocks to robots, under similar circumstances. (Answer: yes.)

More importantly, there is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. Throughout my career, and long before it, journals emphasized that new papers have to publish original results; I completely failed to replicate a particular study a few years ago, but at the time didn’t bother to submit it to a journal because I knew few people would be interested. Now, happily, the scientific culture has changed. Since I first mentioned these issues in late December, several leading researchers in psychology have announced major efforts to replicate previous work, and to change the incentives so that scientists can do the right thing without feeling like they are spending time doing something that might not be valued by tenure committees.

The Reproducibility Project, from the Center for Open Science, is now underway, with its first white paper on the psychology and sociology of replication itself. Thanks to Daniel Simons and Bobbie Spellman, the journal Perspectives on Psychological Science is now accepting submissions for a new section of each issue devoted to replicability. The journal Social Psychology is planning a special issue on replications for important results in social psychology, and has already received forty proposals. Other journals in neuroscience and medicine are engaged in similar efforts: my N.Y.U. colleague Todd Gureckis just used Amazon’s Mechanical Turk to replicate a wide range of basic results in cognitive psychology. And just last week, Uri Simonsohn released a paper on coping with the famous file-drawer problem, in which failed studies have historically been underreported.

It would be a good thing if the social sciences were more sure of their findings. Replication could go a long way toward moving the conversation away from headline-grabbing findings based on small Ns and toward more certain results that a broader swath of an academic field can agree with. The goal is to get it right in the long run with evidence about human behaviors and attitudes, not necessarily in the short term.
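As a rough illustration of the small-N point, here is a quick simulation sketch in Python. The true effect size, sample sizes, and number of simulated studies are assumptions chosen purely for illustration, not taken from any real study; the point is how widely individual small studies scatter around a modest true effect.

    import random
    import statistics

    # Toy simulation: a modest true group difference, estimated by many studies.
    random.seed(42)
    TRUE_EFFECT = 0.2  # assumed true difference between groups, in SD units

    def one_study(n):
        """Simulate one two-group study with n subjects per group; return the estimated effect."""
        control = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
        return statistics.mean(treatment) - statistics.mean(control)

    for n in (20, 200):
        estimates = [one_study(n) for _ in range(1000)]
        print(f"n={n:>3} per group: estimates span {min(estimates):+.2f} to {max(estimates):+.2f}, "
              f"sd of estimates = {statistics.stdev(estimates):.2f}")

    # With 20 subjects per group, single studies routinely land far from the true 0.2
    # (sometimes negative, sometimes double the real size); larger samples or many
    # replications are what bring the estimates close to the underlying effect.

A single eye-catching result from a small sample, in other words, says much less than a set of replications that converge.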

Even with a renewed emphasis on replication, there might still be some issues:

1. The ability to publish more replication studies would certainly help but is there enough incentive for researchers, particularly those trying to establish themselves, to pursue replication studies over innovative ideas and areas that gain more attention?

2. What about the number of studies that are conducted with WEIRD populations (Western, educated, industrialized, rich, and democratic), primarily US undergraduate students? If studies continue to be replicated with skewed populations, is much gained?

Debate over priming effect illustrates need for replication

A review of the literature regarding the priming effect highlights the need in science for replication:

At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers. Psychology isn’t the only field with fakers, but it has its share. Plus there’s the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another’s work, instead pressing on toward the next headline-making outcome.

Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the “anchoring effect,” which happens, for instance, when a store lists a competitor’s inflated price next to its own to make you think you’re getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient. A small group of skeptical psychologists—let’s call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.

What have they found? Mostly that they can’t get those results. The studies don’t check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.

While some may find this discouraging, it sounds like the scientific process is being followed. A researcher, Bargh, finds something interesting. Others follow up to see if Bargh was right and to try to extend the idea. Debate ensues once a number of studies have been done. Perhaps there is one stage left to finish off in this process: the research community has to look at the accumulated evidence at some point and decide whether the priming effect exists or not. What does the overall weight of the evidence suggest?

For the replication process to work well, a few things need to happen. Researchers need to be willing to repeat the studies of others as well as their own studies. They need to be willing to report both positive and negative findings, regardless of which side of the debate they are on. Journals need to provide space for both positive and negative findings. This incremental process will take time and may not lead to big headlines, but its steady approach should pay off in the end.
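To illustrate what “looking at the accumulated evidence” can mean in practice, here is a minimal fixed-effect meta-analysis sketch in Python. The effect estimates and standard errors are invented for illustration only and are not drawn from the priming literature; the idea is simply that more precise studies get more weight when the field pools results.

    import math

    # Hypothetical replication results: (effect estimate, standard error) per study.
    studies = [(0.45, 0.20), (0.10, 0.08), (0.05, 0.07), (0.12, 0.09)]

    # Fixed-effect, inverse-variance weighting: more precise studies count for more.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect: {pooled:.2f}, 95% CI roughly +/- {1.96 * pooled_se:.2f}")
    # The striking first result (0.45) gets pulled toward the smaller, more precise
    # replications, which is one way a field's consensus can end up well below the
    # headline-making original finding.

Real meta-analyses are more careful than this (random effects, corrections for publication bias), but even a toy version shows how the overall weight of evidence can differ from any single study.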

Science more about consensus than proven facts

A new book titled The Half-Life of Facts looks at how science is more about consensus than canon. A book review in the Wall Street Journal summarizes the argument:

Knowledge, then, is less a canon than a consensus in a state of constant disruption. Part of the disruption has to do with error and its correction, but another part with simple newness—outright discoveries or new modes of classification and analysis, often enabled by technology. A single chapter in “The Half-Life of Facts” looking at the velocity of knowledge growth starts with the author’s first long computer download—a document containing Plato’s “Republic”—journeys through the rapid rise of the “@” symbol, introduces Moore’s Law describing the growth rate of computing power, and discusses the relevance of Clayton Christensen’s theory of disruptive innovation. Mr. Arbesman illustrates the speed of technological advancement with examples ranging from the magnetic properties of iron—it has become twice as magnetic every five years as purification techniques have improved—to the average distance of daily travel in France, which has exponentially increased over the past two centuries.

To cover so much ground in a scant 200 pages, Mr. Arbesman inevitably sacrifices detail and resolution. And to persuade us that facts change in mathematically predictable ways, he seems to overstate the predictive power of mathematical extrapolation. Still, he does show us convincingly that knowledge changes and that scientific facts are rarely as solid as they appear…

More commonly, however, changes in scientific facts reflect the way that science is done. Mr. Arbesman describes the “Decline Effect”—the tendency of an original scientific publication to present results that seem far more compelling than those of later studies. Such a tendency has been documented in the medical literature over the past decade by John Ioannidis, a researcher at Stanford, in areas as diverse as HIV therapy, angioplasty and stroke treatment. The cause of the decline may well be a potent combination of random chance (generating an excessively impressive result) and publication bias (leading positive results to get preferentially published)…

Science, Mr. Arbesman observes, is a “terribly human endeavor.” Knowledge grows but carries with it uncertainty and error; today’s scientific doctrine may become tomorrow’s cautionary tale. What is to be done? The right response, according to Mr. Arbesman, is to embrace change rather than fight it. “Far better than learning facts is learning how to adapt to changing facts,” he says. “Stop memorizing things . . . memories can be outsourced to the cloud.” In other words: In a world of information flux, it isn’t what you know that counts—it is how efficiently you can refresh.

To add to the conclusion of this review as cited above, it is less about the specific content of the scientific facts and more about the scientific method one uses to arrive at scientific conclusions. There is a reason the scientific process is taught starting in grade school: the process is supposed to help observers get around their own biases and truly observe reality in a reliable and valid way. Of course, whether our biases can actually be eliminated and how we go about observing both matter for our results, but it is the process itself that remains intact.

This also gets to an issue some colleagues and I have noticed where college students talk about “proving” things about the world (natural or social). The language of “proof” implies that data collection and analysis can yield unchanging facts which cannot be disputed. But, as this book points out, this is not how science works. When researchers find something interesting, they report the finding and then others go about retesting it or applying it to new areas. Over time, knowledge accumulates. To put it in the terms of this review, a consensus is eventually reached. But new information can counteract this consensus and the paradigm-building process starts over again (a la Thomas Kuhn in The Structure of Scientific Revolutions). This doesn’t mean science can’t tell us anything, but it does mean that the theories and findings of science can change over time (and here is another interesting discussion point: what exactly distinguishes a law, a theory, and a finding?).

In the end, science requires a longer view. As I’ve noted before, the media tends to play up new scientific findings but we are better served looking at the big picture of scientific findings and waiting for a consensus to emerge.