“So what are the rules of ethnography, and who enforces them?”

A journalist looking into the Goffman affair discusses the ethics of ethnography:

To find out, I called several sociologists and anthropologists who had either done ethnographic research of their own or had thought about the methodology from an outside perspective. Ethnography, they explained, is a way of doing research on groups of people that typically involves an extended immersion in their world. If you’re an ethnographer, they said, standard operating procedure requires you to take whatever steps you need to in order to conceal the identities of everyone in your sample population. Unless you formally agree to fulfill this obligation, I was told, your research proposal will likely be blocked by the institutional review board at your university…

The frustration is not merely a matter of academics resenting oversight out of principle. Many researchers think the uncompromising demand for total privacy has a detrimental effect on the quality of scholarship that comes out of the social sciences—in part because anonymization makes it impossible to fact-check the work…

According to Goffman, her book is no less true than Leovy’s or LeBlanc’s. That’s because, as she sees it, what sociologists set out to capture in their research isn’t truths about specific individuals but general truths that tell us how the world works. In her view, On the Run is a true account because the general picture it paints of what it’s like to live in a poor, overpoliced community in America is accurate.

“Sociology is trying to document and make sense of the major changes afoot in society—that’s long been the goal,” Goffman told me. Her job, she said, as a sociologist who is interested in the conditions of life in poor black urban America, is to identify “things that recur”—to observe systemic realities that are replicated in similar neighborhoods all over the country. “If something only happens once, [sociologists are] less interested in it than if it repeats,” she wrote to me in an email. “Or we’re interested in that one time thing because of what it reveals about what usually happens.” This philosophy goes back to the so-called Chicago school of sociology, Goffman added, which represented an attempt by observers of human behavior to make their work into a science “by finding general patterns in social life, principles that hold across many cases or across time.”…

Goffman herself is the first to admit that she wasn’t treating her “study subjects” as a mere sample population—she was getting to know them as human beings and rendering the conditions of their lives from up close. Her book makes for great reading precisely because it is concerned with specifics—it is vivid, tense, and evocative. At times, it reads less like an academic study of an urban environment and more like a memoir, a personal account of six years living under extraordinary circumstances. Memoirists often take certain liberties in reconstructing their lives, relying on memory more than field notes and privileging compelling narrative over strict adherence to the facts. Indeed, in a memoir I’m publishing next month, there are several moments I chose to present out of order in order to achieve a less convoluted timeline, a fact I flag for the reader in a disclaimer at the front of the book.

Not surprisingly, there is disagreement within the discipline of sociology, as well as across disciplines, about how ethnography could and should work. It is a research method that requires so much time and personal effort that it can be easy to tie to a particular researcher and that researcher’s laudable steps or mistakes. This might miss the forest for the trees; I’ve thought for a while that we need more discussion across ethnographies rather than treating each one as the singular work on its subject. In other words, does Goffman’s data line up with what others have found in studying race, poor neighborhoods, and the criminal justice system? And if there are no comparisons to make with Goffman’s work, why aren’t more researchers wrestling with the same topic?

Additionally, this particular discussion highlights longstanding tensions in sociology: qualitative vs. quantitative data (with one often assumed to be more “fact”); “facts” versus “interpretation”; writing academic texts versus books for more general audiences; emphasizing individual stories (which often appeal to the public) versus the big picture; navigating outside regulations such as IRBs that may or may not be accustomed to ethnographic methods in sociology; and how best to do research that helps disadvantaged communities. Some might see these tensions as more evidence that sociology (and other social sciences) simply can’t tell us much of anything. I would suggest the opposite: the realities of the social world are so complex that these tensions are necessary for gathering and interpreting comprehensive data.

More evidence for having IRBs: sociologist finds that US Army released toxic cadmium into St. Louis air in the 1950s and 1960s

A sociologist in St. Louis says she has uncovered a previously unknown story: the US Army released cadmium into the city’s air in the 1950s and 1960s.

The aerosol was sprayed from blowers installed on rooftops and mounted on vehicles. “The Army claims that they were spraying a quote ‘harmless’ zinc cadmium sulfide,” says Dr. Lisa Martino-Taylor, Professor of Sociology, St. Louis Community College. Yet Martino-Taylor points out, cadmium was a known toxin at the time of the spraying in the mid-’50s and mid-’60s. Worse, she says the aerosol was laced with a fluorescent additive – a suspected radiological compound – produced by U.S. Radium, a company linked to the deaths of workers at a watch factory decades before.

Martino-Taylor says thousands upon thousands of St. Louis residents likely inhaled the spray. “The powder was milled to a very, very fine particulate level. This stuff traveled for up to 40 miles. So really all of the city of St. Louis was ultimately inundated by the stuff.”

Martino-Taylor says she’s obtained documents from multiple federal agencies showing the government concocted an elaborate story to keep the testing secret. “There was a reason this was kept secret. They knew that the people of St. Louis would not tolerate it.” She says part of the deception came from false news reports planted by government agencies. “And they told local officials and media that they were going to test clouds under which to hide the city in the event of aerial attack.” Martino-Taylor says some of the key players in the cover-up were also members of the Manhattan Atomic Bomb Project and involved in other radiological testing across the United States at the time. “This was against all military guidelines of the day, against all ethical guidelines, against all international codes such as the Nuremberg Code.”

She says the spraying occurred between 1953 and 54 and again from 1963 to 65 in areas of North St. Louis and eventually in parts of South St. Louis. Martino-Taylor launched her research after hearing independent reports of cancers among city residents living in those areas at the time.

When students ask why we have Institutional Review Boards (IRBs) and why it may seem that IRBs have researchers jump through a series of hoops, I remind them of stories like this. This experiment took place even after the Nuremberg Code had established the beginnings of modern ethical guidelines for science. It was not so long ago that the government and other organizations undertook secret experiments and violated two of the primary ethical principles sociologists and others hold to: do not harm participants and ensure that they are participating on a voluntary basis.

Another note: it sounds like these experiments were justified in the name of safety. The tests were conducted under the cover story that the city needed to prepare for a possible bombing, presumably by the Soviet Union.

Quick Review: The Immortal Life of Henrietta Lacks

After a few people mentioned a particular New York Times bestseller to me recently, I decided to read The Immortal Life of Henrietta Lacks. While the story itself was interesting, there is a lot of material here that could be used in research methods and ethics classes. A few thoughts about the book:

1. The story is split into two narratives. One is about both the progress science has made with Lacks’s cells and the struggle of her family to understand what has actually been done with those cells. The story of scientific progress is unmistakable: we have come a long way in identifying and curing some diseases in the last sixty years. (This narrative reminded me of the book The Emperor of All Maladies.)

2. The second narrative is about the personal side of scientific research and how patients and relatives interpret what is going on. The author initially finds that the Lacks family knows very little about how their sister’s or mother’s cells have been used. These problems are compounded by race, class, and educational differences between the Lackses and the doctors utilizing Henrietta’s cells. In my opinion, this aspect is understated in the book. At the least, it is a reminder of how inequality can affect health care. But I think this personal narrative is the best part of the book. When I talk in class about the reasons for Institutional Review Boards, informed consent, and ethics, students often wonder how much social science research can really harm people. As this book discusses, there are some moments in relatively recent history that we would agree were atrocious: Nazi experiments, the Tuskegee experiments, experiments in Guatemala, and so on. Going beyond those egregious cases, this book illustrates the kind of mental and social harm that can result from research even if using Henrietta’s cells never physically harmed the Lacks family. I’m thinking about using some sections of this narrative in class to illustrate what could happen; even if new research appears to be safe, we have to make sure we are protecting our research subjects.

3. This book reminded me of the occasionally paternalistic side of the medical field. It seems to suggest this isn’t just an artifact of the 1950s or of racial divisions; doctors appear slow to address the concerns some people might have about the use of human tissue in research. I realize that there is a lot at stake here: the afterword of the book makes clear how difficult it would be to regulate all of this and how regulation might severely limit needed medical research. At the same time, doctors and other medical professionals could go further in explaining the processes and the possible outcomes to patients. Perhaps this is why the MCAT is moving toward including more sociology and psychology.

4. There is room here to contrast the discussions about using body tissue for research with those about online privacy. In both cases, a person is giving up something personal. Are people more disturbed by their tissue being used or by their personal information being collected and sold online?

All in all, this book discusses scientific breakthroughs, how patients can be hurt by the system, and a number of ethical issues that have yet to be resolved.

Dutch social psychologist commits massive science fraud

This story is a few days old but still interesting: a Dutch social psychologist has admitted to using fraudulent data for years.

Social psychologist Diederik Stapel made a name for himself by pushing his field into new territory. His research papers appeared to demonstrate that exposure to litter and graffiti makes people more likely to commit small crimes and that being in a messy environment encourages people to buy into racial stereotypes, among other things.

But these and other unusual findings are likely to be invalidated. An interim report released last week from an investigative committee at his university in the Netherlands concluded that Stapel blatantly faked data for dozens of papers over several years…

More than 150 papers are being investigated. Though the studies found to contain clearly falsified data have not yet been publicly identified, the journal Science last week published an “editorial expression of concern” regarding Stapel’s paper on stereotyping. Of 21 doctoral theses he supervised, 14 were reportedly compromised. The committee recommends a criminal investigation in connection with “the serious harm inflicted on the reputation and career opportunities of young scientists entrusted to Mr. Stapel,” according to the report…

I think the interesting part of the story here is how this was able to go on for so long. It sounds like because Stapel handled much of the data himself, rather than following the typical practice of handing it off to graduate students, he was able to falsify data for longer.

This also raises questions about how much scientific data might be faked or unethically tampered with. The article references a forthcoming study on the topic:

In a study to be published in a forthcoming edition of the journal Psychological Science, Loewenstein, John, and Drazen Prelec of MIT surveyed more than 2,000 psychologists about questionable research practices. They found that a significant number said they had engaged in 10 types of potentially unsavory practices, including selectively reporting studies that ‘worked’ (50%) and outright falsification of data (1.7%).

Publication bias toward positive results, generally meaning papers that support an alternative hypothesis, is also well known: journals tend not to like negative results as much. Of course, both kinds of results are needed for science to advance, as both help confirm and disconfirm arguments and theories. “Outright falsification” is another story…and perhaps even underreported (given social desirability bias and prevailing norms in scientific fields).

Given these occurrences, I wonder if scientists of all kinds would push for more regulation (IRBs, review boards, etc.) or less regulation with scientists policing themselves more (more training in ethics, more routine sharing of data or linking studies to available data so readers could do their own analysis, etc.).

More details of unethical US medical experiments in Guatemala in the 1940s

Research methods courses tend to cover the same classic examples of unethical studies. With more details emerging from a government panel, the US medical experiments undertaken in Guatemala during the 1940s could join this list.

From 1946-48, the U.S. Public Health Service and the Pan American Sanitary Bureau worked with several Guatemalan government agencies to do medical research — paid for by the U.S. government — that involved deliberately exposing people to sexually transmitted diseases…

The research came up with no useful medical information, according to some experts. It was hidden for decades but came to light last year, after a Wellesley College medical historian discovered records among the papers of Dr. John Cutler, who led the experiments…

During that time, other researchers were also using people as human guinea pigs, in some cases infecting them with illnesses. Studies weren’t as regulated then, and the planning-on-the-fly feel of Cutler’s work was not unique, some experts have noted.

But panel members concluded that the Guatemala research was bad even by the standards of the time. They compared the work to a 1943 experiment by Cutler and others in which prison inmates were infected with gonorrhea in Terre Haute, Ind. The inmates were volunteers who were told what was involved in the study and gave their consent. The Guatemalan participants — or many of them — received no such explanations and did not give informed consent, the commission said.

Ugh – a study that gives both researchers and Americans a bad name. It is also a good reminder of why we need IRBs.

While the article suggests President Obama apologized to the Guatemalan president, is anything else going to be done to try to make up for this? I also wonder how this is viewed in Central America: yet more details about the intrusiveness of Americans over the last century?

(See my original post on this here.)

Wired’s “seven creepy experiments” short on social science options

When I first saw the headline for this article in my copy of Wired, I was excited to see what they had dreamed up. Alas, the article “Seven Creepy Experiments That Could Teach Us So Much (If They Weren’t So Wrong)” is mainly about biological experiments. One experiment, splitting up twins and controlling their environments, could be interesting: it would provide insights into the ongoing nature vs. nurture debate.

I would be interested to see how social scientists would respond to a question about what “creepy” or unethical experiments they would like to see happen. In research methods class, we have the classic examples of experiments that should not be replicated: Milgram’s obedience-to-authority experiment, Zimbardo’s Stanford Prison Experiment, and Humphreys’ Tearoom Trade study tend to come up. From more popular sources, we could talk about a setup like the one depicted in The Truman Show or intentionally creating settings like those found in Lord of the Flies or The Hunger Games.

What sociological experiments would produce invaluable information but would never pass an IRB?

The troubles with studying Facebook profiles at Harvard

Many researchers would like to get their hands on social networking site (SNS)/Facebook profile data, but one well-known dataset put together by Harvard researchers has come under fire:

But today the data-sharing venture has collapsed. The Facebook archive is more like plutonium than gold—its contents yanked offline, its future release uncertain, its creators scolded by some scholars for downloading the profiles without students’ knowledge and for failing to protect their privacy. Those students have been identified as Harvard College’s Class of 2009…

The Harvard sociologists argue that the data pulled from students’ Facebook profiles could lead to great scientific benefits, and that substantial efforts have been made to protect the students. Jason Kaufman, the project’s principal investigator and a research fellow at Harvard’s Berkman Center for Internet & Society, points out that data were redacted to minimize the risk of identification. No student seems to have suffered any harm. Mr. Kaufman accuses his critics of acting like “academic paparazzi.”…

The Facebook project began to unravel in 2008, when a privacy scholar at the University of Wisconsin at Milwaukee, Michael Zimmer, showed that the “anonymous” data of Mr. Kaufman and his colleagues could be cracked to identify the source as Harvard undergraduates…

But that boon brings new pitfalls. Researchers must navigate the shifting privacy standards of social networks and their users. And the committees set up to protect research subjects—institutional review boards, or IRB’s—lack experience with Web-based research, Mr. Zimmer says. Most tend to focus on evaluating biomedical studies or traditional, survey-based social science. He has pointed to the Harvard case in urging the federal government to do more to educate IRB’s about Web research.

It sounds like academics, IRBs, and granting agencies still need to figure out acceptable standards for collecting such data. But I’m not surprised that the primary issue that arose had to do with identifying individual users and their profiles, as this is a common problem whenever researchers ask for or collect personal information. Additionally, this dataset intersects with a lot of open concerns about Internet privacy. Perhaps some IRBs could take on the task of leading the way for academics and other researchers who want to get their hands on such data.
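To make the re-identification concern concrete, here is a minimal sketch in Python. The column names and rows are entirely invented for illustration; this is not the Harvard dataset or Zimmer’s actual method, just a toy demonstration of how a few “anonymized” attributes, once combined, can still single out individuals.

```python
# Toy sketch of a re-identification check on an "anonymized" profile table.
# The fields and rows below are invented; they are NOT the actual
# Harvard/Facebook dataset discussed above.
from collections import Counter

profiles = [
    # (class_year, housing, major, home_state) -- names already stripped
    (2009, "Kirkland", "Economics", "CA"),
    (2009, "Kirkland", "Sociology", "NY"),
    (2009, "Leverett", "Economics", "CA"),
    (2009, "Leverett", "Sociology", "TX"),
    (2009, "Kirkland", "Economics", "CA"),
]

# Count how many records share each combination of quasi-identifiers.
combo_counts = Counter(profiles)

# A combination that occurs only once can potentially be matched back to a
# specific person using outside information (a public directory, another
# social network), even though no names appear in the data.
unique_combos = [combo for combo, n in combo_counts.items() if n == 1]
print(f"{len(unique_combos)} of {len(profiles)} records are unique on these fields")
```

Even in this tiny example, most records are unique on just a few fields, which is the basic reason “anonymized” profile data can be cracked.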

It is interesting that these concerns arose because of the growing interest in sharing datasets. The Harvard researchers and IRB allowed the research to take place, so I wonder whether any of this would have come up if the dataset had not been shared publicly, where others could then raise issues.

I understand that the researchers wanted to collect the profiles quietly but why not ask for permission? How many Harvard students would have turned them down? I think most college students are quite aware of what can happen with their profile data and they take care of the issue on the front end by making selections about what they display. The researchers could then offer some protections in terms of anonymity and who would have access to the data. Or what about having interviews with students who would then be asked to load their profile and walk the researcher through what they have put online and why it is there?

Getting better data on how students use laptops in class: spy on them

Professors like to talk about how students use laptops in the classroom. Two recent studies shed some new light on this issue and they are unique in how they obtained the data: they spied on students.

Still, there is one notable consistency that spans the literature on laptops in class: most researchers obtained their data by surveying students and professors.

The authors of two recent studies of laptops and classroom learning decided that relying on student and professor testimony would not do. They decided instead to spy on students.

In one study, a St. John’s University law professor hired research assistants to peek over students’ shoulders from the back of the lecture hall. In the other, a pair of University of Vermont business professors used computer spyware to monitor their students’ browsing activities during lectures.

The authors of both papers acknowledged that their respective studies had plenty of flaws (including possibly understating the extent of non-class use). But they also suggested that neither sweeping bans nor unalloyed permissions reflect the nuances of how laptops affect student behavior in class. And by contrasting data collected through surveys with data obtained through more sophisticated means, the Vermont professors also show why professors should be skeptical of previous studies that rely on self-reporting from students — which is to say, most of them.

While these studies might be useful for dealing with the growing use of laptops in classrooms, discussing the data itself would be interesting. A few questions come to mind:

1. What discussions took place with an IRB? It seems this might have been an issue in the study using spyware on student computers, and it was reflected in the generalizability of the data: just 46% of students agreed to have the spyware on their computers. The other study could also run into issues if students were identifiable. (Just a thought: could a professor insist on spyware being on student computers if the students insisted on having a laptop in class?)

2. These studies get at the disparities between self-reported data and other forms of data collection. I would guess that students underestimate their distracted laptop use on self-reported surveys because they suspect that this is the answer they should give (social desirability bias); a toy numerical sketch of this gap follows this list. But the comparison could also reveal how cognizant computer/Internet users are of how many windows and applications they actually cycle through.

3. Both of these studies are on a relatively small scale: one had 45 students, the other a little more than 1,000, but the latter’s data were “less precise” since they came from TAs sitting in the back monitoring students. Expanding the Vermont study and linking laptop use to outcomes on a larger scale would be even better: move beyond just talking about the classroom experience and look at the impact on learning outcomes. Why doesn’t someone do this on a larger scale and in multiple settings? Would it be too difficult to get past some of the IRB issues?
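To illustrate the kind of gap point 2 describes, here is a toy sketch in Python. Every number is invented; it simply stands in for whatever a survey and a monitoring log might actually show, not for the figures in either study.

```python
# Toy illustration of the gap between self-reported and logged laptop use
# discussed in point 2. All numbers are invented; they are not taken from
# the St. John's or Vermont studies.
self_reported_minutes = [5, 10, 0, 15, 5, 20, 10, 0]    # survey answers
logged_minutes        = [18, 25, 6, 30, 12, 40, 22, 9]  # monitoring-software logs

avg_reported = sum(self_reported_minutes) / len(self_reported_minutes)
avg_logged = sum(logged_minutes) / len(logged_minutes)

# A ratio well above 1 is consistent with systematic underreporting,
# i.e., social desirability bias in the survey responses.
print(f"average self-reported off-task time: {avg_reported:.1f} minutes")
print(f"average logged off-task time:        {avg_logged:.1f} minutes")
print(f"underreporting ratio: {avg_logged / avg_reported:.2f}x")
```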

In looking at the comments about this story, it seems like having better data on this topic would go a long way toward moving the discussion beyond anecdotal evidence.

Another reason for IRBs and ethical guidelines for research

There is a body of well-known research from around the mid-20th century that led to the formation of ethical guidelines for research and the establishment of Institutional Review Boards (IRBs). Here is another study in the news that shows why these guidelines are necessary:

The United States apologized on Friday for an experiment conducted in the 1940s in which U.S. government researchers deliberately infected Guatemalan prison inmates, women and mental patients with syphilis.

In the experiment, aimed at testing the then-new drug penicillin, inmates were infected by prostitutes and later treated with the antibiotic.

“The sexually transmitted disease inoculation study conducted from 1946-1948 in Guatemala was clearly unethical,” Secretary of State Hillary Clinton and Health and Human Services Secretary Kathleen Sebelius said in a statement.

A researcher discovered this case while doing follow-up research on the Tuskegee syphilis experiments.