More details of unethical US medical experiments in Guatemala in the 1940s

Research methods courses tend to cover the same classic examples of unethical studies. With more details emerging from a government panel, the US medical experiments undertaken in Guatemala during the 1940s could join this list.

From 1946-48, the U.S. Public Health Service and the Pan American Sanitary Bureau worked with several Guatemalan government agencies to do medical research — paid for by the U.S. government — that involved deliberately exposing people to sexually transmitted diseases…

The research came up with no useful medical information, according to some experts. It was hidden for decades but came to light last year, after a Wellesley College medical historian discovered records among the papers of Dr. John Cutler, who led the experiments…

During that time, other researchers were also using people as human guinea pigs, in some cases infecting them with illnesses. Studies weren’t as regulated then, and the planning-on-the-fly feel of Cutler’s work was not unique, some experts have noted.

But panel members concluded that the Guatemala research was bad even by the standards of the time. They compared the work to a 1943 experiment by Cutler and others in which prison inmates were infected with gonorrhea in Terre Haute, Ind. The inmates were volunteers who were told what was involved in the study and gave their consent. The Guatemalan participants — or many of them — received no such explanations and did not give informed consent, the commission said.

Ugh – a study that gives both researchers and Americans a bad name. It is also a good reminder of why we need institutional review boards (IRBs).

While the article notes that President Obama apologized to the Guatemalan president, will anything else be done to try to make up for this? I also wonder how this is viewed in Central America: as yet more evidence of the intrusiveness of Americans over the last century?

(See my original post on this here.)

Wired’s “seven creepy experiments” short on social science options

When I first saw the headline for this article in my copy of Wired, I was excited to see what they had dreamed up. Alas, the article “Seven Creepy Experiments That Could Teach Us So Much (If They Weren’t So Wrong)” is mainly about biological experiments. One experiment, splitting up twins and controlling their environments, could be interesting: it would provide insight into the ongoing nature vs. nurture debate.

I would be interested to see how social scientists would respond to a question about what “creepy” or unethical experiments they would like to see happen. In research methods class, we have the classic examples of experiments that should not be replicated. Milgram’s obedience-to-authority experiment, Zimbardo’s Stanford Prison Experiment, and Humphreys’s Tearoom Trade study tend to come up. From more popular sources, we could talk about a setup like the one depicted in The Truman Show or intentionally creating settings like those found in Lord of the Flies or The Hunger Games.

What sociological experiments would produce invaluable information but would never pass an IRB?

Wanting to fit in leads to interesting behavior

A new study in the Journal of Consumer Research finds that people are willing to alter their behavior in order to fit in:

“Social exclusion is a very painful experience, which makes it a strong motivator,” explains Tyler Stillman, a visiting sociology professor at Southern Utah University, who is one of the study’s co-authors.

In one experiment, researchers paired study participants with a partner who left midway through the study. Some of the participants believed their partners left because they didn’t like them — and those people were more easily talked into buying a silly school spirit trinket. In another study, people who felt excluded were more likely to say they were willing to try cocaine. Researchers say their findings could have real-life implications.

Interesting results. If these results are all based on lab experiments, how much more willing would people be to change their behavior to fit in when confronted with real people?

I would also be curious to know whether the study looked at different age groups. If the lab experiments were conducted only with undergraduate students, might the results change if the same experiments were done with older adults?

The value of using multiple coders

A well-known psychologist from Harvard is in trouble for allegedly reporting false data from laboratory studies. How the allegations surfaced is illustrative of why researchers should have more than just one person looking at data. As reported in the Chronicle of Higher Education, here is what happened after the psychologist and a graduate student coded an experiment involving rhesus monkeys:

According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant’s codes, he found that the monkeys didn’t seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.

But Mr. Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it. After several back-and-forths, it became plain that the professor was annoyed.

These discrepancies in the data eventually led to indications that something similar had happened in other experiments.

Having multiple coders is good for several reasons:

1. To catch or deter problems like this one, where someone might be tempted to falsify data (see the sketch of checking coder agreement after this list).

2. To help interpret ambiguous situations.

3. To demonstrate to the broader research community that the results are more than just one person’s conclusions. (The peer review process should also help here, as other researchers look over the work.)
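As an illustration of that third-coder logic (this is my own sketch, not anything described in the Chronicle piece), here is a minimal Python example of Cohen’s kappa, a standard chance-corrected measure of agreement between two coders. The trial-by-trial codes are made up; a low kappa is the kind of signal that would justify bringing in another coder.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)

    # Observed agreement: share of items both coders labeled the same way.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Expected agreement if each coder assigned labels independently
    # at their own observed rates.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(coder_a) | set(coder_b)
    )

    if p_expected == 1:  # degenerate case: everyone used a single label
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical trial-by-trial codes ("looked" vs. "no_look") from two coders.
coder_1 = ["looked", "looked", "no_look", "looked", "no_look", "looked"]
coder_2 = ["looked", "no_look", "no_look", "no_look", "no_look", "looked"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.4: weak enough to warrant a third coder
```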

Experiments in the social sciences

Jim Manzi writes in City Journal about using experiments in the social sciences to help answer questions such as whether the economic stimulus in the United States was successful. Manzi writes:

Another way of putting the problem is that we have no reliable way to measure counterfactuals—that is, to know what would have happened had we not executed some policy—because so many other factors influence the outcome. This seemingly narrow problem is central to our continuing inability to transform social sciences into actual sciences. Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs.

Manzi provides an overview of experimentation and discusses using randomized field trials. An interesting look at what we know – and don’t know – about the social world.
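To make the counterfactual point concrete, here is a toy simulation (my own illustration, not Manzi’s example). Because assignment is a coin flip, the control group approximates what would have happened to the treated group without the program, so the simple difference in means recovers the effect. All numbers are invented.

```python
import random

random.seed(0)

def run_trial(n=10_000, true_effect=2.0):
    """Toy randomized trial: random assignment lets the control group stand in
    for the counterfactual (what would have happened without the program)."""
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        baseline = random.gauss(50, 10)   # unobserved individual differences
        if random.random() < 0.5:         # coin-flip assignment to treatment
            treated_outcomes.append(baseline + true_effect)
        else:
            control_outcomes.append(baseline)
    avg = lambda xs: sum(xs) / len(xs)
    return avg(treated_outcomes) - avg(control_outcomes)

# The difference in means should land close to the true effect of 2.0.
print(round(run_trial(), 2))
```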

Using undergraduates in research experiments

It is common for research experiments to use undergraduates as subjects: they are a convenient and often willing pool for researchers. These studies then draw conclusions about human behavior in general from this narrow group.

In Newsweek, Sharon Begley writes about a new study suggesting that American undergraduates are unlike many people in the world and that it is therefore difficult to generalize from them.

Three psychology researchers have done a systematic search of experiments with subjects other than American undergrads, who made up two thirds of the subjects in all U.S. psych studies. From basics such as visual perception to behaviors and beliefs about fairness, cooperation, and the self, U.S. undergrads are totally unrepresentative, Joseph Henrich of the University of British Columbia and colleagues explain in a paper in Behavioral and Brain Sciences. They share responses with subjects from societies that are also Western, educated, industrialized, rich, and democratic (WEIRD), but not with humanity at large.

One way around such issues is to replicate studies with different groups of people. The article describes some of these attempts, such as the ultimatum game, in which two people have to negotiate a split of $10 (a sketch of the game’s logic appears below). When run with different populations, the studies produce different results, suggesting that what we might think is “human nature” is heavily culturally dependent.
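For readers unfamiliar with the setup, here is a minimal sketch of the ultimatum game’s rules. The responder’s minimum acceptable offer is a made-up stand-in for the fairness norms that, per the article, vary across societies; the dollar amounts are for illustration only.

```python
def ultimatum_round(offer_to_responder, responder_minimum, pot=10):
    """One round of the ultimatum game: the proposer offers a split of the pot;
    if the responder rejects the offer, both players get nothing."""
    if offer_to_responder >= responder_minimum:
        return pot - offer_to_responder, offer_to_responder  # (proposer, responder)
    return 0, 0

# A responder who refuses anything under $3 (one possible fairness norm)
# punishes a lowball offer even at a cost to themselves.
print(ultimatum_round(offer_to_responder=2, responder_minimum=3))  # (0, 0)
print(ultimatum_round(offer_to_responder=5, responder_minimum=3))  # (5, 5)
```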

Another possible outcome of this study is that researchers may continue to use undergraduates but will have to scale back their claims about generalizing to humanity as a whole.

Finally, this study is a reminder that “typical” behavior in one culture is not guaranteed to be the same in another. What we may think of as givens can be quite different for people who do not share our cultural assumptions and practices.