Don’t dismiss social science research just because of one fraudulent scientist

Andrew Ferguson argued in early December that journalists fall too easily for bad academic research. However, he seems to base much of his argument on the actions of one fraudulent scientist:

Lots of cultural writing these days, in books and magazines and newspapers, relies on the so-called Chump Effect. The Effect is defined by its discoverer, me, as the eagerness of laymen and journalists to swallow whole the claims made by social scientists. Entire journalistic enterprises, whole books from cover to cover, would simply collapse into dust if even a smidgen of skepticism were summoned whenever we read that “scientists say” or “a new study finds” or “research shows” or “data suggest.” Most such claims of social science, we would soon find, fall into one of three categories: the trivial, the dubious, or the flatly untrue.

A rather extreme example of this third option emerged last month when an internationally renowned social psychologist, Diederik Stapel of Tilburg University in the Netherlands, was proved to be a fraud. No jokes, please: This social psychologist is a fraud in the literal, perhaps criminal, and not merely figurative, sense. An investigative committee concluded that Stapel had falsified data in at least “several dozen” of the nearly 150 papers he had published in his extremely prolific career…

But it hardly seems to matter, does it? The silliness of social psychology doesn’t lie in its questionable research practices but in the research practices that no one thinks to question. The most common working premise of social-psychology research is far-fetched all by itself: The behavior of a statistically insignificant, self-selected number of college students or high schoolers filling out questionnaires and role-playing in a psych lab can reveal scientifically valid truths about human behavior…

Who cares? The experiments are preposterous. You’d have to be a highly trained social psychologist, or a journalist, to think otherwise. Just for starters, the experiments can never be repeated or their results tested under controlled conditions. The influence of a hundred different variables is impossible to record. The first group of passengers may have little in common with the second group. The groups were too small to yield statistically significant results. The questionnaire is hopelessly imprecise, and so are the measures of racism and homophobia. The notions of “disorder” and “stereotype” are arbitrary—and so on and so on.

Yet the allure of “science” is too strong for our journalists to resist: all those numbers, those equations, those fancy names (say it twice: the Self-Activation Effect), all those experts with Ph.D.’s!

I was afraid that the actions of one scientist might taint the work of many others.

But there are several issues here worth pursuing:

1. The fact that Stapel committed fraud doesn’t mean that all scientists do bad work. Ferguson seems to want to blame other scientists for not knowing Stapel was committing fraud – how exactly would they have known?

2. Ferguson doesn’t seem to like social psychology. He does point to some valid methodological concerns: many studies involve small groups of undergraduates. Drawing large conclusions from these studies is difficult and perhaps even dangerous. But this isn’t all social psychology is about.

2a. More generally, Ferguson could be writing about a lot of disciplines. Medical research also tends to start with small groups before decisions are made about broader trials and treatments. Lots of research, particularly in the social sciences, would be invalidated if Ferguson were completely right. Would Ferguson really suggest that “Most such claims of social science…fall into one of three categories: the trivial, the dubious, or the flatly untrue”?

3. I’ve said it before and I’ll say it again: journalists need more training in order to understand what scientific studies mean. Science doesn’t work in the way that journalists suggest, where there is a steady stream of big findings. Rather, scientists find something and then others try to replicate the findings in different settings with different populations. Science is more like an accumulation of evidence than a lot of sudden lightning strikes of new facts. One small study of undergraduates may not tell us much but dozens of such experiments among different groups might (see the sketch after this list).

4. I can’t help but wonder if there is a political slant to this: what if scientists were reporting positive things about conservative viewpoints? Ferguson complains that measuring things like racism and homophobia is difficult, but this is the nature of studying humans and society. Ferguson just wants to say that it is all “arbitrary” – this is simply throwing up our hands and saying the world is too difficult to comprehend so we might as well quit. If there isn’t a political edge here, perhaps Ferguson is simply anti-science? What science does Ferguson suggest is credible and valid?
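To make point 3 concrete, here is a minimal sketch in Python of how evidence accumulates across replications. All of the numbers (the true effect, the noise level, the sample sizes) are invented for illustration, not drawn from any real study:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical "real" effect size (assumed)
NOISE_SD = 1.0      # person-to-person variability (assumed)
N_PER_STUDY = 30    # a small undergraduate sample
N_STUDIES = 50      # dozens of replications across different groups

def run_study(n):
    """Simulate one small study: the mean outcome of n noisy observations."""
    return statistics.mean(random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n))

single = run_study(N_PER_STUDY)
replications = [run_study(N_PER_STUDY) for _ in range(N_STUDIES)]
pooled = statistics.mean(replications)

print(f"One small study's estimate:    {single:+.3f}")
print(f"Spread across {N_STUDIES} studies:      SD = {statistics.stdev(replications):.3f}")
print(f"Pooled estimate (all studies): {pooled:+.3f}  (true effect is {TRUE_EFFECT:+.3f})")
```

Any single run can land well away from the true value, but the average over many replications settles close to it; that is the sense in which science is an accumulation of evidence rather than lightning strikes.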

In the end, you can’t dismiss all of social psychology because of the actions of one scientist or because journalists are ill-prepared to report on scientific findings.

h/t Instapundit

Why cases of scientific fraud can affect everyone in sociology

The recent case of a Dutch social psychologist admitting to working with fraudulent data can lead some to paint social psychology or the broader discipline of sociology as problematic:

At the Weekly Standard, Andrew Ferguson looks at the “Chump Effect” that prompts reporters to write up dubious studies uncritically:

The silliness of social psychology doesn’t lie in its questionable research practices but in the research practices that no one thinks to question. The most common working premise of social-psychology research is far-fetched all by itself: The behavior of a statistically insignificant, self-selected number of college students or high schoolers filling out questionnaires and role-playing in a psych lab can reveal scientifically valid truths about human behavior.

And when the research reaches beyond the classroom, it becomes sillier still…

Described in this way, it does seem like there could be real journalistic interest in this study – as a human interest story like the three-legged rooster or the world’s largest rubber band collection. It just doesn’t have any value as a study of abstract truths about human behavior. The telling thing is that the dullest part of Stapel’s work – its ideologically motivated and false claims about sociology – got all the attention, while the spectacle of a lunatic digging up paving stones and giving apples to unlucky commuters at a trash-strewn train station was considered normal.

A good moment for reaction from a conservative perspective: two favorite whipping boys, liberal (and fraudulent!) social scientists plus journalists/the media (uncritical and biased!), can be tackled at once.

Seriously, though: the answer here is not to paint entire academic disciplines as problematic because of one case of fraud. Granted, some of these are good questions that social scientists themselves have raised recently: how much about human activity can you discover through relatively small-sample tests of American undergraduates? But good science is not based on one study anyway. An interesting finding should be corroborated by similar studies done in different places at different times with different people. These multiple tests and observations help establish the reliability and validity of findings. This can be a slow process, another issue in a media landscape where new stories are needed all the time.
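For the curious, one standard way this corroboration gets formalized is a fixed-effect meta-analysis, which weights each study’s estimate by the inverse of its variance. Here is a minimal Python sketch; the five study results below are invented numbers, purely for illustration:

```python
import math

# Hypothetical results from five small studies of the same effect:
# (estimated effect, standard error). All numbers are invented.
studies = [(0.31, 0.15), (0.12, 0.20), (0.25, 0.18), (0.08, 0.22), (0.19, 0.16)]

# Fixed-effect meta-analysis: weight each estimate by 1 / SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
# The pooled standard error is smaller than any single study's,
# which is the statistical payoff of corroboration across studies.
```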

This reminds me of Joel Best’s recommendations regarding dealing with statistics. One common option is to simply trust all statistics: numbers look authoritative, often come from experts, and can be overwhelming, so just accepting them is easy. At the other pole is the equally common option of saying that all statistics are simply interpretation and manipulation, so we can’t trust any of them: no numbers are trustworthy. Neither approach is a good option, but both are relatively easy ones. The better route when dealing with scientific studies is to have the basic skills necessary to understand whether they are good studies or not and how the process of science works. In this case, this would be a great time to call for better training among journalists about scientific studies so they can provide better interpretations for the public.

In the end, when one prominent social psychologist admits to massive fraud, the repercussions might be felt by others in the field for quite a while.

Dutch social psychologist commits massive science fraud

This story is a few days old but still interesting: a Dutch social psychologist has admitted to using fraudulent data for years.

Social psychologist Diederik Stapel made a name for himself by pushing his field into new territory. His research papers appeared to demonstrate that exposure to litter and graffiti makes people more likely to commit small crimes and that being in a messy environment encourages people to buy into racial stereotypes, among other things.

But these and other unusual findings are likely to be invalidated. An interim report released last week from an investigative committee at his university in the Netherlands concluded that Stapel blatantly faked data for dozens of papers over several years…

More than 150 papers are being investigated. Though the studies found to contain clearly falsified data have not yet been publicly identified, the journal Science last week published an “editorial expression of concern” regarding Stapel’s paper on stereotyping. Of 21 doctoral theses he supervised, 14 were reportedly compromised. The committee recommends a criminal investigation in connection with “the serious harm inflicted on the reputation and career opportunities of young scientists entrusted to Mr. Stapel,” according to the report…

I think the interesting part of the story here is how this was able to go on for so long. It sounds like because Stapel handled more of the data himself, rather than following the typical practice of handing it off to graduate students, he was able to falsify data for longer.

This also raises questions about how much scientific data might be faked or unethically tampered with. The article references a forthcoming study on the topic:

In a study to be published in a forthcoming edition of the journal Psychological Science, Loewenstein, John, and Drazen Prelec of MIT surveyed more than 2,000 psychologists about questionable research practices. They found that a significant number said they had engaged in 10 types of potentially unsavory practices, including selectively reporting studies that ‘worked’ (50%) and outright falsification of data (1.7%).

Journals are also known to favor positive results, generally meaning papers that support an alternative hypothesis, over negative results. Of course, both kinds of results are needed for science to advance, as both help prove and disprove arguments and theories. “Outright falsification” is another story…and the 1.7% figure is perhaps even underreported (given social desirability bias and prevailing norms in scientific fields).
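As a rough illustration of why favoring positive results distorts the literature even without fraud, here is a small Python simulation in which journals only “publish” studies that clear p < 0.05 in the positive direction. Every parameter (true effect, sample size, number of studies) is an assumption chosen for the toy example:

```python
import math
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.1   # small true effect (assumed)
N = 30              # per-study sample size (assumed)
N_STUDIES = 2000    # many labs running the same small study

published, all_effects = [], []
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    est = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(N)
    all_effects.append(est)
    if est / se > 1.96:   # only "significant and positive" gets published
        published.append(est)

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean effect, all studies:    {statistics.mean(all_effects):.2f}")
print(f"Mean effect, published only: {statistics.mean(published):.2f} "
      f"({len(published)} of {N_STUDIES} studies)")
```

In this toy setup the selective filter alone inflates the apparent effect severalfold, with no falsified data required, which is why negative results matter too.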

Given these occurrences, I wonder whether scientists of all kinds would push for more regulation (IRBs, review boards, etc.) or for less regulation with scientists policing themselves more (more training in ethics, more commonly sharing data or linking studies to available data so readers could do their own analyses, etc.).

Complaint: “they knew and didn’t say”

For those of you wanting to dig into the recently unsealed legal complaint against J.P. Morgan that it turned a blind eye to the Madoff fraud, the Wall Street Journal has posted all 121 pages here (PDF).

I’m working my way through it right now.