This story is a few days old but still interesting: a Dutch social psychologist has admitted to using fraudulent data for years.
Social psychologist Diederik Stapel made a name for himself by pushing his field into new territory. His research papers appeared to demonstrate that exposure to litter and graffiti makes people more likely to commit small crimes and that being in a messy environment encourages people to buy into racial stereotypes, among other things.
But these and other unusual findings are likely to be invalidated. An interim report released last week from an investigative committee at his university in the Netherlands concluded that Stapel blatantly faked data for dozens of papers over several years…
More than 150 papers are being investigated. Though the studies found to contain clearly falsified data have not yet been publicly identified, the journal Science last week published an “editorial expression of concern” regarding Stapel’s paper on stereotyping. Of 21 doctoral theses he supervised, 14 were reportedly compromised. The committee recommends a criminal investigation in connection with “the serious harm inflicted on the reputation and career opportunities of young scientists entrusted to Mr. Stapel,” according to the report…
I think the most interesting part of the story is how this went on for so long. Because Stapel handled much of the data himself, rather than following the typical practice of handing it off to graduate students, he was apparently able to falsify data for longer before anyone noticed.
This also raises questions about how much scientific data more broadly might be faked or unethically tampered with. The article references a forthcoming study on the topic:
In a study to be published in a forthcoming edition of the journal Psychological Science, Loewenstein, John, and Drazen Prelec of MIT surveyed more than 2,000 psychologists about questionable research practices. They found that a significant number said they had engaged in 10 types of potentially unsavory practices, including selectively reporting studies that ‘worked’ (50%) and outright falsification of data (1.7%).
Journals are also known to favor positive results, generally meaning papers that support an alternative hypothesis, over negative results. Of course, science needs both kinds of findings to advance, since both help confirm and refute arguments and theories. “Outright falsification” is another story, and the 1.7% figure may even be an underestimate (given social desirability bias and prevailing norms in scientific fields).
Given these occurrences, I wonder whether scientists of all kinds would push for more regulation (IRBs, review boards, etc.) or for less regulation with scientists policing themselves more (more training in ethics, more routine data sharing or linking studies to available data so readers could do their own analysis, etc.).