After a case of fraud, researchers discuss other means of “misusing research data”

The news that a prominent Dutch social psychologist published fraudulent work has pushed other researchers to talk about other forms of “misusing research data”:

Even before the Stapel case broke, a flurry of articles had begun appearing this fall that pointed to supposed systemic flaws in the way psychologists handle data. But one methodological expert, Eric-Jan Wagenmakers, of the University of Amsterdam, added a sociological twist to the statistical debate: Psychology, he argued in a recent blog post and an interview, has become addicted to surprising, counterintuitive findings that catch the news media’s eye, and that trend is warping the field…

In September, in comments quoted by the statistician Andrew Gelman on his blog, Mr. Wagenmakers wrote: “The field of social psychology has become very competitive, and high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself.”…

To show just how easy it is to get a nonsensical but “statistically significant” result, three scholars, in an article in November’s Psychological Science titled “False-Positive Psychology,” first showed that listening to a children’s song made test subjects feel older. Nothing too controversial there.

Then they “demonstrated” that listening to the Beatles’ “When I’m 64” made the test subjects literally younger, relative to when they listened to a control song. Crucially, the study followed all the rules for reporting on an experimental study. What the researchers omitted, as they went on to explain in the rest of the paper, was just how many variables they poked and prodded before sheer chance threw up a headline-making result—a clearly false headline-making result.
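Just how quickly “poking and prodding” many variables produces a chance hit can be illustrated with a quick simulation (a sketch of the general multiple-comparisons problem, not the authors’ actual procedure; the count of 20 outcome measures is made up for illustration): if a researcher checks 20 measures at the conventional p < .05 threshold, pure chance hands them at least one “significant” result roughly two-thirds of the time.

```python
import random

random.seed(0)

ALPHA = 0.05          # conventional significance threshold
N_MEASURES = 20       # hypothetical number of outcome variables tried
N_EXPERIMENTS = 10_000

# Under the null hypothesis, each test comes out "significant" with
# probability ALPHA. Count how often at least one of the 20 measures
# crosses the threshold in a single experiment.
false_positive_runs = sum(
    any(random.random() < ALPHA for _ in range(N_MEASURES))
    for _ in range(N_EXPERIMENTS)
)

rate = false_positive_runs / N_EXPERIMENTS
print(f"Chance of at least one 'significant' result: {rate:.2f}")
# Analytically: 1 - (1 - 0.05)**20 ≈ 0.64
```

Reporting only the one measure that worked, while following all the formal rules, is exactly how a false but headline-making finding gets published.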

If the pressure to publish is great (and it certainly is), then there have to be some countermeasures to limit unethical research practices. Here are a few ideas:

1. Give more people access to the data. That way, others could check published findings against the raw numbers. But if fraudulent studies are already published, perhaps this check comes too late.

2. Have more people oversee the project along the way. This doesn’t necessarily require a bureaucratic board, but having only one researcher look at the data and run the analysis (as in the Stapel case) gives an individual more opportunity to twist the data. This could be an argument for collaborative data analysis.

3. Could there be more space within disciplines and journals to discuss the research project? Papers tend to present very formal hypotheses, but a lot of messy work goes into developing them, and there is very little room to discuss how the researchers arrived at them.

4. Decrease the value of media attention. I don’t know how to deal with this one. What researcher doesn’t want to have more people read their research?

5. Have a better-educated media so that they don’t report so many inconsequential yet shocking studies. We need more people like Malcolm Gladwell who look at a broad swath of research and summarize it, rather than dozens of reports grabbing onto small studies. This is the classic issue with nutrition reporting: eggs are great! A new study says they are terrible! A third says they are great for pregnant women and no one else! We rarely get overviews of this research or real questions about its value. We just get: “a study proved this oddity today…”

6. Resist data mining. Atheoretical correlations don’t help much. Let theories guide statistical models.

7. Have more space to publish negative findings. This would help researchers feel less pressure to come up with positive results.

Sir James Dyson discusses the value of failure

Sir James Dyson, noted inventor of the Dyson vacuum cleaners, discusses how failure is necessary on the path to innovation:

It’s time to redefine the meaning of the word “failure.” On the road to invention, failures are just problems that have yet to be solved…

From cardboard and duct tape to ABS polycarbonate, it took 5,127 prototypes and 15 years to get it right. And, even then there was more work to be done. My first vacuum, DC01, went to market in 1993. We’re up to DC35 now, having improved with each iteration. More efficiency, faster motors, new materials…

The ability to learn from mistakes — trial and error — is a valuable skill we learn early on. Recent studies show that encouraging children to learn new things on their own fosters creativity. Direct instruction leads to children being less curious and less likely to discover new things.

Unfortunately, society doesn’t always look kindly on failure. Punishing mistakes doesn’t lead to better solutions or faster results. It stifles invention.

If the American Dream is now about attaining perfection, where is there room for failure? Dyson goes on to talk about how education might be changed to make more room for failure, but getting the broader society to be more accepting of failure is another matter.

I wonder how much this idea about innovation and failure could be tied to issues regarding publishing “negative findings” in academia.