As part of its “Cheat Code to Life,” Wired includes four tips to help researchers obtain positive results in their studies:
Many a budding scientist has found themself one awesome result away from tenure, unable to achieve that all-important statistical significance. Don’t let such setbacks deter you from a life of discovery. In a recent paper, Joseph Simmons, Leif Nelson, and Uri Simonsohn describe “p-hacking”—common tricks that researchers use to fish for positive results. Just promise us you’ll be more responsible when you’re a full professor. —MATTHEW HUTSON
Create Options. Let’s say you want to prove that listening to dubstep boosts IQ (aka the Skrillex effect). The key is to avoid predefining what exactly the study measures—then bury the failed attempts. So use two different IQ tests; if only one shows a pattern, toss the other.
Expand the Pool. Test 20 dubstep subjects and 20 control subjects. If the findings reach significance, publish. If not, run 10 more subjects in each group and give the stats another whirl. Those extra data points might randomly support the hypothesis.
Get Inessential. Measure an extraneous variable like gender. If there’s no pattern in the group at large, look for one in just men or women.
Run Three Groups. Have some people listen for zero hours, some for one, and some for 10. Now test for differences between groups A and B, B and C, and A and C. If all comparisons show significance, great. If only one does, then forget about the existence of the p-value poopers.
Wait for the NSF Grant. Use all four of these fudges and, even if your theory is flat wrong, you’re more likely than not to confirm it—with the necessary 95 percent confidence.
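The cumulative effect is the striking part. Here is a minimal simulation sketch of the four fudges (my own illustration in Python, not code from Wired or from Simmons, Nelson, and Simonsohn; the sample sizes, noise levels, and function names are all assumptions chosen for convenience). It runs every trick on pure noise, where no effect exists by construction, and counts how often something still clears p < .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def draw(n):
    # One group: a latent "IQ" measured by two noisy tests, plus gender.
    latent = rng.normal(size=n)
    test1 = latent + rng.normal(scale=0.5, size=n)
    test2 = latent + rng.normal(scale=0.5, size=n)
    gender = rng.integers(0, 2, size=n)
    return test1, test2, gender

def any_significant(groups, alpha=0.05):
    # Run every comparison the four tricks put on the table.
    pvals = []
    for i, j in [(0, 1), (1, 2), (0, 2)]:        # Run Three Groups: all pairs
        for dv in (0, 1):                         # Create Options: two IQ tests
            a, b = groups[i][dv], groups[j][dv]
            pvals.append(stats.ttest_ind(a, b).pvalue)
            for g in (0, 1):                      # Get Inessential: gender splits
                a_g = a[groups[i][2] == g]
                b_g = b[groups[j][2] == g]
                if len(a_g) > 1 and len(b_g) > 1:
                    pvals.append(stats.ttest_ind(a_g, b_g).pvalue)
    return min(pvals) < alpha

def one_hacked_study(n_start=20, n_add=10):
    # Three "dubstep dosage" groups drawn from the SAME null distribution.
    groups = [draw(n_start) for _ in range(3)]
    if any_significant(groups):
        return True
    # Expand the Pool: add 10 subjects per group and give the stats another whirl.
    groups = [tuple(np.concatenate([old, new]) for old, new in zip(g, draw(n_add)))
              for g in groups]
    return any_significant(groups)

runs = 2000
hits = sum(one_hacked_study() for _ in range(runs))
print(f"'significant' results on pure noise: {hits / runs:.0%}")
```

Even on noise, the share of “significant” studies comes out many times the nominal 5 percent; Simmons and colleagues report that combining these tricks pushes the false-positive rate above 60 percent, which is the “more likely than not” in the last tip.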
The Wired list might be summed up as “things that are done but would never be explicitly taught in a research methods course.” Several quick thoughts:
1. This is a reminder of how important 95% significance is in the world of science. My students often ask why the cutoff is 95%: why do we accept 5% error and not 10% (which people sometimes “get away with” in some studies) or 1% (wouldn’t we be more sure of our results?). A quick simulation after these thoughts shows what each threshold buys you.
2. Even if significance is important and scientists hack their way to more positive results, they can still have humility about their findings. A 5% significance threshold means that even when no real effect exists, about 1 in 20 tests will come out significant by chance. Problems arise when findings are countered or disproven, but we should expect this to happen occasionally. Additionally, results can be statistically significant but have little substantive significance (a second demonstration below makes this concrete). Altogether, having a significant finding is not the end of the process for the scientist: it still needs to be interpreted and then tested again.
3. This is also tied to the pressure to find positive results. In other words, publishing an academic study is more likely if you disprove the null hypothesis. At the same time, failing to disprove the null hypothesis is still useful knowledge, and such studies should also be published. Think of the example of Edison’s quest to find the proper material for a lightbulb filament. The story is often told in a way that suggests he went through a lot of failed attempts before finally finding the right answer. But this is often how science works: you go through a lot of ideas and data before the right answer emerges, and the failures along the way are knowledge too.
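On the first point, here is a small self-contained sketch (my own illustration, not anything from the original post; the group size of 20 and the 10,000 runs are arbitrary choices): many two-group comparisons where the null hypothesis is true by construction, scored at three different cutoffs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
runs = 10_000
# Two groups of 20 drawn from the same distribution: any "effect" is pure noise.
pvals = np.array([stats.ttest_ind(rng.normal(size=20), rng.normal(size=20)).pvalue
                  for _ in range(runs)])
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f}: {np.mean(pvals < alpha):.1%} false positives")
# Prints roughly 10%, 5%, and 1%: the false-positive rate on true nulls is
# simply whatever cutoff you choose.
```

Loosen the cutoff to 10% and you double the false alarms; tighten it to 1% and you are surer of what you keep but will miss more real effects. The 5% convention is a compromise between those two costs.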
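On the second point, the gap between statistical and substantive significance is easy to demonstrate (again my own made-up numbers: the IQ-like scale and the 0.2-point bump are assumptions for illustration). With a big enough sample, a trivial difference sails past p < .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000
control = rng.normal(loc=100.0, scale=15, size=n)   # IQ-like scale
treated = rng.normal(loc=100.2, scale=15, size=n)   # a built-in 0.2-point bump

result = stats.ttest_ind(treated, control)
d = (treated.mean() - control.mean()) / 15          # effect size (Cohen's d)
print(f"p = {result.pvalue:.5f}, effect size d = {d:.3f}")
# p is typically far below .05, yet d is about 0.013: statistically
# significant, substantively negligible.
```

The p-value only says the bump is unlikely to be exactly zero; it says nothing about whether a fifth of an IQ point matters to anyone.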