Here is an argument that a renewed emphasis on replicating studies will help the field of social psychology move past its recent public troubles:
Things aren’t quite as bad as they seem, though. Although Nature’s report was headlined “Disputed results a fresh blow for social psychology,” it scarcely noted that there have been some successful replications of experiments modelled on Dijksterhuis’s phenomenon. His finding could still turn out to be right, if weaker than first thought. More broadly, social priming is just one thread in the very rich fabric of social psychology. The field will survive, even if social priming turns out to have been overrated or an unfortunate detour.
Even if this one particular line of work is under a shroud of doubt, it is important not to lose sight of the fact that many of the old standbys of social psychology have been endlessly replicated, like the Milgram effect—the classic study of obedience in which subjects turned up electrical shocks (or what they thought were electrical shocks) all the way to four hundred and fifty volts, apparently causing great pain to another person, simply because they’d been asked to do it. Milgram himself replicated the experiment numerous times, in many different populations, with groups of differing backgrounds. It is still robust (in the hands of other researchers) nearly fifty years later. And even today, people are still extending that result; just last week I read about a study in which intrepid experimenters asked whether people might administer electric shocks to robots, under similar circumstances. (Answer: yes.)
More importantly, something positive has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. Throughout my career, and long before it, journals emphasized that papers must report original results; I completely failed to replicate a particular study a few years ago, but at the time didn’t bother to submit it to a journal because I knew few people would be interested. Now, happily, the scientific culture has changed. Since I first mentioned these issues in late December, several leading researchers in psychology have announced major efforts to replicate previous work, and to change the incentives so that scientists can do the right thing without feeling that they are spending time on something that might not be valued by tenure committees.
The Reproducibility Project, from the Center for Open Science, is now underway, with its first white paper on the psychology and sociology of replication itself. Thanks to Daniel Simons and Bobbie Spellman, the journal Perspectives on Psychological Science is now accepting submissions for a new section of each issue devoted to replicability. The journal Social Psychology is planning a special issue on replications of important results in social psychology, and has already received forty proposals. Other journals in neuroscience and medicine are engaged in similar efforts: my N.Y.U. colleague Todd Gureckis just used Amazon’s Mechanical Turk to replicate a wide range of basic results in cognitive psychology. And just last week, Uri Simonsohn released a paper on coping with the famous file-drawer problem, in which failed studies have historically gone unreported.
It would be a good thing if the social sciences were more certain of their findings. Replication could go a long way toward moving the conversation away from headline-grabbing findings based on small samples and toward more solid results that a broader swath of an academic field can agree on. The goal is to get the evidence about human behaviors and attitudes right in the long run, not necessarily in the short term.
Even with a renewed emphasis on replication, there might still be some issues:
1. The ability to publish more replication studies would certainly help, but is there enough incentive for researchers, particularly those trying to establish themselves, to pursue replication studies over innovative ideas and areas that attract more attention?
2. What about the number of studies that are conducted with WEIRD (Western, educated, industrialized, rich, and democratic) populations, primarily U.S. undergraduate students? If studies continue to be replicated with skewed populations, is much gained?