Lancet editor suggests “much of the scientific literature, perhaps half, may simply be untrue”

The editor of The Lancet quickly summarizes several major issues regarding scientific studies:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.

He goes on to suggest some solutions, such as different incentives, data review before publication, and a higher bar for statistical significance. Are there also some basic questions here about methodology, such as whether randomized controlled experiments are the best way to go, particularly if the N is small? Dr. John Ioannidis has argued for more rigorous methods in medical research, suggesting trials need to compare a new treatment to an existing treatment rather than a new option to a placebo. Perhaps we also need more meta-studies that look across many studies to summarize findings rather than relying on a single study or a small group of studies to validate a finding.

At the least, this is a public relations issue for the natural and social sciences. The public tends to trust science, but a growing number of retractions of studies that were announced with breathless pronouncements of new findings will not go over well. Beyond the optics, this gets at a basic question for scientists: are we/they truly interested in finding reality? What is this scientific work intended to do anyway?
