What counts as “good science,” happiness studies edition

Looking across studies that examined factors leading to happiness, several researchers concluded that only two of the five factors commonly discussed stood up to scrutiny.


Even these studies failed to confirm that three of the five activities the researchers analyzed reliably made people happy. Studies attempting to establish that spending time in nature, meditating, and exercising boosted happiness produced either weak or inconclusive results.

“The evidence just melts away when you actually look at it closely,” Dunn said.

There was better evidence for the other two activities. The team found "reasonably solid evidence" that expressing gratitude made people happy, and "solid evidence" that talking to strangers improved mood.

How might researchers improve their studies and confidence in the results?

The new findings reflect a reform movement under way in psychology and other scientific disciplines, with scientists setting higher standards for study design to ensure the validity of results.

To that end, scientists are including more subjects in their studies, because small sample sizes can miss a real signal or indicate a trend where there isn't one. They are openly sharing data so others can check or replicate their analyses. And they are committing to their hypotheses before running a study, a practice known as "pre-registering."

These seem like helpful steps for quantitative research. Four solutions are suggested above (one only implicitly):

  1. Analyze dozens of previous studies. When researchers study similar questions, are their findings consistent? Do they use similar methods? Is there consensus across a field or across disciplines? This summary work is useful.
  2. Avoid small samples. This reduces the risk of a chance finding among a small group of participants.
  3. Share data so that others can examine procedures and results.
  4. Test hypotheses set at the beginning of a study rather than fitting hypotheses to whatever findings turn out statistically significant.
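The small-samples point (2) is easy to see in a quick simulation. The sketch below compares two groups with no real difference between them; the sample sizes, score scale, and "large effect" threshold are illustrative assumptions, not numbers from the studies discussed:

```python
import random
import statistics

random.seed(0)

def observed_effect(n):
    """Simulate one study comparing two groups of size n drawn from the
    SAME distribution (no true effect) and return the observed
    difference in mean 'happiness' scores."""
    control = [random.gauss(5.0, 2.0) for _ in range(n)]
    treatment = [random.gauss(5.0, 2.0) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

# Run 1,000 simulated null studies at each sample size and count how
# often pure chance produces an effect larger than 0.5 points.
for n in (10, 200):
    effects = [observed_effect(n) for _ in range(1000)]
    big = sum(abs(e) > 0.5 for e in effects) / len(effects)
    print(f"n={n:>3}: {big:.0%} of null studies show an effect > 0.5")
```

With ten participants per group, a sizable "effect" appears by chance in a large share of studies; with two hundred per group, it almost never does. That is the asymmetry the reformers are trying to eliminate.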

One thing I have not seen in discussions of these approaches to better science: how much better will results be after following these steps? How much can a field improve its confidence in results? 5-10%? 25%? More?
