We need more research to confirm or dispute the first study to claim a causal connection between social media use and depression and loneliness

A new psychology study argues that reducing time spent on social media leads to less depression:

For the study, Hunt and her team studied 143 undergraduates at the University of Pennsylvania over a number of weeks. They tested their mood and sense of well-being using seven different established scales. Half of the participants carried on using social media sites as normal. (Facebook, Instagram and Snapchat did not respond to request for comment.)

The other half were restricted to ten minutes per day for each of the three sites studied: Facebook, Instagram and Snapchat, the most popular sites for the age group. (Use was tracked through regular screen shots from the participants’ phones showing battery data.)

Net result: Those who cut back on social media use saw “clinically significant” falls in depression and in loneliness over the course of the study. Their rates of both measures fell sharply, while those among the so-called “control” group, who did not change their behavior, saw no improvement.
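To make the design concrete, here is a minimal sketch of how one might analyze an experiment like this, comparing change in a depression scale between the limited group and the control group. This is my own illustration with simulated numbers, not the authors' analysis; the group sizes, effect sizes, and the simple t-test are all assumptions.

```python
# Minimal sketch of analyzing a two-arm experiment like the one described above.
# Simulated numbers for illustration only; not the study's data or analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm = 70  # roughly half of 143 participants (assumption)

# Change in a depression scale score (post minus pre); negative means improvement.
control_change = rng.normal(loc=0.0, scale=4.0, size=n_per_arm)   # no average change
limited_change = rng.normal(loc=-3.0, scale=4.0, size=n_per_arm)  # hypothetical improvement

t_stat, p_value = stats.ttest_ind(limited_change, control_change)
print(f"mean change, limited group: {limited_change.mean():.2f}")
print(f"mean change, control group: {control_change.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```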

This isn’t the first study to find a link between social media use, on the one hand, and depression and loneliness on the other. But previous studies have mainly shown a correlation, and the researchers argue that this one demonstrates a “causal connection.”

I’m guessing this study will get a good amount of attention because of this claim. Here is how this should work in the coming months and years:

  1. Other researchers should work to replicate this study. Do the findings hold with undergraduate students elsewhere in similar conditions?
  2. Other researchers should tweak the conditions of the study in a variety of ways. Move beyond undergraduates to both younger and older participants. (Most social media research involves relatively young people.) Change the national context. Expand the sample size. Lengthen the study beyond three weeks to look at longer-term effects of social media use.
  3. All the researchers involved need time and discussion to reach a consensus about all of the work conducted under #1 and #2 above. This could come relatively soon if most of the studies agree with the conclusions or it could take quite a while if results differ.

Altogether, once a claim like this has empirical backing, other researchers should follow up and see whether it is correct. In the meantime, it will be hard for the public, the companies involved, and policymakers to know what to do as studies build upon each other.

Reassessing Mead versus Freeman in their studies of Samoa

A new look at anthropologist Derek Freeman’s critique of Margaret Mead’s famous study of sex in Samoa suggests Freeman may have manipulated data:

But Shankman’s new analysis — following his excellent 2009 book, The Trashing of Margaret Mead: Anatomy of an Anthropological Controversy — shows that Freeman manipulated “data” in ways so egregious that it might be time for Freeman’s publishers to issue formal retractions…Now Shankman has delved even deeper into the sources; in 2011, he obtained from Freeman’s archives the first key interview with one of the supposed “joshing” informants, a woman named Fa’apua’a. This interview, conducted in 1987, allegedly bolstered Freeman’s contention that Mead had based her “erroneous” portrait of Samoan sexuality on what Fa’apua’a and her friend Fofoa had jokingly told Mead back in the 1920s.

But Shankman shows that the interview was conducted and then represented in deeply problematic ways. The 1987 interview with Fa’apua’a was arranged and carried out by Fofoa’s son, a Samoan Christian of high rank who was convinced that Mead had besmirched the reputation of Samoans by portraying his mother, her friend Fa’apua’a, and other Samoans as sexually licentious…

But why did Freeman get it so wrong? Shankman’s book suggests Freeman was obsessed with Mead and with what he saw as her dangerous stories about the flexibility of human cultures. He saw himself as a brave “heretic,” a man saving true science from Mead’s mere ideology.

I wonder if Shankman’s work is the start of a solution to this debate. If two anthropologists disagree so much, wouldn’t bringing in other anthropologists to review the data or conduct their own fieldwork be one way to adjudicate who got it more right? The time that has passed complicates matters, but researchers beyond Shankman could review the notes, and comparisons could be made to other, similar societies that might offer insights.

More broadly, I wonder how much incentive there is for researchers to follow up on famous studies. Freeman made a name for himself by arguing against Mead’s famous findings, but what if he had gone to the trouble and then found that Mead was right? He likely would not have gotten very far.


Debate over priming effect illustrates need for replication

A review of the literature regarding the priming effect highlights the need in science for replication:

At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers. Psychology isn’t the only field with fakers, but it has its share. Plus there’s the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another’s work, instead pressing on toward the next headline-making outcome.

Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the “anchoring effect,” which happens, for instance, when a store lists a competitor’s inflated price next to its own to make you think you’re getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient. A small group of skeptical psychologists—let’s call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.

What have they found? Mostly that they can’t get those results. The studies don’t check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.

While some may find this discouraging, it sounds like the scientific process is being followed. A researcher, Bargh, finds something interesting. Others follow up to see if Bargh was right and to try to extend the idea. Debate ensues once a number of studies have been done. Perhaps there is one stage left to finish off in this process: the research community has to look at the accumulated evidence at some point and decide whether the priming effect exists or not. What does the overall weight of the evidence suggest?

For the replication process to work well, a few things need to happen. Researchers need to be willing to repeat others’ studies as well as their own. They need to be willing to report both positive and negative findings, regardless of which side of the debate they are on. Journals need to provide space for both kinds of results. This incremental process will take time and may not lead to big headlines, but its steady approach should pay off in the end.
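The file-drawer problem mentioned in the excerpt is easy to see in a quick simulation. This is my own illustration, not from the article: the true effect below is exactly zero, yet publishing only the “significant” positive results makes the literature look like there is a real effect.

```python
# Illustration of the file-drawer problem: the true effect is zero, but if only
# "significant" positive results are written up, the published record looks real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
published_effects = []

for _ in range(1000):                          # 1,000 hypothetical studies
    treatment = rng.normal(0.0, 1.0, size=30)  # true effect is exactly zero
    control = rng.normal(0.0, 1.0, size=30)
    t_stat, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05 and t_stat > 0:          # only positive, significant results published
        published_effects.append(treatment.mean() - control.mean())

print(f"studies 'published': {len(published_effects)} of 1000")
print(f"average published effect: {np.mean(published_effects):.2f} (true effect is 0)")
```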

A company offers to replicate research study findings

A company formed in 2011 is offering a new way to validate the findings of research studies:

A year-old Palo Alto, California, company, Science Exchange, announced on Tuesday its “Reproducibility Initiative,” aimed at improving the trustworthiness of published papers. Scientists who want to validate their findings will be able to apply to the initiative, which will choose a lab to redo the study and determine whether the results match.

The project sprang from the growing realization that the scientific literature – from social psychology to basic cancer biology – is riddled with false findings and erroneous conclusions, raising questions about whether such studies can be trusted. Not only are erroneous studies a waste of money, often taxpayers’, but they also can cause companies to misspend time and resources as they try to invent drugs based on false discoveries.

This addresses a larger concern about how many research studies find their results by chance alone:

Typically, scientists must show that results have only a 5 percent chance of having occurred randomly. By that measure, one in 20 studies will make a claim about reality that actually occurred by chance alone, said John Ioannidis of Stanford University, who has long criticized the profusion of false results.

With some 1.5 million scientific studies published each year, by chance alone some 75,000 are probably wrong.
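The 75,000 figure is simply the 5 percent significance threshold applied to 1.5 million studies per year. A few lines make the arithmetic explicit; like the quote, this back-of-the-envelope version assumes the tested effects are not real, so the expected number of false positives is just alpha times the number of studies.

```python
# Back-of-the-envelope arithmetic behind the quoted figure: a 5 percent
# significance threshold applied to 1.5 million studies per year.
studies_per_year = 1_500_000
alpha = 0.05  # conventional significance threshold

expected_false_positives = studies_per_year * alpha
print(f"{expected_false_positives:,.0f} studies expected to be 'significant' by chance alone")
# -> 75,000, matching the excerpt (assuming none of the tested effects are real)
```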

I’m intrigued by the idea of having an independent company assess research results. This could work in conjunction with other methods of verification:

1. The original researchers could run multiple studies. This works better with smaller studies, but it becomes difficult when the N is larger and more resources are needed.

2. Researchers could also make their data available when they publish their paper. This would allow other researchers to take a look and see whether things were done correctly and whether the results can be replicated (see the sketch after this list).

3. The larger scientific community should endeavor to replicate studies. This is the way science is supposed to work: if someone finds something new, other researchers should adopt a similar protocol and test it with similar and new populations. Unfortunately, replicating studies is not seen as being very glamorous and it tends not to receive the same kind of press attention.
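To illustrate point #2 above, here is a minimal sketch of what checking a published result against shared data might look like. The file name, column names, and the reported correlation are hypothetical placeholders, not taken from any actual paper.

```python
# Hypothetical re-check of a published correlation using data the authors shared.
# File name, column names, and the reported value are placeholders for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("shared_study_data.csv")  # data deposited alongside the paper (hypothetical)
reported_r = 0.32                          # correlation reported in the article (hypothetical)

r, p = stats.pearsonr(df["social_media_minutes"], df["depression_score"])
print(f"recomputed r = {r:.2f} (paper reported r = {reported_r:.2f}), p = {p:.4f}")

if abs(r - reported_r) > 0.02:
    print("Recomputed value differs from the published one; worth a closer look.")
```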

The primary focus of this article seems to be on medical research. Perhaps this is because it can affect the lives of many and involves big money. But it would be interesting to apply this to more social science studies as well.

Scientists call for more rules and regulations about data

There are a lot of academics and researchers collecting data on a variety of topics. Some scientists argue that we need more regulations about data so that researchers can access and work with data collected by others:

In 10 new articles, also published in Science, researchers in fields as diverse as paleontology and neuroscience say the lack of data libraries, insufficient support from federal research agencies, and the lack of academic credit for sharing data sets have created a situation in which money is wasted and information that could reveal better cancer treatments or the causes of climate change goes by the wayside…

A big problem is the many forms of data and the difficulty of comparing them. In neuroscience, for instance, researchers collect data on scales of time that range from nanoseconds, if they are looking at rates of neuron firing, to years, if they are looking at developmental changes. There are also differences in the kind of data that come from optical microscopes and those that come from electron microscopes, and data on a cellular scale and data from a whole organism…

He added that he was limited by how data are published. “When I see a figure in a paper, it’s just the tip of the iceberg to me. I want to see it in a different form in order to do a different kind of analysis.” But the data are not available in a public, searchable format.

Shared data libraries sound like they could be useful. Based on experience, however, even when data is made available, it still takes a good amount of time to download it, read the documentation, and reshape it into a form where one can start to replicate the findings from a journal article.
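For example, a shared file often arrives in a different shape than the one used in the published tables, and even a simple wide-to-long pivot is a step someone has to work out from the documentation. The column names and values below are invented for illustration.

```python
# Small example of the reshaping step: a shared file with one column per year
# usually has to be pivoted to long form before a published analysis can be re-run.
# Column names and values are invented for illustration.
import pandas as pd

wide = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "depression_2010": [12, 8, 15],
    "depression_2011": [10, 9, 13],
})

long = wide.melt(id_vars="respondent_id",
                 var_name="measure_year",
                 value_name="depression_score")
long["year"] = long["measure_year"].str[-4:].astype(int)

print(long[["respondent_id", "year", "depression_score"]])
```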