University press releases exaggerate scientific findings

A new study suggests exaggerations about scientific findings – for example, suggesting causation when a study only found correlation – start at the level of university press releases.

Yesterday Sumner and colleagues published some important research in the journal BMJ that found that a majority of exaggeration in health stories was traced not to the news outlet, but to the press release—the statement issued by the university’s publicity department…

The goal of a press release around a scientific study is to draw attention from the media, and that attention is supposed to be good for the university, and for the scientists who did the work. Ideally the endpoint of that press release would be the simple spread of seeds of knowledge and wisdom; but it’s about attention and prestige and, thereby, money. Major universities employ publicists who work full time to make scientific studies sound engaging and amazing. Those publicists email the press releases to people like me, asking me to cover the story because “my readers” will “love it.” And I want to write about health research and help people experience “love” for things. I do!

Across 668 news stories about health science, the Cardiff researchers compared the original academic papers to their news reports. They counted exaggeration and distortion as any instance of implying causation when there was only correlation, implying meaning to humans when the study was only in animals, or giving direct advice about health behavior that was not present in the study. They found evidence of exaggeration in 58 to 86 percent of stories when the press release contained similar exaggeration. When the press release was staid and made no such errors, the rates of exaggeration in the news stories dropped to between 10 and 18 percent…

Sumner and colleagues say they would not shift liability to press officers, but rather to academics. “Most press releases issued by universities are drafted in dialogue between scientists and press officers and are not released without the approval of scientists,” the researchers write, “and thus most of the responsibility for exaggeration must lie with the scientific authors.”

Scientific studies are often complex and probabilistic. It is difficult to model and predict complex natural and social phenomena, and scientific studies often give our best estimate or interpretation of the data. Science tends to accumulate findings and knowledge steadily rather than operate on a model where every single study definitively proves something. Individual studies contribute to the larger whole, but they rarely set the agenda or deliver a radically new finding on their own.

Yet, translating that understanding into something fit for public consumption is difficult. Academics are often criticized for dense, jargon-filled language, so pieces for the general public have to be written differently. Academics want their findings to matter, and colleges and universities like good publicity as well. Presenting limited or weaker findings doesn’t get as much attention.

All that said, there is an opportunity here to improve the reporting of scientific findings.

Adding a chart to scientific findings makes them more persuasive

A new study suggests charts of data are more persuasive than text alone:

Then for a randomly selected subsample, the researchers supplemented the description of the drug trial with a simple chart. But here’s the kicker: That chart contained no new information; it simply repeated the information in the original vignette, with a tall bar illustrating that 87 percent of the control group had the illness, and a shorter bar showing that that number fell to 47 percent for those who took the drug.

But taking the same information and also showing it as a chart made it enormously more persuasive, raising the proportion who believed in the efficacy of the drug to 97 percent from 68 percent. If the researchers are correct, the following chart should persuade you of their finding.

What makes simple charts so persuasive? It isn’t because they make the information more memorable — 30 minutes after reading about the drug trials, those who saw the charts were not much more likely to recall the results than those who had just read the description. Rather, the researchers conjecture, charts offer the veneer of science. And indeed, the tendency to find the charts more persuasive was strongest among those who agreed with the statement “I believe in science.”

Charts = science? If the veneer of science is the explanation, why do charts signal science in the first place? Because scientists are the ones who use charts? Or because scientists are trusted more when they present charts?

I wonder if there are other explanations:

1. Seeing a clear difference in bars (87% vs. 47%) makes a stronger impression than simply reading the difference. A 40 percentage point gap is abstract in text but striking in an image (see the sketch below).

2. People today may be more receptive to visual data than to written text. Think of all those Internet infographics with interesting information.
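For illustration, here is a minimal sketch (Python with matplotlib, not from the study itself) of the kind of two-bar chart the researchers describe, built from the same 87% and 47% figures quoted above:

```python
import matplotlib.pyplot as plt

# Hypothetical recreation of the chart described in the study:
# 87% of the control group still had the illness vs. 47% of those who took the drug.
groups = ["Control group", "Took the drug"]
illness_rates = [87, 47]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(groups, illness_rates, color=["gray", "steelblue"])
ax.set_ylabel("Percent still ill")
ax.set_ylim(0, 100)
ax.set_title("Illness rate by group")

# Label each bar with the same number already given in the text.
for x, rate in enumerate(illness_rates):
    ax.text(x, rate + 2, f"{rate}%", ha="center")

plt.tight_layout()
plt.show()
```

The chart adds nothing beyond the two numbers already in the text, which is exactly what makes the persuasion finding interesting.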

Recent sociological findings: many evangelicals think science and religion can work together, few highly invested in evolution/creation debate

Two recent studies suggest there may be less conflict between religious Americans and science than is typically portrayed.

1. Sociologist Elaine Ecklund on how religion and science interact:

“We found that nearly 50 percent of evangelicals believe that science and religion can work together and support one another,” Ecklund said. “That’s in contrast to the fact that only 38 percent of Americans feel that science and religion can work in collaboration.”…

  • Nearly 60 percent of evangelical Protestants and 38 percent of all surveyed believe “scientists should be open to considering miracles in their theories or explanations.”
  • 27 percent of Americans feel that science and religion are in conflict.
  • Of those who feel science and religion are in conflict, 52 percent sided with religion.
  • 48 percent of evangelicals believe that science and religion can work in collaboration.
  • 22 percent of scientists think most religious people are hostile to science.
  • Nearly 20 percent of the general population think religious people are hostile to science.
  • Nearly 22 percent of the general population think scientists are hostile to religion.
  • Nearly 36 percent of scientists have no doubt about God’s existence.

RUS is the largest study of American views on religion and science. It includes a nationally representative survey of more than 10,000 Americans, more than 300 in-depth interviews with Christians, Jews and Muslims — more than 140 of whom are evangelicals — and extensive observations of religious centers in Houston and Chicago.

Ecklund comes to similar conclusions in her 2010 book about scientists and religious faith, Science vs. Religion.

2. Sociologist Jon Hill on how Americans view the evolution debate:

As part of a recent project funded by the BioLogos Foundation, I have fielded a new, nationally representative survey of the American public: The National Study of Religion and Human Origins (NSRHO).

Unlike existing surveys, this one includes extensive questions about human origins that allow us to develop a more accurate portrait of what the general public—and, in particular, Christians—actually believe. The survey includes questions on belief in human evolution, divine involvement, the existence of Adam and Eve, historical timeframe, original sin, and more. For each of these questions, participants are allowed to respond with “not at all sure” about what they believe. If they claim a position, they are also asked to rate how confident they are that their belief is correct. Lastly, they are asked to report how important having the right beliefs about human origins is to them personally…

If only eight percent of respondents are classified as convinced creationists whose beliefs are dear to them, and if only four percent are classified as atheistic evolutionists whose beliefs are dear to them, then perhaps Americans are not as deeply divided over human origins as polls have indicated. In fact, most Americans fall somewhere in the middle, holding their beliefs with varying levels of certainty. Most Americans do not fall neatly into any of the existing camps, and only a quarter claimed their beliefs were important to them personally.

So what does this mean for the church? I think it shows that most people, even regular church-going evangelicals, are not deeply entrenched on one side of a supposed two-sided battle. Certainly, the issue divides Christians. But Christian beliefs about human origins are complex. There’s no major single chasm after all.

In other words, the average religious American doesn’t think this issue is a matter of life and death, even if the rhetoric from both sides suggests the other is a clear enemy.

Journalists: stop saying scientists “proved” something in studies

One comment after a story about a new study on innovation in American films over time reminds journalists that scientists do not “prove” things in studies.

The front page title is “Scientist Proves…”

I’m willing to bet the scientist said no such thing. Rather it was probably more along the lines of “the data gives an indication that…”

Terms in science have pretty specific meanings that differ from our day-to-day usage. “Prove” and “theory,” among others, are such terms. Indeed, science tends to avoid “prove” or “proof.” To quote another article, “Proof, then, is solely the realm of logic and mathematics (and whiskey).”

[end pedantry]

To go further, using the language of proof tends to convey a particular meaning to the public: that the scientist has shown, without a doubt and in 100% of cases, that a causal relationship exists. This is not how science, natural or social, works. We tend to say outcomes are more or less likely. There can also be relationships that are not causal – correlation without causation is a common example. Similarly, a relationship can still be true even if it doesn’t apply to all or even most cases. When teaching statistics and research methods, I try to remind my students of this. Early on, I suggest we are not in the business of “proving” things but rather looking for relationships between things using methods, quantitative or qualitative, that still have some measure of error built in. If we can’t have 100% proof, that doesn’t mean science is dead – it just means that, done correctly, we can be more confident about our observations.
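To make that concrete, here is a minimal sketch using made-up survey numbers and the standard normal-approximation formula for a proportion; it reports an estimate together with its built-in margin of error rather than a single “proven” value:

```python
import math

# Hypothetical example: 120 of 200 survey respondents report some behavior.
successes, n = 120, 200
p_hat = successes / n  # point estimate: 0.60

# 95% confidence interval using the normal approximation (z = 1.96).
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimate: {p_hat:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
# Output: Estimate: 0.60, 95% CI: (0.53, 0.67)
```

The interval, not a single number presented as proof, is the honest summary of what the data say.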

See an earlier post regarding how Internet commenters often fall into similar traps when responding to scientific studies.

 

Argument: scientists need help in handling big data

Collecting, analyzing, and interpreting big data may just be a job that requires more scientists:

For projects like NEON, interpreting the data is a complicated business. Early on, the team realized that its data, while mid-size compared with the largest physics and biology projects, would be big in complexity. “NEON’s contribution to big data is not in its volume,” said Steve Berukoff, the project’s assistant director for data products. “It’s in the heterogeneity and spatial and temporal distribution of data.”

Unlike the roughly 20 critical measurements in climate science or the vast but relatively structured data in particle physics, NEON will have more than 500 quantities to keep track of, from temperature, soil and water measurements to insect, bird, mammal and microbial samples to remote sensing and aerial imaging. Much of the data is highly unstructured and difficult to parse — for example, taxonomic names and behavioral observations, which are sometimes subject to debate and revision.

And, as daunting as the looming data crush appears from a technical perspective, some of the greatest challenges are wholly nontechnical. Many researchers say the big science projects and analytical tools of the future can succeed only with the right mix of science, statistics, computer science, pure mathematics and deft leadership. In the big data age of distributed computing — in which enormously complex tasks are divided across a network of computers — the question remains: How should distributed science be conducted across a network of researchers?

Two quick thoughts:

1. There is a lot of potential here for crossing disciplinary boundaries to tackle big data projects. This isn’t just about parceling out individual pieces of the project; bringing multiple perspectives together could lead to an improved final outcome.

2. I wonder if sociologists aren’t particularly well-suited for this kind of big data work. Given our training in both theory and methods, we attend to the big picture as well as to how to effectively collect, analyze, and interpret data. Sociology students should be able to step into such projects and provide needed insights.

Internet commenters can’t handle science because they argue by anecdote, think studies apply to 100% of cases

Popular Science announced this week that they are no longer allowing comments on their stories because “comments can be bad for science”:

But even a fractious minority wields enough power to skew a reader’s perception of a story, recent research suggests. In one study led by University of Wisconsin-Madison professor Dominique Brossard, 1,183 Americans read a fake blog post on nanotechnology and revealed in survey questions how they felt about the subject (are they wary of the benefits or supportive?). Then, through a randomly assigned condition, they read either epithet- and insult-laden comments (“If you don’t see the benefits of using nanotechnology in these kinds of products, you’re an idiot” ) or civil comments. The results, as Brossard and coauthor Dietram A. Scheufele wrote in a New York Times op-ed:

Uncivil comments not only polarized readers, but they often changed a participant’s interpretation of the news story itself.
In the civil group, those who initially did or did not support the technology — whom we identified with preliminary survey questions — continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology.
Simply including an ad hominem attack in a reader comment was enough to make study participants think the downside of the reported technology was greater than they’d previously thought.

Another, similarly designed study found that just firmly worded (but not uncivil) disagreements between commenters impacted readers’ perception of science…

A politically motivated, decades-long war on expertise has eroded the popular consensus on a wide variety of scientifically validated topics. Everything, from evolution to the origins of climate change, is mistakenly up for grabs again. Scientific certainty is just another thing for two people to “debate” on television. And because comments sections tend to be a grotesque reflection of the media culture surrounding them, the cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science.

In addition to rude comments and ad hominem attacks changing perceptions of scientific findings, here are two common misunderstandings of how science works that often show up in online comments (they are common misconceptions offline as well):

1. Internet conversations are ripe for argument by anecdote. This happens all the time: a study is described and then the comments fill up with people saying the study doesn’t apply to them or someone they know. A single counterexample usually says very little, and scientific studies are often designed to be as generalizable as possible. Think of jokes made about global warming: one blizzard or one cold season doesn’t invalidate a general upward trend in temperatures.

2. Argument by anecdote is related to a misconception about scientific studies: findings do not often apply to 100% of cases. Scientific findings are probabilistic, meaning there is some room for error (this does not mean science doesn’t tell us anything – it means the real world is hard to measure and analyze, and scientists try to limit error as much as possible). Thus, scientists tend to talk in terms of relationships being more or less likely. This tends to get lost in news stories that suggest 100% causal relationships; the simulation sketch below illustrates how a clear overall trend can coexist with plenty of contrary individual cases.
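As a hypothetical illustration (not from the Popular Science piece; the trend and noise values are made up), here is a minimal simulation of temperatures with a small upward trend plus random year-to-year noise, showing that individual cold years do not undo the overall trend:

```python
import random

random.seed(42)

# Hypothetical illustration: 50 "years" of temperature anomalies with a small
# upward trend (+0.02 per year) plus random year-to-year noise (sd = 0.3).
temps = [0.02 * year + random.gauss(0, 0.3) for year in range(50)]

# Count "anecdote" years: years that came in cooler than the year before.
cooler_than_previous = sum(1 for prev, cur in zip(temps, temps[1:]) if cur < prev)

first_decade_avg = sum(temps[:10]) / 10
last_decade_avg = sum(temps[-10:]) / 10

print(f"Years cooler than the previous year: {cooler_than_previous} of 49")
print(f"First-decade average anomaly: {first_decade_avg:.2f}")
print(f"Last-decade average anomaly:  {last_decade_avg:.2f}")
# Plenty of individual "cold" years show up, yet the decade averages rise.
```

Any single cold year is a true observation, but it says almost nothing about the underlying trend; that is the difference between an anecdote and a probabilistic finding.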

In other words, in order to have online conversations about science, you have to have readers who know the basics of scientific studies. I’m not sure my two points above are necessarily taught before college but I know I cover these ideas in both Statistics and Research Methods courses.

Science problem: study says there is not enough information in methods sections of science articles to replicate

A new study suggests the methods sections in science articles are incomplete, making it very difficult to replicate the studies:

Looking at 238 recently published papers, pulled from five fields of biomedicine, a team of scientists found that they could uniquely identify only 54 percent of the research materials, from lab mice to antibodies, used in the work. The rest disappeared into the terse fuzz and clipped descriptions of the methods section, the journal standard that ostensibly allows any scientist to reproduce a study.

“Our hope would be that 100 percent of materials would be identifiable,” said Nicole A. Vasilevsky, a project manager at Oregon Health & Science University, who led the investigation.

The group quantified a finding already well known to scientists: No one seems to know how to write a proper methods section, especially when different journals have such varied requirements. Those flaws, by extension, may make reproducing a study more difficult, a problem that has prompted, most recently, the journal Nature to impose more rigorous standards for reporting research.

“As researchers, we don’t entirely know what to put into our methods section,” said Shreejoy J. Tripathy, a doctoral student in neurobiology at Carnegie Mellon University, whose laboratory served as a case study for the research team. “You’re supposed to write down everything you need to do. But it’s not exactly clear what we need to write down.”

A new standard could be adopted across journals and subfields: enough information has to be given in the methods section for another scientist to replicate the study. Another advantage of this might be that it pushes authors to try to read their paper from the perspective of outsiders who are looking at the study for the first time.

I wonder how well sociology articles would fare in this analysis. Documenting everything needed for replication can get voluminous or technical, depending on the work that went into collecting the data and then getting it ready for analysis. There are a number of choices along the way that add up.

Krugman: prediction problems in economics due to the “sociology of economics”

Looking at the predictive abilities of macroeconomics, Paul Krugman suggests there is an issue with the “sociology of economics”:

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and rewards research that fits preconceptions and uses hard math instead.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.

This is an occasional discussion in social sciences like economics and sociology: how much are they really sciences in the sense of making testable predictions (not about the natural world but about social behavior), and how much are they more interpretive? I’m not surprised Krugman takes this stance, but it is interesting that he locates the issue within the discipline itself, which he says rewards the wrong things. If this is the case, what could be done to reward successful predictions? At this point, Krugman is identifying a problem without offering much of a solution. As a number of people, like Nassim Taleb and Nate Silver, have noted in recent years, making predictions is quite difficult, requires a more humble approach, and requires particular methodological and statistical approaches.

Scholars suggest switch from urban studies to urban science and the DNA of cities

Several scholars recently called for pursuing urban science:

William Solecki compares the current study of cities to natural history in the 19th century. Back then most natural scientists were content to explore and document the extent of biological and behavioral differences in the world. Only recently has science moved from cataloguing life to understanding the genetic code that forms its very basis.

It’s time for urban studies to evolve the same way, says Solecki, a geographer at Hunter College who’s also director of the C.U.N.Y. Institute for Sustainable Cities. Scholars from any number of disciplines — economics and history to ecology and psychology — have explored and documented various aspects of city life through their own unique lenses. What’s needed now, Solecki contends, is a new science of urbanization that looks beyond the surface of cities to the fundamental laws that form their very basis too…

In Environment, the researchers outline three basic research goals for their proposed science of urbanization:

  1. To define the basic components of urbanization across time, space, and place.
  2. To identify the universal laws of city-building, presenting urbanization as a natural system.
  3. To link this new system of urbanization with other fundamental processes that occur in the world.

The result, Solecki believes, will be a stronger understanding of the “DNA” of cities — and, by extension, an improved ability to address urban problems in a systemic manner. Right now, for instance, urban transport scholars respond to the problem of sprawl and congestion with ideas like bike lanes or bus-rapid transit lines. Those programs can be great for cities, but in a way they fix a symptom of a problem that still lingers. An improved science of urbanization would isolate the underlying processes that caused this unsustainable development in the first place.

Three quick thoughts:

1. I think this assumes we have the kind of data and methodology that could get at the “DNA of cities.” Presumably, this is big data collected in innovative ways. To use the natural science metaphor, it is one thing to know about the existence of DNA and it is another thing to collect and analyze it. With this new kind of data, cities can then be viewed as complex systems with lots of moving pieces.

2. Are there necessarily universal laws underlying cities? We are currently in an academic world where there are a variety of theories about urban growth but they tend to be idiosyncratic to particular cities, apply to particular time periods, and emphasize different aspects of social, economic, and political life. Is this because no one has really put it all together yet or because it is really hard to find universal laws?

3.

When scientific papers are retracted, how does it impede the progress of science?

An article about a recent controversial paper published in Nature includes a summary of how many scientific papers have been retracted or found to be the result of fraud since 1975:

In the meantime, the paper has been cited 11 times by other published papers building on the findings.

It may be impossible for anyone from outside to know the extent of the problems in the Nature paper. But the incident comes amid a phenomenon that some call a “retraction epidemic.”

Last year, research published in the Proceedings of the National Academy of Sciences found that the percentage of scientific articles retracted because of fraud had increased tenfold since 1975.

The same analysis reviewed more than 2,000 retracted biomedical papers and found that 67 percent of the retractions were attributable to misconduct, mainly fraud or suspected fraud.

“You have a lot of people who want to do the right thing, but they get in a position where their job is on the line or their funding will get cut, and they need to get a paper published,” said Ferric C. Fang, one of the authors of the analysis and a medical professor at the University of Washington. “Then they have this tempting thought: If only the data points would line up …”

Fang said retractions may be rising because it is simply easier to cheat in an era of digital images, which can be easily manipulated. But he said the increase is caused at least in part by the growing competition for publication and for NIH grant money.

Two consequences of this are commonly discussed in the media. One is the cost to taxpayers, who fund much of this big-money scientific and medical research through federal grants. The second is the credibility of science itself.

But I think there is a third issue that is perhaps even more important. What does this say about what we actually know about the world? In other words, how many subsequent papers are built on the fraudulent or retracted work? Science often works in a chain or pyramid: later work builds on earlier findings, particularly ones published in more prestigious journals. So when a paper is questioned, like the piece in Nature, it isn’t just about that one paper. It is also about the 11 papers that have already cited it.

So what does this mean for what we actually know? How much does a retracted piece set back science? Or do researchers hardly even notice? I suspect many of these retracted papers don’t slow things down too much, but there is always the potential that a retraction could pull the rug out from under important findings.

h/t Instapundit