Why we need “duh science”

A lot of studies are completed every year, and the results of some seem far more obvious than others; this article calls these studies “duh research.” Here is why experts say they are still necessary:

But there’s more to duh research than meets the eye. Experts say they have to prove the obvious — and prove it again and again — to influence perceptions and policy.

“Think about the number of studies that had to be published for people to realize smoking is bad for you,” said Ronald J. Iannotti, a psychologist at the National Institutes of Health. “There are some subjects where it seems you can never publish enough.”…

There’s another reason why studies tend to confirm notions that are already widely held, said Daniele Fanelli, an expert on bias at the University of Edinburgh in Scotland. Instead of trying to find something new, “people want to draw attention to problems,” especially when policy decisions hang in the balance, he said.

Kyle Stanford, a professor of the philosophy of science at UC Irvine, thinks the professionalization of science has led researchers — who must win grants to pay their bills — to ask timid questions. Research that hews to established theories is more likely to be funded, even if it contributes little to knowledge.

Here we get three possible answers as to why “duh research” takes place:

1. It takes time for studies to draw attention and become part of cultural “common sense.” One example cited in the article is cigarette smoking: one study wasn’t enough to establish a relationship between smoking and negative health outcomes. Rather, it took a number of studies to build a critical mass of evidence that the public accepted. While the suggestion here is that this is mainly about convincing the public, it also makes me think of the general process of science, where numerous studies find the same thing and the knowledge becomes accepted.

2. These studies could be about social problems. There are many social ills that could be deserving of attention and funding and one way to get attention is to publish more studies. The findings might already be widely accepted but the studies help keep the issue in the public view.

3. It is about the structure of science/the academy where researchers are rewarded for publications and perhaps not so much for advancing particular fields of study. “Easy” findings help scientists and researchers keep their careers moving forward. These structures could be altered to promote more innovative research.

All three of these explanations make some sense to me. I wonder how much the media plays a role in this: why do media sources cite so much “duh research” when there are other kinds of research going on as well? Could these be “easy” journalistic stories that fit particular established narratives or causes? Do universities and research labs tend to promote these studies more?

Of course, the article also notes that some of these studies can turn out unexpected results. I would guess that quite a few important findings came out of research for which, at the outset, someone could easily have predicted a well-established answer.

(It would be interesting to think more about the relationship between sociology and “duh research.” One frequent knock against sociology is that it is all “common sense”: aren’t we already aware of our interactions with others and of how our culture operates? But we often don’t have time for analysis and understanding in our everyday activities, and we often simply go along with prevailing norms and behaviors. It all may seem obvious until we are put in situations that challenge our understandings, like stepping into new settings or different cultures.

Additionally, sociology goes beyond the individual, anecdotal level at which many of us operate. We can easily construct a whole understanding of the world from our personal experiences and what we have heard from others. Sociology instead operates at the structural level and works with data, aiming to draw broad conclusions about human interaction.)

A “children at play” sign as a symptom of a larger issue rather than the solution

In Traffic, Tom Vanderbilt argues that Americans rely on a lot of road signs even though there is little to no evidence that having more signs increases the safety of drivers and pedestrians. As an example, Vanderbilt looks at the “children at play” signs:

Despite the continued preponderance of “Children at Play” on streets across the land, it is no secret in the world of traffic engineering that “Children at Play” signs—termed, with subtle condescension, “advisory signs”—have been proven neither to change driver behavior nor to do anything to improve the safety of children in a traffic setting. The National Cooperative Highway Research Program, in its “Synthesis of Highway Practice No. 139,” sternly advises that “non-uniform signs such as ‘CAUTION—CHILDREN AT PLAY,’ ‘SLOW—CHILDREN,’ or similar legends should not be permitted on any roadway at any time.” Moreover, it warns that “the removal of any nonstandard signs should carry a high priority.”…

If the sign is so disliked by the profession charged with maintaining order and safety on our streets, why do we seem to see so many of them? In a word: Parents. Talk to a town engineer, and you’ll often get the sense it’s easier to put up a sign than to explain to local residents why the sign shouldn’t be put up. (This official notes that “Children at Play” signs are the second-most-common question he’s asked about at town meetings.) Residents have also been known to put up their own signs, perhaps using the DIY instructions provided by eHow (which notes, in a baseless assertion typical of the whole discussion, that “Notifying these drivers there are children at play may reduce your child’s risk”). States and municipalities are also free to sanction their own signs (hence the rise of “autistic child” traffic signs)…

One of the things that is known, thanks to peer-reviewed science, is that increased traffic speeds (and volumes) increase the risk of children’s injuries. But “Children at Play” signs are a symptom, rather than a cure—a sign of something larger that is out of whack, whether the lack of a pervasive safety culture in driving, a system that puts vehicular mobility ahead of neighborhood livability, or non-contextual street design. After all, it’s roads, not signs, that tell people how to drive. People clamoring for “Children at Play” signs are often living on residential streets that are inordinately wide, lacking any kind of calming obstacles (from trees to “bulb-outs”), perhaps having unnecessary center-line markings—three factors that will boost vehicle speed more than any sign will lower them.

So the signs are more of a band-aid for a larger problem, one Vanderbilt discusses at greater length in his book: streets and roads in America are generally designed for cars to go fast rather than as structures that also accommodate pedestrians and other neighborhood activities. Signs can’t do a whole lot to reduce the effects of this structure, even though citizens, local officials, and some traffic engineers continue to aid their proliferation. In a car-obsessed culture, perhaps we shouldn’t be too surprised by all of this: people want to be able to move quickly from place to place.

This all reminds me of the efforts of groups like the New Urbanists, who suggest the solution is to redesign the streetscape so that the automobile is given a less prominent place. Putting houses and sidewalks closer to the street, planting trees near the roadway, allowing parking on the sides of streets, and narrowing the streets themselves can all reduce driver speed and reduce accidents. Of course, one could go even further and remove all traffic signs altogether (see here and text plus pictures and video here).

I wonder if we could use Vanderbilt’s examples as evidence of a larger public discussion about the role of science versus other kinds of evidence. There may be a lot of research suggesting that signs don’t help much, but how does that science reach the typical suburban resident who is concerned about their kids playing near the street? If confronted with the sort of evidence Vanderbilt provides, how would the typical suburban resident or official respond?

Quick Review: The Canon

On a recent visit to the Field Museum in Chicago, I encountered several books in the bookstore. I later tracked one of them, a former bestseller, down at the library: The Canon: A Whirligig Tour of the Beautiful Basics of Science by Natalie Angier. A few quick thoughts about the book:

1. This book is an overview of the basic building blocks of science (these are the chapters, in order): thinking scientifically, probabilities, scale (different sizes), physics, chemistry, evolutionary biology, molecular biology, geology, and astronomy. Angier interviewed a number of scientists, and she both quotes them and draws upon their ideas. For someone looking for a quick understanding of these subjects, this is a decent find; from here, one could delve into more specialized writings.

2. Angier is a science writer for the New York Times. While she tries to bring exuberance to the subject, her descriptions and adjectives are often over the top. At a few points, this floweriness was almost enough to make me stop reading.

3. To me, the most rewarding chapters were the first three. As a social scientist, I could relate to all three and plan to bring some of these ideas to my students. Thinking scientifically is quite different from the normal experience most of us have of building ideas and concepts on anecdotal data.

a. A couple of the ideas stuck out to me. The first is a reminder about scientific theories: while some think calling something a theory means it isn’t proven yet and so can be disregarded, scientists view theories differently. Theories are explanations that are constantly being built upon and tested, but they often represent the best explanations scientists currently have. A theory is not simply a law-in-waiting: laws describe regularities, while theories explain them.

b. The second was about random data. Angier tells the story of a professor who runs this activity: at the beginning of class, half the students are told to flip a coin 100 times and record the results, while the other half are told to make up results for 100 imaginary coin flips. The professor leaves the room while the students do this. When she returns, she examines the different recordings and most of the time is able to identify which results were real and which were invented. How? Students don’t quite understand random data: after two consecutive heads or tails, they think the next result has to go the other way. In real random data, there can be runs of 6 or 7 heads or tails in a row even as the results tend to average out in the end (the quick simulation below illustrates this).
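This is exactly the kind of intuition a quick simulation can check. Here is a minimal Python sketch (my own illustration, not something from Angier’s book; the threshold of six is an assumption I chose to match the anecdote) that generates honest 100-flip sequences and counts how often a run of six or more identical results appears:

```python
import random

def longest_run(flips):
    """Return the length of the longest streak of identical consecutive flips."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

# Simulate many students' worth of honest 100-flip sequences.
trials = 10_000
with_long_run = 0
for _ in range(trials):
    flips = [random.choice("HT") for _ in range(100)]
    if longest_run(flips) >= 6:
        with_long_run += 1

print(f"Share of real sequences with a run of 6+: {with_long_run / trials:.0%}")
```

Sketches like this typically show that a large majority of genuinely random 100-flip sequences contain at least one such run, which is precisely the kind of streak that students inventing their data are reluctant to write down.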

Overall, I liked the content of the book even as I was often irritated by its delivery. For a social scientist, this was a profitable read, as it helped me understand subjects far afield from my own.