How the honeybee problem was identified: through the social networks of scientists

Learning how exactly scientific advances are made can be very interesting. While people might imagine scientists squirreled away in offices and labs, feverishly conducting experiments and reading articles, social networks play a large role in solving problems. Buried in this story about the discovery of the fungus-and-virus combination that is killing honeybees is an account of how the two teams that solved the problem came together:

But it took a family connection — through David Wick, Charles’s brother — to really connect the dots. When colony collapse became news a few years ago, Mr. Wick, a tech entrepreneur who moved to Montana in the 1990s for the outdoor lifestyle, saw a television interview with Dr. Bromenshenk about bees.

Mr. Wick knew of his brother’s work in Maryland, and remembered meeting Dr. Bromenshenk at a business conference. A retained business card and a telephone call put the Army and the Bee Alert team buzzing around the same blossom.

The first steps were awkward…The process eventually was refined.

By working through this family connection, two teams that each had their own expertise were able to pool their knowledge and resources and come to a solution.

Opinions on science derailed by poor online sample?

Scientific American and Nature recently joined forces to poll readers around the world about their opinions of science. The findings include opinions about science and politics, climate denial, nuclear power, the flu and more.

While these data seem interesting, the findings may be questionable because of the sample:

More than 21,000 people responded via the Web sites of Nature and of Scientific American and its international editions. As expected, it was a supportive and science-literate crowd—19 percent identified themselves as Ph.Ds. But attitudes differed widely depending on particular issues—climate, evolution, technology—and on whether respondents live in the U.S., Europe or Asia.

So the findings may really be about the opinions of a more “supportive and science-literate crowd” rather than a true representation of international opinion. This is a common issue with open online surveys: it is very difficult to get a sample that is representative of a larger population.
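To illustrate the point, here is a minimal sketch (my own illustration with made-up numbers, not anything drawn from the actual poll) of how self-selection can pull an open online survey away from the population it claims to describe:

```python
# Minimal sketch of self-selection bias in an open online poll.
# All numbers are invented for illustration.
import random

random.seed(42)

POPULATION = 100_000

# "Support for science" on a 0-10 scale, roughly centered at 5 in the population.
population = [min(10, max(0, random.gauss(5, 2))) for _ in range(POPULATION)]

# Assume people who are more supportive of science are more likely to visit a
# science magazine's website and answer the poll (a hypothetical response rule).
respondents = [s for s in population if random.random() < (s / 10) ** 2]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)

print(f"Population mean support:  {pop_mean:.2f}")
print(f"Respondent mean support:  {resp_mean:.2f}")
print(f"Response rate:            {len(respondents) / POPULATION:.1%}")
# The respondent average overstates support because the sample selected itself.
```

Under these assumptions, the self-selected respondents look noticeably more supportive than the population they came from, which is exactly the worry with the Nature/Scientific American readership.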

Quick Review: The Canon

On a recent visit to the Field Museum in Chicago, I came across several interesting books in its bookstore. I later tracked one of them, a former bestseller, down at the library: The Canon: A Whirligig Tour of the Beautiful Basics of Science by Natalie Angier. A few quick thoughts about the book:

1. This book is an overview of the basic building blocks of science (these are the chapters, in order): thinking scientifically, probabilities, scale (different sizes), physics, chemistry, evolutionary biology, molecular biology, geology, and astronomy. Angier interviewed a number of scientists and both quotes them and draws upon their ideas. For someone looking for a quick understanding of these subjects, this is a decent find; from this book, one could delve into more specialized writings.

2. Angier is a science writer for the New York Times. While she tries to bring exuberance to the subject, her descriptions and adjectives are often over the top. At a few points, this floweriness was almost enough to make me stop reading.

3. To me, the most rewarding chapters were the first three. As a social scientist, I could relate to all three and plan to bring some of these thoughts to my students. Thinking scientifically is quite different from the normal experience most of us have of building ideas and concepts on anecdotal data.

a. A couple of the ideas stuck out to me. The first is a reminder about scientific theories: while some think calling something a theory means it isn't proven yet and so can be disregarded, scientists view theories differently. Theories are explanations that are constantly being built upon and tested, but they often represent the best explanations scientists currently have. A theory is not a law.

b. The second was about random data. Angier tells the story of a professor who runs this activity: at the beginning of class, half the students are told to flip a coin 100 times and record the results. The other half are told to make up the results of 100 imaginary coin flips. The professor leaves the room while the students do this. When she returns, she examines the different recordings and most of the time is able to identify which results are real and which are invented. How? Students don't quite understand random data; after two consecutive heads or tails, they usually feel the next result has to go the other way. In real random data, there can be runs of 6 or 7 heads or tails in a row even as the results tend to average out in the end (a quick simulation after this list shows how common such runs are).
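Here is that simulation, a small sketch of my own rather than anything from the book: it generates many sequences of 100 fair coin flips and counts how often a run of six or more identical outcomes shows up.

```python
# Sketch of Angier's coin-flip point: long runs are common in truly random data.
import random

random.seed(0)

def longest_run(flips):
    """Return the length of the longest streak of identical outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

TRIALS = 10_000
with_long_run = 0
for _ in range(TRIALS):
    flips = [random.choice("HT") for _ in range(100)]
    if longest_run(flips) >= 6:
        with_long_run += 1

print(f"Share of 100-flip sequences with a run of 6 or more: {with_long_run / TRIALS:.0%}")
# Most truly random sequences contain such a run; fabricated sequences that
# switch after every two or three flips almost never do.
```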

Overall, I liked the content of the book even as I was often irritated with its delivery. For a social scientist, this was a profitable read, as it helped me understand subjects far afield from my own.

Varying statistics about DNA matches

NewScientist has a story about a criminal case that demonstrates how scientists can disagree about statistics regarding DNA analysis:

The DNA analyst who testified in Smith’s trial said the chances of the DNA coming from someone other than Jackson were 1 in 95,000. But both the prosecution and the analyst’s supervisor said the odds were more like 1 in 47. A later review of the evidence suggested that the chances of the second person’s DNA coming from someone other than Jackson were closer to 1 in 13, while a different statistical method said the chance of seeing this evidence if the DNA came from Jackson is only twice that of the chance of seeing it if it came from someone else…

[W]e show how, even when analysts agree that someone could be a match for a piece of DNA evidence, the statistical weight assigned to that match can vary enormously.

I recall reading something recently that suggested that while the public thinks DNA samples make a criminal case very clear, this is not necessarily so. This article suggests the situation is a lot more complicated and that the conclusions depend on which lab and which scientists are analyzing the DNA samples.
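One way to see why the assigned weight matters so much: if the quoted figures are treated loosely as likelihood ratios (my own rough sketch, not an analysis from the New Scientist piece), Bayes' rule shows how differently each one moves the same prior belief.

```python
# Sketch: how different statistical weights for the same DNA match change the
# conclusion. The prior odds below are hypothetical; the weights echo the
# figures quoted in the article, treated here simply as likelihood ratios.
def posterior_probability(likelihood_ratio, prior_odds):
    """Posterior probability that the DNA came from the suspect."""
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1 + posterior_odds)

prior_odds = 1 / 1000  # hypothetical prior odds from the non-DNA evidence

for label, lr in [("1 in 95,000 figure", 95_000),
                  ("1 in 47 figure", 47),
                  ("1 in 13 figure", 13),
                  ("likelihood ratio of 2", 2)]:
    print(f"{label:>22}: posterior = {posterior_probability(lr, prior_odds):.3f}")
# The same prior and the same physical evidence yield posteriors ranging from
# near-certainty to barely above the prior, depending on which weight is used.
```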

Thinking about economics: science or ideology?

Barbara Kiviat discusses whether economics is a science or an ideology. Part of her conclusion:

And when you think about it, it is a little odd that we think economics would be able to do these things. After all, the economy is as much a product of sociology and policy as it is pure-form economics. Yet we’d not expect a sociologist or a political scientist to be able to write a computer model to accurately capture system-wide decision-making. The conclusion I’ve come to: while economists may have an important perspective on whether it’s time for stimulus or austerity, maybe we should stop looking to them as if they are people who are in the ultimate position to know.

Sociologists have been arguing for some time now that sociology has a lot to say about economics, including how cultural values and ideology guide economic decision-making and action.