Why cases of scientific fraud can affect everyone in sociology

The recent case of a Dutch social psychologist admitting to fabricating data can lead some to paint social psychology, or the broader discipline of sociology, as problematic:

At the Weekly Standard, Andrew Ferguson looks at the “Chump Effect” that prompts reporters to write up dubious studies uncritically:

The silliness of social psychology doesn’t lie in its questionable research practices but in the research practices that no one thinks to question. The most common working premise of social-psychology research is far-fetched all by itself: The behavior of a statistically insignificant, self-selected number of college students or high schoolers filling out questionnaires and role-playing in a psych lab can reveal scientifically valid truths about human behavior.

And when the research reaches beyond the classroom, it becomes sillier still…

Described in this way, it does seem like there could be real journalistic interest in this study – as a human interest story like the three-legged rooster or the world’s largest rubber band collection. It just doesn’t have any value as a study of abstract truths about human behavior. The telling thing is that the dullest part of Stapel’s work – its ideologically motivated and false claims about sociology – got all the attention, while the spectacle of a lunatic digging up paving stones and giving apples to unlucky commuters at a trash-strewn train station was considered normal.

A good moment for a reaction from a conservative perspective: two favorite whipping boys – liberal (and fraudulent!) social scientists and the uncritical, biased media – can be tackled at once.

Seriously, though: the answer here is not to paint entire academic disciplines as problematic because of one case of fraud. Granted, some of the questions raised are good ones that social scientists themselves have raised recently: how much about human activity can you discover through relatively small samples of American undergraduates? But good science is not based on one study anyway. An interesting finding should be corroborated by similar studies done in different places, at different times, with different people. These multiple tests and observations help establish the reliability and validity of findings. This can be a slow process, which is another issue in a media landscape where new stories are needed all the time.
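To make the corroboration point concrete, here is a toy simulation – my own invented numbers, not drawn from any of the studies discussed here – showing how a single small study can land well away from a true effect while a pool of independent replications tends to converge on it:

```python
# Toy illustration: one small study vs. many pooled replications.
# The "true effect" and sample sizes below are invented for illustration only.
import random

random.seed(42)
TRUE_EFFECT = 0.30   # assumed true difference between two conditions
NOISE_SD = 1.0       # spread of individual measurements

def run_study(n):
    """Simulate one study with n participants and return its estimated effect."""
    observations = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n)]
    return sum(observations) / n

# A single study of 30 undergraduates can wander far from the truth...
print("single small study:", round(run_study(30), 3))

# ...while 20 independent replications, pooled, tend to sit close to it.
replications = [run_study(30) for _ in range(20)]
print("pooled estimate:   ", round(sum(replications) / len(replications), 3))
```

None of this replaces careful research design, but it illustrates why one striking study, on its own, is weak evidence.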

This reminds me of Joel Best’s recommendations for dealing with statistics. One common option is simply to trust all statistics. Numbers look authoritative, often come from experts, and can be overwhelming, so just accepting them is easy. At the other pole is the equally common option of saying that all statistics are simply interpretation and manipulation, so we can’t trust any of them: no numbers are trustworthy. Neither approach is a good option, but both are relatively easy. The better route when dealing with scientific studies is to have the basic skills to judge whether a study is a good one and to understand how the process of science works. In this case, it would be a great time to call for better training among journalists about scientific studies so they can provide better interpretations for the public.

In the end, when one prominent social psychologist admits to massive fraud, the repercussions might be felt by others in the field for quite a while.

What is “The Big Data Boom” on the Internet good for?

The Internet is a giant source of ready-to-use data:

Today businesses can measure their activities and customer relationships with unprecedented precision. As a result, they are awash with data. This is particularly evident in the digital economy, where clickstream data give precisely targeted and real-time insights into consumer behavior…

Much of this information is generated for free, by computers, and sits unused, at least initially. A few years after installing a large enterprise resource planning system, it is common for companies to purchase a “business intelligence” module to try to make use of the flood of data that they now have on their operations. As Ron Kohavi at Microsoft memorably put it, objective, fine-grained data are replacing HiPPOs (Highest Paid Person’s Opinions) as the basis for decision-making at more and more companies.

The wealth of data also makes it easy to run experiments:

Consider two “born-digital” companies, Amazon and Google. A central part of Amazon’s research strategy is a program of “A-B” experiments where it develops two versions of its website and offers them to matched samples of customers. Using this method, Amazon might test a new recommendation engine for books, a new service feature, a different check-out process, or simply a different layout or design. Amazon sometimes gets sufficient data within just a few hours to see a statistically significant difference…

According to Google economist Hal Varian, his company is running on the order of 100-200 experiments on any given day, as they test new products and services, new algorithms and alternative designs. An iterative review process aggregates findings and frequently leads to further rounds of more targeted experimentation.

This sounds like a social scientist’s dream – if we could get our hands on the data.
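For readers unfamiliar with how such an A-B comparison is judged, here is a minimal sketch of one standard approach, a two-proportion z-test; the visitor and conversion counts below are made up purely for illustration, and this is not a description of Amazon’s or Google’s actual tooling:

```python
# Toy A-B test: did version B of a checkout page convert better than version A?
# All counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided tail probability
    return z, p_value

# Hypothetical results after a few hours of traffic
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=552, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real difference, not noise
```

In practice companies add safeguards such as sequential testing and corrections for running many comparisons at once, but the underlying logic is this simple comparison of two rates.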

My big question about all of this data is this: what should be done with it? This article, like others I’ve seen, says that it will transform business. If this is just a way for businesses to become more knowledgeable, more efficient, and ultimately more profitable, is that enough? Occasionally we hear of uses like discovering or tracking epidemics through search queries, or tools like the “mechanical turk” that crowdsource small but needed pieces of work. On the whole, does data from the Internet advance human flourishing, concentrate its benefits in the hands of a few, or even hinder flourishing? Does this data give us insights into health and medicine, international relations, and social interactions, or does it primarily give entrepreneurs and established companies the chance to make more money? Are these questions that anyone really asks or cares about?

Dutch social psychologist commits massive science fraud

This story is a few days old but still interesting: a Dutch social psychologist has admitted to using fraudulent data for years.

Social psychologist Diederik Stapel made a name for himself by pushing his field into new territory. His research papers appeared to demonstrate that exposure to litter and graffiti makes people more likely to commit small crimes and that being in a messy environment encourages people to buy into racial stereotypes, among other things.

But these and other unusual findings are likely to be invalidated. An interim report released last week from an investigative committee at his university in the Netherlands concluded that Stapel blatantly faked data for dozens of papers over several years…

More than 150 papers are being investigated. Though the studies found to contain clearly falsified data have not yet been publicly identified, the journal Science last week published an “editorial expression of concern” regarding Stapel’s paper on stereotyping. Of 21 doctoral theses he supervised, 14 were reportedly compromised. The committee recommends a criminal investigation in connection with “the serious harm inflicted on the reputation and career opportunities of young scientists entrusted to Mr. Stapel,” according to the report…

I think the interesting part of this story is how the fraud was able to go on for so long. Because Stapel handled much of the data himself, rather than following the typical practice of handing it off to graduate students, it sounds like he was able to falsify data for longer.

This also raises questions about how much scientific data might be faked or unethically tampered with. The article references a forthcoming study on the topic:

In a study to be published in a forthcoming edition of the journal Psychological Science, Loewenstein, John, and Drazen Prelec of MIT surveyed more than 2,000 psychologists about questionable research practices. They found that a significant number said they had engaged in 10 types of potentially unsavory practices, including selectively reporting studies that ‘worked’ (50%) and outright falsification of data (1.7%).

Journals are also known to favor positive results – generally, papers that support an alternative hypothesis – over negative results. Of course, both kinds of results are needed for science to advance, as both help confirm and disconfirm arguments and theories. “Outright falsification” is another story…and is perhaps even underreported (given social desirability bias and prevailing norms in scientific fields).
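For a rough sense of why shelving negative results distorts the record, here is a toy simulation – entirely hypothetical numbers, not drawn from the survey above – that runs many studies of an effect that does not exist and then “publishes” only the statistically significant ones:

```python
# Toy illustration of publication bias: when no real effect exists but only
# "significant" results get published, the published record still looks striking.
# All numbers here are invented for illustration.
import random

random.seed(7)
N_STUDIES = 1000    # studies run on a nonexistent effect
N_PER_GROUP = 30    # participants per group in each study

def one_study():
    """Compare two groups drawn from the same distribution; return (effect, significant)."""
    a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    diff = sum(b) / N_PER_GROUP - sum(a) / N_PER_GROUP
    se = (2 / N_PER_GROUP) ** 0.5        # standard error of the difference (sd = 1)
    return diff, abs(diff) > 1.96 * se   # crude 5% significance cutoff

results = [one_study() for _ in range(N_STUDIES)]
published = [diff for diff, significant in results if significant]

print(f"{len(published)} of {N_STUDIES} null studies come out 'significant' by chance")
print(f"average size of a 'published' effect: "
      f"{sum(abs(d) for d in published) / len(published):.2f} (the true effect is 0)")
```

Seen this way, the missing negative results are exactly what would correct the misleading impression left by the “significant” ones.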

Given these occurrences, I wonder whether scientists of all kinds would push for more regulation (IRBs, review boards, etc.) or less regulation with more self-policing (more training in ethics, more routine sharing of data or linking studies to available data so readers can do their own analysis, etc.).

Space for sociological factors when looking at scientific research

I ran into this blog post discussing a recent study published in Hormones and Behavior titled “Maternal tendencies in women are associated with estrogen levels and facial femininity.” This particular blogger at Scientific American starts out by suggesting she doesn’t like the results:

Friend of the blog Cackle of Rad was the first person to send me this paper, and when I first tried to read it, I got…pretty angry. Being a rather obsessively logical person, I know why I felt angry about this paper, and I worked very hard to step back from it and approach it in a thoroughly scientific manner.

It didn’t work, I called in Kate. That helped a little.

In the end, it’s not a bad paper. The data are the data, as my graduate advisor always says. But data need to be interpreted, and interpretations require context. And I think what’s missing from this paper is not data or adequate methods. It’s context.

In the end, the blogger suggests that the needed “context” really consists of a number of sociological factors that might influence perceptions:

So I wonder if the authors should make more effort to look into sociological factors. How does the intense pressure on women to become wives and mothers change as a function of how feminine the girl looks? I think you can’t separate any of this from this whole “women with higher estrogen want to be mothers” idea. This is why papers like this bug me, because they try to sell this as a evolutionary thing, without really acknowledging how much sociological pressure goes in to making women want to be mothers. And of course now I read them and I instantly get bristly, because what I see is people making assumptions about what I want, and what I must feel like, based on a few aspects of my physiology. It can be of value scientifically…but I don’t want it to apply to ME. I know it might be science, but I also find it more than a bit insulting.

I don’t know this area of research so I don’t have much room to dispute the results of the original study. However, how this blogger goes about this argument for adding sociological factors is interesting. Here are two possible options for making this argument:

1. Argument #1: the study actually could benefit from sociological factors. Definitions of femininity are wrapped up in cultural assumptions and patterns. There is a lot of research to back this up, and perhaps we could point to specific parts of this study that would be altered if context were taken into account. But this doesn’t seem to be the conclusion of this blog post.

2. Argument #2: there must be some sociological factors involved here because I don’t like these results. On one hand, it is perhaps admirable to admit that one doesn’t like particular research results; scientific findings often challenge our personal understandings of the world. So why end the post by again emphasizing that the blogger doesn’t like the results? Does this simply reduce sociology to a backup science, one called in only to suggest that everything is cultural, relative, or socially conditioned?

Perhaps I am simply reading too much into this. I don’t know how much natural science research could be improved by including sociological factors, whether it is often considered, or whether this is simply an unusual blog post. Argument #1 is the stronger scientific argument and is the one that should be emphasized more here.

Fermilab closes Tevatron; what’s the effect on nearby suburbs and the Chicago region?

The need for the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (commonly known as Fermilab), has been drastically reduced since the construction of the Large Hadron Collider in Europe. Therefore, the Tevatron is being shut down and Fermilab is looking to transition to new areas of physics research. My question is this: what effect will this have on the nearby suburbs and the Chicago region?

The article says that several local politicians want to keep research at Fermilab going:

Fermilab will still have star quality, and the estimated 2,300 scientists there will continue playing a critical role in particle physics. The lab could even re-emerge a few decades from now as the leader, officials say.

However, one daunting hurdle remains: obtaining what may be hundreds of billions of dollars in federal funding that officials say is needed to guide the lab’s work into the next generation of research via two projects, known as Long-Baseline Neutrino Experiment and Project X…

Over the decades, the cost of upgrades at Fermi could reach hundreds of billions of dollars, a frightening prospect in this troubled economy. But U.S. Reps. Randy Hultgren of Winfield and Judy Biggert of Hinsdale said the funding is crucial. On Wednesday, the two Republican congressmen held a round table on the underground particle-physics program at Fermi.

“I think basic science is the most important thing that will help us to compete in the global economy,” Biggert said. “We have to realize that basic science really drives industry and creates the jobs our children and grandchildren will enjoy.”

I assume most places would want to get federal money and remain competitive globally. The Chicago region, as a global city, needs research facilities like these.

But what about the local jobs and the greater impact on nearby suburbs? Several researchers, including Michael Ebner, have suggested that Fermilab played a crucial role in the development of the area. This 2006 overview of Naperville in Chicago sums up this perspective:

With the creation in 1946 of Argonne National Laboratory (near Lemont, about 15 miles southeast of Naperville) and the establishment, in 1967, of the National Accelerator Laboratory – now called Fermilab – in Batavia (about 15 miles northwest of town), Naperville was on its way to becoming “Chicago’s Technoburb,” as Lake Forest College history professor Michael Ebner later dubbed it. Bell Labs, Amoco, Nalco Chemical, NI-Gas, and Miles Laboratories were among the corporations that set up facilities in Naperville during the 1960s, ’70s, and ’80s.

In particular, Ebner argues that this facility plus Argonne National Laboratory meant that scientists and other staff moved to Naperville and then pushed for better schools. While Naperville was still relatively small in the 1950s and 1960s, this influx of educated residents gave the city a world-class educational system, contributing to Naperville’s later growth. Here is one outcome that could be tied to this, from the Naperville District #203 website:

In the Third International Mathematics and Science Study-Repeat (1999 TIMSS-R), District 203 eighth graders achieved the highest score in science and sixth highest in mathematics among the 38 participating nations and consortiums worldwide.

I am somewhat skeptical of this argument. One, I’ve never seen hard figures showing how many Fermilab or Argonne researchers actually settled in Naperville. If these researchers also lived in other communities, did those school districts experience the same changes? Two, I haven’t seen evidence that these people directly influenced school changes in the community. Three, I would argue that the 1964 announcement that Bell Laboratories was locating a facility just north of Naperville was much more consequential for understanding Naperville’s growth.

Additionally, Fermilab has often been included in promotional materials as part of the Illinois Technology and Research Corridor, providing the research and development foundation for the many notable corporations that have located along I-88 between Oak Brook and Aurora. This article from summer 2011 briefly recognized the impact of the corridor:

While the top-five states were unchanged from 2010, rankings 6 to 10 saw a few surprise movers. Illinois gained 8 spots (14/6) from last year, bumping Pennsylvania down to 7th place. What happened?

As it turns out, Illinois’ improvement is the result of the amount of scientific grant money awarded to the state — $185 million to be exact — from the National Science Foundation to the University of Illinois at Urbana/Champaign.

While many know the state for politics and sports, Illinois’ Technology and Research Corridor is a major scientific hub in northeastern Illinois, linking intellectual capital and corporate innovation.

Big name companies such as Motorola Solutions and Mobility, Boeing, and Telephone and Data Systems among others are headquartered in Illinois in large part to benefit from the concentration of technical expertise.

I assume the state of Illinois, the city of Chicago, DuPage County, and nearby suburbs would like Fermilab to continue to be scientifically relevant as this brings in federal money, jobs, businesses, and educated residents. Whether the transition Fermilab makes to new research areas also includes these benefits for nearby communities remains to be seen.

A sociological view of science

A while back, I had a conversation with friends about how undergraduate students understand and use the word “proof” when talking about what we can know about the world. Echoing some of our conversation, a sociologist describes science:

I am a sociologist and read philosophy guardedly. As a social scientist, I tell my students again and again that while a theory or a sparkling generalization may be beautiful, the real test is always an “appeal to the empirical.” A proposition may be very appealing and may seem to provide powerful and enticing descriptions and understandings. However, until we gather evidence that shows that the proposition can be supported by information confirmed by the senses we must hold any proposition as one possibility among other competing explanations. Further, even when a theory or a set of ideas has been measured repeatedly against the empirical world, science never leads to certainty. Rather, science is always a modest enterprise. Even at its best and most rigorous, science is inherently “probabilistic” — we can have varying degrees of confidence in a finding, but certainty is not possible. As humans, our knowing is contingent and limited. Even the best designed scientific tests carry with them the possibility of disconfirmation in later tests. Science at its best offers acceptable levels of persuasiveness but cannot offer final conclusions.

Several things stand out to me in this explanation:

1. The appeal to data and weighing information versus existing explanations.

2. The lack of certainty in science and a probabilistic view of the world. Certainty here might be defined as “100% knowledge.” I think we can be functionally certain about some things. But the last bit about persuasiveness is interesting.

3. Human knowledge is limited. There are always new things to learn, particularly about people and societies.

4. Scientific tests are undertaken to test existing theories and discover new information.

This sounds like a reasonable sociological perspective on science.

A call for more TV shows about science and academia

Certain television genres are well-established. One academic suggests TV should branch out and include a show about science, knowledge, and academia:

No matter what new sitcoms and dramas the networks dream up this coming fall, I can almost guarantee the absence of one type of show: a show about academia. But a television show about academics — professors, scientist and graduate students — is more necessary than ever before. And with a film being made out of Piled Higher and Deeper — an online comic about the trials and tribulations of graduate students — the time may be right to fill this gaping hole on the small screen…

The interplay between the objective quest for knowledge and the all-too-human drama that surrounds it is something that the average viewer has probably heard of, but does not know much about.

And there’s no shortage of real drama to fuel story lines. This show, which I would call The Ivory Tower, would be packed with backstabbing and gossip, glimpses into the intellectual servitude of graduate students and postdoctoral fellows, the agony of dissertation defenses, the thrill of scientific discoveries, the ulcer-creating tenure process, professors’ quests for 15 minutes of fame, and, of course, the inevitable lab love affairs.

Episodes could revolve around topics ranging from the conflict-of-interest riddled nature of how scientific ideas are vetted by peers, to those rare but gut-wrenching cases of academic dishonesty and faking data, to the intense deliberations over thesis defenses. Academia is a very non-rational endeavor.

Here are a few things such a show would have to deal with:

1. A good number of Americans seem to think academics are elitist or liberal or Godless (or perhaps all three). Viewers need to be able to relate to the characters or the settings. This is an image problem.

2. As the writer suggests, the show would have to revolve around relationships in the same way that every other show does. Yes, it would have to include all of TV’s tropes including unrequited love between co-workers and bad/incompetent bosses.

3. I have a sneaking suspicion that this whole proposal is a joke. Who wants to watch “the agony of dissertation defenses” or the “ulcer-creating tenure process”?

4. Perhaps such a show could be based around an innovative science or research project. Then the overall payoff wouldn’t just be the episode-to-episode relationships but rather a larger story arc about curing cancer or developing spacecraft that could take humans beyond the moon.

4a. Why couldn’t the project-driven show work as a reality show on Discovery or National Geographic?

5. I suspect many academics get into academia because they are excited about “the objective quest for knowledge.” But how many professors have given such a speech to students about the joys of research, hard work, and discovery only to be met with blank stares? Some students enjoy this – but would the general public?

6. Which discipline would get to be featured in such a show? I wonder how TV creators and producers would make this choice. I imagine they would have to go with something relatively well-known and/or controversial.

7. There are plenty of shows and movies about high school. There still aren’t that many about college, let alone the academic side of college. Is this because high school is a more universal experience or is it more uniform across schools?

Why we need “duh science”

A lot of studies are completed every year, and the results of some seem more obvious than others – what this article calls “duh research.” Here is why experts say these studies are still necessary:

But there’s more to duh research than meets the eye. Experts say they have to prove the obvious — and prove it again and again — to influence perceptions and policy.

“Think about the number of studies that had to be published for people to realize smoking is bad for you,” said Ronald J. Iannotti, a psychologist at the National Institutes of Health. “There are some subjects where it seems you can never publish enough.”…

There’s another reason why studies tend to confirm notions that are already widely held, said Daniele Fanelli, an expert on bias at the University of Edinburgh in Scotland. Instead of trying to find something new, “people want to draw attention to problems,” especially when policy decisions hang in the balance, he said.

Kyle Stanford, a professor of the philosophy of science at UC Irvine, thinks the professionalization of science has led researchers — who must win grants to pay their bills — to ask timid questions. Research that hews to established theories is more likely to be funded, even if it contributes little to knowledge.

Here we get three possible answers as to why “duh research” takes place:

1. It takes time for studies to draw attention and become part of cultural “common sense.” One example cited in this article is cigarette smoking: one study wasn’t enough to show a relationship between smoking and negative health outcomes. Rather, it took a critical mass of studies before the public accepted the link. While the suggestion here is that this is mainly about convincing the public, it also makes me think of the general process of science, where numerous studies find the same thing and knowledge becomes accepted.

2. These studies could be about social problems. There are many social ills that could be deserving of attention and funding and one way to get attention is to publish more studies. The findings might already be widely accepted but the studies help keep the issue in the public view.

3. It is about the structure of science/the academy where researchers are rewarded for publications and perhaps not so much for advancing particular fields of study. “Easy” findings help scientists and researchers keep their careers moving forward. These structures could be altered to promote more innovative research.

All three of these explanations make some sense to me. I wonder how much the media plays a role in this: why do media sources cite so much “duh research” when there are other kinds of research going on as well? Could these be “easy” journalistic stories that fit particular established narratives or causes? Do universities and research labs tend to promote these studies more?

Of course, the article also notes that some of these studies turn out unexpected results. I would guess that quite a few important findings came out of research for which, at the outset, someone could easily have predicted a well-established answer.

(It would be interesting to think more about the relationship between sociology and “duh research.” One frequent knock against sociology is that it is all “common sense.” Aren’t we aware of our interactions with others as well as how our culture operates? But we often don’t have time for analysis and understanding in our everyday activities, and we often simply go along with prevailing norms and behaviors. It all may seem obvious until we are put in situations that challenge our understandings, like stepping into a new setting or a different culture.

Additionally, sociology goes beyond the individual, anecdotal level at which many of us operate. We can often create a whole understanding of the world based on our personal experiences and what we have heard from others. Sociology looks at the structural level and works with data, looking to draw broad conclusions about human interaction.)

Interpreting data regarding scientists and religion

In looking at some data regarding what scientists think about religion, a commentator offers this regarding interpreting sociological data:

The point about asking such questions is not because we know the answers but to emphasise that the interpretation of sociological data is a tricky business. From the perspective of science, ants and humans are far more complex than stars and rocks. A discussion of atheism and science in the US context leads us straight to a discussion of the structure of the American educational system, the role of elites, the present polarisation of the political electorate along religious faultlines, and much else besides…

The challenge then is to think hard about the complex data and not be too dogmatic about the interpretations.

When the phrase “tricky business” is used, it seems to refer to the complex nature of the social world. In order to understand the relationship between science and religion, one must account for a variety of possible factors. It is one thing to say that there are multiple possible interpretations of the same data, another to say that some people twist data to support their preferred interpretations, and still another to suggest that we can reach a correct interpretation if we properly account for complexity.

While this commentary is ultimately about using caution when interpreting statistics on the religious beliefs of scientists, it also serves as a short summary of the social science research on the topic. The 2010 study Science vs. Religion is discussed, as well as a few other works.

A commercial reminder of the importance of the American lawn

There is little doubt that Americans pay a lot of attention to their lawns and a green lawn is pretty much a necessity in front of the American single-family home. On the way to work today, I heard two grass seed commercials within the same commercial break and they reinforced this interest in lawns.

First, I heard about Pennington Grass Seed. Pennington claimed its bags contain all seed while competitor Scotts’ bags are half seed and half filler, and that its own seeds require less water. I was invited to go online and check out the science behind the seeds. Second, I heard from Scotts, which didn’t name Pennington but went through its claims one by one: Scotts seed doesn’t need more water (in fact, it retains water much better than Pennington’s) and it uses a special filler, whereas Pennington simply uses paper for filler.

Three things struck me about these two commercials:

1. Both ads referred to the science of grass seed, and each claimed to have the better mix. Are consumers really going to pay much attention to this?

2. It was interesting to hear how the two companies approach each other. Pennington went right at Scotts while Scotts didn’t use Pennington’s name (though it wasn’t hard to figure out who they were talking about). From this, can I infer that Scotts is the market leader and Pennington is looking for some way to gain ground?

3. Referring back to my first point, how much of this just really comes down to price and brand recognition? When I go to the store to buy mulch this weekend, would I buy seed based on the science or the price?