After a case of fraud, researchers discuss other means of “misusing research data”

The news that a prominent Dutch social psychologist published fraudulent work has pushed other researchers to talk about other forms of “misusing research data”:

Even before the Stapel case broke, a flurry of articles had begun appearing this fall that pointed to supposed systemic flaws in the way psychologists handle data. But one methodological expert, Eric-Jan Wagenmakers, of the University of Amsterdam, added a sociological twist to the statistical debate: Psychology, he argued in a recent blog post and an interview, has become addicted to surprising, counterintuitive findings that catch the news media’s eye, and that trend is warping the field…

In September, in comments quoted by the statistician Andrew Gelman on his blog, Mr. Wagenmakers wrote: “The field of social psychology has become very competitive, and high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself.”…

To show just how easy it is to get a nonsensical but “statistically significant” result, three scholars, in an article in November’s Psychological Science titled “False-Positive Psychology,” first showed that listening to a children’s song made test subjects feel older. Nothing too controversial there.

Then they “demonstrated” that listening to the Beatles’ “When I’m 64” made the test subjects literally younger, relative to when they listened to a control song. Crucially, the study followed all the rules for reporting on an experimental study. What the researchers omitted, as they went on to explain in the rest of the paper, was just how many variables they poked and prodded before sheer chance threw up a headline-making result—a clearly false headline-making result.
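The mechanism is easy to demonstrate. Here is a minimal simulation sketch in Python (this is not the authors’ actual analysis, and all the numbers are illustrative) of how testing several outcome variables and reporting whichever one “works” pushes the false-positive rate well above the nominal 5 percent even when no true effect exists:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_sims = 2000       # simulated "studies"
n_per_group = 20    # subjects per condition
n_outcomes = 5      # arbitrary measures a researcher could test

false_positives = 0
for _ in range(n_sims):
    # Null world: both conditions are drawn from the same distribution,
    # so any "significant" difference is pure chance.
    control = rng.normal(size=(n_per_group, n_outcomes))
    treatment = rng.normal(size=(n_per_group, n_outcomes))
    # "Researcher degrees of freedom": test every outcome and count the
    # study as a success if ANY single p-value clears .05.
    pvals = [stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
             for j in range(n_outcomes)]
    if min(pvals) < 0.05:
        false_positives += 1

print("Nominal alpha: 0.05")
print(f"Simulated false-positive rate: {false_positives / n_sims:.2f}")
# With five independent shots at significance, roughly
# 1 - 0.95**5, or about 0.23, of null studies produce a "finding."
```

And this sketch only varies the number of outcomes; adding optional covariates, flexible stopping rules, and extra conditions, as the paper describes, inflates the rate even further.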

If the pressure to publish is great (and it certainly is), then there have to be some countermeasures to limit unethical research practices. Here are a few ideas:

1. Give more people access to the data. That way, others could check up on published findings. But if fraudulent studies are already published, perhaps this check comes too late.

2. Have more people oversee the project along the way. This doesn’t necessarily require a bureaucratic board, but having only one researcher look at the data and do the analysis (as in the Stapel case) gives an individual more opportunity to twist the data. This could be an argument for collaborative data analysis.

3. Could there be more space within disciplines and journals to discuss the research process? Papers tend to present very formal hypotheses, but a lot of messy work goes into these and there is very little room to discuss how the researchers arrived at them.

4. Decrease the value of media attention. I don’t know how to deal with this one. What researcher doesn’t want to have more people read their research?

5. Have a better-educated media that doesn’t report so many inconsequential yet shocking studies. We need more people like Malcolm Gladwell who look at a broad swath of research and summarize it, rather than dozens of reports grabbing onto small studies. This is the classic issue with nutrition reporting: eggs are great! A new study says they are terrible! A third says they are great for pregnant women and no one else! We rarely get overviews of this research or real questions about its value. We just get: “a study proved this oddity today…”

6. Resist data mining. Atheoretical correlations don’t help much (see the sketch after this list). Let theories guide statistical models.

7. Have more space to publish negative findings. This would help researchers feel less pressure to come up with positive results.
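On point 6, here is a minimal sketch (again in Python, with made-up noise data) of why atheoretical correlation-hunting misleads: scan enough unrelated variables and some pairs will look “significant” by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_obs, n_vars = 50, 30            # 30 pure-noise variables, 50 observations
data = rng.normal(size=(n_obs, n_vars))

# Mine every pairwise correlation for a "discovery."
spurious = 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < 0.05:              # looks like a finding, means nothing
            spurious += 1

# 30 variables yield 435 pairs; at alpha = .05, about 22 spurious
# "significant" correlations are expected despite zero real structure.
print(f"{spurious} 'significant' correlations found in pure noise")
```

A theory-driven model commits to its hypotheses before looking, which is exactly the discipline this kind of mining skips.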

Does sex also sell sociological research?

A common assumption is that “sex sells.” Could this also apply to sociological research? I have watched as two stories about sociological research have made their way through the media.

1. Do a Google search for “erotic capital” and you will find references to sociologist Catherine Hakim’s term. Read a quick overview of the term here.

2. A New York Times article from the weekend titled “Another Reason to Avoid His Friends” briefly discusses a study in the July 2011 issue of the American Journal of Sociology titled “Network Position and Sexual Dysfunction: Implications of Partner Betweenness for Men.”

If these two pieces of research could be distributed to a broad representative sample of American sociologists, here are a few things that I would want to ask:

1. Do you think research that covers a topic like sex (or celebrity or political scandals, etc.) is more likely to get a positive reception and more coverage from the media and the American public?

2. Does publicity about a sociological research finding make the research more or less important within the field of sociology?

3. Do you think it is good for sociologists to promote any research that would appeal to the public rather than research that might be more consequential? In other words, is all publicity good publicity?

For the record, I have not looked closely into either piece of research and therefore cannot assess the quality myself. A sociologist from the London School of Economics and a piece published in AJS might get attention anyway since both come from respected institutions. But I think these pieces could lead to interesting discussions about how research within the discipline matches what might be popular among the American public, whether these two interests should match up, and whether this kind of attention helps the academic discipline of sociology.

More details of unethical US medical experiments in Guatemala in the 1940s

Research methods courses tend to cover the same classic examples of unethical studies. With more details emerging from a government panel, the US medical experiments undertaken in Guatemala during the 1940s could join this list.

From 1946-48, the U.S. Public Health Service and the Pan American Sanitary Bureau worked with several Guatemalan government agencies to do medical research — paid for by the U.S. government — that involved deliberately exposing people to sexually transmitted diseases…

The research came up with no useful medical information, according to some experts. It was hidden for decades but came to light last year, after a Wellesley College medical historian discovered records among the papers of Dr. John Cutler, who led the experiments…

During that time, other researchers were also using people as human guinea pigs, in some cases infecting them with illnesses. Studies weren’t as regulated then, and the planning-on-the-fly feel of Cutler’s work was not unique, some experts have noted.

But panel members concluded that the Guatemala research was bad even by the standards of the time. They compared the work to a 1943 experiment by Cutler and others in which prison inmates were infected with gonorrhea in Terre Haute, Ind. The inmates were volunteers who were told what was involved in the study and gave their consent. The Guatemalan participants — or many of them — received no such explanations and did not give informed consent, the commission said.

Ugh – a study that gives both researchers and Americans a bad name. It is also a good reminder of why we need IRBs.

While the article suggests President Obama apologized to the Guatemalan president, is anything else going to be done to try to make up for this? I also wonder how this is viewed in Central America: yet more details about the intrusiveness of Americans over the last century?

(See my original post on this here.)

Measuring colleges by their service to community and country

Many publications want to get into the college rankings business, and Washington Monthly released its own take today. The difference? It emphasizes how colleges give back to society:

The Monthly’s list aims to be a corrective to the annual ranking of colleges published by U.S. News & World Report–the industry-standard roster that typically leads with well-endowed Ivy League schools that turn away the vast majority of applicants.

Instead, the Monthly ranks schools using three main categories: how many low-income students the college enrolls, how much community and national service a given college’s students engage in, and the volume of groundbreaking research the university produces (in part measured by how many undergraduates go on to get PhDs). To paraphrase the long-ago dictum of President John F. Kennedy, the Monthly is seeking, in essence, to ask not so much what colleges can do for themselves as what they can be doing for their country.

By that measure, only one Ivy cracked the top 10–Harvard. The University of California system dominated, with six of its schools among the top 30 national universities. Texas A&M, which is ranked 63rd by U.S. News, shot into the top 20 in part because of how many of its students participate in ROTC. Meanwhile, Washington University in St. Louis plunged in these rankings to 112 from U.S. News’ 13, because only 6 percent of its student body qualifies for federal Pell grants, an indication that Washington’s students come almost entirely from upper- and middle-class backgrounds.

The U.S. News & World Report “relies on crude and easily manipulated measures of wealth, exclusivity, and prestige for its rankings,” Washington Monthly editor Paul Glastris wrote. The U.S. News rankings take into account freshman retention rate, admissions selectivity, high school counselors’ opinions of the school, faculty salary, per-pupil spending and the rate of alumni giving, among other things.

While the editor suggests these new rankings are not as influenced by status and wealth, I wonder if the measures really get away from these entirely. It takes resources to enroll low-income students, to support groundbreaking research, and perhaps to give students the extra time to engage in community and national service. On the other hand, colleges make decisions about how to spend their money and could choose to put their resources into these particular areas.

I’m sure there will be questions about methodology: how did they measure impactful research? How much should ROTC count for and how did they measure community engagement?

New rankings also give more schools an opportunity to claim that they are at the top. For example, Northwestern College in Iowa now trumpets on its main page that “Washington Monthly ranks NWC third in the nation.” Read past the headline and you find that it is third among baccalaureate colleges. On the other hand, will schools like Washington University in St. Louis even acknowledge these new rankings since they don’t look so good?

Moving away from academic journals and toward “Performative Social Science”

Most sociologists aim to publish research in academic journals or books. One sociologist suggests a new venue for sharing research: creating fiction films.

Kip Jones hates PowerPoint presentations. He doesn’t care much for academic journals, either. An American-born sociologist, who teaches in the school of health and social care at Bournemouth University in England, Mr. Jones says that “the shame of research is that you spend a lot of money and the knowledge just disappears — or worse, ends up as an article in a scholarly journal.”

So when he was invited to participate in “The Gay and Pleasant Land” project — an investigation into the lives of older gay men and lesbians living in rural England and Wales — Mr. Jones decided that the best way to present the project’s findings to the public wasn’t by publishing the results or delivering a paper at a scholarly conference, but by making a short fiction film…

That’s what Mr. Jones is counting on. “Most of my own work is around developing a method — what’s known as Performative Social Science. I’ve worked with theater. I’ve worked with dancers,” he said. The idea is to combine serious scholarship and popular culture, using performance-based tools to present research outcomes.

Jones suggests that research is often forgotten and that is why he sought to make a film. This raises some questions:

1. Is a film more “permanent” than a research article or book? Without widespread distribution, I suspect the film is less permanent.

2. Is this really about reaching a bigger audience? Academics sometimes joke about how journal articles might reach only the few hundred people in the world who care. A film could reach more people, but it would need effective distribution or a number of showings for this to happen. That also requires work: how many academic films actually reach a broad audience?

3. Can a film acceptably convey research results compared to a more data-driven paper? Both data-driven work and films need to tell a story and/or make an argument but they are different venues.

In the end, I don’t think we will see a sudden rush to make such films instead of writing more academic work. However, I wouldn’t be surprised if more established researchers create films and documentaries to supplement their work. (See Mitchell Duneier’s Sidewalk disc, which included a documentary.) Such films could reach a broader and younger audience, putting the research into the YouTube world of today’s college students.

(Another note: can you find many academics who would actually defend the use of PowerPoint? It seems like an odd way to begin the story.)

David Brooks: keep government funding for social science research

Last Thursday, David Brooks made a case for retaining government money for social science research:

Fortunately, today we are in the middle of a golden age of behavioral research. Thousands of researchers are studying the way actual behavior differs from the way we assume people behave. They are coming up with more accurate theories of who we are, and scores of real-world applications. Here’s one simple example:

When you renew your driver’s license, you have a chance to enroll in an organ donation program. In countries like Germany and the U.S., you have to check a box if you want to opt in. Roughly 14 percent of people do. But behavioral scientists have discovered that how you set the defaults is really important. So in other countries, like Poland or France, you have to check a box if you want to opt out. In these countries, more than 90 percent of people participate.

This is a gigantic behavior difference cued by one tiny and costless change in procedure.

Yet in the middle of this golden age of behavioral research, there is a bill working through Congress that would eliminate the National Science Foundation’s Directorate for Social, Behavioral and Economic Sciences. This is exactly how budgets should not be balanced — by cutting cheap things that produce enormous future benefits.

Here is what I think works in this column:

1. The examples are interesting and address important issues. I wish there were more people highlighting interesting research in such large venues.

2. The idea that a small research investment can have large results.

3. The reminder in the last paragraph: “People are complicated.”

Here is where I think this column could use some more work: why exactly should the government, as opposed to other organizations or sources, provide this money? (See a counterargument here.) Brooks could have made this case more clearly: a lot of social problems affect our country, and the government has the resources and clout to promote research on them. In certain areas, like poverty or public health, the government has a compelling interest in tackling these concerns, as few other bodies could handle their scope. Of course, many of these issues are politicized, but that doesn’t necessarily mean the government should avoid them altogether.

Lack of protection for confidentiality in oral histories

Sociologists and other researchers can offer confidentiality in consent forms but whether this promise would stand up in court is another question:

Researchers who conduct oral history have no right to expect courts to respect confidentiality pledges made to interview subjects, according to a brief filed by the U.S. Justice Department on Friday.

The brief further asserts that academic freedom is not a defense to protect the confidentiality of such documents.

With the filing, the U.S. government has come down firmly on the side of the British government, which is fighting for access to oral history records at Boston College that authorities in the U.K. say relate to criminal investigations of murder, kidnapping and other violent crimes in Northern Ireland. The college has been trying to quash the British requests, arguing that those interviewed as part of an archive on the unrest in Northern Ireland were promised confidentiality during their lifetimes…

Many historians have been backing Boston College in the case. Clifford M. Kuhn, a historian at Georgia State University who is a past president of the Oral History Association, filed an affidavit on behalf of Boston College in which he said that if Britain’s request is granted, the field of oral history could be damaged.

This is part of a long-running battle involving researchers and courts. Some of this is covered in the 2000 piece “Don’t Talk to the Humans.” When I’ve used this particular article in class, students always ask why researchers don’t have the same legal rights regarding confidentiality that journalists have.

Whenever these sorts of cases pop up, it always seems like we get the slippery slope argument: if they start with oral histories, how long until there is no confidentiality in other forms of research? In the meantime, we’ll have to see whether this goes beyond just this one brief.

h/t Instapundit

Ethics and social science: grad student gets six-month sentence for studying animal rights groups

This is an update of a story I have been tracking for a while: a sociology graduate student who had studied animal rights groups was sentenced to six months in jail. Here is a brief summary of where the case now stands:

Scott DeMuth, a sociology graduate student at the University of Minnesota, was sentenced yesterday to 6 months in federal prison for his role in a 2006 raid on a Minnesota ferret farm. A judge in Davenport, Iowa, ordered that DeMuth be taken into custody immediately.

In 2009, DeMuth was charged with felony conspiracy in connection with a separate incident, a 2004 lab break-in at the University of Iowa that caused more than $400,000 in damage. DeMuth argued that anything he might know about the Iowa incident had been collected as part of his research on radical activist groups and was therefore protected by confidentiality agreements with his research subjects. A petition started by DeMuth’s graduate advisor, David Pellow, argued that the charges violated DeMuth’s academic freedom.

Last year, prosecutors offered to drop all charges related to the Iowa break-in if DeMuth would plead guilty to a lesser misdemeanor charge related to the ferret farm incident. DeMuth took the deal. No one has been convicted in the Iowa break-in.

This has been an interesting case to introduce when teaching research ethics to sociology and anthropology majors. Just how far should participant observation go? Couple this with another story, like Venkatesh knowing about possible crimes in Gang Leader for a Day, and a good conversation typically ensues.

However, this case does bring up some larger questions about how protected researchers and their subjects should be when carrying out their research. Should researchers have shield laws? How exactly do courts define “academic freedom” in cases like this?

The globalization of scientific research

A recent report from the United Nations suggests that while the West (and the United States in particular) still dominates scientific work, other countries are gaining ground. Here are some of the measures from the UNESCO report:

In 2007 Japan spent 3.4% of its GDP on R&D, America 2.7%, the European Union (EU) collectively 1.8% and China 1.4% (see chart 1). Many countries seeking to improve their global scientific standing want to increase these figures. China plans to push on to 2.5% and Barack Obama would like to nudge America up to 3%. The number of researchers has also grown everywhere. China is on the verge of overtaking both America and the EU in the quantity of its scientists. Each had roughly 1.5m researchers out of a global total of 7.2m in 2007…

One indicator of prowess is how much a country’s researchers publish. As an individual country, America still leads the world by some distance. Yet America’s share of world publications, at 28% in 2007, is slipping. In 2002 it was 31%. The EU’s collective share also fell, from 40% to 37%, whereas China’s has more than doubled to 10% and Brazil’s grew by 60%, from 1.7% of the world’s output to 2.7%…

UNESCO’s latest attempt to look at patents has therefore focused on the offices of America, Europe and Japan, as these are deemed of “high quality”. In these patent offices, America dominated, with 41.8% of the world’s patents in 2006, a share that had fallen only slightly over the previous four years. Japan had 27.9%, the EU 26.4%, South Korea 2.2% and China 0.5%.

Even though the United States still dominates a number of measures, UNESCO concluded Asia will be the “dominant scientific continent in the coming years.”

A couple of things are interesting here:

1. Even if jobs have left the United States for cheaper locales, the US still has advantages in scientific research. How long this advantage holds up remains to be seen.

2. These are just three possible measures of scientific output. Others, such as journal citations, could be used, but this set seems fairly effective for a quick look across several dimensions.

3. It is interesting to think about how science itself will change based on increased research roles in non-Western nations.

h/t Instapundit

What cities are the most conducive to scientific research?

A new study in Nature examines which cities are the best for scientific research. The article cites several measures to get at things like output and quality. Here are some of the findings:

-The top cities for number of articles produced: “Tokyo, London, Beijing, the San Francisco Bay Area, Paris and New York.”

-The top cities based on quality of research (measured as average citations per article; see the sketch after this list): “Boston and Cambridge, Massachusetts, come out on top — attracting more than twice as many citations per paper as the global average. US cities dominate the quality table, with only Cambridge, UK, breaking into the top 10. Cities with the most improved relative quality in the past decade include Austin, Texas, and Singapore City — which has moved from 15% below average to 22% above it. Beijing, however, is below par in the quality stakes: its papers in the five-year period ending 2008 attracted 63% of the global average-citation rate.”

-According to a sociologist, the three factors that lead to more research: “freedom, funding, and lifestyle.”
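For reference, the relative-quality figures quoted above boil down to a city’s average citations per paper expressed against the global average. A hypothetical sketch in Python (the function name and the numbers are mine, for illustration only, not from the Nature study):

```python
def relative_citation_impact(city_mean: float, global_mean: float) -> float:
    """Percent above (+) or below (-) the global average citation rate."""
    return (city_mean / global_mean - 1.0) * 100.0

# Illustrative only: a city averaging 6.1 citations per paper against a
# global average of 5.0 sits 22% above average, the kind of figure
# quoted for Singapore City above.
print(f"{relative_citation_impact(6.1, 5.0):+.0f}% relative to average")
```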

Several of the experts also caution that cities shouldn’t just throw money at research in the expectation that this will lead to significant wealth generated for the city.

I wonder how much of a role historical factors play in this. Once a city acquires a reputation for prestigious universities and research (think: Boston), how quickly could it lose its status if drastic things started to happen (such as the bankruptcy of Harvard and MIT)? It seems like certain cities gain a reputation or character, and that character creates an inertia that continues to attract new research facilities and scientists.