Just how many scientific studies are fraudulent?

I’m not sure whether these figures on how many scientific studies involve misconduct should be considered high or low:

Although deception in science is rare, it’s probably more common than many people think. Surveys show that roughly 2 percent of researchers admit to behavior that would constitute misconduct—the big three sins are fabrication of data, fraud, and plagiarism (other forms can include many other actions, including failure to get ethics approval for studies that involve humans). And that’s just those who admit to it—a recent analysis found evidence of problematic figures and images in nearly 4 percent of studies with those graphics, a figure that had quadrupled since 2000.

Here is part of the abstract from the first study cited above (the 2% figure):

A pooled weighted average of 1.97% (N = 7, 95% CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once – a serious form of misconduct by any standard – and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self reports surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.

Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.
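
For readers curious what a “pooled weighted average” with a confidence interval involves, here is a minimal sketch of one standard approach: fixed-effect, inverse-variance pooling of survey proportions on the logit scale. The survey counts below are invented for illustration, and the quoted meta-analysis may well have used a different weighting scheme:

```python
# Minimal sketch of pooling misconduct-admission rates across surveys.
# The (admitted, surveyed) counts are made up for illustration; they are
# NOT the data behind the 1.97% figure quoted above.
import math

surveys = [(4, 250), (7, 320), (2, 180), (11, 600), (5, 410)]

weighted_logits = 0.0
total_weight = 0.0
for admitted, n in surveys:
    p = admitted / n
    logit = math.log(p / (1 - p))            # work on the logit scale
    var = 1 / admitted + 1 / (n - admitted)  # approximate variance of the logit
    weight = 1 / var                         # inverse-variance weight
    weighted_logits += weight * logit
    total_weight += weight

pooled_logit = weighted_logits / total_weight
se = math.sqrt(1 / total_weight)

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# back-transform the pooled estimate and its 95% confidence bounds
est = inv_logit(pooled_logit)
low = inv_logit(pooled_logit - 1.96 * se)
high = inv_logit(pooled_logit + 1.96 * se)
print(f"pooled admission rate: {est:.2%} (95% CI {low:.2%} to {high:.2%})")
```

The logit transform keeps the pooled estimate and its interval between 0 and 100%, which matters for proportions this small.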

I hope some of the efforts by researchers to address this – through a variety of means – are successful.

Take a look at the rest of the article as well: just as individual scholars face strong pressures that can push them toward fraud, big schools have a lot of money on the line with certain researchers and may not want to admit possible issues.

What should a sociology journal do if it found a £1 million surplus?

I don’t know how much money most sociology journals have on hand, but one British journal recently discovered a sizable surplus:

A prominent journal accumulated a surplus of more than £1 million unbeknown to most of its board, a former board member has revealed.

The Sociological Review is one of the UK’s top sociology journals. The fees paid by Wiley-Blackwell for the rights to publish it led it to amass funds in excess of £1.2 million by 2013. However, according to Pnina Werbner, emeritus professor of anthropology at Keele University, she was unaware of this during her time on the board between 2008 and 2013…

Professor Savage said that the journal had “an ambitious plan” to use its surplus to “better support the discipline of sociology, as well as the journal itself”. But he warned that tax liabilities might reduce that surplus “significantly” if the journal’s application for charitable status were rejected.

For some reason, this reminds me of local governmental bodies that sometimes debate returning surpluses to their constituents. I don’t imagine reviewers or subscribers will be getting bonus checks anytime soon. But it does appear to be an opportunity for an influential journal to do something unique.

Many top-cited papers initially rejected by good journals

A new study finds that top-cited scientific studies are often rejected, sometimes without even going out for peer review:

Using subsequent citations as a proxy for quality, the team found that the journals were good at weeding out dross and publishing solid research. But they failed — quite spectacularly — to pick up the papers that went on to garner the most citations.

“The shocking thing to me was that the top 14 papers had all been rejected, one of them twice,” says Kyle Siler, a sociologist at the University of Toronto in Canada, who led the study. The work was published on 22 December in the Proceedings of the National Academy of Sciences.

But the team also found that 772 of the manuscripts were ‘desk rejected’ by at least one of the journals — meaning they were not even sent out for peer review — and that 12 out of the 15 most-cited papers suffered this fate. “This raises the question: are they scared of unconventional research?” says Siler. Given the time and resources involved in peer review, he suggests, top journals that accept just a small percentage of the papers they receive can afford to be risk averse.

“The market dynamics that are at work right now tend to a certain blandness,” agrees Michèle Lamont, a sociologist at Harvard University in Cambridge, Massachusetts, whose book How Professors Think explores how academics assess the quality of others’ work. “And although editors may be well informed about who to turn to for reviews, they don’t necessarily have a good nose for what is truly creative.”

The gatekeepers seem to be exercising their power. Academic disciplines usually have clear boundaries around what counts as good or bad research, and the journals help to draw these lines.

An alternative explanation: the rejections authors receive help them reshape their studies in productive ways, which then makes the papers more likely to be accepted by later journals. Testing this would require expanding the methodology of this study to cover the whole process: how do authors respond to rejection, and what happens in the next steps of the publishing cycle?

“Sociology’s most cited papers by decade”

Kieran Healy looks at the patterns among the most cited sociology papers:

Again, we’re looking at the Top 10 most-cited papers that were published in the 1950s, 1960s, and so on. This means that while the eleventh most-cited paper from the 1980s might outscore the fourth most-cited paper from the 1950s in terms of cumulative citations, the former does not appear here whereas the latter does. There are some striking patterns. One thing to notice is the rise of articles from the Annual Review of Sociology in the 2000s. Another is the increasing heterogeneity of outlets. Of the top ten papers written in the 1950s or before, seven appear in the American Sociological Review, two in the American Journal of Sociology, and one in Social Forces. (That is SF’s only entry in the list, as it happens.) ASR and AJS rule the 1960s, too. After that, though, there’s more variety. Strikingly, for the 2000s only one of the ten most-cited articles is from ASR and none is from AJS—a complete reversal of the pattern of the ‘50s and ‘60s. You can also see the long shadow of post-war university expansion and “Boomer Sociology”. The most-cited work from before 1970 is not nearly as widely cited as the most-cited work from the ‘70s and ‘80s, despite having been around longer. The drop-off in citation numbers in the Top 10s from the ‘90s and ‘00s is to be expected as those papers are younger. American dominance—or insularity—is also evident, as the only non-U.S. journal to make any of the lists is Sociology, and that was in the 1970s.

Turning to the subject matter of the papers, I think you can see the importance of articles whose main contribution is either a methodological technique or a big idea. There are fewer papers where a specific empirical finding is the main contribution. If you want to hang in there as one of the most-remembered papers from your decade, it seems, give people a good concept to work with or a powerful tool to use. Of course, it’s also true that people tend to have a lot of unread books lying around the house and unused drill attachments in the garage.

It is tempting to connect these two patterns in the data. To speculate: ASR and AJS remain amongst the journals with the very highest impact factors in the discipline. Publishing in them has become more important than ever to people’s careers. Yet the most-cited papers of the last two decades appeared elsewhere. These journals demand the papers they publish meet high standards in methods and ideally also innovate theoretically, along with making an empirical contribution to knowledge. That, together with a more competitive and professionalized labor market, produces very high-quality papers. But perhaps it also makes these journals less likely than in the past to publish purely technical or purely theoretical pieces, even though some papers of that sort will in the end have the most influence on the field.

Outlets like Sociological Methods and Research and Sociological Methodology now publish articles that might in the past have appeared in more general journals. Similarly, big-idea pieces that might once have gotten in at ASR or AJS may now be more likely to find a home at places like Theory and Society or Gender and Society. At the same time—perhaps because the state of theory in the field is more confused than that of methods—theoretical papers may also have been partially displaced by ARS articles that make an argument for some idea or approach, but under the shield of a topical empirical literature review. In a relatively fragmented field, it’s also easier for methodological papers to be more widely cited across a range of substantive areas than it is for a theory paper to do the same.

These seem like reasonable arguments to me. It is also interesting to see that a few subfields attract more attention, such as theory and methodology but also social networks, social movements, gender, and cultural sociology, while other subfields are absent from the most-cited lists.
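
For the curious, the decade-by-decade tabulation Healy describes is straightforward to reproduce in principle. Here is a minimal sketch in pandas; the file name and columns (title, journal, year, citations) are my own assumptions, not Healy’s actual data or code:

```python
# Sketch: ten most-cited papers per decade from a flat citation file.
# "sociology_citations.csv" and its columns are hypothetical.
import pandas as pd

papers = pd.read_csv("sociology_citations.csv")
papers["decade"] = (papers["year"] // 10) * 10

top10_by_decade = (
    papers.sort_values("citations", ascending=False)
          .groupby("decade")
          .head(10)  # ten most-cited papers within each decade
          .sort_values(["decade", "citations"], ascending=[True, False])
)

# Count each decade's top-ten papers by journal (e.g., ASR vs. AJS over time)
print(top10_by_decade.groupby(["decade", "journal"]).size())
```

Note that, as Healy points out, ranking within decades deliberately ignores cross-decade comparisons: an eleventh-place paper from the 1980s can outscore a fourth-place paper from the 1950s and still not appear.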

Most cited works in sociology journals in 2013

Here is an analysis of the 45 most cited works in sociology journals last year. The top ten:

1. Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Harvard University Press, 1984.
2. Glaser, Barney G., and Anselm L. Strauss. The Discovery of Grounded Theory: Strategies for Qualitative Research. Transaction Books, 2009.
3. Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. Simon and Schuster, 2001.
4. Raudenbush, Stephen W. Hierarchical Linear Models: Applications and Data Analysis Methods. Vol. 1. Sage, 2002.
5. Massey, Douglas S., and Nancy Denton. American Apartheid: Segregation and the Making of the Underclass. Harvard University Press, 1993.
6. Goffman, Erving. The Presentation of Self in Everyday Life. Garden City, NY: Doubleday, 1959.
7. Steensland, Brian, Lynn D. Robinson, W. Bradford Wilcox, Jerry Z. Park, Mark D. Regnerus, and Robert D. Woodberry. “The Measure of American Religion: Toward Improving the State of the Art.” Social Forces 79, no. 1 (2000): 291-318.
8. Swidler, Ann. “Culture in Action: Symbols and Strategies.” American Sociological Review 51, no. 2 (1986): 273-286.
9. McPherson, Miller, Lynn Smith-Lovin, and James M. Cook. “Birds of a Feather: Homophily in Social Networks.” Annual Review of Sociology 27 (2001): 415-444.
10. Granovetter, Mark S. “The Strength of Weak Ties.” American Journal of Sociology 78, no. 6 (1973): 1360-1380.

No major surprises here: several important works on methods (hierarchical linear modeling, grounded theory, measuring religion), several dealing with social networks, and key works in important subfields ranging from the sociology of culture to taste and social class. Are they over-cited, or are they the sort of influential works sociologists will still recognize years from now?

It might also be interesting to see what sociology works are cited the most outside of sociology journals. I assume Bourdieu, Putnam, and Granovetter are cited frequently elsewhere but what about Goffman, Massey and Denton, and Swidler?

Science problem: study says methods sections of science articles lack enough information for replication

A new study suggests the methods sections in science articles are incomplete, making it very difficult to replicate the studies:

Looking at 238 recently published papers, pulled from five fields of biomedicine, a team of scientists found that they could uniquely identify only 54 percent of the research materials, from lab mice to antibodies, used in the work. The rest disappeared into the terse fuzz and clipped descriptions of the methods section, the journal standard that ostensibly allows any scientist to reproduce a study.

“Our hope would be that 100 percent of materials would be identifiable,” said Nicole A. Vasilevsky, a project manager at Oregon Health & Science University, who led the investigation.

The group quantified a finding already well known to scientists: No one seems to know how to write a proper methods section, especially when different journals have such varied requirements. Those flaws, by extension, may make reproducing a study more difficult, a problem that has prompted, most recently, the journal Nature to impose more rigorous standards for reporting research.

“As researchers, we don’t entirely know what to put into our methods section,” said Shreejoy J. Tripathy, a doctoral student in neurobiology at Carnegie Mellon University, whose laboratory served as a case study for the research team. “You’re supposed to write down everything you need to do. But it’s not exactly clear what we need to write down.”

A new standard could be adopted across journals and subfields: the methods section must give enough information for another scientist to replicate the study. Another advantage of such a standard is that it would push authors to read their own paper from the perspective of outsiders looking at the study for the first time.

I wonder how well sociology articles would fare in this analysis. Documenting everything needed for replication can get voluminous or technical, depending on the work that went into collecting the data and then getting it ready for analysis; there are a number of choices along the way that add up.
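
One way to picture such a standard in practice is a small, machine-checkable record attached to each article. This is only a sketch; the fields are my guesses at what a sociology methods section would need, not an established schema:

```python
# Hypothetical replication checklist for a sociology article. The fields
# are illustrative, not a real journal or community standard.
from dataclasses import dataclass, field

@dataclass
class MethodsRecord:
    data_source: str                  # e.g., survey name, wave, and version
    sample_criteria: str              # inclusion/exclusion rules and final N
    variables: list[str] = field(default_factory=list)  # exact wordings/codings
    estimation: str = ""              # model and software, with version numbers
    code_archive: str = ""            # URL or DOI for replication scripts

    def replication_ready(self) -> bool:
        """Crude check: every field must be filled in before submission."""
        return all([self.data_source, self.sample_criteria,
                    self.variables, self.estimation, self.code_archive])
```

A journal could decline to send a paper to reviewers until a record like this validates; the hard part, as the quoted researchers note, is agreeing on what the fields should be.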

Sociologist receives award in part for one article being cited over 24,000 times

Mark Granovetter’s 1973 article “The Strength of Weak Ties” is a sociological classic and is still cited frequently in top sociology journals (see 2012 data here). This impressive citation count contributed to Granovetter being named the recipient of an award:

Cited over 24,000 times, Granovetter’s 1973 paper “The Strength of Weak Ties” is a social science classic and a milestone in network theory. Our close friends are strongly in touch with us and each other, he wrote, but our acquaintances – weak ties – are crucial bridges to other densely knit clumps of close friends. The more weak ties we have, the more in touch we are with ideas, fashions, job openings and whatever else is going on in diverse and far-flung communities.

The award honors the late Everett M. Rogers, a former associate dean at the University of Southern California’s Annenberg School for Communication and Journalism and an influential communication scholar whose Diffusion of Innovations is the second-most cited book in the social sciences. Presented since 2007 on behalf of USC Annenberg by its Norman Lear Center, the award recognizes outstanding scholars and practitioners whose work has contributed path-breaking insights in areas of Rogers’s legacy.

At the USC Annenberg School on Wednesday, September 18 at 12 noon, Granovetter will present “The Strength of Weak Ties” Revisited.  He will discuss how he came to write it; where it fits in the history of social network analysis; how its argument has held up over the years; and its significance in recent social revolutions, where it’s often been claimed that social networks are at the core of the new political developments.  The event is free and open to the public but RSVP is required. (RSVP is available online at: http://bit.ly/189ayDM)

There is no doubt that being cited over 24,000 times is impressive. Granovetter’s work has been utilized in multiple disciplines and came at the forefront of an explosion of research on social networks and their effects.

At the same time, the press release makes a big deal of the citation count twice while also highlighting Granovetter’s specific findings. Which matters more in the world of science today: the number of citations, a measure of importance, or the actual findings and how they pushed science forward? This award can contribute to existing debates about citations as a measure. What exactly do they tell us, and should we recognize those who are cited the most?
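
For anyone who has not encountered Granovetter’s argument, a toy graph makes it concrete: two dense friendship clusters joined by a single acquaintance tie. The example below is invented and uses the networkx package:

```python
# Toy illustration of "The Strength of Weak Ties": the lone acquaintance
# tie is the only bridge between two close-knit clusters.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c")])  # close-knit cluster 1
G.add_edges_from([("x", "y"), ("y", "z"), ("x", "z")])  # close-knit cluster 2
G.add_edge("c", "x")                                    # the lone weak tie

# The weak tie carries all shortest paths between the clusters...
betweenness = nx.edge_betweenness_centrality(G)
bridge = max(betweenness, key=betweenness.get)
print(bridge, betweenness[bridge])

# ...and removing it cuts off the flow of ideas, fashions, and job leads.
G.remove_edge(*bridge)
print(nx.is_connected(G))  # False
```

This is the intuition behind the quoted description: the more weak ties we have, the more distinct clusters we can reach.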