The rush and consternation in finalizing a manuscript for submission

I have gone through this process many times…and it still is not much fun. Here is what submitting a paper to an academic journal can look like:

  1. Reach the point where you feel you have said all there is to say, and said it well. Perhaps this comes in response to feedback from a previous submission, or from your own thinking and conversations. It may have been a quick turnaround or a lengthy period of contemplation and rewriting. Time to find the submission page for a journal.
  2. Go through the author guidelines for that particular journal. Even with commonly used bibliographic formats and some consistency in how papers are put together, there may be changes or small details to attend to. Formatting ensues.
  3. Time to submit the paper. Go through a process that looks similar across journals but might ask for slightly different information or in a different order. Get the details right and look over key parts of the paper again including the abstract and keywords. Approve your submission.

Time to sit back and wait. Will it make it past the editors? Linger in peer review? Come back with mixed reviews, get a revise and resubmit, or be accepted? In some ways, the publication process is just underway.

I understand why the process is what it is: each journal has its own approach, as does each publisher. The publishing system is meant to provide peer review for academic work, helping to ensure good research is published. Even going through the final steps for submission outlined above can help crystallize arguments and writing.

But, simplifying the process, even within publishers or within disciplines, could help researchers feel better about their submissions. Some of this cannot be changed; it is still a vulnerable point to send off a manuscript into the great unknown and to reviewers who may or may not like what is there. Some of it can be changed: the basic details are usually the same even across venues.

Just how many scientific studies are fraudulent?

I’m not sure whether these figures on how many scientific studies involve misconduct are high or low:

Although deception in science is rare, it’s probably more common than many people think. Surveys show that roughly 2 percent of researchers admit to behavior that would constitute misconduct—the big three sins are fabrication of data, fraud, and plagiarism (other forms can include many other actions, including failure to get ethics approval for studies that involve humans). And that’s just those who admit to it—a recent analysis found evidence of problematic figures and images in nearly 4 percent of studies with those graphics, a figure that had quadrupled since 2000.

Here is part of the abstract from the first study cited above (the 2% figure):

A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self reports surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.

Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.
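
The “pooled weighted average” in the abstract is a standard meta-analytic move: weight each survey’s proportion by the inverse of its variance, so larger and more precise surveys count for more. A minimal fixed-effect sketch with made-up survey numbers (none of these are the actual studies in the cited meta-analysis):

```python
import math

# Hypothetical survey results: (respondents admitting misconduct, sample size).
# Illustrative numbers only, not the real studies.
surveys = [(9, 500), (4, 300), (17, 800), (6, 450)]

weights, estimates = [], []
for k, n in surveys:
    p = k / n
    var = p * (1 - p) / n          # binomial variance of the proportion
    weights.append(1 / var)         # inverse-variance weight
    estimates.append(p)

pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1 / sum(weights))   # standard error of the pooled estimate
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

print(f"pooled = {pooled:.2%}, 95% CI = ({ci[0]:.2%}, {ci[1]:.2%})")
```

The actual study also ran meta-regressions to see how survey design (wording, delivery mode, self vs. colleague reports) shifted the estimates, which a simple pooling like this does not capture.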

I hope some of the efforts by researchers to address this – through a variety of means – are successful.

Take a look at the rest of the article as well: just as individual scholars feel a lot of pressure to commit fraud, big schools have a lot of money on the line with certain researchers and may not want to admit possible issues.

What should a sociology journal do if it found a 1 million pound surplus?

I don’t know how much money many sociology journals have on hand but one British journal recently discovered a sizable surplus:

A prominent journal accumulated a surplus of more than £1 million unbeknown to most of its board, a former board member has revealed.

The Sociological Review is one of the UK’s top sociology journals. The fees paid by Wiley-Blackwell for the rights to publish it led it to amass funds in excess of £1.2 million by 2013. However, according to Pnina Werbner, emeritus professor of anthropology at Keele University, she was unaware of this during her time on the board between 2008 and 2013…

Professor Savage said that the journal had “an ambitious plan” to use its surplus to “better support the discipline of sociology, as well as the journal itself”. But he warned that tax liabilities might reduce that surplus “significantly” if the journal’s application for charitable status were rejected.

For some reason, this reminds me of local governmental bodies that sometimes debate returning surpluses to their constituents. I don’t imagine reviewers or subscribers will be getting bonus checks anytime soon. But, it does appear to be an opportunity for an influential journal to do something unique.

Many top-cited papers initially rejected by good journals

A new study finds that top-cited scientific studies are often rejected, sometimes without even going out for peer review:

Using subsequent citations as a proxy for quality, the team found that the journals were good at weeding out dross and publishing solid research. But they failed — quite spectacularly — to pick up the papers that went on to garner the most citations.

“The shocking thing to me was that the top 14 papers had all been rejected, one of them twice,” says Kyle Siler, a sociologist at the University of Toronto in Canada, who led the study. The work was published on 22 December in the Proceedings of the National Academy of Sciences.

But the team also found that 772 of the manuscripts were ‘desk rejected’ by at least one of the journals — meaning they were not even sent out for peer review — and that 12 out of the 15 most-cited papers suffered this fate. “This raises the question: are they scared of unconventional research?” says Siler. Given the time and resources involved in peer review, he suggests, top journals that accept just a small percentage of the papers they receive can afford to be risk averse.

“The market dynamics that are at work right now tend to a certain blandness,” agrees Michèle Lamont, a sociologist at Harvard University in Cambridge, Massachusetts, whose book How Professors Think explores how academics assess the quality of others’ work. “And although editors may be well informed about who to turn to for reviews, they don’t necessarily have a good nose for what is truly creative.”

The gatekeepers seem to be exercising their power. Academic disciplines usually have clear boundaries about what is good or bad research and the journals help to draw these lines.

An alternative explanation: the rejections authors receive help them shape their studies in productive ways, making the papers more likely to be accepted by later journals. Testing this would require expanding the study’s methodology to cover the whole process: how do authors respond to rejection, and what happens at the next steps in the publishing cycle?

“Sociology’s most cited papers by decade”

Kieran Healy looks at the patterns among the most cited sociology papers:

Again, we’re looking at the Top 10 most-cited papers that were published in the 1950s, 1960s, and so on. This means that while the eleventh most-cited paper from the 1980s might outscore the fourth most-cited paper from the 1950s in terms of cumulative citations, the former does not appear here whereas the latter does. There are some striking patterns. One thing to notice is the rise of articles from the Annual Review of Sociology in the 2000s. Another is the increasing heterogeneity of outlets. Of the top ten papers written in the 1950s or before, seven appear in the American Sociological Review, two in the American Journal of Sociology, and one in Social Forces. (That is SF’s only entry in the list, as it happens.) ASR and AJS rule the 1960s, too. After that, though, there’s more variety. Strikingly, for the 2000s only one of the ten most-cited articles is from ASR and none is from AJS—a complete reversal of the pattern of the ‘50s and ‘60s. You can also see the long shadow of post-war university expansion and “Boomer Sociology”. The most-cited work from before 1970 is not nearly as widely cited as the most-cited work from the ‘70s and ‘80s, despite having been around longer. The drop-off in citation numbers in the Top 10s from the ‘90s and ‘00s is to be expected as those papers are younger. American dominance—or insularity—is also evident, as the only non-U.S. journal to make any of the lists is Sociology, and that was in the 1970s.

Turning to the subject matter of the papers, I think you can see the importance of articles whose main contribution is either a methodological technique or a big idea. There are fewer papers where a specific empirical finding is the main contribution. If you want to hang in there as one of the most-remembered papers from your decade, it seems, give people a good concept to work with or a powerful tool to use. Of course, it’s also true that people tend to have a lot of unread books lying around the house and unused drill attachments in the garage.

It is tempting to connect these two patterns in the data. To speculate: ASR and AJS remain amongst the journals with the very highest impact factors in the discipline. Publishing in them has become more important than ever to people’s careers. Yet the most-cited papers of the last two decades appeared elsewhere. These journals demand the papers they publish meet high standards in methods and ideally also innovate theoretically, along with making an empirical contribution to knowledge. That, together with a more competitive and professionalized labor market, produces very high-quality papers. But perhaps it also makes these journals less likely than in the past to publish purely technical or purely theoretical pieces, even though some papers of that sort will in the end have the most influence on the field.

Outlets like Sociological Methods and Research and Sociological Methodology now publish articles that might in the past have appeared in more general journals. Similarly, big-idea pieces that might once have gotten in at ASR or AJS may now be more likely to find a home at places like Theory and Society or Gender and Society. At the same time—perhaps because the state of theory in the field is more confused than that of methods—theoretical papers may also have been partially displaced by ARS articles that make an argument for some idea or approach, but under the shield of a topical empirical literature review. In a relatively fragmented field, it’s also easier for methodological papers to be more widely cited across a range of substantive areas than it is for a theory paper to do the same.

These seem like reasonable arguments to me. It is also interesting to see that a few subfields attract more attention, like theory and methodology but also social networks, social movements, gender, and cultural sociology, while other subfields are not among the most cited.

Most cited works in sociology journals in 2013

Here is an analysis of the 45 most cited works in sociology journals last year. The top ten:

1. Bourdieu, Pierre. Distinction: A social critique of the judgement of taste. Harvard University Press, 1984.
2. Glaser, Barney G., and Anselm L. Strauss. The discovery of grounded theory: Strategies for qualitative research. Transaction Books, 2009.
3. Putnam, Robert D. Bowling alone: The collapse and revival of American community. Simon and Schuster, 2001.
4. Raudenbush, Stephen W. Hierarchical linear models: Applications and data analysis methods. Vol. 1. Sage, 2002.
5. Massey, Douglas S. and Nancy Denton. American apartheid: Segregation and the making of the underclass. Harvard University Press, 1993.
6. Goffman, Erving. The presentation of self in everyday life. Garden City, NY (1959).
7. Steensland, Brian, Lynn D. Robinson, W. Bradford Wilcox, Jerry Z. Park, Mark D. Regnerus, and Robert D. Woodberry. “The measure of American religion: Toward improving the state of the art.” Social Forces 79, no. 1 (2000): 291-318.
8. Swidler, Ann. “Culture in action: Symbols and strategies.” American Sociological Review (1986): 273-286.
9. McPherson, Miller, Lynn Smith-Lovin, and James M. Cook. “Birds of a feather: Homophily in social networks.” Annual Review of Sociology (2001): 415-444.
10. Granovetter, Mark S. “The strength of weak ties.” American Journal of Sociology (1973): 1360-1380.

No major surprises here: several important works on methods (hierarchical linear modeling, grounded theory, measuring religion), several dealing with social networks, and key works in important subfields ranging from the sociology of culture to taste and social class. Are they over-cited, or are these the sort of influential works sociologists will still recognize years from now?

It might also be interesting to see what sociology works are cited the most outside of sociology journals. I assume Bourdieu, Putnam, and Granovetter are cited frequently elsewhere but what about Goffman, Massey and Denton, and Swidler?

Science problem: study says there is not enough information in methods sections of science articles to replicate

A new study suggests the methods sections in science articles are incomplete, making it very difficult to replicate the studies:

Looking at 238 recently published papers, pulled from five fields of biomedicine, a team of scientists found that they could uniquely identify only 54 percent of the research materials, from lab mice to antibodies, used in the work. The rest disappeared into the terse fuzz and clipped descriptions of the methods section, the journal standard that ostensibly allows any scientist to reproduce a study.

“Our hope would be that 100 percent of materials would be identifiable,” said Nicole A. Vasilevsky, a project manager at Oregon Health & Science University, who led the investigation.

The group quantified a finding already well known to scientists: No one seems to know how to write a proper methods section, especially when different journals have such varied requirements. Those flaws, by extension, may make reproducing a study more difficult, a problem that has prompted, most recently, the journal Nature to impose more rigorous standards for reporting research.

“As researchers, we don’t entirely know what to put into our methods section,” said Shreejoy J. Tripathy, a doctoral student in neurobiology at Carnegie Mellon University, whose laboratory served as a case study for the research team. “You’re supposed to write down everything you need to do. But it’s not exactly clear what we need to write down.”

A new standard could be adopted across journals and subfields: enough information has to be given in the methods section for another scientist to replicate the study. Another advantage of this might be that it pushes authors to try to read their paper from the perspective of outsiders who are looking at the study for the first time.

I wonder how well sociology articles would fare in this analysis. Knowing everything needed for replication can get voluminous or technical, depending on the work that went into collecting the data and then getting it ready for analysis. There are a number of choices along the way that add up.

Sociologist receives award in part for one article being cited over 24,000 times

Mark Granovetter’s 1973 article “The Strength of Weak Ties” is a sociological classic and still is cited frequently in top sociology journals (see 2012 data here). This impressive number of citations contributed to the naming of Granovetter as the recipient of an award:

Cited over 24,000 times, Granovetter’s 1973 paper “The Strength of Weak Ties” is a social science classic and a milestone in network theory. Our close friends are strongly in touch with us and each other, he wrote, but our acquaintances – weak ties – are crucial bridges to other densely knit clumps of close friends. The more weak ties we have, the more in touch we are with ideas, fashions, job openings and whatever else is going on in diverse and far-flung communities.

The award honors the late Everett M. Rogers, a former associate dean at the University of Southern California’s Annenberg School for Communication and Journalism and an influential communication scholar whose Diffusion of Innovation is the second-most cited book in the social sciences.  Presented since 2007 on behalf of USC Annenberg by its Norman Lear Center, the award recognizes outstanding scholars and practitioners whose work has contributed path-breaking insights in areas of Rogers’s legacy.

At the USC Annenberg School on Wednesday, September 18 at 12 noon, Granovetter will present “The Strength of Weak Ties” Revisited.  He will discuss how he came to write it; where it fits in the history of social network analysis; how its argument has held up over the years; and its significance in recent social revolutions, where it’s often been claimed that social networks are at the core of the new political developments.  The event is free and open to the public but RSVP is required. (RSVP is available online at:

There is no doubt that being cited over 24,000 times is impressive. Granovetter’s work has been utilized in multiple disciplines and came at the forefront of an explosion of research on social networks and their effects.

At the same time, the press release makes a big deal about citations twice while also highlighting Granovetter’s specific findings. Which is more important in the world of science today: the number of citations, a measure of importance, or the actual findings and how they pushed science forward? This award can contribute to existing debates about citations as a measure. What exactly do they tell us, and should we recognize those who are cited the most?

Patterns in “the most cited works in sociology, 2012 edition”

According to Neal at Scatterplot, here are the most cited sociological books and articles of 2012:


This is an interesting list. Three of the patterns in the data:

So, one in 33 articles cites Distinction. The majority at the top of the list are books, along with a pair each from AJS, ASR and the Annual Review, and one article from Social Forces. The authors and titles are truncated by Web of Science, so don’t blame me. Remember that the list only counts citations in this group of sociology journals, so being famous in other worlds doesn’t get you on the list.

Fun fact: 2/3 of things that were cited last year were only cited once, and 95% of things cited were cited less than five times. And, unless one of your articles was cited nine or more times in one of these journals last year, you can consider yourself, like me, one of the 99%.

One thing that struck me was how old everything on this top list was. The median publication year in the top 100 was 1992. Of the top 100, only one piece was published in the last five years.
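
The distributional claims in the quote (share of works cited once, share cited fewer than five times, median publication year of the top works) are easy to reproduce from a citation dump. A sketch with hypothetical records standing in for Web of Science output:

```python
from statistics import median

# Hypothetical (work, publication_year, citations_last_year) records --
# illustrative stand-ins, not the actual Web of Science data.
records = [
    ("A", 1984, 120), ("B", 1973, 95), ("C", 2001, 40),
    ("D", 1992, 4), ("E", 1999, 1), ("F", 2005, 1),
    ("G", 1959, 1), ("H", 1986, 2), ("I", 2010, 1),
]

counts = [c for _, _, c in records]
cited_once = sum(1 for c in counts if c == 1) / len(counts)
under_five = sum(1 for c in counts if c < 5) / len(counts)

# Median publication year among the most-cited works (top 5 in this toy set).
top = sorted(records, key=lambda r: r[2], reverse=True)[:5]
median_year = median(y for _, y, _ in top)

print(f"cited once: {cited_once:.0%}, cited <5 times: {under_five:.0%}, "
      f"median year of top works: {median_year}")
```

Even in a toy sample like this, the long tail shows up: most cited items are cited once or twice, while a handful of older works dominate the counts.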

A few other things stuck out to me from this list:

1. The list involves a number of big name sociologists. I assume they became big names because of the quality of their work, such as in the pieces cited here, but how much could it be that the works are cited more because they came from big names? There is some interesting work that could be done here with individual pieces to look at patterns of citations and how works become well-known.

2. There are several more methodological pieces on the list. The Raudenbush and Bryk 2002 book involves hierarchical linear modeling, a technique that uses multiple equations to nest individual cases within larger groups (like students within schools in the sociology of education). The Strauss and Glaser 1967 book is about the basics of grounded theory, a technique that has been adopted across a variety of qualitative studies. The Steensland et al. 2000 piece is about developing the measure RELTRAD which more effectively categorizes Americans into religious traditions. These methodological works have wide applications and were influential across a variety of subfields.

3. Could we interpret a list like this as one that tells us the “classic works” of sociology today? Could we hand a list like this to undergraduate majors or graduate students and tell them that this is what they need to know to understand the broader field? One way to check on this would be to compare the top cited works year to year to see how much the list changes and how consistently important these works are. Presumably, new works will be added to the list over time but this may not happen quickly.

Activist charged for downloading millions of JSTOR articles

Many academics use databases like JSTOR to find articles from academic journals. However, one user violated the terms of service by downloading millions of articles and is now being charged by the federal government:

Swartz, the 25-year-old executive director of Demand Progress, has a history of downloading massive data sets, both to use in research and to release public domain documents from behind paywalls. He surrendered in July 2011, remains free on bond and faces dozens of years in prison and a $1 million fine if convicted.

Like last year’s original grand jury indictment on four felony counts, (.pdf) the superseding indictment (.pdf) unveiled Thursday accuses Swartz of evading MIT’s attempts to kick his laptop off the network while downloading millions of documents from JSTOR, a not-for-profit company that provides searchable, digitized copies of academic journals that are normally inaccessible to the public…

“JSTOR authorizes users to download a limited number of journal articles at a time,” according to the latest indictment. “Before being given access to JSTOR’s digital archive, each user must agree and acknowledge that they cannot download or export content from JSTOR’s computer servers with automated programs such as web robots, spiders, and scrapers. JSTOR also uses computerized measures to prevent users from downloading an unauthorized number of articles using automated techniques.”

MIT authorizes guests to use the service, which was the case with Swartz, who at the time was a fellow at Harvard’s Safra Center for Ethics.

It sounds like there is some disconnect here: services like JSTOR want to maintain some control over the academic content they provide even as they exist to help researchers find printed scholarly articles. Services like JSTOR can make big money by collating journal articles and requiring libraries to pay for access. Thus, someone like Swartz could download a lot of the articles and then avoid paying for or using JSTOR down the road (though academic users are primarily paying through institutions who pass the costs along to users). But what is “a limited number of journal articles at a time”? Using an automated program is clearly out according to the terms of service but what if a team of undergraduates banded together, downloaded a similar number of articles, and pooled their downloads?

If we are indeed headed toward a world of “big data,” which presumably would include the thousands of scholarly articles published each year, we are likely in for some interesting battles in a number of areas over who gets to control, download, and access this data.

Another thought: does going to open access academic journals eliminate this issue?