Non-fiction books can have limited fact-checking, no peer review

An example of a significant misinterpretation of survey data in a recent book provides a reminder about reading “facts”:

There are a few major lessons here. The first is that books are not subject to peer review, and in the typical case not even subject to fact-checking by the publishers — often they put responsibility for fact-checking on the authors, who may vary in how thoroughly they conduct such fact-checks and in whether they have the expertise to notice errors in interpreting studies, like Wolf’s or Dolan’s.

The second, Kimbrough told me, is that in many respects we got lucky in the Dolan case. Dolan was using publicly available data, which meant that when Kimbrough doubted his claims, he could look up the original data himself and check Dolan’s work. “It’s good this work was done using public data,” Kimbrough told me, “so I’m able to go pull the data and look into it and see, ‘Oh, this is clearly wrong.’”…

Book-publishing culture similarly needs to change to address that first problem. Books often go to print with less fact-checking than an average Vox article, and at hundreds of pages long, that almost always means several errors. The recent high-profile cases where these errors have been serious, embarrassing, and highly public might create enough pressure to finally change that.

In the meantime, don’t trust shocking claims with a single source, even if they’re from a well-regarded expert. It’s all too easy to misread a study, and all too easy for those errors to make it all the way to print.

These are good lessons, particularly the one in the last paragraph above: shocking or even surprising statistics are worth checking against the underlying data or against other sources. After all, it is not that hard for a mutant statistic to spread.

Unfortunately, the burden of correctly interpreting data continues to get pushed down the chain to readers and consumers. When I read articles or books in 2019, I need to be fairly skeptical of what I am reading. This is hard to do given (1) the glut of information we all face (so many sources!) and (2) the need to know how to be skeptical of information. This is why it is easy to fall into sorting sources into camps we trust versus ones we do not. At the same time, knowing how statistics and data work goes a long way toward questioning information. In the main example above, the interpretation issue came down to how the survey questions were asked. An average reader of the book would have little basis for questioning the survey data collection process, let alone the veracity of the claim. It took an academic who works with the same dataset to question the interpretation.
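To make the point concrete, here is a minimal sketch, using entirely made-up numbers and a hypothetical survey coding scheme (not the actual dataset from the story), of how the same raw responses can support very different headline claims depending on what a response code is taken to mean:

```python
# Hypothetical survey coding (illustrative only):
#   1 = married, spouse present in household
#   2 = married, spouse absent from household
#   3 = unmarried
responses = [1, 1, 2, 1, 3, 2, 1, 3, 1, 2]

# The arithmetic is identical either way: share of married
# respondents whose record is coded "spouse absent".
married = [r for r in responses if r in (1, 2)]
share_spouse_absent = sum(1 for r in married if r == 2) / len(married)

# Correct reading: "absent" means the spouse lives elsewhere
# (a relatively unusual household situation).
# Misreading: treating "absent" as "spouse not nearby at the moment"
# turns the same number into a sweeping claim about everyday life.
print(f"Coded 'spouse absent': {share_spouse_absent:.0%}")
```

The number itself is not wrong in either reading; the error lives entirely in what the analyst assumes the code measures, which is exactly the kind of mistake a reader cannot catch without knowing how the data were collected.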

To do this individual fact-checking better (and to do it better at a structural level before books are published), we need to combat innumeracy. Readers need to be able to understand data: how it is collected, how it is interpreted, and how it ends up in print or in the public arena. This usually does not require deep knowledge of particular methods, but it does require some familiarity with how data becomes data. Likewise, being cynical about all data and statistics is not the answer; readers need to know when data is good enough.

What should a sociology journal do if it found a £1 million surplus?

I don’t know how much money most sociology journals have on hand, but one British journal recently discovered a sizable surplus:

A prominent journal accumulated a surplus of more than £1 million unbeknown to most of its board, a former board member has revealed.

The Sociological Review is one of the UK’s top sociology journals. The fees paid by Wiley-Blackwell for the rights to publish it led it to amass funds in excess of £1.2 million by 2013. However, according to Pnina Werbner, emeritus professor of anthropology at Keele University, she was unaware of this during her time on the board between 2008 and 2013…

Professor Savage said that the journal had “an ambitious plan” to use its surplus to “better support the discipline of sociology, as well as the journal itself”. But he warned that tax liabilities might reduce that surplus “significantly” if the journal’s application for charitable status were rejected.

For some reason, this reminds me of local governmental bodies that sometimes debate returning surpluses to their constituents. I don’t imagine reviewers or subscribers will be getting bonus checks anytime soon. But it does appear to be an opportunity for an influential journal to do something unique.

Many top-cited papers initially rejected by good journals

A new study finds that top-cited scientific studies are often rejected, sometimes without even going out for peer review:

Using subsequent citations as a proxy for quality, the team found that the journals were good at weeding out dross and publishing solid research. But they failed — quite spectacularly — to pick up the papers that went on to garner the most citations.

“The shocking thing to me was that the top 14 papers had all been rejected, one of them twice,” says Kyle Siler, a sociologist at the University of Toronto in Canada, who led the study. The work was published on 22 December in the Proceedings of the National Academy of Sciences.

But the team also found that 772 of the manuscripts were ‘desk rejected’ by at least one of the journals — meaning they were not even sent out for peer review — and that 12 out of the 15 most-cited papers suffered this fate. “This raises the question: are they scared of unconventional research?” says Siler. Given the time and resources involved in peer review, he suggests, top journals that accept just a small percentage of the papers they receive can afford to be risk averse.

“The market dynamics that are at work right now tend to a certain blandness,” agrees Michèle Lamont, a sociologist at Harvard University in Cambridge, Massachusetts, whose book How Professors Think explores how academics assess the quality of others’ work. “And although editors may be well informed about who to turn to for reviews, they don’t necessarily have a good nose for what is truly creative.”

The gatekeepers seem to be exercising their power. Academic disciplines usually have clear boundaries about what is good or bad research and the journals help to draw these lines.

An alternative explanation: the rejections authors receive help them shape their studies in productive ways, which then makes the papers more likely to be accepted by later journals. If this could be true, the study’s methodology would need to expand to cover the whole process: how do authors respond to rejection, and what happens at the next steps in the publishing cycle?