Several academics argue in The Chronicle of Higher Education for reducing the number of published research articles in order to limit low-quality publications. Their measure of “poor research” rests on the idea that later science should build upon previous findings:
Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.
The authors go on to say that uncited articles are akin to “useless information.” This seems premature: uncited articles might reflect studies in new fields or new approaches to old problems. Graduate students are often told to specialize, so perhaps the article glut stems from increasingly narrow work that appeals to a correspondingly narrow audience. Regardless, good journals are still publishing these pieces, which indicates that somewhere in the peer-review process, editors and reviewers judged them to be scientific contributions.
Among the proposed solutions is a greater emphasis on citation counts and journal impact scores.