The Tangled Web of Scientific Publishing

Science Publishing Is Incoherent, Expensive, and Slow

Communication is essential to science. The aim of scientific publication is to convey new findings as quickly as possible to as many interested parties as possible. But the world of “peer-reviewed” scientific publishing no longer functions as it should. Many publishing practices were devised at a time when scientists were relatively few and digital communication did not exist. Costs have become prohibitive, inhibiting the exchange of ideas. Incentives favor research that is swiftly done and will not offend likely reviewers. In social science, particularly, research contrary to prevailing prejudice has little chance.

These problems are now widely known, if not fully understood. Yet they persist.

I will argue that the core problem, the main sticking point to reforming the system, is vetting. Some scientific findings are better than others: more solid, more reliable, more interesting; some are more relevant than others to particular research questions. The communication system should signal the area of published research and its probable importance. Above all, editors should filter out false findings. In engineering jargon, the system should be noise-free: all, or at least the great majority, of findings should be true. Partly as a legacy of the discredited postmodernist movement, and partly because it is the nature of science that any finding is subject to revision, people are now suspicious of the word true. In experimental research, at least, the term replicable (repeatable) has replaced it. But in science there is no substitute for valid results, even though certainty will always be elusive. Researchers should be able to trust what they read.

When the scientific community was small, communicating science was relatively easy even without digital technology. There was relatively little specialization; contributions were accessible to all, or almost all, interested parties. This has changed with the enormous growth of science. The science historian Derek de Solla Price pointed out more than fifty years ago that the scientists then living were 90 percent of all the scientists who had ever lived. The number of scientists continues to grow exponentially: we are still at around 90 percent!

Back in 1935, fewer than 10,000 PhDs were awarded worldwide, whereas in 2010 the total was close to 160,000. In those early days, submitting an article did not cost money beyond its preparation cost. There were just a handful of places to send your report. Volunteer expert reviewers, usually anonymous, selected by an editor and probably known to the author, were usually prompt with their critiques, and you got a timely decision: accept, accept with minor changes, or reject. After publication, access was via print journal subscriptions—although for most researchers this was not a problem, since most institutions subscribed to most relevant journals.

The basic structure for established scientific journals remains the same. Submissions are reviewed by unpaid volunteer peers and publish/not-publish decisions are made by an editor. But there have been other changes. Many journals, most offering peer review, are now “open access”: the submitter pays but the reader does not. There are now many more journals, and the cost issue looms ever larger.

Editing, formatting, and printing cost money which, in the early days, was usually borne by scientific societies. The resulting journals, including those published by societies such as the American Association for the Advancement of Science (Science) or the Royal Society (its Proceedings), were widely read and reasonably priced. But, as the financial structure of science has changed, so have pricing policies. Subscriptions for individuals are usually reasonable. Science (first published in 1880) now costs just over $100 a year; Nature, a UK-based journal of similar vintage (1869) published by a commercial publisher, is similar: Macmillan offers it for an annual subscription of £64 (ca. $84).

Institutional subscriptions are another matter. The best a librarian can do to get the various Royal Society journals is to buy a ten-journal “Excellence in Science” package for $29,370! Another example: the prestigious Journal of Neuroscience, published by the Society for Neuroscience, has five levels of institutional subscription ranging from $3,260 to $5,990 per year. Even “niche” journals are not cheap. Behavioural Processes, a modest Elsevier journal aimed at behavioral biologists that I edited for many years, now charges $4,186 a year.

Again, in the past, cost was not a problem so long as the number of scientific journals was modest. It no longer is. Estimates vary, but the number of scientific journals worldwide is in the thousands. Wikipedia lists over 100 in psychology alone, and that omits the new journals that pop up in my inbox almost every week. Giving its researchers comprehensive journal access imposes an increasingly unaffordable financial burden on any research institution. The alternative, pay-to-publish in open-access journals, just shifts the burden without reducing it.

The whole scientific publishing business is, in fact, a carryover from the pre-digital era. The present structure is a historical accident; scientific publishing would never be designed this way today. Much of the system is unnecessary. That it must change is certain. What is less certain is the exact nature of a viable alternative.

Cost of Access

In the meantime, those who must pay—academic institutions and research grantors—are beginning to say “Enough!” In the US, since 2005 the National Institutes of Health has required its grantees to place published work in an open-access site, a website that anyone can access for free. Other US government science-funding agencies have imposed the same requirement since 2013. But paywalled publishers were allowed up to a year after publication to meet this constraint. Since time and priority are vital in science, the one-year delay still gives the publisher a competitive edge.

A group of eleven European funding agencies, calling itself cOAlition S, has just gone further. Their joint open-access initiative (“Plan S”) begins:

By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.

Ten “Principles of Plan S” follow. There are 18 other European science-support agencies that are not party to this agreement. Only one UK agency is listed (although it comprises seven Research Councils, so probably embraces most UK science funding). No German science agency has yet signed up to the cOAlition S initiative.

It is hard to judge how effective this initiative will be. Existing publishers object strongly, viewing the proposal as a threat to their business model. The existing model is exploitative, but it is also vulnerable to creative destruction. Publishers exploit the unhealthy incentive structure of modern science. Scientists must publish, preferably in so-called “high-impact” journals like Science and Nature, but if not there, then somewhere. As I pointed out in an earlier piece, these journals are what economists call positional goods: they are prestigious because they attract the “best” (most visible) papers, and they attract the best papers because they are the most prestigious. It is a self-reinforcing process that keeps those on top on top.

Lower-prestige journals can also succeed because the pressure to publish in a peer-reviewed journal, any peer-reviewed journal, is relentless and affects every active researcher. More evidence for the publish-or-perish culture of modern research institutions is the steady supply of new journals, many with relaxed standards for vetting submissions. As one commentator put it, “Demand [to publish] is inelastic and competition non-existent, because different journals can’t publish the same material.” And, of course, the material itself, the published article, costs the journal nothing. These features, a type of monopoly, plus zero-cost contributions and subsidized editing (those free peer reviewers), continue to make academic publishing very profitable, even though its vulnerability has been apparent for many years.

The existing structure has yielded handsome returns for commercial publishers, but it is also vulnerable to innovation, for two reasons. First, the actual production of a published manuscript is now very much cheaper than it used to be because of word processing and online publication. Much copy editing is now automated. Authors can do most of what is necessary for publication themselves, perhaps with a little editing help from their institution.

Second, not only is there no need for hard copy and all those typesetters and printing presses; the vast majority of researchers actually prefer searchable digital copies of journal articles to paper ones. Paper scientific journals have been obsolete for some time.

What role remains for journal publishers? Well, the question worries the oligopolistic world of commercial science publishing, which has so far been largely successful in blocking every major change to the system.

The reason for the publishers’ success seems to be lack of a clear alternative. The sticking point is not the publishing process or access—the internet solves both of those; every institution now offers its researchers an open-access site for their files. Nor should finding relevant papers be a problem for readers: Google, in some form or other, can presumably locate papers by date and keyword in a way that makes segregation by sub-discipline in journals unnecessary. The problem isn’t even editing: most institutions offer help in preparing research grants, which could readily be extended to help with manuscript writing (not that there aren’t other problems with helping people who should be capable of helping themselves!).

Peer Review

No, the problem is vetting. Researchers need to know not only the subject relevance and general interest of a piece of work; they also need some assurance of its quality: its truth and probable importance. That is the job of peer review. The existing system is far from perfect. As I have pointed out elsewhere, the increasing subdivision of social science has meant that the “peers” chosen to review papers are now likely to come from a small subgroup of the like-minded. Consequently, papers that would not survive scrutiny by the scientific community at large have proliferated. Much of social science can now be seen as obscure, false, or even nonsensical. Whatever the substitute for the present arrangement, we obviously don’t want to encourage the segregation of sub-disciplines that has led to this situation. Shielding manuscripts from review by qualified or even just interested individuals should not be part of any system.

What Now?

How to vet submitted manuscripts, how to reform the inefficient and in many ways corrupt journal system, how best to limit costs: all are tricky questions. I offer no simple answers, but here are some suggestions as starting points for debate:

  • Consider abolishing the standard paper-journal structure. (The system I describe below also allows for aggregators to arise, selecting papers based on quality or interest area as a substitute for area-specific journals.)
  • Suppose that all submissions, suitably tagged with interest-area labels by the author, were instead to be sent to a central SUBMISSIONS repository. (A pirate site containing “scraped” published papers, Sci-Hub, already exists, as do open-access repositories where anyone can park files.)
  • Suppose there were a second central repository for prospective reviewers of the papers submitted to the first. Anyone interested in reviewing manuscripts (usually, but not necessarily, a working scientist) would be invited to submit his or her areas of interest and qualifications to this REVIEWER repository.
  • Reviewing would then consist of somehow matching manuscripts with suitable reviewers. Exactly how this should be done needs to be debated; many details need to be worked out. How many reviewers? What areas? How much weight should be given to matching reviewers’ expertise, in general and in relation to the manuscript to be reviewed? What about conflicts of interest? But if rules could be agreed on, the process could probably be automated (see the sketch after this list).
  • Reviewers would be asked both to comment on the submission and to give it two scores: (a) validity (are the results true/replicable?) and (b) importance (a more subjective judgment).
  • If a reviewer detects a remediable flaw, the manuscript author should have the opportunity to revise and resubmit and hope to get a higher score.
  • Manuscripts should always be publicly available unless withdrawn by the author. But after review, they will be tagged with the reviewers’ evaluation(s). No manuscript need be “rejected.”
  • Employers and reviewers looking at material to evaluate for promotion, salary review, etc., would then have to decide which category of reviewed manuscript to count as a “publication.” Some might see this as a problem—for employers if not for science. But publishing unreviewed material is well accepted in some areas of scholarship. The National Bureau of Economic Research (NBER), for example, has a section called “Working Papers,” whose contents “are circulated prior to publication for comment and discussion.”
  • Interested readers could search the database of manuscripts by publication date, reviewers’ scores, topic, etc. in a more flexible and less biased way than the current reliance on a handful of “gatekeeper” journals allows.
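
To make the idea concrete, here is a minimal sketch, in Python, of how the matching, scoring, and search steps might be automated. It assumes only that manuscripts and reviewers carry self-declared interest-area tags and that conflicts of interest are declared up front; all the names in it (Manuscript, Reviewer, match_reviewers, and so on) are hypothetical, not part of any existing repository.

    # Hypothetical sketch only: illustrates how author-supplied tags could drive
    # reviewer matching, score tagging, and reader search in the proposed system.
    from dataclasses import dataclass, field

    @dataclass
    class Manuscript:
        title: str
        authors: set[str]
        tags: set[str]                                      # author-supplied interest-area labels
        reviews: list[dict] = field(default_factory=list)   # attached after review; never "rejected"

    @dataclass
    class Reviewer:
        name: str
        interests: set[str]                                  # self-declared areas of interest
        conflicts: set[str] = field(default_factory=set)     # authors this reviewer should not review

    def match_reviewers(ms: Manuscript, pool: list[Reviewer], k: int = 3) -> list[Reviewer]:
        """Rank reviewers by overlap between their interests and the manuscript's tags,
        skip declared conflicts of interest, and return the top k."""
        eligible = [r for r in pool if not (r.conflicts & ms.authors)]
        ranked = sorted(eligible, key=lambda r: len(r.interests & ms.tags), reverse=True)
        return [r for r in ranked if r.interests & ms.tags][:k]

    def record_review(ms: Manuscript, reviewer: Reviewer,
                      validity: int, importance: int, comment: str) -> None:
        """Tag the manuscript with a review: two scores (say, 1-5) plus free-text comments."""
        ms.reviews.append({"reviewer": reviewer.name, "validity": validity,
                           "importance": importance, "comment": comment})

    def search(repo: list[Manuscript], tag: str, min_validity: float = 0.0) -> list[Manuscript]:
        """Let readers filter the open repository by topic and average validity score."""
        results = []
        for ms in repo:
            scores = [rv["validity"] for rv in ms.reviews]
            avg = sum(scores) / len(scores) if scores else 0.0
            if tag in ms.tags and avg >= min_validity:
                results.append(ms)
        return results

A real system would need far more than this: weighting of expertise, anonymity rules, limits on reviewer load. The point is only that, once the rules are agreed on, the bookkeeping is straightforward.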

This is not a finished proposal. Each of these suggestions raises questions. But one thing is certain: the present system is slow, expensive, and inadequate. Science needs something better.