How Is Science Judged? How Useful Is Peer Review?

The British journal Nature, home in 1953 to Watson and Crick’s landmark DNA paper, was by 1966 rather in the doldrums: it had a backlog of submitted manuscripts and was losing ground to the general-science leader, the U.S. journal Science.

That year, however, the publisher appointed as editor one John Maddox, a slightly eccentric theoretical physicist and science journalist. Rumor has it that when Maddox attempted to tidy up the massive backlog, he dispensed with consultants and made all the decisions himself. He subsequently introduced a more conventional system of associate editors and consultant reviewers, possibly because of the sheer volume of work. Nature now competes with Science for the prize of most prestigious general-science journal. There is little evidence that Nature was worse under Maddox’s solo management than it became under a more conventional regime.

The standard system for admitting a manuscript to a scientific journal, or for awarding money to a supplicant researcher, is for the editor to submit the work to a small group of experts. Most journals have a roster of scientists they consult. For each submission, the editor picks a couple of “relevant” experts and sends each a copy of the manuscript. Then, usually after considerable delay (the reviewers are typically anonymous, unpaid, and busy with other commitments), they send in comments and criticisms and recommend acceptance, acceptance with changes, or rejection.

This is the famous peer review “gold standard” followed by most reputable scientific journals. The system evolved in a day when science was a vocation for a small number of men who either had independent means, like Charles Darwin, or were employed in ways that did not seriously compete with their scientific interests. Isaac Newton, for example, was Lucasian Professor of Mathematics at Cambridge, a post with light duties. Evolutionary pioneer Alfred Russel Wallace had no private income but made a living collecting biological specimens in exotic locations, a profession that aided his biological work. Albert Einstein worked in the Swiss patent office, but the work was apparently not demanding and he needed no funds to think. In the 19th and much of the 20th century, the pace of science was leisurely and the number of scientists small.

In those days, the need for review was driven more by the cost of publication than by a flood of submissions. Transcribing, typesetting, printing, binding, and circulating hard copy all cost money. All that has changed in the 21st century. The number of scientists has vastly increased, along with their dependence on external sources of funds. Modern scientists need to publish, and the number of potential publications is correspondingly large. On the other hand, since the advent of the internet, the cost of publication has become negligible. So what is the problem? Why is so much review necessary?

The problem is that scientific publication is what economists call a positional good. Nature and Science are top journals because the papers they publish are, for the most part, very good. The papers are good because these journals are perceived as the best and can therefore attract the best submissions: a positive-feedback loop. And “best” is quantified by something called the impact factor, which rates a journal according to the number of citations (mentions in other published articles) that its articles receive. The impact factor is a flawed measure in many ways. It will be larger in popular fields, larger in, say, neuroscience than in ethology. It may not reflect the real contribution of a paper: a paper famous for influential but fallacious results will be heavily cited, for example, and so will raise its journal’s impact factor. Biomedicine provides many examples. But quantification adds credibility, and Science and Nature have high impact factors.
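To make the arithmetic concrete (the numbers below are invented for illustration, not figures for any actual journal): the standard two-year impact factor for a given year is the number of citations received that year by items the journal published in the previous two years, divided by the number of citable items it published in those two years. A journal that published 100 articles across 2015 and 2016, and whose articles drew 4,000 citations in 2017, would thus have a 2017 impact factor of 40.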

There is another positive feedback loop in the social system of science. Scientific work costs money, much more now than in days past. Money comes in the form of research grants. Grants are awarded on the basis of competition among submitted research proposals. Competition is fierce. Researchers nowadays spend 50 percent or more of their time writing proposals, often with a payoff probability as low as 10 percent. A proposal is more likely to be funded if its authors have a strong publication record in prestigious journals: publication yields money yields more publication—positive feedback again.

Access to some form of peer-reviewed publication is critical to career success in science. Scientists need to publish in a peer-reviewed journal, preferably a prestigious one. Demand is high and, following an elementary axiom of economics, supply has expanded to meet it. A growing list of what I call “pop-up” journals has arisen to meet the need for publication. I get email invitations to publish in, or even to edit, such a journal almost once a week. Here is one, headed “Invitation to Join Editorial Board”:

I represent EnPress Publisher Editorial Office from USA. We have come across your recent article “Daniel T. Cerutti (1956–2010)” published in The Behavior Analyst. We feel that the topic of the article is very interesting. Therefore, we are delighted to invite you to publish your work in our journal, entitled Global Finance Review. We also hope that you can join our Editorial Board…

The article that so piqued the sender’s interest was an obituary for a much-loved younger colleague, whose research had nothing whatever to do with finance. The invitation comes from a bot, not a human being.

These solicitations range from annoying to embarrassing. The best ones are humorous, like this one:

Dear Dr. Staddon,

One of the articles you authored a while ago caught my attention and I am hoping to discuss with you publishing a follow-up article, or even a review article in the Internal Medicne [sic] Review. The article was entitled ‘The dynamics of successive induction in larval zebrafish.’ I am sure our readers would find an article that continues this work valuable. The parameters of the article are flexible and I am happy to help in any way I can…

Lisseth Tovar, M. D., Senior Editor, Internal Medicine Review

I am still puzzling over the links that I may have missed between internal medicine and the reflex behavior of zebrafish larvae. But then a little later, I got an even more exciting offer: to be an actual editor for the Journal of Plant Physiology (my emphasis)—I study animals and people, not plants, but fame hath its rewards, I guess. The algorithmic promoters of these journals are like anglers, dangling a smorgasbord of tasty flies above schools of would-be publishers, hoping for enough bites to sustain at least some of their creations.

This nonsense just shows how strong the “publish or perish” pressure on working scientists has become. The fact that these pop-ups work, that people pay to publish in them, and that such publications are not routinely discounted by promotion and grant-awarding authorities shows a weakness in the system. Predictably, these pressures have led to occasional fraud and to the evolution of research methods that are more or less guaranteed to produce a publishable, if not a valid, result.

But a recent article in Times Higher Education points to two other problems, one soluble, the other not. The easy one is that some papers don’t give all the details necessary to repeat the experiment. This problem can be solved, to some extent, by vigilant reviewers. It can never be completely solved, because sometimes even the original experimenter may be unaware of a crucial detail. The great Greek philosopher Aristotle thought that heavy objects fall faster, because he was unaware of the effect of air resistance. Dropping two objects of different weight but the same size and shape does give Aristotle’s result, since air resistance slows the lighter object more, but it leads to a wrong conclusion about gravity. So asking for more information from an author is always legitimate.

The tougher problem is that journal reviewers may reinforce a kind of scientific establishment. In a Times Higher Education article titled “Scientific Peer Review: an Ineffective and Unworthy Institution,” the authors comment:

[P]eer review is self-evidently useful in protecting established paradigms and disadvantaging challenges to entrenched scientific authority…by controlling access to publication in the most prestigious journals, [peer review] helps to maintain the clearly recognised hierarchies of journals, of researchers, and of universities and research institutes.

Undoubtedly, advocates of an established paradigm have an edge in gaining access to a prestigious journal. Proponents of intelligent design will certainly encounter resistance if they try to publish in Evolution, for example. If most scientists believe “X” to be true, it will take a lot to convince them of “not-X”. This is inevitable, but perhaps the process has gone too far? If so, what are the alternatives? More on that in the next piece.

John Staddon is James B. Duke Professor of Psychology and Professor of Biology, Emeritus, at Duke University. He has been editor-in-chief of two scientific journals and has published in both Science and Nature. Many of these issues are discussed in his new book Scientific Method: How Science Works, Fails to Work or Pretends to Work (Routledge, 2017).