Scientific Peer Review Is Caught in a Self-Reinforcing Crisis, Mathematical Models Show
Peer review is the mechanism through which scientific claims are vetted before publication - a structured process in which qualified researchers evaluate whether a study's methods, analysis, and conclusions hold up to scrutiny. Without it, the scientific literature would have no quality floor.
That mechanism is under serious strain. The volume of papers submitted to scientific journals has grown dramatically over the past two decades, driven by the expanding global research workforce, increasing publication pressure in academic careers, and the proliferation of journals. The pool of willing and qualified peer reviewers has not kept pace. Reviewers are being asked to do more, are declining more often, and are producing lower-quality reviews under time pressure.
A new study published in PLOS Biology uses mathematical models to analyze exactly how these pressures interact - and to identify the feedback cycles that make the problem self-reinforcing.
How the Feedback Works
The study maps the dynamics of "screening" and "sorting" in the peer review system. When journals receive more submissions than reviewers can handle, editors face pressure to triage more aggressively at the desk-rejection stage. Papers rejected more quickly cycle back into the submission pool, increasing load at other journals. Reviewers who feel their time is not being well used become less willing to accept future requests.
Each dynamic feeds the others. Higher submission volume leads to more desk rejections. More desk rejections lead to more resubmissions elsewhere. More resubmissions increase submission volume further. Meanwhile, reviewer burnout creates delays that make the system less efficient precisely when it is being asked to process more work.
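The loop described above can be sketched as a toy discrete-time simulation. Everything here (the model form, the parameter values, the function name) is an illustrative assumption for this article, not the study's actual model:

```python
def simulate_load(new_per_step=100.0, capacity=90.0, resubmit_frac=0.9, steps=50):
    """Track the submission pool when desk-rejected papers re-enter it.

    new_per_step  : genuinely new manuscripts entering each period
    capacity      : manuscripts the reviewer pool can fully handle per period
    resubmit_frac : fraction of desk-rejected papers resubmitted elsewhere
    """
    load = new_per_step
    history = []
    for _ in range(steps):
        reviewed = min(load, capacity)       # papers that receive a full review
        desk_rejected = load - reviewed      # overflow triaged away at the desk
        load = new_per_step + resubmit_frac * desk_rejected  # rejects cycle back
        history.append(load)
    return history

history = simulate_load()
print(f"submission load climbs from {history[0]:.0f} to {history[-1]:.0f}")
```

In this sketch, as long as the resubmission fraction stays below 1, the load settles at an elevated equilibrium above reviewing capacity; the closer the fraction gets to 1, the higher that equilibrium climbs, until the self-stabilizing regime tips into runaway growth.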
What the Models Reveal About Fixing It
The study identifies conditions under which the feedback cycles are self-stabilizing versus runaway. Interventions focused solely on increasing reviewer supply - recruiting more reviewers, paying reviewers, or using AI assistance - may be insufficient if submission rates continue rising. The feedback loops mean that increasing capacity can simply absorb additional submissions without improving the workload ratio.
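That absorption effect can be illustrated numerically. In the hypothetical model below (the growth rate, resubmission fraction, and all other numbers are made up for illustration, not drawn from the paper), a one-off boost to reviewing capacity lowers the backlog temporarily but cannot stabilize it while submission volume keeps growing:

```python
def final_workload_ratio(capacity, new0=100.0, growth=0.03,
                         resubmit_frac=0.9, steps=50):
    """Load-to-capacity ratio after `steps` periods of rising submissions."""
    load, new = new0, new0
    for _ in range(steps):
        reviewed = min(load, capacity)
        load = new + resubmit_frac * (load - reviewed)  # backlog feeds back
        new *= 1 + growth          # submission volume grows a few percent each period
    return load / capacity

# Larger capacity lowers the ratio, but growth eventually swallows the gain:
for cap in (110.0, 150.0, 200.0):
    print(f"capacity {cap:.0f}: load/capacity after 50 periods = "
          f"{final_workload_ratio(cap):.1f}")
```

All three capacity levels end this sketch with load well above capacity, mirroring the study's point that supply-side fixes alone may not improve the workload ratio.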
More effective interventions target submission dynamics directly. Preregistration requirements, mandatory data sharing, submission fees that discourage premature submissions, and journal cascading systems that route rejected papers, with their reviews intact, to more appropriate venues all emerge in the models as potentially stabilizing mechanisms.
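The leverage of these interventions can be seen in a closed-form version of a simple linear desk-rejection model (my own toy formulation, not the paper's). With N new submissions per period, reviewing capacity C, and resubmission fraction f < 1, the steady-state load is L* = (N - f*C) / (1 - f) whenever N >= C. Submission fees act on N; cascading papers with their reviews attached acts on the effective f:

```python
def equilibrium_load(new_submissions, capacity, resubmit_frac):
    """Steady-state load of a linear desk-rejection model (hypothetical).

    Solves L = N + f*(L - C); valid in the overloaded regime N >= C.
    """
    assert resubmit_frac < 1, "f >= 1 has no finite equilibrium (runaway)"
    return (new_submissions - resubmit_frac * capacity) / (1 - resubmit_frac)

baseline = equilibrium_load(100.0, 90.0, 0.9)  # heavy resubmission churn
cascaded = equilibrium_load(100.0, 90.0, 0.5)  # reviews travel with the paper
fee_cut  = equilibrium_load(95.0, 90.0, 0.9)   # fewer premature submissions
print(baseline, cascaded, fee_cut)
```

Both levers cut the equilibrium load sharply in this sketch, whereas (as the previous section notes) raising C alone leaves the feedback structure intact.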
What Is Actually at Stake
If reviewers are burned out or conducting cursory reviews, the filter that peer review provides becomes less reliable. Flawed claims that would have been caught and corrected slip through. Science's authority in public life depends substantially on the credibility of peer review. A degraded peer review system erodes one of the key distinctions between scientific consensus and unvetted assertion.
The study was partially supported by NSF awards SES-2346645 and SES-2346644, and by a Templeton World Charity Foundation Diverse Intelligences frameworks grant. The full paper is freely available in PLOS Biology.