The Peer Review System Is Caught in a Self-Reinforcing Cycle - These Researchers Mapped It
Scientific journals are drowning in manuscripts. The volume of papers submitted to journals has grown far faster than the pool of researchers willing to review them, creating a system under increasing strain. What makes this problem particularly stubborn is that the forces driving it reinforce each other: declining review quality increases submission volumes, which further degrades review quality, which drives volumes higher still.
Carl Bergstrom, professor of biology at the University of Washington, and Kevin Gross, professor of statistics at North Carolina State University, used mathematical modeling to map this cycle in detail. Their paper, published in PLOS Biology, identifies the mechanisms driving the crisis and evaluates which proposed interventions might actually help - and which could make things worse. UW News spoke with Bergstrom and Gross about the problem and the potential paths forward.
Why Peer Review Matters
Peer review is the process through which manuscripts submitted to scientific journals are evaluated by other researchers before publication. Reviewers - who are typically unpaid volunteers - assess whether the methodology is sound, the analysis is appropriate, and the conclusions are justified. The system is not perfect, but it provides a filter that distinguishes scientific literature from unvetted claims.
"Peer review helps scientific literature maintain its credibility," Bergstrom and Gross write. "The system of peer review guarantees that published research has been scrutinized by experts in the relevant field. While peer review is not, and never has been, a watertight seal of approval - peer reviewers are human, too - it has proven to be a system that, by and large, helps ensure the reliability of the scientific literature."
How the Cycle Works
The modeling reveals that peer review, when functioning well, reinforces itself through a virtuous cycle. High-quality reviewing makes journal editorial decisions more predictable - authors can judge with reasonable confidence which journal suits their work. That predictability encourages authors to be selective about where they submit, reducing unnecessary submissions to journals where a paper has little chance.
That cycle can spin in reverse. "If peer reviewers have to dilute their efforts over a larger volume of submitted manuscripts, then each manuscript may receive less scrutiny and editors' decisions consequently become less predictable," Bergstrom and Gross explain. "This encourages authors to try their luck at journals that might otherwise have been a stretch, increasing the volume of manuscripts that need to be reviewed even further and making editorial decisions even less predictable, and so on."
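The dynamic the researchers describe can be illustrated with a toy simulation - not their actual model, just a minimal sketch under assumed numbers: reviewing effort per manuscript shrinks as volume grows, decision predictability falls with effort, and lower predictability spurs more speculative submissions.

```python
# Toy sketch of the feedback loop described above (NOT the authors' model).
# All quantities and functional forms here are illustrative assumptions.

CAPACITY = 100.0  # total reviewing effort available, in arbitrary units


def step(volume):
    """Advance the toy system one period and return (new volume, predictability)."""
    effort = CAPACITY / volume               # effort gets diluted as volume grows
    predictability = effort / (effort + 1)   # saturating: more effort -> more predictable
    # Assumed behavior: authors submit more speculatively when decisions
    # are less predictable, inflating next period's submission volume.
    volume_next = volume * (1.0 + 0.5 * (1.0 - predictability))
    return volume_next, predictability


volume = 120.0  # start with submissions slightly exceeding review capacity
for year in range(5):
    volume, pred = step(volume)
    print(f"year {year}: volume={volume:.0f}, predictability={pred:.2f}")
```

Under these assumptions, each period's drop in predictability feeds the next period's growth in volume, so the system spirals away from equilibrium rather than settling - the "spinning in reverse" the quote describes.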
Why the Crisis Is Acute Now
Scientists have worried about peer review for decades, but the researchers argue the current strain is more severe than in earlier periods of concern. Scientific communities have grown larger and more dispersed, and willingness to volunteer declines as groups become more diffuse. Commercial publishers have launched large numbers of new journals to capture publishing revenue, giving authors more venues to resubmit rejected papers - and creating more review demand with each resubmission. The COVID-19 pandemic also disrupted review participation in ways that have not fully recovered.
The concern that most troubles Bergstrom and Gross is the prospect of journals replacing human reviewers with AI. "There may be ways in which machine review could complement human peer review, but we think it's important that human review continues to be the engine of editorial deliberations at scientific journals. Peer review is not just a process for making an accept-or-reject decision. Peer reviewers also provide commentary and feedback for the authors. These reports provide a venue for honest dialogue that helps researchers hone their ideas and grow in their careers. Outsourcing manuscript review to robots risks collapsing a discourse that is crucial to scientific progress."
What Could Actually Help
Paying reviewers for their work at for-profit journals is one intervention the researchers take seriously. "Perhaps the most compelling argument for paying reviewers is that, of all the possible interventions one could propose, it requires the least amount of coordination among different stakeholders to succeed. As soon as one journal figures out a working model for paying reviewers, then everyone will notice that paying reviewers is viable, and there will be market pressure on other journals to follow suit."
But the intervention they consider potentially most impactful is reforming academic hiring and promotion criteria. "Most academic scientists today are working in a system that rewards a researcher for the number of publications above all else. This obviously creates incentives for researchers to submit lots of manuscripts, which puts lots of pressure on peer reviewers. If the norms changed so that hiring and promotion hinged on a candidate's top two or three papers instead, then researchers' incentives would change and the pressure on peer reviewers would diminish."
This change would require coordination across universities and funding agencies - a substantially harder lift than any single journal implementing reviewer pay. Both interventions address real parts of the problem; neither is a complete solution on its own. The research was funded by the National Science Foundation and the Templeton World Charity Foundation.