I don’t know who first labeled the problem the “peer review crisis,” but it wasn’t me. Crisis or not, these days[1] people seem to be rather frustrated with the peer review process. Here, I focus on just one of the problems: the perception that it is becoming increasingly difficult for editors to secure reviewers for manuscripts submitted to scholarly journals, thereby dragging out the timeline for review and publication.
Is there actually a problem with the timeliness of review? Or should I say, has it gotten worse? I don’t really know; I have not seen much solid data to support the claim. Schick et al. (2022) showed a clear decrease in reviewer acceptance rates from 2004 to 2019 for the journal Drug and Alcohol Dependence, but that’s all I got. I’m sure there are other data out there, but I have not seen them.
Regardless of whether or not there is actually a problem with the timeliness of peer review, what seems obviously true to me is that people are complaining much more now. It’s possible that it is just increasingly easy for us to air our complaints. When I submitted my first paper as a Ph.D. student in 2005, I had to wait seven months for the initial review, and that only came after two inquiries to the editor. I complained to my lab group, my office mates, and anyone who asked how my day was going, but I did not really have any public outlets[2]. People have had online outlets to yell through for well over a decade, though, and the complaints about timeliness of peer review seem to have ramped up in the last couple of years.
That academic stress and burnout may be leading more people to decline yet more free, time-consuming work has been well covered previously. Editors are (mostly) also academics, so they face the same overextension and burnout as reviewers. Editorial work is typically taken on in addition to one’s standard workload, for varying degrees of compensation (from zero to many tens of thousands of dollars), and so it can be a drag. A recurring moment of doom is when I receive the inevitable system-generated email announcing that “A new manuscript has been added to your Associate Editor Center.” This means I have to read the paper, locate a handful of reviewers to invite, hope they agree, invite more if they don’t, monitor if and when they submit, invite more if they don’t, and then write a decision letter. There are many points there where editors can and do slow down the process. There have been manuscripts that sat in peer review for far too long, and it was completely my fault as the editor, not the fault of the reviewers. We like to pin the problem on the reviewers, but editors are just as much at fault.
The problems of reviewer and editor stress are important to acknowledge, but they are also merely symptoms of two larger, structural problems:
People are publishing too many papers. That we are publishing too many papers has long been discussed and derided, but to no avail. We just keep chugging along. As long as we continue to chug within the current system, we will continue to have a problem with peer review.
There are too many journals. This structural problem is not as often recognized and discussed, but we just keep on birthing journals. The growth of mega-journals such as PLOS ONE, Frontiers, and Scientific Reports, along with the comically absurd number of special issues published by MDPI, means that there is a potential outlet for any paper, and a consequent need for reviewers. Vanity journals such as Nature, Science, and PNAS have spawned “sister journals” that exploit the name-recognition of the original—the Nature Industrial Complex alone has 69 journals.
Specialty areas have their own dizzying array of specialty journals: In adolescent research there is Journal of Adolescent Research, Journal of Research on Adolescence, Journal of Adolescence, and Journal of Youth and Adolescence, to name just a few of the indistinguishable outlets.
People like to poke fun at the acronyms used for journals in personality and social psychology, and who’s to blame them: where should I submit my paper, JPSP, JESP, EJSP, BJSP, PSPB, PSPR, JRP, or SPPS? Does it really matter?
There are journals titled Identity, Identities, and Self and Identity, and they are all published by Taylor & Francis! Totally normal, reasonable system we have here.
To be clear, I am not taking an extreme position on this like the dude who said that biology should just have one journal. That is clearly not feasible, at least within our current journal-centric structure. There are good reasons for specialty journals to exist, especially those that publish work focused on marginalized populations that were created because such work was being shut out of the existing journals. Things have clearly gotten out of hand, though, and one consequence is the need for more and more peer reviewers.
There are Some Solutions…
There are too many journals publishing too many papers that require too many reviewers who have too many other things to do. This problem is a structural one, but it is not intractable. Many different initiatives have been advanced to solve these problems, at least in part:
Pay reviewers. This is the standard knee-jerk response. The vast majority of journal peer review is done for nothing more than the satisfaction of knowing you are contributing to the scientific enterprise. There are many good reasons why journals should pay reviewers, most of all because it is valuable work most often undertaken in service to for-profit companies. I strongly support people getting paid for their work, and there is at least one journal that pays reviewers; I just don’t think that paying reviewers will solve the underlying problem. There are too many papers, there are too many journals, and we all have too much other shit to do.
Enforce a submission-to-review ratio. This is another solution that pops straight into people’s minds when thinking about the lack of available reviewers. If you submit n papers in a calendar year or to a particular journal, then you should review n × x papers (with a common value of x = 3). Look, we all know people who publish extensively but almost never review, and this is annoying because we feel like you should put into the system what you get out. But this sort of mindless transactional approach to peer review would almost surely decrease the already questionable quality of peer reviews. “Ah well, looks like I owe a review to Frontiers in Fuckery, so I guess I’ll just write a quick paragraph on how the abstract is unclear and that they should have cited Syed more.” Besides, the math gets all wonky once you start taking the equation seriously, especially in the context of big collaborative efforts. I was recently a co-author on a paper with 219 authors. Does that mean we collectively owe 657 reviews to offset the three we got? (The toy calculation below shows how quickly this scales.)
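To make the wonkiness concrete, here is a toy sketch in Python of the ratio rule applied to multi-author papers. The rule itself, the x = 3 multiplier, and the assumption that a submission consumes roughly three reviews all come from the hypothetical above; the team sizes are purely illustrative.

```python
# Toy sketch of the submission-to-review ratio rule described above.
# Assumption (from the hypothetical): every author on a submission owes
# x = 3 reviews, while the submission itself only consumes about 3 reviews.

def reviews_owed(num_authors: int, x: int = 3) -> int:
    """Reviews the author team collectively 'owes' for one submission."""
    return num_authors * x

REVIEWS_CONSUMED = 3  # reviews a typical submission actually uses

for team_size in (1, 5, 50, 219):
    print(f"{team_size:>3} authors: owe {reviews_owed(team_size)} reviews, "
          f"consume {REVIEWS_CONSUMED}")

# Output:
#   1 authors: owe 3 reviews, consume 3
#   5 authors: owe 15 reviews, consume 3
#  50 authors: owe 150 reviews, consume 3
# 219 authors: owe 657 reviews, consume 3
```

The debt grows linearly with team size while the reviews consumed stay flat, which is exactly the wonkiness: the rule only balances the books for single-author papers.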
Desk reject more papers. Most journals implement an initial triage process in which the Editor-in-Chief reviews submissions to determine whether or not they should be sent out for peer review. A “desk reject” or “rejection without review” is when they decide not to send the paper out for review. I am of the view that most journals should probably desk reject far more papers than they do—not because I am a gatekeeping jerk, but because it is often quite clear from the initial screening that the paper is unlikely to be successful. Desk rejecting saves authors and reviewers time and effort. This is not a great solution to our problems, though, because authors will just keep on sending their papers to different journals, which is more work for editors.
Stop sending revisions back to reviewers. It is rare that revised manuscripts need to be returned to the original reviewers for additional comment, but some editors do this as a matter of course. Some even continue to do so three, four, five times, until the reviewer is 100% satisfied with the paper. I once had a paper returned to a reviewer a fifth time after they commented on some typos during the fourth review. Total nonsense. There are some cases where it makes sense to solicit additional comments, and I have done so on occasion, but this should be done sparingly. Keeping peer review to a single round for most papers would provide some relief to the reviewer pool and speed up the overall review process.
Widespread adoption of expedited review. Manuscript rejections are often seen as capricious and unjust. Who among us has never thought, “but I can address all of those comments!”? And yet, unless the authors make a successful appeal, their only choice is to start afresh at a new journal. Or is it? Some journals support expedited review, also called streamlined review, in which they accept revised versions of manuscripts that were rejected from other journals. Think of it as a revise and resubmit, but sent to a new journal rather than the one that rejected it. Authors submit a revised manuscript, a cover letter detailing the changes they made, and the original decision and reviews. The new journal then decides whether the paper should be accepted, rejected, or sent out for additional review. This procedure could go a long way toward addressing the lack of reviewers, especially if adopted at scale, but unfortunately few journals currently allow it. For-profit publishers like Wiley have established “transfer networks” that can move papers from one Wiley journal to another (and thus avoid having to deal with another submission portal). This is clearly intended to keep papers within their portfolio, but it would be worthwhile if something like this were implemented across publishers.
Expand the reviewer pool. Editors typically identify reviewers by a) drawing from the journal editorial board, b) keyword search in the journal system, c) scanning the reference list of the submitted paper, d) recommendations provided by the submitting authors, e) their knowledge of the field, and/or f) calling on friends or people who owe them a favor. Clearly, these methods are biased in favor of people who are already linked into the system, and can overlook engaged and eager scientists, especially those from under-represented countries and institutions. Editors are often so desperate they will take whatever they can get, failing to take into account status biases and potential conflicts of interest. I mean, one of the reviewers of Daryl Bem’s infamous paper on extra-sensory perception was Lee Ross—who was suggested as a reviewer by Bem. They are godfathers to each other’s children. Totally cool, normal science we have here.
A few months back I was having a miserable time finding reviewers for a paper, and in desperation put out a request on Twitter. Within about an hour I had more reviewers than I needed. Potential reviewers are out there, we are just not connecting with them. Some journals or editors will not invite doctoral students to review as a matter of policy. This is foolhardy, as doctoral students are often deeply engrossed in their fields and are some of the best reviewers out there. It is also extremely inefficient that each journal has its own reviewer database. A single, centralized reviewer database that all journals could draw upon would be much more useful.
Changing the way we organize and locate potential reviewers could also help with an under-appreciated cause of the peer review crisis: some reviewers simply never receive the invitation. Because of employment precarity and frequent institutional transitions, the email addresses that editors can locate online[3] will often be out of date. Sometimes the invitation bounces, but other times it just seeps into the black hole of an email address that no one will ever again access. To make matters worse, journals and editors tend to have a bias against non-institutional email addresses such as Gmail, treating them with skepticism, despite the fact that they tend to be much more stable and reliable than institutional emails.
Contraction of poorly performing journals. If having too many journals is one of the central problems, then the clear solution is to shut some of them down. I am a big baseball fan[4], and years ago, when there were concerns about the quality of play and the ability to generate revenue, there was talk of contraction, in which two teams would be eliminated from the league. We could do the same for journals that routinely publish shoddy work, allow outrageous claims based on weak data, allow generalized claims based on limited samples, do not hold themselves accountable for what they publish, fail to issue corrections or retractions in a timely manner, and so on. The devil is in the details, of course, and there is no governing authority that could make this happen. Nevertheless, I will take this opportunity to nominate PNAS and Psychological Science for contraction.
Break free of the current journal structure. Structural problems require structural solutions, and the only truly tractable solution to our current predicament is to blow it up and do something different. That may seem implausible at first glance, but we are already well on our way to doing something different. There are initiatives such as preprint review clubs, eLife’s new model of treating all papers that pass initial editorial triage as published “Reviewed Preprints,” and overlay journals that curate collections of reviewed preprints.
Of all the different publishing alternatives that have been proposed, Peer Community In (PCI) strikes me as the most promising. PCI is a journal-independent preprint review service. The ultimate outcome of the review process is called a recommendation. Journals can sign on to be “PCI-friendly,” which means that they commit to accepting, without further peer review, any recommended preprint that meets their pre-specified criteria. This means that authors can choose where they want their article to appear from among the PCI-friendly journals. No more “shopping around” a paper to four or five journals until it “finds a home,” with each stop requiring its own set of reviews and sucking up precious reviewer time. I have been a recommender for PCI Registered Reports since it launched a couple of years ago, and so have seen the transformative potential first-hand.
Of course, the PCI model raises questions about the utility of traditional journals. The review process is completely handled by PCI, and a finished product is just handed off to a journal. PCI does all the work, but the journal gets all the glory. If the paper has been peer-reviewed, if the paper has been recommended (i.e., “accepted”), then what is the added value of being able to say that it is published in the Journal of the Personality and Social Psychology of American Undergraduate Introductory Psychology Students?
And therein lies the rub. We like the prestige. We like the hierarchy. We like to be able to brag that our papers are in the “top journals in the field,” even though most of them are nothing of the sort by any reasonable metric. Who doesn’t want to take a shot at PNAS? And besides, PNAS Nexus is just waiting there in the wings. Academics are a conservative bunch, resistant to change, and a change like PCI is just too radical for most. At least for now.
So, in the end, what is the solution to the peer review crisis? Just keep on complaining.
[1] And all days, past and future.
[2] Academic discourse on Friendster was extremely limited.
[3] If they can be located at all. It is shockingly difficult to find some people’s email addresses.
[4] Go Giants!