Editorials

Improving peer review: who's responsible?

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7441.657 (Published 18 March 2004) Cite this as: BMJ 2004;328:657
  Frank Davidoff, editor emeritus, Annals of Internal Medicine (fdavidoff{at}earthlink.net)
  143 Garden Street, Wethersfield, CT 06109, USA

    Peer review needs recognition at every stage of scientific life

    Peer review matters. Why? Firstly, scientific assertions can't be proved; they can only be disproved. The doubts raised by peer reviewers are therefore a crucial element in scientific reasoning. More specifically, as Francis Bacon put it in 1605, the “registering and posting of doubts has a double use”: it not only guards us “against errors,” but also furthers the process of inquiry, causing issues that would otherwise be “passed by lightly without intervention” to be “attentively and carefully observed.” Moreover, since scientific findings effectively don't exist until they're in written form, the doubts raised during editorial peer review come at a particularly crucial step in the overall scientific process.1 Secondly, the exchange of information for professional recognition is the principal instrument of social control within the scientific community.2 Approval by peer review is perhaps the single most powerful expression of that recognition. Thirdly, and more pragmatically, journal editors depend heavily on peer review to accomplish their two main tasks—selecting papers and improving their quality1 3—even though editors themselves are apparently the source of substantive improvements to manuscripts more often than either peer reviewers or statisticians.4

    The quality of many manuscript reviews is excellent, but that of many others is, unfortunately, still far from optimal. Journal editors might therefore understandably look for ways to improve reviewers' performance. The paper in this issue by Schroter et al reports on a carefully controlled randomised trial of educational interventions designed to do just that.5

    Sadly, the result is essentially negative: the small improvements found in some measures of review quality were judged to be “not of editorial significance,” and faded over time. Sadly, this result is also highly predictable. The report provides few details, but the educational programmes seem to have been short, cognitively focused, and largely didactic (passive)—all features that decades of educational research tell us are relatively ineffective in producing meaningful changes in practice.6-8

    Peer reviewing is, after all, an applied skill—like architecture, flying aeroplanes, or clinical medicine—a practice, rather than a purely cognitive attribute (that is, knowing a lot). All practitioners need to become good at “reflection in action,” the mix of analytical, judging, and performing skills that is essential for managing the complex, unfamiliar, and ill formed problems they're called on to handle.9 And, as John Dewey and others have told us for the past 65 years, acquiring these applied skills requires “learning by doing,” a process, sometimes referred to as “experiential learning,”10 that differs fundamentally from traditional cognitive (or “rationalist”) classroom learning. Although the experiential learning “cycle”10 includes important cognitive elements (reflection, integration of experience with established explanatory models), the true curriculum in experiential learning is the experience itself11; it requires hands-on involvement by learners—fully, openly, and without bias—in the relevant tasks: a “practicum”9; and experiential learners need to receive their guidance from coaches, rather than absorbing information from lecturers.9

    Whether by intention or not, the present report provides strong evidence that experiential learning is important in peer review. It cites extended training in epidemiology and statistics—a hands-on, problem solving experience under expert guidance—as a known characteristic of high quality peer reviewers. And it makes us realise that most editors are themselves highly skilled reviewers. How do they get that way? They critically assess the quality of manuscripts, day after day, thousands of them over the years. And they receive repeated, timely, on-the-job feedback from a variety of coaches, including their editorial and statistician colleagues, not to mention peer reviewers. The difficulty that many peer reviewers have in producing high quality reviews shouldn't, therefore, surprise us, any more than the difficulty most lawyers would have in flying aeroplanes: most reviewers, chosen for their expertise in clinical or research areas, simply haven't had the requisite opportunities for experiential learning in critical assessment.12

    So does this largely negative study help? Yes. It can discourage further use of precious time, energy, and funds for the kind of educational intervention that's unlikely to be effective. Further, it can focus attention on more productive research questions—for example: What's the actual learning curve for high quality peer review (in particular, how many reviews are required)? How does the length of time a reviewer spends on a review affect its quality? And what's the optimal mix of input from peer reviewers, editors, and statisticians in the overall editorial process?

    Finally, the study suggests that editors, for all their strengths, can hardly be expected to fix the problems of peer review on their own, any more than schools can be expected to solve the problems of education singlehandedly. True, in the short term, editors might be more effective if they were to “train the trainers,” rather than trying to train reviewers directly. But there is probably no quick fix here. Peer review is such a fundamental element of critical scientific thinking that the entire scientific and scholarly community should arguably take on the responsibility for improving and maintaining its quality—a major, long term commitment. Why isn't substantial, formal training in peer review an important and integral part of all graduate level training in basic science and clinical medicine? It should be. Moreover, serious consideration should be given to developing certification in peer review, and certified reviewers might be required to do a defined minimum number of reviews every year to maintain their credentials. And, finally, why isn't peer review considered worthy of serious academic recognition? It should be. Promotion, tenure, and funding decisions should take the quantity and quality of candidates' peer reviewing into account.

    Much good can come out of this study if it serves as a wake-up call. The message: the broader scientific and scholarly communities need to get serious about making high quality peer review an integral part of all aspects of professional life, including training, practice, and reward systems.

    Papers p 673

    Footnotes

    Competing interests: None declared.
