In the year 2000, peer review was already deeply embedded in scholarly publishing, but the way it was practised reflected a very different set of assumptions about time, labour, and visibility. Manuscripts arrived by post or by email. Anonymisation was often done by hand. Corrections involved paper copies, Tipp-Ex, and careful retyping. And although editorial oversight was not invisible, it was largely unquantified. Decisions only moved forward because people moved them forward.
Yet this was not an inefficient or naïve system; it was a human one, built around professional judgement, tacit knowledge, and trust in the people tasked with maintaining standards. Peer review functioned because editors, reviewers, and publishers clearly understood their roles, even if the tools supporting those roles were simple and labour-intensive.
Twenty-five years later, the tools are no longer that simple. Submission platforms, automated screening, similarity checks, reviewer databases, and real-time reporting have reshaped the editorial backdrop. The scale of publishing has expanded dramatically, driven by digital production workflows, increased global research activity, and, more recently, the growing use of AI-assisted research and writing. Timelines have compressed, and scrutiny of research integrity has intensified. Despite all this, what has not changed is the central purpose of peer review: to evaluate research carefully and credibly, and to sustain trust in the scholarly record.
Alongside these technological shifts, the structure and scale of scholarly publishing have also expanded and diversified at pace. The number of journals has grown substantially, cascading models have become commonplace, and disciplines have continued to specialise, each with its own expectations, norms, and thresholds. Parallel developments such as preprints, open peer review and other alternative models, and more formal mechanisms for recognising reviewer contributions, including initiatives such as ORCID, have added further layers of complexity to editorial assessment and decision-making.
Taken together, these changes have placed greater emphasis on early editorial triage: assessing not only quality but also scope, fit, and readiness within increasingly crowded systems. What this has reinforced, rather than diminished, is the central role of informed editorial judgement in navigating volume, variation, and competing signals.
At PA EDitorial, our own history sits across these transitions. The early years of the company were shaped by editorial practices that predate platforms and dashboards, where judgement was carried by people rather than systems. That inheritance still informs how we approach peer review now, even as the tools around it continue to change.
As we mark the end of the first quarter of the twenty-first century, it is worth pausing to examine not only how peer review has evolved but also what that evolution reveals about the system’s strengths and vulnerabilities. From our vantage point, spanning both pre-platform and platform-native workflows, one conclusion stands out: technology has transformed the mechanics of peer review, but it has not altered the responsibility at its core.
From manual practice to managed systems
The most visible shift since 2000 has been the rise of integrated manuscript submission and tracking systems such as ScholarOne and Editorial Manager. These platforms replaced ad hoc email chains, spreadsheets, and filing systems with centralised workflows capable of handling thousands of submissions annually. For editors and publishers, this change was transformative.
Tracking systems brought order to complexity. They enabled journals to monitor turnaround times, reviewer engagement, and decision patterns with a level of precision that would previously have been unthinkable. They also facilitated globalisation. Journals could draw on international reviewer pools, manage multi-editor structures, and standardise processes across large portfolios.
Yet this transformation also subtly redefined what counted as editorial work. Tasks that were once understood as editorial judgement began to be framed as ‘workflow’ stages. Decisions were increasingly represented as data points. Efficiency became visible and measurable, while the interpretive labour underpinning decisions became harder to see.
This is by no means an argument against systems. On the contrary, the modern scale of publishing would be impossible without them. But the last twenty-five years have shown that systems are only as effective as the editorial reasoning that guides them. A platform can route a manuscript, but it cannot assess nuance. It can record a decision, but it cannot explain why that decision was intellectually sound.
When speed became a measure of quality
Alongside technological change came an acceleration of expectations. In 2000, extended timelines were common and broadly accepted. Delays were frustrating but rarely interpreted as indicators of failure. Today, speed is closely tied to perceptions of quality and professionalism. Authors expect rapid responses, publishers monitor performance metrics, and editorial offices operate under sustained pressure to deliver decisions quickly.
In most respects, this shift has been positive. Excessive delays undermine confidence and can disadvantage early-career researchers in particular. Streamlined workflows and clearer accountability have improved transparency and reduced unnecessary stagnation.
At the same time, the pursuit of speed has exposed a persistent tension within peer review. Unquestionably, rigour takes time. Reviewer recruitment, thoughtful assessment of reports, and careful editorial mediation cannot always be compressed without consequence. The last twenty-five years have made it clear that efficiency gains must be balanced against the human intellectual labour required to maintain standards.
When speed becomes an end in itself, peer review risks being reduced to procedural compliance rather than evaluative judgement. The challenge for modern editorial teams is not to resist efficiency, but to recognise where it must give way to deliberation.
Peer review as a human system
Perhaps the most significant lesson of the past quarter-century is that peer review remains, at its heart, a human system operating under increasing strain. Submission volumes have risen sharply across disciplines, driven by the global expansion of research output and growing pressures to publish. Reviewer availability, by contrast, has not increased at the same rate.
This imbalance has tangible consequences. Editors spend more time securing reviewers. Reviewer fatigue is widely reported. Reports vary in quality and depth, and editorial teams are increasingly required to interpret, contextualise, and, at times, compensate for uneven input.
These pressures are not failures of the system so much as reflections of its reliance on professional goodwill. Peer review has never been fully mechanised, nor was it designed to be. Its effectiveness depends on expertise, care, and ethical responsibility, qualities that cannot be automated or scaled indefinitely.
The experience of the past twenty-five years suggests that sustaining peer review requires sustained investment in editorial support. This includes clear reviewer guidance, realistic expectations, and editorial intervention when reports conflict, lack substance, or fail to address the core issues at stake. Without this mediation, the appearance of review may persist, but its substance erodes.
Integrity in an automated age
Concerns about research integrity have moved steadily towards the centre of editorial discourse since 2000. Tools for plagiarism detection, image manipulation screening, and statistical review are now routine components of many workflows. More recently, generative AI has raised new questions about authorship, originality, and acceptable assistance.
Organisations such as the Committee on Publication Ethics have played a key role in shaping guidance around these issues, emphasising transparency, proportionality, and editorial judgement. Major publishers have issued evolving policies addressing AI-assisted writing and disclosure requirements.
What emerges from this landscape is not a simple narrative of technological threat, but a more complex picture of shifting responsibility. Automated tools can flag potential issues, but they cannot determine intent, context, or significance. Anomalies require interpretation. Policies require application.
The past twenty-five years demonstrate that integrity cannot be outsourced to software. It depends on editorial teams capable of weighing evidence, exercising judgement, and documenting decisions transparently. As tools become more sophisticated, the need for experienced editorial oversight increases rather than diminishes.
Trust as a cumulative practice
Public trust in research has never been static, but it has become more visible and more contested in recent years. Retractions, corrections, and debates about peer review quality are now part of the broader public conversation around science and scholarship. While these moments often attract negative attention, they also highlight the mechanisms through which the scholarly record is corrected and maintained.
Trust in peer review does not arise from claims of infallibility. It is built through consistent, accountable practice over time. Editorial transparency, reasoned decision-making, and willingness to address error are all central to that process.
Looking back over the first quarter of this century, one pattern is clear. Where peer review functions well, it does so because editorial responsibility is taken seriously and supported structurally. Where it falters, the underlying causes are rarely technological alone. More often, they reflect overstretched systems, insufficient support, or erosion of editorial space for judgement.
What twenty-five years have taught us
Viewed in retrospect, the last twenty-five years offer a set of clear, if sometimes uncomfortable, lessons.
- First, technology is indispensable, but it does not replace editorial reasoning.
- Second, speed improves access and efficiency, but it cannot substitute for rigour.
- Third, peer review remains a human system whose sustainability depends on supporting the people within it.
- Fourth, automation and AI reshape workflows, but not the ethical foundations of evaluation.
- Finally, trust in peer review is cumulative, fragile, and maintained through practice rather than promise.
These lessons are not new, but they have become more visible as the system has grown more complex.
Looking ahead
For editorial teams, adapting to these shifts has not been a purely technical exercise. New platforms, processes, and policies require changes in practice, confidence, and culture, often introduced alongside existing workloads and expectations. Over time, supporting editorial teams through these transitions has become a significant part of sustaining effective peer review, ensuring that systems enable rather than constrain judgement. As peer review continues to develop, we are exploring this change-management dimension in more detail, reflecting on how different eras of practice – from paper-based workflows to platform-driven and AI-enabled systems – have each required thoughtful support to maintain trust and integrity.
For us at PA EDitorial, marking the end of the first quarter of the twenty-first century is not an invitation to nostalgia but an opportunity to recalibrate. Peer review has survived profound technological and cultural change precisely because it has adapted without abandoning its purpose. That adaptability will continue to be tested in the years ahead.
For editorial teams, publishers, and service partners, the task is not to choose between innovation and tradition, but to align tools with core values. From the Tipp-Ex days to current tracking systems, the mechanics of peer review have transformed, but the obligation to uphold trust in the scholarly record has not.
About PA EDitorial
PA EDitorial provides expert peer review and editorial management support to academic journals and publishers. Drawing on decades of experience across both pre-platform and platform-based workflows, we work closely with editorial teams to support effective peer review, uphold research integrity, and manage evolving systems and processes.
Learn more about PA EDitorial’s services.
Sources
Committee on Publication Ethics (COPE). Core practices and guidelines. Available at: https://publicationethics.org
Elsevier. Publishing ethics and integrity. Available at: https://www.elsevier.com/about/policies/publishing-ethics
Springer Nature. Editorial policies and research integrity. Available at: https://www.springernature.com/gp/editors/research-integrity
Tennant, J. P. et al. (2017). ‘A multi-disciplinary perspective on emergent and future innovations in peer review.’ F1000Research, 6, 1151.
Wager, E. & Kleinert, S. (2011). ‘Responsible research publication: international standards for editors.’ A position statement developed at the 2nd World Conference on Research Integrity.