Lizi Dawes, CEO, PA EDitorial
Last year, when I first stepped into the ALPSP Conference, I walked in not quite knowing what to expect, and found myself in a room full of people who spoke the same language of publishing pressures and possibilities. It felt less like a conference and more like a check-in: a chance to swap notes on what was working and what wasn’t.
Each year, the theme shifts – from open access to evaluation, and now to AI and integrity. But the core question doesn’t change: how do we keep publishing steady, honest, and human? That’s the thought I’ve carried with me into this year’s conversations: how we protect trust in the scholarly world without overwhelming the very people charged with holding it together.
When Integrity Tools Multiply the Questions
Let’s start with where we are. In recent years, publishers have increasingly turned to AI-driven tools to help identify a wide range of issues: plagiarism detection platforms such as Turnitin or iThenticate spotting text overlap; language and formatting checkers highlighting unusual phrasing or grammar; scope-matching systems assessing whether a manuscript aligns with a journal’s aims; and reference-analysis tools flagging suspicious citation patterns. Alongside these, image-integrity platforms like Proofig AI and Imagetwin have entered mainstream usage, enabling editors to flag duplications, splices, and other alterations that once took human ‘image detectives’ hours to spot.[1]
Images remain the clearest illustration of scale. Take Imagetwin, for example. It can compare images in a manuscript against a database of 51 million images spanning decades – a scale no human could match.[2] Proofig, meanwhile, tripled the Journal of Clinical Investigation’s rejection rate for papers with problematic images, from 1% to 3%, by spotting flaws editors had missed.[3]
The same story plays out with text, scope, and references: these tools are unquestionably powerful, yet they remain imperfect. They flag matches, yes, but determining whether those matches represent innocent reuse, mislabelled control images, or outright misconduct still requires human judgement. In practice, the technology acts as a first filter, but human editors and peer review managers remain a vital second layer of checks and balances to ensure fairness and accuracy.
Text-based AI integrity tools are on the rise, too. Platforms like Copyleaks use AI to detect paraphrased plagiarism or AI-generated content, moving beyond simple string matching to deeper semantic analysis.[4] However, these systems bring their own challenges – false positives, bias, and misinterpretation. In fact, some scholars have cautioned that AI plagiarism detectors can unfairly target non-native English speakers or those with distinctive writing styles.[5]
The tools are here to stay, and rightly so, but if we hand editors an avalanche of alerts without a way to sift and judge them, we’ve essentially solved nothing. Integrity isn’t just about detection; it’s about making decisions that remain fair, transparent, and humane.

When Information Isn’t an Answer
For editors, the reality is that AI tools generate powerful insights – but not decisions. Every flagged phrase, duplicated image, or suspicious pattern still needs a human to weigh the evidence, judge intent, and decide what to do next. That responsibility cannot be automated, and nor should it be. As the Journal of Korean Medical Science recently put it, ‘the ultimate responsibility still lies with professionals who ensure scholarly integrity’.[6]
The consequence is that editors are now spending more time not only assessing manuscripts but also sifting through the signals that these systems produce. What was meant to lighten the load can, without support, tip into added strain.
And the context matters. Research output continues to surge, reviewer pools are thinner than ever, and oversight expectations continue to climb.[7] Drop a stream of AI-generated alerts into that mix, and it becomes another weight on plates that are already full. Unless those signals can be triaged, contextualised, and integrated into workflows, there’s a real risk they will overwhelm the very people they were meant to empower.
Why Peer Review Management Services Now Matter More
This is where peer review management services step into a critical role. Traditionally, these providers support editorial workflows by managing reviewer invitations, calendars, reminders, submission checks at various stages, and even day-to-day mailbox management. But as AI tools proliferate, peer review management teams are becoming more than workflow managers. They are poised to be guardians of research integrity.
Here’s how this already looks in practice: a manuscript enters the system, and AI tools flag potential concerns: image duplication, unusual textual patterns, or possible recycled content. Rather than pass all that raw output on to an editor, peer review management staff can:
- Triage alerts, highlighting those most critical or credible and advising on whether they warrant editorial follow-up.
- Prepare structured summaries that frame the issue and its implications, for example: ‘Figure 2 appears duplicated elsewhere in the database; this could indicate a labelling error or possible manipulation.’
- Group alerts consistently, so editors receive a clear, concise briefing with recommendations, rather than a flood of isolated flags.
- Track trends within and across journals, spotting systemic issues linked to particular authors, institutions, or paper formats.
These practices are beginning to move from aspiration to reality. They don’t just support editors; they protect fairness. Editors can focus their judgement where it matters — evaluating academic merit and contribution — while trusting that flagged issues are accurately framed, contextualised, and accompanied by clear next steps.
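To make that triage step concrete, here is a deliberately simple, hypothetical sketch in Python of how a peer review management team might turn a pile of raw tool alerts into a single, risk-ordered briefing for an editor. The alert fields, risk labels, and example data are my own illustrative assumptions, not the output format of any particular platform.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical alert record; real tools (image checkers, text-overlap
# checkers, etc.) each emit their own formats, so this schema is an
# assumption made purely for illustration.
@dataclass
class Alert:
    tool: str      # which integrity tool raised the flag
    target: str    # e.g. "Figure 2" or "Methods, para 1"
    finding: str   # plain-language description of what was flagged
    risk: str      # triage label assigned by a human: "high", "medium", "low"

def build_briefing(alerts: list[Alert]) -> str:
    """Group raw alerts by triage risk so the editor sees one concise
    briefing, ordered from most to least critical, instead of isolated flags."""
    grouped: dict[str, list[Alert]] = defaultdict(list)
    for alert in alerts:
        grouped[alert.risk].append(alert)

    lines = []
    for risk in ("high", "medium", "low"):
        for alert in grouped.get(risk, []):
            lines.append(f"[{risk.upper()}] {alert.target}: {alert.finding} "
                         f"(flagged by {alert.tool})")
    return "\n".join(lines)

# Example: two flags from different tools, already triaged by a human.
briefing = build_briefing([
    Alert("image-integrity tool", "Figure 2",
          "appears duplicated elsewhere in the database; could be a "
          "labelling error or possible manipulation", "high"),
    Alert("text-overlap tool", "Methods, para 1",
          "overlap with the authors' own earlier protocol paper", "low"),
])
print(briefing)
```

The point is not the code itself but the shape of the result: one concise, prioritised briefing an editor can act on, rather than a flood of isolated flags.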
What I’ve noticed is that the role of our managing editors is shifting. In addition to their ever-important role of supporting journal teams, they are increasingly becoming the infrastructure for trustworthy and transparent editorial decisions.
And like any infrastructure, they need more than good intentions to hold. Processes only work if they are consistent, and trust only builds if approaches are shared, which brings me to the next challenge: standards.

Building Standards and Shared Frameworks
AI alerting, triage protocols, and editorial thresholds must be consistent across publishers; otherwise, identical AI output could lead to wildly different outcomes.
We need cross-industry standards:
- How should we interpret AI alerts, and when should they be escalated?
- When should peer review management teams pass concerns to editors, institutions, or ethics committees?
- How do we document resolution paths for transparency and training?
- Can we agree on a shared vocabulary and risk categorisation (e.g. ‘low-risk duplication’, such as reprinted control images, vs. ‘high-risk manipulation’)? A sketch of what such a shared vocabulary might look like follows this list.
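As a purely illustrative example of what that shared vocabulary could look like, the sketch below defines a small, hypothetical risk taxonomy and a matching escalation rule in Python. The category names and escalation paths are assumptions made for the sake of discussion, not an existing industry standard.

```python
from enum import Enum

# Hypothetical shared vocabulary for integrity alerts. The labels and the
# escalation rule below are illustrative assumptions, not an agreed standard.
class IntegrityRisk(Enum):
    LOW = "low-risk duplication"       # e.g. reprinted control images
    MEDIUM = "unexplained overlap"     # needs author clarification
    HIGH = "high-risk manipulation"    # e.g. spliced or altered figures

def escalation_path(risk: IntegrityRisk) -> str:
    """Map a shared risk label to a consistent, documented next step."""
    return {
        IntegrityRisk.LOW: "note in the editorial record; no action required",
        IntegrityRisk.MEDIUM: "query the authors and document their response",
        IntegrityRisk.HIGH: "escalate to the editor and the ethics committee",
    }[risk]

print(escalation_path(IntegrityRisk.HIGH))
```

Even a simple shared mapping like this would make it harder for identical AI output to produce wildly different outcomes at different publishers.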
Publishing needs to evolve here: publishers, peer review management firms, AI-tool providers, and scholarly societies need to join forces to co-design standards. Such a collaboration would ensure a shared roadmap and help maintain fairness, clarity, and public trust.
Looking Ahead: A More Sustainable, Trustworthy Future
If I look five years ahead, I imagine editors receiving not a flood of raw alerts, but clear, usable signals that have already been weighed and organised. Peer review management companies will be central to that shift, working as integrity partners rather than workflow support. And it won’t happen in isolation; it will take all of us, across publishing, to shape systems that support good judgement rather than drown it.
The vision I have, and one I believe is worth aiming for, is this:
- AI flags will arrive as refined insights, curated and contextualised by trained triage teams.
- Editors will receive concise dashboards instead of an overload, so their attention is focused on what really matters.
- Resolution pathways will be consistent and transparent across publishers.
- Authors will understand the process – why an alert was raised, how it was evaluated, and what it means for their work.
- And trust in peer review will deepen – not because we reach perfection, but because we show clarity and fairness at every stage.
This vision isn’t something one company or one editor can deliver alone. It will take all of us – the ALPSP community, publishers, service partners, and editors – all shaping the future of peer review together.
ALPSP Reflections
At its heart, publishing is about trust. That trust emerges not from technology or systems alone, but from how we turn data into judgements, and how we ensure that fairness, transparency, and humanity remain central.
As I walk back into the ALPSP Conference, I feel a sense of optimism: we are not facing AI’s complexity alone. The conversations here show how peer review management service providers have become essential collaborators alongside editors and publishers in maintaining research integrity – a role that is both timely and vital to the future of scholarly publishing.
I’m excited to be building this future with colleagues across this profession. From the editorial teams and service providers to the AI innovators, we are all working together so that integrity isn’t an extra burden but a shared foundation.
References
[1] The Publication Plan. (2024, April 9). Image manipulation: how AI tools are helping journals fight back. Available at: https://thepublicationplan.com/2024/04/09/image-manipulation-how-ai-tools-are-helping-journals-fight-back/
[2] Wall Street Journal. (2024, May 28). The journal sleuths trying to stop image fraud in science papers. Available at: https://www.wsj.com/tech/scientific-papers-imagetwin-proofig-image-scanning-4eae1aad
[3] The Publication Plan. (2024, April 9). Image manipulation: how AI tools are helping journals fight back. Available at: https://thepublicationplan.com/2024/04/09/image-manipulation-how-ai-tools-are-helping-journals-fight-back/
[4] Wikipedia. (2024). Copyleaks. Available at: https://en.wikipedia.org/wiki/Copyleaks
[5] Soliman, K. (2024). The rise of AI plagiarism detection and its impact on international students. College & Undergraduate Libraries, 31(2), 177-185. Available at: https://www.tandfonline.com/doi/abs/10.1080/10691316.2024.2433256
[6] Kim, H. (2025). Use of AI detection software in scientific publishing. Journal of Korean Medical Science, 40, e92. Available at: https://jkms.org/pdf/10.3346/jkms.2025.40.e92
[7] STM. (2023). STM Global Brief 2023–2024: Economics & market size. International Association of Scientific, Technical and Medical Publishers. Available at: https://www.stm-assoc.org/