PA EDitorial

What Leading Through AI Integration Has Taught Me About Peer Review

When the first serious conversations about artificial intelligence (AI) in peer review began, my instinct was the same as many people’s: curiosity, mixed with a healthy dose of scepticism. It wasn’t that I doubted the technology could do impressive things – it’s that I’ve been in academic publishing long enough to know that impressive is not the same as useful, and useful is not the same as trustworthy.

I’ve spent more than a decade working in peer review management, and for the last twelve years I’ve led the team at PA EDitorial that now manages peer review for over 200 journals across multiple platforms. That means I’ve come across challenges of every shape and size: overloaded systems, unpredictable surges in submissions, reviewer shortages, and unexpected crises that throw even the best-laid schedules into disarray. I’ve learned that what keeps things steady isn’t just process; it’s people.

So, when AI entered the peer review conversation in earnest, I had one question at the front of my mind: how do we integrate this without losing the human judgement and relationships that peer review depends on?

Seeing AI as a Tool, Not a Replacement

The loudest voices in technology often talk in absolutes – the idea that a tool will revolutionise everything, or that it will make whole roles obsolete. Neither claim matches my experience.

AI has clear potential in peer review. It can help match manuscripts to suitable reviewers more quickly, flag missing information, detect possible ethical concerns, and even translate or summarise content. These are useful efficiencies, especially in high-volume portfolios. But none of those things replace the nuanced decisions that managing editors, editorial board members, and reviewers make every day.

If anything, integrating AI has made me more aware of the value of human oversight. Yes, a tool can surface a list of potential reviewers in seconds, but it can’t know that one of them recently had a less-than-positive experience with the author, or that another is currently dealing with a family emergency and won’t be available. It can flag unusual text patterns, but it can’t judge whether they’re a genuine problem or the result of a perfectly legitimate writing style.

In other words, AI is a potentially beneficial filter, but not a final say.

Leading a Team Through Change

In peer review management, introducing AI into workflows is rarely just a technical decision. It changes how people feel about their work – their confidence in the peer review process, their sense of where responsibility lies, even their understanding of what ‘good’ editorial judgement looks like. And change, particularly in an environment already under pressure from headlines about peer review automation and job loss, can unsettle even the most adaptable teams.

From the start, I realised that this was not about replacing people; it was about giving them better tools to do their jobs. For this reason, I’ve encouraged open conversations, not just about how AI technology can work, but also about the concerns of those who may use it.

Take reviewer-matching tools, for example. An AI system can produce a polished list of suggested names in under a minute, but that speed doesn’t guarantee the right fit. It won’t necessarily know that a reviewer is on sabbatical, or that they’ve recently collaborated with the author, or that they’re already overcommitted; and while it might pick up some of this from publicly available work, it can’t see what hasn’t yet been published. These are the kinds of details human editors pick up – often without even thinking about it – that keep the process running smoothly.

Addressing concerns means starting with trust. New peer review tools need space to be tested before they are embedded, and their results need to be checked against human judgement. Training and adaptation take time; no one benefits from rushed rollouts. Clear boundaries matter too: defining what AI can handle, and what will always remain in the hands of experienced editors, helps keep expertise at the centre of the process.

The truth is, integrating AI successfully is as much about people as it is about technology. Even the best peer review tools will fail if the people using them don’t trust them – or the leadership behind them.

Guarding Against Bias

One of the most important lessons I’ve learned in this process is that AI reflects the data it’s trained on. If that data carries biases, so will the outputs. In academic publishing, that’s not a small risk – it’s a serious one. And in peer review, bias can undermine trust more quickly than any technical flaw.

Take language, for example. Claudia, our research integrity consultant, recently wrote about the challenges that non-native English speakers face in publishing, and how AI can compound those barriers. If a tool has been trained mostly on articles written by native English speakers, it may rank unfamiliar phrasing as ‘poor quality’ or alter meaning during automated editing. That doesn’t just risk undervaluing valid research; it can skew which work is invited to review, recommended for acceptance, or flagged for extra scrutiny.

Consider a translation tool used to help editors quickly assess abstracts in other languages. If it rewrote an author’s opening sentence so bluntly that the nuance was lost – and that nuance changed the meaning – the result might still be grammatically correct but wrong for the paper. Without a bilingual editor to catch it, that submission could be judged unfairly.

This is why human oversight matters so much. Our job is to spot those patterns and challenge them. We can’t just take an algorithm’s recommendations at face value; we must ask, ‘Why is it suggesting this? Who might be left out? What might it be missing?’

Bias isn’t just an abstract ethical issue; it has real consequences for careers, for the diversity of research being published, and for the credibility of peer review as a whole.

Balancing Efficiency and Trust

AI can speed things up – there’s no denying that. But in peer review, speed is not the only measure of success; quality and fairness matter more than pace. If we move faster but erode trust – between editors and reviewers, between journals and authors – we’ve lost far more than we’ve gained.

Trust is built over time, in small, consistent actions: keeping our word, respecting deadlines, communicating clearly, and treating people fairly. It’s fragile, and it can be damaged quickly if authors feel they’re being processed by a machine rather than evaluated by peers.

For me, the balance in peer review comes in using AI to remove unnecessary friction – the repetitive, low-value tasks that drain time – so that my team has more capacity for the conversations, decisions, and relationship-building that keep trust alive.

For example, my experience has shown me that after major conferences, many journals see sudden surges in submissions. Without planning, those spikes can put the schedule under pressure and lead to hasty decisions. In situations like these, AI can help with initial triage checks, freeing editors to spend more time on borderline but promising work. That combination of speed and human engagement can make the difference between a strong paper being overlooked or successfully revised and accepted.

Lessons Learned as a Leader

Even in the first stages of exploring AI in peer review workflows, certain lessons have already stayed with me:

  1. Start with the people, not the tool. It’s tempting to focus on what the technology can do, but it’s the team that will be using it day in and day out. If it doesn’t make sense to them or help with their real pressures, it won’t work in the long run.
  2. Be transparent. People work better with a tool they understand. It’s important that we show them what it can do, where it falls short, and why we’re using it, especially as guesswork only breeds mistrust.
  3. Treat it as a trial, not a finish line. The first version of any tool will have quirks. This means that we need to use it, adjust it, see what works and what doesn’t, and be willing to change course.
  4. Remember why we’re doing it. AI isn’t there to tick a modernisation box; it’s there to help peer review do what it’s meant to do: assess research fairly and rigorously.
  5. Protect the culture. If people feel valued and trusted, they’ll carry that same care into their work. Software can’t create that culture, only people can, and it’s worth protecting.

The Human Element Isn’t Optional

There’s a temptation, when talking about AI, to focus on capability: what it can do, how fast it can do it, how much it can process. Those things matter, but in peer review, they’re not the whole story. The credibility of the process rests on the belief that work has been judged fairly by people who understand both the subject and the human effort behind the research.

That belief doesn’t come from automation. It comes from editors who send careful, constructive feedback, from reviewers who give their time and expertise freely, and from managing editors who keep everything moving without losing sight of the people involved.

AI can help make that work easier, but it cannot replace the judgement, empathy, and relationships that make peer review function – and keep it respected – in the first place.

Looking Ahead

The conversation about AI in peer review is still developing, and so are the tools. Some will prove useful in the long term; others will fade as quickly as they appeared. What I’ve realised through this process is that one thing won’t change: the need for human leadership from people who understand the pressures of academic publishing, and the values peer review is built upon.

I don’t know exactly what peer review will look like in ten years, but I believe if we keep people at the centre, if we’re willing to question the outputs and protect the relationships that underpin the system, we can make AI work for us rather than the other way round.

And for me, that’s the real lesson of leading through AI integration: the technology may be new, but the principles that make peer review work – fairness, integrity, trust – are not. It’s our job to make sure they survive whatever comes next.

About PA EDitorial

Here at PA EDitorial, our team brings extensive expertise and experience in peer review and editorial management.

Our primary focus is supporting academic journals by overseeing the peer review process and assisting all contributors and editorial boards.

We excel at enhancing the efficiency and effectiveness of peer review and have a reputation for successfully revitalising struggling journals.

In addition to peer review services, we offer copyediting, proofreading, formatting for academic and teaching materials, and social media management for journals.

