Somewhere in the middle of a piece of academic writing, I stopped reading.
It wasn’t that the argument was flawed; in fact, I found it to be coherent, well-structured, and properly referenced. The tone felt balanced, and the phrasing was careful. Every sentence sat just as it should: calm, accurate, polished. Yet the writing lacked any real sense of thoughts being developed. There were no questions being explored, no obvious tension between ideas, and no trace of the person behind the sentences.
It felt as though the writing had been stripped of all human nuance, emptied of idiosyncrasy and smoothed into something impersonal. The language felt flattened, tracing the outline of an argument without ever pushing into it – without that moment when a line of thought tightens or turns and both writer and reader feel a flicker of excitement. To me, as a copyeditor, the language was describing rather than thinking; it never rummaged under the bonnet.
And that, I suspect, is the premise behind the question: Who exactly are we reading?
In academia, where disagreement, refinement, and voice are supposed to do the work, something is quietly shifting. As AI tools become more common in research writing, peer review, and even submission letters, we are seeing a slow convergence towards language that is structurally sound but narratively vacant. It’s not that the writing is poor; it’s that it no longer sounds like us.
What’s more, we may be feeding this human loss back into the system because AI doesn’t just learn from human input; it’s increasingly learning from itself.
Recursive Learning, Recursive Language
In mid-2023, researchers at the Universities of Oxford and Cambridge, among others, published a technical paper discussing a concept they called the ‘curse of recursion’[1]. They found that when large language models (LLMs) are trained on content generated by other AI systems, they become narrower and begin to repeat their own patterns. Ultimately, this means that LLM outputs become less accurate, less surprising, and – crucially – less nuanced.
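To see the mechanism in miniature, here is a toy simulation – an illustration of the general idea, not the paper’s actual experiment. Each ‘generation’ fits a simple statistical model to the previous generation’s output, then trains the next round only on the model’s most ‘plausible’ samples, much as a language model favours its likeliest patterns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: 'human' data with a healthy spread of values.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 11):
    # Fit a simple model (here, a Gaussian) to the current pool.
    mu, sigma = data.mean(), data.std()
    # Generate the next pool from the model, keeping only its most
    # 'plausible' outputs (those within 1.5 standard deviations).
    samples = rng.normal(mu, sigma, size=20_000)
    data = samples[np.abs(samples - mu) < 1.5 * sigma]
    print(f"generation {generation}: spread = {data.std():.3f}")
```

After ten rounds, the spread of the data has collapsed to a fraction of the original. No single step does anything obviously wrong; it is the recursion that quietly deletes the tails – the surprising, idiosyncratic material.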
We have to wonder – if AI models are learning from material that’s already been smoothed into patterns of what sounds plausible – whether they will eventually cease to reflect any kind of human thinking and nuance and instead reflect only the growing echo of their own processes. Then what happens when those models are used by scholars, whose writing in turn becomes part of the training pool for the next generation of models? Does the loop perpetuate itself, tightening its grip on human language?
These thoughts are not based on some future scenario. If we consider current trends, we can see that this is all happening now. And in academic writing, where clarity of thought is expressed through clarity of language, the risk isn’t just stylistic; it’s epistemic… shaping not just how we write, but how our knowledge is framed and understood.
When Voice Disappears
As a former academic (if there is such a thing) and a copyeditor, I have never been able to accept one persistent misconception: the belief that academic writing should be impersonal. In reality, the opposite is often true. We need the human voice as an anchor to build both trust and integrity.
When we read Mary Beard or Oliver Sacks, we’re aware of someone thinking on the page. Beard’s writing, even at its most scholarly, retains personality and nuance; her sentences are clear and intentional, yet never narratively vacant. This kind of human element is precisely what’s missing in so much AI-assisted writing. It’s not that the information is the problem; it’s that the presence of a person thinking is missing. Put more succinctly: AI language knows how to sound like it’s saying something, but it rarely knows why it’s saying it. How could it? It doesn’t think.
Oliver Sacks wrote in much the same way. Whether describing a neurological case or a memory from his own life, his prose is attentive and precise because it feels ‘lived in’, because it mattered to him.
This kind of writing stays with the reader because it has voice; it sounds distinctive and carries the weight of attention. It reflects a person being there with their words, noticing, choosing, and revising them. AI can’t replicate that, even with a multitude of prompts; instead, its writing feels produced and neutral… and knowledge is neither.
To illustrate the difference, Sacks wrote: ‘To restore the human subject at the centre – the suffering, afflicted, fighting, human subject – we must deepen a case history to a narrative or tale.’[2]
When asked to produce a similar sentence, an AI system offered:
‘In order to enhance the understanding of patient experiences, it is essential to adopt a more holistic, narrative-based approach to case histories that prioritises the human dimension of clinical care.’
The second sentence isn’t wrong, and it would sit comfortably in a policy document or journal introduction. But something has been lost in the AI translation, especially when ‘the human subject’ becomes ‘the human dimension’.
This is how language is becoming diluted.
Peer Review and the Illusion of Neutrality
Peer review, too, is not immune to the neutrality and dilution that AI language can bring. Several major journals now offer guidance on how – or whether – reviewers might use AI tools in their workflows [3, 4]. Recent research in academic publishing has begun to flag that generative AI tools don’t just help with grammar; they may also subtly push writing towards a narrower, more homogeneous style, diminishing the distinct voice and nuance of authors and flattening different forms of cultural expression [5].
At first glance, AI tools might seem helpful in peer review feedback. A reviewer under time pressure might type a rough note into an AI assistant – something like: ‘Strengthen the limitations section’ – and the tool returns: ‘While the study offers valuable insights, the limitations could be articulated more explicitly to guide future research.’
The language in the example isn’t wrong, but it can leave the author guessing about exactly what needs strengthening or what’s missing. Is it a methodological gap? A sample size issue? A lack of reflexivity?
And peer review isn’t just about assessment; it’s about conversation, usually in written form. It’s about noticing what a paper is trying to do and responding not only to its argument but also to its effort. Sometimes, the most important parts of a review are not the ones that suggest a correction but the ones that show a reader was paying attention. AI can’t do that. It doesn’t care whether the work is brave or cautious, nuanced or politically fraught.
But Is This Really About AI?
Not entirely.
This is also about what we value, what we reward, and what we desperately need to hold on to. If, for example, systems continue to prioritise metrics over meaning, or speed over depth, we can’t be surprised when researchers reach for tools that help them produce more, faster, and with less risk. Yet in this publish-or-perish culture, as editors, reviewers, supervisors, editorial teams, and writers, we have a responsibility to push back.
That said, this is not an argument for banning AI tools, which would be neither realistic nor desirable. Used carefully, they can support clarity, reduce friction, and help writers refine what they already know they want to say. The problem tends to arise when ‘generation’ replaces ‘selection’. In other words, when language is produced rather than chosen.
Good academic prose isn’t about polished words; it’s about messy, chaotic thoughts and theories, brought into some kind of shape… an organised display of human exploration.
Losing the Shape of Thought
There is also a long-standing tradition in academia of valuing complexity, because complex things usually require complex thinking.
AI-generated text, even at its best, is built to resolve the question: to fill the gap and finish the paragraph. But the best, most valuable academic writing doesn’t do that. Instead, it usually begins with a question and closes with another, because complex thinking rarely ends in a tidy conclusion.
So if we teach students – or ourselves – that success lies in clean syntax and surface coherence, we’ll lose more than just originality; we’ll lose the habits of thinking that actually make scholarship worth doing.
What Might We Do?

Maybe nothing, at first. Maybe just this: notice.
The loss won’t come all at once. It will come quietly, through phrases that almost mean something, reviews that sound like engagement but aren’t, or papers that follow every rule and say nothing at all. The more we read language that was never really written, the more natural it begins to feel, and once it feels natural, we will stop asking what’s missing.
So, what might we do?
We might stay aware of how AI changes us – not just how we use it, but also how it uses us – and how it learns from the language we accept. Likewise, we might stay ahead of how it reshapes our sense of clarity, tone, and even thought. If AI learns from us, we are responsible for what we teach it; and… if we’re not careful, we’ll train it not on human writing, but on the residue of writing – the shell of what it used to be – the car without the engine.
We don’t need to reject the tools, but we do need to choose them with discretion and moderation. And we do need to keep choosing what kind of language we want to live with: what kind of writing we want to recognise as our own, and what kind of thinking we’re willing to lose.
Final Thought
Roland Barthes wrote that ‘The writer’s language is never innocent.’[6]
And maybe that’s what we’re in danger of forgetting. AI writes without consequence. It isn’t haunted by history, and it has no politics. It doesn’t worry that a sentence might be misread or that a joke might fall flat. Unlike humans, it doesn’t risk its own authority. It doesn’t care, because it can’t. But we do, and that’s what makes our writing intriguing.
So perhaps that’s exactly why we must keep writing like humans, in all our flawed, brilliant, searching ways. Not to prove we are better than the AI tools we are learning to work with, but to remind ourselves that thinking is a practice we shouldn’t relinquish so easily. Perhaps we need to keep insisting that ‘thought’ remains alive and kicking, in all its glory, through the words we put on the page.
As George Orwell wrote, ‘If thought corrupts language, language can also corrupt thought.’[7]
PA EDitorial and AI-enabled Workflows
At PA EDitorial, the question isn’t whether AI has a role in peer review – we recognise that it clearly does – but how that role is defined.
Automation can scan at scale, flag patterns, and surface concerns far more quickly than humans, but those outputs still need to be read, weighed, and understood in context. This is where our work sits: not in replacing editorial judgement, but in supporting it. We use AI where it helps, and keep people where it matters – in the interpretation, the decisions, and the consequences that follow.
If you want to see how that balance works in practice, and how we support publishers, societies, and journals across AI-enabled workflows, you can find out more about our human-in-the-loop model here.
Sources
[1] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv preprint arXiv:2305.17493. https://arxiv.org/abs/2305.17493
[2] Sacks, O. (1985). The Man Who Mistook His Wife for a Hat and Other Clinical Tales. London: Duckworth.
[3] Cohen, J. H., & Elmore, S. A. (2023). Use of ChatGPT and other large language models in scholarly peer review: ethical considerations and guidance. JAMA. https://jamanetwork.com/journals/jama/fullarticle/2838453
[4] Springer Nature. (2023). Editorial policy for the use of AI tools. https://www.springernature.com/gp/editors/policies/artificial-intelligence
[5] Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals. arXiv preprint arXiv:2107.06751. https://arxiv.org/abs/2107.06751
[6] Barthes, R. (1967). Writing Degree Zero (A. Lavers & C. Smith, Trans.). London: Jonathan Cape. (Original work published 1953)
[7] Orwell, G. (1946). Politics and the English language. Horizon, 13(76), 252–265.
