Artificial intelligence (AI) has driven a paradigm shift in scholarly publications. These profound changes have impacted all stakeholders in this community – the authors, readers, and publishers.
As the AI era was dawning very early in this decade, I was already deeply involved in the scholarly publishing industry. At that time, I was working as a full-time peer review administrator for the leading publisher, John Wiley & Sons, when I first heard the footsteps of AI entering the academic publishing ecosystem.
Thereafter, I moved into freelance consulting as an editor for companies such as PA EDitorial that provide editorial services to major publishers. It was during this period that I could observe, at close quarters, AI-driven technological innovations unfurling across scholarly publications.
From my perspective, the influence of AI on scholarly publishing has two distinct sides.
First, the sweet one. Both authors and publishers now have state-of-the-art AI tools in their repertoires, with unprecedented potential to generate and disseminate high-quality research content.
For example, publishers can offer AI-powered editorial platforms [1, 2] that make it much easier to submit research content to journals, and I have been fortunate to be part of pilot projects testing such editorial interfaces.
However, the bitter side is hard to ignore and has become increasingly visible during my tenure in the scholarly publishing industry. Over the past two to three years, as AI tools have become widely available, journals have been flooded with a mammoth volume of low-quality submissions [3, 4].
I have seen this most clearly during my editorial ‘triage’ work for PA EDitorial, where I used my PhD/postdoc-level subject-matter expertise to identify and sieve out submissions that are not of adequate standard to enter the peer review system. Consequently, I have been able to closely observe the unprecedented volume of low-quality research content.
Using my own experience as an editor for these journals, combined with close consultation with their publishers and senior editors, I have sought to determine who is submitting such a high volume of low-quality research papers, and why.
And the root cause behind this seems evident.
It is without question that AI is being misused. A growing number of papermills – and, in some cases, individual authors – employ AI tools to generate manuscripts that rely on fabricated text, invented data or misleading narratives.
In a ‘publish or perish’ academic ecosystem, researchers, especially early-career ones, are desperately trying to add publications to their CVs – even at the cost of unethical practices that compromise research integrity – and AI has become their easy route to publication.
Because of this, I’ve had a general negative view of AI’s impact on research output.
In addition, during my tenure as an author of peer-reviewed publications, I have never used AI. First, because my last research article was published in 2021, just before AI tools became ubiquitous in publishing. Second, the importance of ‘originality’ in scientific writing was hammered into our brains throughout grad school and postdoc, to the point that even if such tools had been available, taking a shortcut with AI would have likely been taboo.
Nevertheless, it simply cannot be ignored that AI has indeed become commonplace in the publishing world; so much so that top publishers are developing formal AI policies [5, 6]. This prompted me to get a clearer picture of where authors are in terms of using AI in their research output.
To this end, in my limited capacity, I have spoken with distinguished academics working at different levels of the research ecosystem – spanning from undergraduate colleges to high-impact research institutions. I have paired these conversations with my own experience in the publishing industry and targeted literature scouting.
This article brings together ‘the good, the bad, and the ugly’ impact of AI in shaping scholarly publications. It also outlines my perspectives on what responsible stakeholders in scholarly publishing are doing – or might need to do – to reap the benefits of AI, while minimising its misuse.
AI has done a ton of good for the research community
For the research community, AI has brought a plethora of benefits – laid out not on a golden but an iron platter, making it cheap, or even free. Not surprisingly, a robust Wiley survey of nearly 5,000 researchers across 70 countries has suggested ‘artificial intelligence (AI) tools for processes such as preparing manuscripts, writing grant applications and peer review will become widely accepted within the next two years’.[7]
My conversations with researchers suggest that academics across the spectrum – from top scientists to those supervising only undergraduate projects – are increasingly open to using AI in their work. Several factors explain this shift.
Language polishing
Many researchers, especially those whose first language is not English, use AI to refine their manuscript text [8]. Even in the pre-AI era, software ranging from free tools to sophisticated, expensive packages was used to spot and correct grammatical errors. AI now takes this a step further, offering ‘smart’ language edits that resemble the refinements provided by scientific writing vendors, but at a much lower cost.
Some researcher-authors maximise these benefits by using AI to write a first draft, then working through the text to refine it to their own voice and needs, thus making the content ‘original’. From the publisher’s side, where I’ve been working for the last few years, the impact of this approach is especially visible. I’ve seen a huge upsurge in submissions from regions such as China and the Middle East, where English is not the native language.
If we consider, hypothetically, that AI was used in these submissions solely to refine the text or to produce a first draft, and that the underlying science is original and authentic, then this shift is positive in two ways.
First, if AI can assist non-native English speakers in communicating their research content in English prose, it would promote diversity, equity, and inclusion (DEI) – a core value in scholarly publishing.
Second, this would reduce the time and energy researchers and reviewers spend correcting grammar or editing the text. That time would be much better utilised in focusing on the actual research content.
Reviewing the literature, especially in cross-disciplinary fields
Reviewing the existing literature to gain meaningful insights is a cornerstone for producing your own research output. AI has indeed revolutionised this stage of the process. Even before the AI boom, we already had access to sophisticated databases and search engines for retrieving existing literature, but AI has taken it to the next level.
With AI, it is easier not only to find relevant literature, but also to have the associated metadata from the entire corpus of relevant publications extracted and organised seamlessly. In my view, this offers three specific advantages.
- Researchers, especially early-career ones and trainees, waste less time reading irrelevant papers.
- Despite the best human effort and proficiency, manual literature searches often miss key papers, whereas AI-assisted search tools miss far fewer.
- The most significant advantage is reaped by researchers engaged in cross-disciplinary work. For example, say you, as a computer scientist, are collaborating with a molecular biologist to develop a new bioinformatics tool. Wouldn’t it be wonderful to have all the relevant molecular biology literature extracted, analysed, and succinctly summarised for your easy reference? AI does exactly that.
These points perhaps explain the huge hike in narrative reviews and meta-analyses/systematic reviews on cross-disciplinary topics submitted by junior researchers from countries underrepresented in scholarly publishing.
Actual process of submission to the journal
AI-powered tools have made the process of submitting a research manuscript much more efficient. In my own work with next-generation ‘smart’ submission systems like Wiley’s ‘Research Exchange’ and Springer Nature’s ‘SNAPP’, I have seen how these platforms are designed to save authors’ time, efforts, and energy during the submission process – a point echoed in conversations with researchers.
AI tools can analyse a manuscript’s metadata and suggest suitable journals for submission. Once an author has chosen a journal, AI systems guide them step by step through each stage of the submission process, ensuring that all technical requirements – ethics, funding, conflicts of interest, and more – are met. For example, I have worked with a journal that offers a large language model-driven ‘assistant’; it tells authors exactly which parts of their sequencing data need to be archived in which database and what they need to specify under the Data Availability Statement.
Not only is this convenient for authors, but also for folks like me – sitting at the publisher’s end. It reduces the number of manuscripts that need to be returned for minor technical revisions. These AI-embedded interfaces also give editors everything they need in one place – accessible by just a mouse click – from suggested reviewers to instant plagiarism reports.
Even when a manuscript is rejected by one journal, these smart submission systems suggest alternative journals where it could be more suitable. Here, the authors benefit from using AI, increasing the chances that their manuscript will find a journal ‘home’ and their research will ultimately be published. It also allows publishers to retain strong content within their wider portfolio.
AI comes with a lot of problems, too
Every benefit of AI I have outlined can also be turned into a potential misuse, and we already see this trend. Several concerns stand out for the industry.
How much of the submitted research content is ‘original’?
While using ‘assistance’ from AI tools to refine grammar seems just fine, it is definitely a problem if AI actually authors the manuscript with low or no human intellectual involvement.
It is increasingly common to encounter submissions that pass initial technical checks but reveal, on closer examination, clear signs of AI-generated authorship: generic language, a carelessly laid-out study rationale, and sketchily described materials and methods.
More concerning is that AI is increasingly being used to generate fabricated data. Some papermills – and occasionally individual authors – use AI tools to create manuscripts that include manipulated blots, fictitious microscopy images or graphs based on non-existent datasets. These issues often become apparent only during careful editorial scrutiny.
Another unethical trend is ‘salami slicing’. Elsevier concisely states ‘the “slicing” of research that would form one meaningful paper into several different papers is called “salami publication” or “salami slicing”’.[9]
For example, consider a publicly accessible sequencing or survey dataset such as NHANES [10].
Some less ethical authors or papermills may use these datasets – not generated by themselves – to perform minimal statistical or informatics analysis, and then ‘write’ multiple near-identical manuscripts with the help of AI, submitting them simultaneously to multiple journals through user-friendly submission systems offered by publishers.
While one might argue that such low-quality manuscripts would ultimately be rejected, they clog the already overburdened peer review machinery. This is why publishers are being forced to employ more human resources with PhD-level subject-matter expertise and to develop tools that are always a step ahead of the ‘AI-writers’, so that such content can be rejected even before peer review.
The next generation of researchers is learning to take shortcuts
All the great benefits that AI provides – like helping non-native English speakers write manuscripts and providing a summary of existing literature on a topic without having to read papers manually – have the potential to spoil the next generation of young researchers. This is an issue that seasoned academics and research mentors should consider.
For example, an early-career researcher who is not proficient in English might become overly dependent on AI for writing. Such a person might never develop the skills to communicate their own research independently in coherent English – an important part of academic maturity.
Similarly, key opinion leaders and policymakers in postgraduate research and education may need to consider whether there is still intellectual value in a PhD or postdoctoral trainee manually searching for and physically reading papers, jotting down key ideas, and summarising an area of research to form new research hypotheses and ideas for experimentation. There is a fine line between being AI-savvy and being on a slippery slope toward AI dependence, and it is not yet clear on which side of that line the next generation of researchers will fall.
Concluding thoughts
From the publisher’s perspective, and from conversations with colleagues across academia, it seems to me that AI in scholarly publishing remains a double-edged sword. It has considerable potential to improve the quality of research publications, but only if its use is focused exclusively on enhancing quality rather than quantity.
At the same time, there is clear evidence of AI being misused. Journals are being flooded with an increasing number of AI-generated manuscripts that make minimal original intellectual contribution to the data or its analysis. Ironically, much of this material comes from the communities that could have benefited the most from AI – authors in non-English-speaking and historically underrepresented regions in the scholarly publishing landscape.
As noted by Harlan Krumholz of Yale University – a leader in the clinical and academic fraternity – scholarly publishing is a ‘partnership’ between authors, publishers, and the editorial workforce – working together to strengthen research and help the best ideas reach full potential [11]. The current dichotomy over whether AI is a friend or a foe for researchers presents an ideal opportunity for this partnership to rise to the challenge, so that the advantages of AI can be harvested while mitigating its shortcomings.
Senior academics mentoring early-career researchers may need to emphasise that AI is just a tool, not a substitute for the underlying intellectual work. Without guidance and restrictions, some researchers may quickly fall into cognitive laziness, seeing AI as doing all the academic work for them.
Specifically, senior academics need to make clear to their trainees that AI may assist researchers in polishing the manuscript’s language; however, the submission’s narrative must definitely be owned by the authors. Algorithms can serve as sophisticated search engines, but connecting the dots to assimilate the bigger picture emanating from the existing literature is still pretty much the researchers’ responsibility.
In my work with the publishing industry, I have seen publishers evolve beyond the role of simple gatekeepers. Many now encourage authors to use their AI-enabled tools for submission and even for language refinement, while simultaneously developing robust, data-driven analytics to define how AI can be used responsibly in submitted manuscripts.
Finally, on a personal note, I feel fortunate and motivated to serve scholarly publishing at a time when this community is still walking a tightrope – learning to adapt to AI. It has been fascinating to watch how the author’s role is transitioning compared with when I was a researcher myself, not too long ago.
In some ways, AI in scholarly publishing is what the discovery of fire was to early humans – groundbreaking. This means that using it in moderation, with adroit maturity, and exactly where required is key. As part of the editorial workforce, my role is to ensure that quality research content, often generated with judicious use of AI, gets published, while low-quality or inappropriately AI-generated material is nipped in the bud.
By – Shubham Chakravarty – PA EDitorial freelance consultant editor
Shubham Chakravarty holds a PhD in Biology from Purdue University, USA. He has long been associated with the scholarly publishing industry – starting with his tenure at John Wiley & Sons. He is currently a full-time Assistant Section Editor at BMJ Best Practice. Shubham has also worked for PA EDitorial as a freelance consultant editor – providing editorial services such as triaging new submissions and peer-review management.
Sources
[1] https://www.springernature.com/gp/snapp
[2] https://www.wiley.com/en-in/solutions-partnerships/societies-publishers/research-exchange/
[3] https://www.nature.com/articles/d41586-023-03144-w
[4] https://pubmed.ncbi.nlm.nih.gov/30209384/
[6] https://www.wiley.com/en-in/about-us/ai-resources/principles/
[7] https://www.nature.com/articles/d41586-025-00343-5
[8] https://researcheracademy.elsevier.com/uploads/2018-02/2017_ETHICS_SS02.pdf
[9] https://pubmed.ncbi.nlm.nih.gov/36038730/
[10] https://wwwn.cdc.gov/nchs/nhanes/continuousnhanes/default.aspx?Cycle=2025-2026
[11] https://www.jacc.org/doi/epdf/10.1016/j.jacc.2025.10.007
