PA EDitorial

ChatGPT Implications in Academia and Peer Review

Since OpenAI launched ChatGPT in November 2022, it has consistently made headlines for a variety of reasons. For some, the arrival of this latest artificial intelligence (AI) application heightens anxieties about where this technology could lead. In contrast, others are keen to explore its potential in a range of fields, including academia.

Before understanding how ChatGPT can work within an academic setting, we must first understand its basics.

What is ChatGPT?

ChatGPT is a merger of innovations: the GPT-3 AI language processing tool developed by OpenAI and chatbot technology, which has been consistently improving over the years and can already be found on an array of websites across sectors, including online retail.

GPT-3 is capable of generating text that looks as though it’s been written by a human. Inevitably, this has numerous applications, like language modelling, translation, and text generation for websites and chatbots.

To give a little more context, it’s easily one of the most powerful language-based AI models available at the time of writing, with 175 billion parameters. [1] That’s roughly 158 billion parameters more than Microsoft’s 17-billion-parameter Turing NLG model.

Of GPT-3’s many possible applications, ChatGPT has become the most widely used because it can take prompts and deliver written text or appropriate responses to specific questions.

How does it work?

GPT-3 uses ‘probability’ to predict which word should follow the previous ones in a sentence. To do this, it was trained on hundreds of gigabytes of text from online articles and books, learning statistical patterns of language that it can draw on when generating responses.
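The next-word idea can be sketched with a toy frequency-based model. To be clear, this is only an illustration of the underlying principle: GPT-3 itself uses a large neural network trained on vast amounts of text, not a simple word-count lookup like the one below.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on hundreds of
# gigabytes of text rather than a single sentence.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word after `word`."""
    counts = following[word]
    total = sum(counts.values())
    # Convert raw counts into probabilities and pick the highest.
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

print(predict_next("the"))  # 'cat' follows 'the' most often in this corpus
```

Scaled up enormously, with probabilities learned by a neural network rather than counted directly, this is the essence of how a model like GPT-3 produces fluent text one word at a time.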

To illustrate just how well ChatGPT can perform, a CNN article reports that it has passed several notable graduate exams at institutions, including the Wharton School of Business at the University of Pennsylvania and the University of Minnesota. [2]

That said, it’s important to highlight that it didn’t necessarily pass with flying colours. In fact, on one blindly graded exam for a course at the University of Minnesota Law School, it performed at a low C+ level. ChatGPT did perform better elsewhere, however, achieving between a B- and a B on a Wharton business management exam.

The adoption of ChatGPT in academia

A recent study appearing in Finance Research Letters demonstrated that ChatGPT has the ability to write papers that are accepted for publication in academic journals. [3] Furthermore, a study published in bioRxiv revealed that ChatGPT-generated scientific abstracts were convincing enough to fool ‘blinded human reviewers’. [4]

Therefore, it’s perhaps of little surprise that academic opinions are divided on whether to embrace ChatGPT or take steps to distance academia from this type of technology. Brian Lucey and Michael Dowling note in an article published in The Conversation that ChatGPT could become an ‘important aide for research’. [5]

However, Sandra Wachter of the University of Oxford has expressed concerns in a Nature article, noting that if experts cannot accurately determine what is true and what is false, it could impact how we move through complex topics. [6]

The limitations of ChatGPT

While there are many areas in which ChatGPT performs exceedingly well, it has plenty of limitations that are important to highlight.

If we loop back to ChatGPT’s ability to pass prestigious examinations, we should emphasise that there were areas in which its performance dipped dramatically. As Wharton business professor Christian Terwiesch explained, ChatGPT answered basic management and process questions with proficiency, but it also made mistakes on very simple mathematical problems, which can clearly have an enormous impact on the overall quality of its output. [7]

This is a sentiment echoed by Arvind Narayanan – a professor at Princeton researching the impact of AI – in an interview given to The Markup. [8] Narayanan emphasises that while AI tools are more accessible than ever before and certainly provide some benefits, including the ability to condense large volumes of information neatly, they are far less trustworthy when their output is assessed for factual accuracy.

In a Slate article that referenced a peer-reviewed paper written by ChatGPT, a range of concerns were raised, including inaccurate references, references to works that don’t exist, and the fact that the content itself contained only recycled information – hardly credentials meriting inclusion in an academic journal. [9]

Ethics and the detection of AI content

Although ChatGPT certainly has many limitations, the fact remains that it can secure pass grades, albeit low ones, in prestigious tests. To combat the ethical issues this raises, educational establishments are turning to increasingly sophisticated AI content detection tools to prevent plagiarism.

Turnitin is one of the most widely used plagiarism-detection tools for essay submissions, and it has developed its own AI detection software trained specifically on database-sourced academic writing. Chris Caren, CEO of Turnitin, has said that while AI tools can make a positive difference when used responsibly, educators face an increasing need to understand where and when these tools are being used. [10]

OpenAI has also created a tool designed to identify when writing has been generated by its own ChatGPT technology. [11]

Other emerging AI technologies

GPT-3 from OpenAI is far from the only emergent AI technology. You may recognise Google’s LaMDA from the headlines it made in 2022, when Google engineer Blake Lemoine was placed on leave after claiming that the chatbot was sentient and could express feelings and thoughts at a level equivalent to an eight-year-old child. The statement provoked strong and adverse reactions in the AI community. [12]

That being said, it’s clear that AI will only become ever ‘smarter’ as technologies develop.

Where could ChatGPT add value to the peer review process?

As a Slate article highlights, the process of vetting an academic document is a time-consuming exercise that requires a great deal of effort on the part of the reviewer, who, in the majority of cases, will not be paid for their work. By leveraging AI technology to assist at some level, there is the potential that journals could improve the overall quality of submissions and, thus, their value. [13]

Brady D. Lund from the University of North Texas predicts that ChatGPT could facilitate research sharing by ‘generating abstracts, summaries, and other materials that can help to make research more accessible and understandable.’ [14]

Clearly, there remains extensive debate within the educational sector – and far beyond – as to whether ChatGPT has a place within academia and peer review. It is a subject which merits ongoing attention and informed discussion.

One thing is certain: countless thought-provoking conversations will be had on all sides of the argument. Perhaps the chatbots will contribute to them.

If you would like to know more about how PA EDitorial or EDiTech can help you, please feel free to contact us at

PA EDitorial – No task too big or too small















