Academic publishing has evolved considerably in response to technological developments. Current discussions revolve around the rise of generative Artificial Intelligence (AI) tools, or Large Language Models (LLMs). These tools exceed the capabilities of simple spelling and grammar checkers or translation software, and their use in the publication process has several implications that need to be considered.
LLMs are powerful writing assistants – you may have heard of ChatGPT and what it can do, for example – that can be harnessed to generate scholarly text. Following the prompt of a human user, these tools create textual content based on the huge datasets they have been trained on. For native English speakers, they can help structure and organize papers, ensuring that ideas and concepts are presented in a logical and clear way. Meanwhile, researchers whose primary language is not English may benefit from improved grammar, syntax, and vocabulary, as well as from guidance on the nuances of English writing. However, although outputs are presented in a very credible tone, mimicking human writing styles, the information is not always accurate, and citations are often fabricated – so much so that ChatGPT now presents a disclaimer: “ChatGPT can make mistakes. Consider checking important information.”
The careless use of these chatbots may lead to the publication of papers of low scientific quality, containing misleading and/or incomplete information. It is worth mentioning, however, that identifying such papers can be made easier by open and transparent peer review practices, like those used in the EGU journals. This works in two main ways: first, authors may feel more inclined to submit carefully prepared manuscripts in an open peer review system, knowing that the public will have access to their work before publication; second, the interactive discussion stage before publication involves the wider scientific community, enhancing the evaluation of the manuscript.
Like other sectors, publishing may benefit from AI-based tools for the automation of laborious tasks. Applying AI tools to the detection of fraudulent practices and duplicate images, to language improvement, and to the identification of suitable reviewers could help optimize publishing workflows. Other applications of AI require more careful consideration. For instance, in addition to generating text from scratch, LLMs can also summarize, rephrase, and comment on text they are provided with. This has implications for peer review, a cornerstone of the scientific publishing process.
Indeed, using chatbots to both write and review scientific papers could lead to a situation in which AI evaluates content generated by AI, undermining a core principle of the peer review process: expert evaluation by peers.
Moreover, providing manuscript data to AI tools prior to publication raises confidentiality concerns. Therefore, although these tools can currently support reviewers, they should not be used as independent referees of scientific manuscripts.
To maintain the integrity of the scientific publishing process, human oversight is essential. Both authors and reviewers offer critical thinking that surpasses the capabilities of AI tools. Moreover, as these tools lack accountability for the content they generate, they cannot replace human expertise and judgement. In today’s academic landscape, initiatives such as the hands-on EGU peer review training are paramount to developing well-trained individuals who will help maintain rigorous and ethical scholarly standards, with or without AI tools. Beyond improving the overall quality of manuscripts for publication, participants are trained to evaluate the significance and originality of the research presented – something that chatbots are unable to do.
Writing papers remains central to the current academic system, which uses publications as a key metric of a researcher’s impact. In this context, researchers should make responsible use of new tools to improve their manuscripts. Similarly, publishers are experimenting with responsible ways of integrating AI tools into the review process. However, caution is warranted given the many limitations and ethical issues mentioned above. For this reason, EGU has recently released guidelines for the use of AI-based tools in the publication process. We hope that these will help ensure the ethical use of AI tools in the rapidly evolving landscape of scholarly communication.