The scientific publishing industry is grappling with a surge in AI-generated content, which has exposed long-standing flaws in the sector. Experts who track problems in published studies say the rise of artificial intelligence has worsened those issues across the multi-billion-dollar industry. While AI programs such as ChatGPT can be valuable tools for writing and translating papers when properly checked and disclosed, recent cases have laid bare the shortcomings of the peer review process.
AI-generated graphics that slipped past peer review have drawn particular attention. One widely shared example was an image of a rat with disproportionately large genitals, published in a journal of academic giant Frontiers; the study was later retracted. Another retracted study featured an AI-generated graphic of legs with odd, multi-jointed bones that resembled hands. Beyond such images, however, it is ChatGPT, the chatbot launched in November 2022, that has most significantly changed how researchers present their findings.
While such embarrassing examples are rare and unlikely to get through the peer review of the most prestigious journals, spotting the use of AI is not always straightforward. One clue is the overuse of words ChatGPT tends to favor, such as "meticulous", "intricate", or "commendable". Using that signal, librarian Andrew Gray of University College London estimated that at least 60,000 papers involved the use of AI in 2023, over one percent of the annual total, and he expects a substantial increase in those numbers for 2024.
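As a rough illustration of that kind of screen (not Gray's actual methodology), the sketch below counts how often a text uses tell-tale vocabulary; the word list, the example abstract, and the baseline threshold are all assumptions made for the example.

```python
import re
from collections import Counter

# Words often cited as ChatGPT favorites; an illustrative list, not Gray's actual criteria.
TELLTALE_WORDS = {"meticulous", "meticulously", "intricate", "commendable", "delve"}

def telltale_word_rate(text: str) -> float:
    """Fraction of word tokens that appear in the tell-tale list."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in TELLTALE_WORDS) / len(tokens)

# Hypothetical usage: compare an abstract's rate against a pre-ChatGPT baseline.
abstract = "We meticulously delve into the intricate, commendable dynamics of the system."
baseline_rate = 0.0004  # assumed corpus-wide rate; would be measured on pre-2023 papers
if telltale_word_rate(abstract) > 10 * baseline_rate:
    print("flag for manual review")
```

A flag from such a screen would only be suggestive, not proof: it is meaningful solely relative to a baseline rate measured on papers written before chatbots were available.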
The misuse of AI goes beyond graphics, extending to the flood of "junk" papers produced by bad actors in scientific publishing and academia. A particular concern is paper mills: outfits that sell authorship slots to researchers and churn out huge numbers of poor-quality, plagiarized, or outright fake papers. Dutch researcher Elisabeth Bik, a specialist in detecting manipulated scientific images, estimates that two percent of all studies are published by paper mills, a rate she believes is climbing now that AI has thrown the floodgates open.
The problem was thrown into relief by academic publishing giant Wiley's 2021 acquisition of troubled publisher Hindawi: since then, Wiley has retracted more than 11,300 papers tied to Hindawi special issues. To combat the misuse of AI, Wiley has introduced an AI-powered "paper mill detection service". But Retraction Watch co-founder Ivan Oransky stresses that the problem goes beyond paper mills, reflecting an academic culture that pressures researchers to "publish or perish".
That demand for an ever-increasing number of papers puts immense pressure on academics, who are often evaluated on their output. This creates a vicious cycle, pushing many researchers to turn to ChatGPT to save time. And while AI translation tools can be invaluable for non-native English speakers, there are fears that AI-introduced errors, fabrications, and inadvertent plagiarism could erode society's trust in science.
A recent incident involving bioinformatics professor Samuel Payne illustrates the problem. Payne discovered that an AI program had apparently rephrased his own study to produce a plagiarized version. Although that manuscript was rejected during peer review, it was nevertheless published in a new Wiley journal called Proteomics, and it has not been retracted.