AI should concern journals
Two new papers look at the role of AI chat tools in science.
Artificial intelligence programs like ChatGPT could be helpful to scientists if used wisely, according to international researchers.
However, they also make it easier than ever to fabricate data, according to a second study.
On 30 November 2022, a company called OpenAI released ChatGPT (Chat Generative Pre-trained Transformer), an online, AI-powered chatbot that simulates human conversation. With it, users can quickly summarise existing medical research findings, get support with medical decision making, and generate educational materials.
This new technology may have significant implications for researchers, reviewers, and editors in scientific publishing.
Authors from Amsterdam University Medical Center have outlined the possible benefits, problems, and future of medical research assisted or written by artificial intelligence (AI) software applications like ChatGPT.
In their piece, the authors argue that software like ChatGPT will let researchers create manuscripts more efficiently by helping to generate complete, standardised scientific text.
They also highlight that this software may help non-native English-speaking authors correct grammatical errors, likely increasing their chances of having their findings published.
However, the authors warn, the knowledge bases of software like ChatGPT are limited and users will need to cross-check information to ensure its accuracy.
They note that pre-prints, existing misinformation, and ChatGPT’s tendency to generate nonexistent references while creating text may further complicate this process.
The authors suggest that journals create transparent policies on the use of ChatGPT. They also caution that listing ChatGPT as a co-author is problematic, because it cannot take responsibility for its content and may not meet authorship criteria. Instead, authors could highlight the areas of generated text so that reviewers and editors can scrutinise them further.
In a second article, experts at the State University of New York say ChatGPT makes it easier than ever to fabricate data.
They say that journals should require proof of data collection, because ChatGPT can be used to fabricate datasets that were never collected.
The team asked the AI chatbot to write an abstract about rheumatoid arthritis using data up to the year 2020, knowing that ChatGPT only has access to data up to 2019.
ChatGPT produced the abstract without question and claimed to have drawn on data from a database that is shielded from public view, implying that these data were also fabricated.
The team also asked the AI to “reword the conclusion” to support a particular claim, and it complied.
The resulting abstracts show how easy it is to fabricate a paper, the team says, and journal publishers need to be aware of the possibilities and screen for AI-generated text the same way they screen for plagiarism.