A koala is crouched on a surfboard, calmly riding the waves while decidedly more human surfers appear in the background. This remarkably realistic video flashes across the screens in the Telders Auditorium of the Academy Building at Leiden University. If it weren’t for the icon in the bottom corner indicating it was created by artificial intelligence (AI), you would be forgiven for wondering where this wave-loving marsupial lives.
This AI-generated video was part of a symposium on AI in Academic Publishing organized by Leiden University and Elsevier. Researchers, university staff, and academic publishers from around the Netherlands came together to discuss the risks and benefits of the rapid evolution of AI for research.
‘We can’t just develop technology and hope it goes well’, said Catholijn Jonker, Professor of Interactive Intelligence at TU Delft and guest professor at the Leiden Institute for Advanced Computer Science (LIACS). ‘No technology is value-free. We have to think about what technology will do to us as a species.’
Humans looking at humans
Although AI – ChatGPT included – has been rapidly integrated into daily life over the past couple of years, there are still many practical and ethical challenges to its use. At the university level, this includes whether and how students are allowed to use it on assignments.
A similar question extends into academic publishing. If ChatGPT is used to help write an academic paper, should it be listed as an author? According to Anita de Waard, VP of Research Collaboration at Elsevier, the answer is no.
De Waard noted that ‘authors can use AI to improve readability, but there needs to be human oversight and control.’ AI is forbidden in peer review, though. ‘Editing and reviewing are humans looking at other humans’ work.’
The rise of AI has given academic publishers a new problem to contend with: fabricated papers. Just as students have been caught using AI to generate assignments, entirely fabricated papers are finding their way into academic journals. How exactly these fake papers make it past peer review is unclear.
Junk papers
‘Published papers are a proxy for being a scientist’, noted De Waard. ‘AI is exacerbating this issue, but it’s not directly a result of AI.’
‘I like to think of AI as an accelerant’, said Paul Groth, Professor of Algorithmic Data Science at the University of Amsterdam. ‘It’s not like we didn’t have junk papers and problems with scientific rigor before, but now we have more junk papers and problems with rigor. We need to double down on the norms of science: transparency and replicability.’
This sounds good in theory, but will it work in practice? When it comes to replicability, De Waard noted that if an LLM was used to generate a result, ‘in a sense, you can't really re-run the experiment because you can't keep all of GPT-4 anywhere.’ For Groth, this means that ‘provenance will become paramount’.
According to De Waard, there’s another challenge in using AI for research. ‘Bias is inherent in the data we train our models on. AI can generate and perpetuate bias, because those biases are in the nature of the people generating the models.’
Always hallucinating
This means that researchers and the public alike need to be more skeptical. ‘People need to understand these things are not magic’, noted Michael Cook, a Senior Lecturer at King’s College London. ‘ChatGPT is designed to smooth over its mistakes.’
Peter van der Putten, Assistant Professor at LIACS, offered a more philosophical perspective. ‘When looking at the outputs of AI, we need to realize that we're looking at the shadows and not the real thing. It’s possible to have a model that's no longer connected to reality.’ For example, a surfing koala.
No one at the symposium advocated for banning AI use in academia. Instead, they focused on ensuring human oversight of how AI is used in research. De Waard offered a stark reminder: ‘Large Language Models don’t reason. They’re basically always hallucinating. These models are trained on style.’