It was not that long ago that scientists had to hang on to huge stacks of paper journals and students had to dig through boxes of index cards to find sources. Reference management programmes such as Mendeley and Zotero have made it much easier to keep track of the literature.
However, the new generation of tools poses a threat, according to a publication in Research Evaluation written by sociologist of science Serge Horbach and his colleagues.
RESEARCHING SOURCES NOT REQUIRED
Whereas the current tools only help researchers find relevant literature, computer scientists are now developing new tools that go a step further. This new citation software can analyse texts on its own and automatically add relevant source citations. Simply type a sentence, a paragraph or even an entire article, and the computer will suggest which passages need a citation and which source to cite – for example ‘Jansen 2020’ or ‘Pieterse 2015’ – in order to substantiate your argument. You would no longer have to do your own source research.
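The article does not say how such tools rank candidate sources, but one plausible baseline is simple text similarity: score each known paper against the paragraph you typed and suggest the closest matches. A minimal sketch, with an entirely invented mini-corpus (the paper names and texts below are illustrative, not real sources):

```python
import math
from collections import Counter

# Hypothetical candidate sources; names and texts are invented.
corpus = {
    "Jansen 2020": "peer review quality and the correction of scientific errors",
    "Pieterse 2015": "citation behaviour and the reward system in science",
    "Smit 2018": "machine learning methods for recommending related literature",
}

def vectorize(text):
    """Bag-of-words term counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest_citations(paragraph, corpus, top_n=2):
    """Rank candidate sources by textual similarity to the paragraph."""
    query = vectorize(paragraph)
    scores = {key: cosine(query, vectorize(text)) for key, text in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(suggest_citations("citation behaviour shapes the scientific reward system", corpus))
# → ['Pieterse 2015', 'Jansen 2020']
```

Real systems would use richer representations than word counts, but the principle – match your text against existing papers and return the best-scoring ones – is the same, which is also why the output can only ever reflect what is already in the corpus.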
Such an invention would save researchers a lot of time and effort, but according to Horbach, the danger of this convenience lies in the fact that it touches upon the very heart of science. ‘Your work should show where your ideas come from and what you’re building on’, says the guest researcher at the Centre for Science and Technology Studies. ‘That’s how you associate yourself with certain scientific movements, or with specific scientists.
‘Furthermore, citations play a major role in the scientific reward system. Citing articles increases the attention paid to their authors, which makes it appealing to cite yourself or your close colleagues. Citation software misses the mark in all these areas. Of course you can just use the software’s suggestions for inspiration, but the threshold for skipping that step and copying them directly will be very low.’
GAMING THE SYSTEM
And that is when laziness starts to creep in, he argues. ‘These tools can take part of the writing process out of your hands, even though keeping track of and understanding the literature is one of your main tasks as a scientist. You have to assess other people’s work and combine it with your own ideas to create something new. If you leave out half of that work, you’re left with a rather meagre result. It makes you a poor scientist.’
But apart from potentially making researchers lazy, the automatic source finder presents even more problems. ‘People will eventually figure out how to get their articles to appear at the top of the suggestions of these kinds of tools, just like with Google’s search results. It will be attractive to write your articles in such a way that they end up among the tools’ first suggestions. This might be even more appealing to scientists than trying to make their text attractive and easy to read. That will create even more incentive for certain forms of gaming the system.’
The software could also amplify the so-called Matthew effect in scientific publishing. This positive feedback loop is also evident in the awarding of grants and prizes: once you have received a grant, you are much more likely to be rewarded again the next time. And it goes both ways: those who fall short in the beginning are less likely to succeed later on. Likewise, papers that have already been cited many times will be more likely to be recommended for citation again by the algorithm.
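The feedback loop described above is easy to demonstrate with a toy simulation: if a recommender suggests papers in proportion to their existing citation counts, an early lead compounds. The paper names and starting counts below are invented purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# Invented starting citation counts: paper_A begins with an early lead.
citations = {"paper_A": 50, "paper_B": 5, "paper_C": 5}

def recommend(citations):
    """Pick one paper, weighted by its current citation count."""
    papers = list(citations)
    weights = [citations[p] for p in papers]
    return random.choices(papers, weights=weights, k=1)[0]

# Each recommendation that gets followed adds a citation,
# which in turn raises that paper's chance of being recommended again.
for _ in range(1000):
    citations[recommend(citations)] += 1

print(citations)
```

After a thousand rounds, paper_A has pulled far further ahead of the other two than its initial 50-to-5 advantage alone would suggest: the rich get richer, exactly the dynamic Horbach warns about.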
And those who think that an algorithm will solve the current prejudices in science are wrong, according to Horbach. ‘These tools base their suggestions on current citation behaviour. This will magnify the existing biases in scientific literature. It works to the advantage of people who work for famous institutes and articles with positive results. Gender biases will also be reinforced.’
Another disadvantage: ‘The computer will only give suggestions for sources that substantiate your argument. It will offer far fewer suggestions, or none at all, for articles that contradict your work or show nuance. And that in turn contributes to positive citation bias.’
PEER REVIEW
Those who still entertain the hope that journals will filter out the mediocre, automated source citations will be sorely disappointed, according to Horbach. ‘I have done a lot of research into the effectiveness of peer review in the correction of science over the past year, and I estimate the chance to be practically zero. Only very few people consider checking references to be the responsibility of reviewers, least of all the reviewers themselves.’
The citation software is not yet widely available, but Horbach hopes that his appeal will pre-empt its widespread use and possibly steer the developers in a different direction. ‘Let’s not wait until the damage is done.’