‘Impressive’, says Assistant Professor of Cognitive Robotics and Artificial Intelligence Roy de Kleijn, referring to ChatGPT. ‘This is a breakthrough. Until two years ago, these kinds of language models performed poorly and you could immediately tell that the texts were not written by humans. But ChatGPT has brought us into a new phase. The quality is very good; it produces natural-sounding sentences.’
As a result, De Kleijn notices that lecturers are getting nervous. ‘Their reaction is rather panicky: what on earth is this?’ As for himself, he remains neutral on the matter. ‘In the same way that calculators have taken over arithmetic tasks, ChatGPT can take away writing tasks. Let people do what they’re good at: innovative thinking, and let’s leave the rest to technology.’
Professor of Journalism and rhetoric expert Jaap de Jong feels somewhat apprehensive about the chatbot. ‘I don’t think this is a good thing for education. It’s important that students are able to come up with text structures themselves. Originality matters too, but generating sample texts constrains it. Just try thinking creatively when there is already an entire piece of text in front of you.’
HOPELESS TASK
‘Putting thoughts and arguments into words is a skill I think students should learn’, says Francien Dechesne, academic coordinator of the Centre for Law and Digital Technologies (eLaw) and Assistant Professor of Ethics and Digital Technologies. ‘ChatGPT confronts us with the question of whether our way of testing is still sufficiently robust. Everyone is asking a lot of questions but there are no answers yet.’
According to De Kleijn, however, trying to stop ChatGPT would be a ‘hopeless task’. ‘The attainment targets state that students should be able to produce well-written texts, maybe we should change that. There’s no doubt students are going to use it, so we should embrace it too.’
But the question is how. Should we revise the testing methods? And will lecturers still be able to give their students written assignments?
‘This puts a bomb under our testing model’, says Chair of the Board of Examiners of the Institute of Political Science and Professor of Crisis Management Arjen Boin. ‘It’s our job to safeguard the quality of exams, and papers are a big part of that, certainly for political science. ChatGPT poses a direct threat to that form of testing. But we can’t just put everything on hold or only administer multiple-choice exams. Nor can we ask lecturers to suddenly jettison the entire curriculum.’
Mare also reached out to other Boards of Examiners but they either ‘did not want to publicly discuss’ their methods or referred the questions to the public relations department.
DISCUSSING AND CRITICISING
According to Boin, changing learning objectives is not on the agenda yet. ‘We want to continue to teach our students argumentative reasoning. The fact that there is now a software tool that can do that for them is no reason to adjust the learning objective in our view. If they’re to advise politicians someday, they should be able to do so based on their own arguments. However, we will have to come up with other ways to check whether they have achieved their learning objective.’
Dechesne has an idea of how to do that. Before the arrival of ChatGPT, she was already using its predecessor GPT-3 in her law lectures. ‘I would let GPT-3 answer a question and ask students what they thought of that answer’, Dechesne explains. According to her, discussing and criticising arguments provided by the chatbot in class actually helps students learn. ‘So from now on, I’ll project an automated answer from ChatGPT on my screen every year.’
Assistant Professor of Language Proficiency Alex Reuneker also advocates that approach. ‘It shouldn’t become the main focus of your teaching, but at least you’d be able to introduce your students to this kind of technology. We could pretend it doesn’t exist, but that would be silly.’
However, this method of teaching also poses a problem, warns Dechesne. ‘This method requires a lot of human contact and feedback. But we’re dealing with large student numbers, and it would take a lot of time, which we don’t have. Moreover, it means that students no longer learn to devise and formulate arguments themselves.’
‘We’ll have to come up with new solutions’, says Dechesne. But she does not like the idea of reintroducing pen-and-paper exams, as Australian universities have done. ‘That’s fixating on preserving the status quo. I don’t believe in that, not just because it implies a distrust of students, but also because writing by hand is irrelevant to their future career.’ Besides, it creates difficulties for lecturers. ‘It adds a lot of extra work because handwriting is more difficult to read, and it’s also harder to evaluate students’ work anonymously.’
‘Writing a thesis with pen and paper would be rather problematic’, says De Kleijn. ‘You can’t lock students up until their thesis is complete.’
SKILLS
‘Traditional learning objectives are changing anyway’, says Associate Professor of Philosophy Jan Sleutels. ‘There is still an emphasis on written work, but collaboration skills and oral presentations are becoming increasingly important.’
According to him, there is no need to ban written assignments, but instead they might be supplemented with an oral exam, for example. ‘Have them reiterate the details of a particular subject in their own words, as a reality check. That is a perfect way to combat fraud.’
‘That way, they can show right then and there that they’ve understood the material and can put it into words’, agrees Dechesne. ‘Presenting and responding directly to questions from fellow students are skills we can continue to test.’ But then again, ‘it takes more time’. Sleutels points out another drawback: ‘The assessment is subjective guesswork.’
Moreover, he is not yet very impressed with the text quality of the bot. ‘I recently entered a few short essay assignments, but the result would not get a passing grade. The text is clichéd, superficial and machine-like.’
‘What I saw was rather basic’, agrees Professor of Legal History Egbert Koops. ‘Presumably, I’d notice if a student submitted a text written by ChatGPT, but maybe I’m overestimating myself.’ The lecturers do expect that, in time, it will become ‘more difficult or impossible’ to tell the difference.
BAN
By then, ‘we will have exam settings on university laptops so we can make students do written assignments without access to ChatGPT’, Reuneker thinks. ‘This can be remedied in the short term. First, lecturers need to put on the brakes a bit and not rush to regulate everything.’
Yet it is the lecturers themselves who have to figure out whether things have to change and if so what, says university spokesperson Caroline van Overbeeke. ‘It’s up to the programmes to decide this for themselves. We support them by sharing information provided by our internal experts on what ChatGPT can and cannot do and what can be done to combat fraud. We’re not going to ban any forms of examination.’
According to her, the use of ChatGPT is not prohibited, but is only allowed if the programme has not placed any restrictions on it. ‘Fraud occurs’ when it is unclear to the evaluator whether a completed assignment is the work of a student or a bot, or when students ‘copy and rewrite an argument or statement in their own words’.
But how do you detect that? According to De Kleijn, it is impossible to prove outright, but there are tools that estimate whether something was written by a human or by a so-called large language model. ‘For example, there is a tool that measures this on two dimensions: complexity, such as word order and how likely it is that one word should follow another, and burstiness, the variation in sentence length. People tend to alternate between long and short sentences, while sentences written by language models are roughly all the same length. Based on those dimensions, you can estimate how likely it is that a text was written by a bot.’
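The ‘burstiness’ dimension De Kleijn describes, variation in sentence length, is simple enough to illustrate. The sketch below is not the tool he refers to; the function name and sample texts are invented for illustration, using the standard deviation of sentence lengths as a rough proxy:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.
    Lower values mean more uniform, bot-like sentence lengths."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Invented examples: varied human-like prose vs. uniform bot-like prose
human = ("I ran. Then, after a long and winding afternoon, I finally "
         "sat down to write the report in detail. Done.")
bot = ("The report covers the main findings. The findings are based "
       "on data. The data was collected last year.")

print(burstiness(human) > burstiness(bot))  # prints True
```

A real detector would combine this with a complexity (perplexity) score from a language model, as the quoted description suggests; sentence-length statistics alone are far too weak to count as proof.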
But that is not conclusive proof. ‘If you’re 80 percent certain that a text was written by a bot, I can’t imagine that the Board of Examiners will consider that sufficient proof for a penalty.’
PANIC MODE
The Board of Examiners of the Institute of Political Science expects to see an increase in (suspected) fraud, says Boin. ‘If a lecturer files a complaint, the student in question has to demonstrate to us that no fraud was committed, for example by producing drafts, saved files and notes. If a student is unable to do so, the committee may decide that the complaint is justified. But I would assume that most students would like to learn something themselves. Besides, they also have to weigh the risk: will I get caught or not?’
Sleutels also expects to see bot texts in the next round of theses. ‘But if you know the student and their writing style personally, I’m confident you’ll be able to see right through that.’
‘We shouldn't go into panic mode’, says Reuneker. ‘There is no doubt that, occasionally, some cases will slip through the cracks, but I don’t think it will lead to more fraud. People used to think that photography would strip people of their souls and that cars would make walking obsolete. The arrival of Wikipedia raised fears that students would only ever pull their information from there. None of that happened. Those fears are timeless.’
MPs from D66 and VVD, among other parties, have asked parliamentary questions about the use of ChatGPT in education. They would like to know from Minister Dijkgraaf (Education) what he thinks the consequences are for the quality of education, how negative effects can be prevented and what tools teachers have to deal with the bot. The questions have yet to be answered.
ChatGPT was also discussed during the University Council meeting on Monday. Mark Dechesne of staff party LAG would like the Executive Board to explain what the arrival of the bot will mean for education and research. ‘The university needs to find answers’, says Dechesne. ‘Its use requires everyone to start thinking about this.’
Chair of the University Council Bas Knapp promised to consult with the Board about organising a meeting about the bot, which may take place as early as Monday 6 February.
University spokesperson Caroline van Overbeeke says that ‘chatbots will be the focus of the annual meeting of our examination boards’. This meeting is scheduled for April. In addition, a blog with practical tips for lecturers and students will appear on the university website, a podcast with round-table discussions will be launched and presentations on ChatGPT will be given ‘to relevant groups in cooperation with faculties and institutes’.
Utrecht University has already announced it will tighten its plagiarism regulations in the new Course and Examination Regulations by including an extra sentence on the use of ChatGPT, reports news site DUB.