Australian universities have been forced to change the way they run exams and other assessments amid fears students are using emerging artificial intelligence software to write essays.
Major institutions have added new rules which state that the use of AI is cheating, with some students already caught using the software. But one AI expert has warned universities are in an “arms race” they can never win.
ChatGPT, which generates text on any subject in response to a prompt or query, was launched in November by OpenAI and has already been banned across all devices in New York’s public schools due to concerns over its “negative impact on student learning” and potential for plagiarism.
In London, one academic tested it against a 2022 exam question and said the AI’s answer was “coherent, comprehensive and sticks to the points, something students often fail to do”, adding he would have to “set a different kind of exam” or deprive students of internet access for future exams.
In Australia, academics have raised concerns about the ability of ChatGPT and similar tools to evade anti-plagiarism software while producing quick and credible academic writing.
The Group of Eight – the country’s leading research-intensive universities – said they had revised how they would run assessments this year in response to the emerging technology.
The group’s deputy chief executive, Matthew Brown, said its institutions were “proactively tackling” AI through student education, staff training, redesigning assessments and targeted technological and other detection strategies.
“Our universities have revised how they will run assessments in 2023, including supervised exams … greater use of pen and paper exams and tests … and tests only for units with low integrity risks.
“Assessment redesign is critical, and this work is ongoing for our universities as we seek to get ahead of AI developments.”
The University of Sydney’s latest academic integrity policy now specifically mentions “generating content using artificial intelligence” as a form of cheating.
A spokesperson said while few instances of cheating had been observed, and cases were generally of a low standard, the university was preparing for change by redesigning assessments and improving detection strategies.
“We also know AI can help students learn, and will form part of the tools we use at work in the future – so we need to teach our students how to use it legitimately,” they said.
The Australian National University has redesigned assessments to rely more on laboratory activities and fieldwork, and will use timed exams and more oral presentations.
Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales, said teachers were in “crisis meetings” about how exams would be marked in the new year and whether protocols were in place to deal with plagiarism.
“People are already using it to submit essays,” he said.
“We should’ve been aware this was coming … and we do tend to sleepwalk into the future,” he said. “But it’s a step-change – it’s accessible, it’s got a nice interface and it’s easy to play with.”
Walsh said with more advanced programs arriving – including GPT-4 from OpenAI – simply banning the platform was unrealistic.
“It’s a technical fail – there’s a thing called VPN, and it misses the point,” he said.
“There are technical solutions – digital watermarking, but you can just run another program over it to destroy the watermark. It’s an arms race that’s never going to finish, and you’re never going to win.”
Walsh said AI technology had great potential for innovation and streamlining in the education sector.
“Teachers hate marking essays, and with suitable prompts it can be used to mark and provide feedback teachers wouldn’t have the time or patience to,” he said.
“We don’t want to destroy literacy, but did calculators destroy numeracy?”
Flinders University was one of the first in Australia to implement a specific policy against computer-generated cheating.
Its deputy vice-chancellor, Prof Romy Lawson, said maintaining academic integrity in an era of fast-changing technology was an “ongoing challenge”.
“We are concerned about the emergence of increasingly sophisticated text generators, most recently ChatGPT, which appear capable of producing very convincing content and increasing the difficulty of detection,” she said.
“Instead of banning students from using such programs, we aim to assist academic staff and students to use digital tools to support learning.”
A spokesperson for UNSW Sydney said the university was aware of the use of artificial intelligence programs to assist and write papers for students who then submitted the work as their own.
“Using AI in this way undermines academic integrity and is a significant issue facing all education and training institutions, nationally and internationally,” they said.