- Some teachers say students are using new technologies to pass off AI-generated content as their own.
- Academics are concerned that colleges are not set up to combat the new style of cheating.
- Teachers say they are considering returning to written assessments and oral exams.
College professors are feeling the heat when it comes to AI.
Some teachers say students are using OpenAI’s chatbot, ChatGPT, to pass off AI-generated content as their own.
Antony Aumann, professor of philosophy at Northern Michigan University, and Darren Hick, professor of philosophy at Furman University, say they've caught students submitting essays written by ChatGPT.
The issue has prompted professors to consider creative ways to curb the use of AI in colleges.
Blue books and oral exams
“I’m stumped on how to handle AI going forward,” Aumann told Insider.
He said one option he was considering was switching to lockdown browsers, a type of software that aims to prevent students from cheating when taking exams online or remotely.
Other academics are considering more drastic action.
“I’m planning to go medieval with students and go back to oral exams,” said Christopher Bartel, professor of philosophy at Appalachian State University. “They can generate AI text all day in their notes if they want to, but if they need to be able to speak, that’s a different thing.”
Bartel said there were inclusivity concerns surrounding this, however. "How to accommodate students who have deep social anxieties about public speaking is something we'll have to figure out."
"Another way of dealing with AI is for faculty to avoid giving students assignments on topics that are already covered too well elsewhere," he said. "If students need to engage with a unique idea that hasn't been covered in great depth elsewhere, there won't be much text the AI generator can pull from."
Aumann said some teachers are suggesting going back to traditional written assessments like blue books.
“Since students would write their classroom essays by hand, there would be no opportunity to cheat by querying the chatbot,” he said.
‘The genie came out of the bottle’
While there were red flags in the AI-generated essays that tipped off Aumann and Hick that students were using ChatGPT, Aumann thinks those signs are only temporary.
He said the chatbot's essays lacked individual personality, but after playing around with it, he was able to get it to write less formally. "I think any of the red flags we have are only temporary," he said.
"My concern is that the genie is out of the bottle," said Hick, who believes the technology will only improve. "That's kind of inevitable," he said.
Bartel agreed that students could use AI very easily. “If they ask the program to write a paragraph summarizing one idea, then a paragraph summarizing another idea, and edit them together, it would be completely undetectable to me and might even be a decent essay,” he said.
An OpenAI representative told Insider that the company didn't want ChatGPT to be used for deceptive purposes.
“Our policies require users to be upfront with their audience when using our API and creative tools like DALL-E and GPT-3,” the representative said. “We are already developing mitigations to help anyone identify the text generated by this system.”
While there are AI detection programs that offer an analysis of the likelihood that text was written by an AI program, some academics worry that this is not enough to prove a case of AI plagiarism.
“We’re going to need something to account for the fact that we now have an imperfect way of testing whether something is fake or not,” Bartel said. “I don’t know what this new policy is.”