AI was born in universities, so it is ironic that the technology is now undermining those same institutions’ credibility. Rather than writing essays and working through problem sets themselves, students can now hand the tasks over to artificial intelligence systems.
More worryingly, most professors and examiners seem unaware when they are reading AI-generated scripts. Large language models (LLMs) are now so convincing that, to many readers, the output appears human.
For this reason, many educational establishments are turning to AI detection tools. Unlike human readers, these tools can pick up subtle patterns in text that indicate it is machine-written.
How these systems work is shrouded in mystery, and vendors offer competing solutions. What is clear is that they work, at least for now.
Because of these developments, universities are fighting back against the bots. Professors and examiners are deploying these detectors in an arms race against the LLMs, leveraging the tools’ pattern recognition to estimate the likelihood that a particular piece of work was AI-generated.
What Is The Most Dependable AI Detector?
Dozens of AI detection tools are on the market, designed to help educators figure out if work is machine-made. However, there are problems.
One issue is false negatives, where the software judges the work to be human-written when it isn’t. This problem was more common among early detectors, particularly those unable to distinguish human-machine collaborations. However, it is now becoming rarer.
The other significant issue is false positives, where the student produced the work themselves but the detector reports it as machine-generated anyway. This concern is becoming more pronounced among newer detectors that flag content more aggressively. Dry, sterile prose often reads as AI-generated to these systems, even if the student was simply trying to be less flowery in their use of language (perhaps because they were writing a scientific report).
The most dependable AI detectors address both of these issues, minimizing false positives and false negatives alike.
Furthermore, there is now an entire industry dedicated to testing these systems. The goal is to characterize the errors each tool is likely to make and to compare them. The top scorers correctly identify AI-generated content while rarely flagging human-written text.
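To make the trade-off concrete, here is a minimal sketch of how a benchmark might score a detector from labeled samples. The detector function, the toy heuristic, and the sample texts are hypothetical placeholders for illustration, not any vendor’s actual API or scoring method.

```python
# Minimal sketch: scoring an AI detector against labeled samples.
# `detector` and the sample texts below are hypothetical placeholders.

def evaluate_detector(detector, samples):
    """Compute false positive and false negative rates.

    `samples` is a list of (text, is_ai_generated) pairs;
    `detector(text)` returns True if it flags the text as AI-written.
    """
    false_pos = false_neg = humans = ais = 0
    for text, is_ai in samples:
        flagged = detector(text)
        if is_ai:
            ais += 1
            if not flagged:
                false_neg += 1  # AI-generated work that slipped through
        else:
            humans += 1
            if flagged:
                false_pos += 1  # human work wrongly accused
    return {
        "false_positive_rate": false_pos / humans if humans else 0.0,
        "false_negative_rate": false_neg / ais if ais else 0.0,
    }


if __name__ == "__main__":
    def toy_detector(text):
        # Placeholder heuristic only: flags highly repetitive prose.
        words = text.split()
        return len(set(words)) / max(len(words), 1) < 0.7

    samples = [
        ("The experiment was conducted and the results were recorded and the results were analysed.", False),
        ("I still remember the smell of my grandmother's kitchen on rainy afternoons.", False),
        ("The study examines the impact of the policy on the outcomes of the policy.", True),
    ]
    print(evaluate_detector(toy_detector, samples))
```

Benchmarks of real detectors work along these lines, but on far larger corpora and against statistical models rather than a word-repetition heuristic; the point is simply that both error rates have to be measured and reported together.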
How Are Universities Responding To AI?
Universities are responding to AI-generated content, but many aren’t using detectors. The status quo is to rely on the unreliable instincts of professors and tutors.
Even so, leading institutions have come out strongly against AI-generated content. For instance, UCAS forbids it in the personal statements prospective students submit during admissions, classing AI-generated content as “plagiarism,” a big no-no in academia. Personal statements, it says, should be the student’s own work.
The University of Birmingham frames the issue more precisely through its definition of plagiarism, which occurs when “a student claims as their own, intentionally or by omission, work which was not done by that student.” Since content produced by ChatGPT or Google’s Gemini is not the student’s own work, submitting it as such breaks the rules.
Even so, universities aren’t against generative tools per se, any more than they are against search engines like Google. Institutions agree that these tools can improve the educational experience and make research more efficient; what matters is how students use them.
Where Do Universities Go From Here?
Given the rapid advances in AI, where do universities go from here?
Already, most institutions have policies in place to guide students on proper conduct. Most subsume AI-generated content under plagiarism, a serious offense that can lead to academic disqualification.
However, opportunities also abound. For example, many universities recognize that being able to work with AI is a skill of the future (and arguably the present). Over 84% of teachers agreed that students will need AI skills in the future, such as using an AI image generator. This echoes the World Economic Forum’s projection that AI will automate 43% of workplace tasks by 2027.
Universities will also need to educate their students on how to navigate misinformation. While AI systems attempt to provide reasonable answers, they don’t always get it right. Sometimes they can sound convincing, even when the substance of their output is nonsense.
Overall, young people need to be empowered by AI, not taken over by it. These systems are helpful as pedagogic aids, but not for generating the content itself.