AI Aware: A tale of two realities in the AI detection debate

By Jaine Green, AI Aware

This past week at AI Aware has been particularly illuminating. On the very same day, we received two enquiries that perfectly encapsulate the polarised and evolving debate over AI use in higher education. One came from a distressed parent, the other from a conflicted academic. Both highlight the urgent need for clearer policies, better education, and open dialogue between students and institutions.

Our first enquiry came from a concerned parent whose child has been called into a disciplinary meeting over alleged academic misconduct for overuse of AI. The red flags, according to the institution, were inconsistencies in writing style between this and previous essays, and in referencing practices. However, the student has dyslexia and had been encouraged by a tutor to use tools like Grammarly Pro, which they did on some assignments but not others.

The student admitted to using ChatGPT but, like so many others, only to help with structuring essays, not to generate content. They also accessed resources such as JSTOR and their university’s reading list for research and references. Being a first-year student, they’re still navigating academic conventions like footnoting, which is no small feat for anyone new to university-level writing.

Regardless of intent, the accusation has caused the student and their family considerable stress and cast a shadow over the student’s academic future. Ideally, the disciplinary meeting will offer a constructive platform, an opportunity for the student to clarify their process, and for the university to guide and educate rather than punish. But while we hope for a fair outcome, the emotional toll in the meantime is very real.

This case raises some critical questions. Are institutions fully equipped to distinguish between genuine misconduct and the growing pains of academic development, especially for students with learning differences? And are students being adequately supported in understanding what appropriate use of AI tools really looks like?

On the flip side, we heard from an international university that has a formal policy against the use of AI detection tools when assessing student work. Yet when student assignments were sent to an external examiner (unaware of the policy), they were run through an AI detection tool anyway. Various pages were flagged as likely AI-generated. The examiner returned the flagged results, leaving the university with a tricky decision: uphold its no-detection policy and pass potentially AI-generated content, or confront the findings and risk losing the trust of other students?

This situation reveals a troubling inconsistency. Why would a university avoid AI detection altogether? The likely reason is pragmatic: many institutions understand that most students are already using AI to some extent, be it for grammar checks, idea generation, or even preliminary research. Attempting to police this use comprehensively can feel like trying to hold back the tide, and could expose a problem many institutions would rather keep hidden.

Yet, this also exposes a lack of cohesion. Even where policies exist, they are not always understood or respected by those involved in assessment. Examiners, acting in what they believe is good faith, may unintentionally undermine institutional policy, creating confusion for staff and potentially unfair consequences for students.

These two cases, while distinct, point to the same underlying issue: the absence of a clear, unified, and educational approach to AI in academia.

We can no longer treat AI tools as an anomaly or a problem to be eliminated. The genie is out of the bottle. Students are using AI. Academics are using AI. Examiners are using detection tools, sometimes in direct opposition to university policies.

And yet, there is still no consensus on what constitutes “appropriate” use. Is using ChatGPT to outline a structure cheating? What about using Grammarly to improve clarity? If referencing errors or writing inconsistencies are enough to trigger suspicion, how do we ensure students aren’t penalised for inexperience, disability, or merely improving over time?

This is not just a policy issue. It’s a cultural one. AI tools are now part of the learning ecosystem, much like calculators once were in maths classes or Wikipedia became in online research. The challenge is not how to eliminate them, but how to integrate them responsibly.

Students need to be taught how to use AI tools critically and ethically. That means understanding that AI is prone to error: it hallucinates facts and can misquote sources. It can assist, but it should not replace genuine thought, research, and learning.

Educational institutions must stop avoiding the issue and instead create clear, robust guidelines on what constitutes acceptable use. These should be nuanced and evolving. More importantly, policies need to be communicated clearly and enforced consistently.

For examiners, training is essential. AI detection software should be used as part of a wider toolkit: asking students to self-declare AI use, comparing results against other writing samples, and reviewing work in the context of a student's academic development.

These two enquiries, though very different, highlight the same truth: we are all navigating a new academic reality, and doing so with outdated maps. AI isn’t going away. The question is not whether to use it or ban it, but how to use it well.

The future of academic integrity will not be built on fear or suspicion, but on clear communication, thoughtful policy, and trust: trust that students want to learn, and trust that educators will guide them wisely through this new frontier.

Get in touch