
Beyond the Band-Aid: Rethinking AI Detectors in Education

I collaborated with AI to write this blog post by asking it to suggest improvements to my original writing and thoughts.




Summaries in Multiple Forms

My goal is to make my content accessible to everyone, regardless of:

  • Your available time to consume content

  • Your personal preference for consuming content

  • Your personal learning style


Luckily, AI has made it possible to quickly and effortlessly provide multiple means of representation. Therefore, I am experimenting with providing my blog posts in various formats, such as a podcast, a short summary, and more.


Listen to a podcast version of this post:



AI Summary of this Post:

As generative AI reshapes education, reliance on AI content detectors has proven problematic, offering unreliable results that often undermine trust between students and educators. These tools, which struggle to differentiate between AI-generated and human text, not only fail technically—resulting in high error rates and biases—but also exacerbate social issues, casting undue suspicion and stress on students. Recognizing these challenges, forward-thinking institutions are moving away from such detectors towards a more holistic approach that emphasizes open dialogue, AI literacy, and innovative assessment methods, aiming to harness AI's potential responsibly rather than fearing it. At InTECHgrated PD, we're dedicated to guiding educators through these changes, fostering an environment where technology enhances learning without compromising integrity.


The Limitations of AI Detectors

As generative AI tools like ChatGPT have taken the education world by storm, many institutions are scrambling for ways to prevent students from using these tools to copy and paste their way through traditional learning tasks. Enter AI content detectors: software that claims to accurately flag whether text was generated by AI or by humans. Unfortunately, these tools are merely an ineffective band-aid that can do more harm than good.


""

Source: Generated by ChatGPT


Despite bold claims from some vendors offering AI detection software, the reality is that these tools are about as reliable as flipping a coin when it comes to distinguishing between human-generated and AI-generated text. Studies have exposed several critical flaws that call their reliability into question:


  • High error rates: The study “GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education” indicates AI detectors are prone to both false positives (flagging human-written work as AI-generated) and false negatives (missing actual AI content). The study concludes: “...these tools cannot currently be recommended for determining whether violations of academic integrity have occurred.”

  • Easily misled: The study “Can AI-Generated Text be Reliably Detected?” found that small adjustments to a text can easily cause human-written work to be classified as AI-generated.

  • Bias against non-native English writers: The study “GPT detectors are biased against non-native English writers” indicates that detectors disproportionately misclassify writing by English language learners as AI-generated, raising serious equity concerns. This opinion piece really drives this point home for me: “...it will inevitably harm students who rely on additional support to survive a system that is overwhelmingly biased to white, middle-class, native English speakers without disabilities, and whose parents went to university.”
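The scale problem behind those error rates can be made concrete with some back-of-the-envelope arithmetic. The numbers below are hypothetical, chosen only for illustration, but they show why even a detector that sounds accurate on paper produces a steady stream of wrongful accusations:

```python
# A minimal sketch with assumed (hypothetical) numbers: even a
# seemingly small false-positive rate adds up quickly at scale.
false_positive_rate = 0.05   # assume 5% of honest work gets flagged as AI
students = 200               # assume 200 students screened per term
submissions_each = 5         # assume 5 essays per student per term

honest_submissions = students * submissions_each
wrongly_flagged = honest_submissions * false_positive_rate

print(f"Honest submissions screened: {honest_submissions}")
print(f"Expected wrongful flags per term: {wrongly_flagged:.0f}")
# → Expected wrongful flags per term: 50
```

Fifty students per term facing an accusation they cannot disprove is not an edge case; it is the predictable output of the tool's own error rate.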


When you grasp how the technology works, it becomes clear why reliably distinguishing between human- and AI-generated text is a losing battle. This article from the University of Kansas Center for Teaching Excellence explains this well: “Detection software will never keep up with the ability of AI tools to avoid detection. Relying on that software does little more than treat the symptoms of a much bigger, multifaceted problem.”


Even OpenAI (the creators of ChatGPT) shut down their AI detection tool and note in this article: “Do AI detectors work? In short, no, not in our experience. Our research into detectors didn't show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences.”


The Future of Human-AI Collaboration

The distinction between human and AI-generated text is likely to become irrelevant soon. We are transitioning to an era of AI collaboration, where AI's role as a co-creator in various tasks, including writing, is becoming normalized. The study “Human Heuristics for AI-Generated Language are Flawed” highlights that "human communication is increasingly interwoven with language generated by AI."


I have personally witnessed how AI can streamline and enhance the writing process in particular. My ideas and expressions remain distinctly my own; AI simply facilitates their articulation more efficiently and eloquently. The deployment of AI detectors propagates a misleading narrative that positions AI assistance in a negative light, rather than recognizing it as a transformative tool. Embracing AI in our workflows can significantly augment human capabilities, transforming how we think, create, and interact with information.


The Human and Social Cost of AI Detectors

The challenges with AI detectors extend beyond technical limitations and philosophy. More concerning is the significant toll these tools can take on the relationships that form the foundation of effective education—those between students and teachers.


Consider the story of Marley, a student at the University of North Georgia. She used Grammarly to proofread an essay and was subsequently accused of cheating. This is just one of many cases where AI detector programs have misassigned blame. Such incidents not only undermine trust but also can have lasting effects on students' academic and emotional well-being. The article "He Was Falsely Accused of Using AI. Here’s What He Wishes His Professor Did Instead," published by Tech & Learning, reveals the distress and frustration experienced by students caught in these situations.


Educators are not immune to the strain either. As highlighted in the article "The software says my student cheated using AI. They say they’re innocent. Who do I believe?", teachers are being put in the difficult position of adjudicating accusations they cannot verify. This dilemma can erode the trust and respect crucial for a supportive educational environment.


The pandemic underscored the importance of nurturing strong relationships between students and teachers. Just a short time later, educators are being pushed toward flawed detectors that risk eroding the very relationships crucial to learning.


Educational Institutions Disable AI Detectors

Recognizing the pitfalls associated with AI detectors, numerous higher education institutions have begun to distance themselves from these tools. Most of the examples in the media come from universities, which I think sets a great lead for K–12 schools to follow.


A Better Way Forward: From Band-Aid to Healing

If AI detectors are a band-aid, what's the real solution? I will be sharing a lot more on this topic but here are a few places to start:


  • Promote open dialogue: Ultimately, navigating the age of AI in education requires open, honest communication between students and educators. This article from the University of Kansas explains this well: “Keep in mind that we are at the beginning of a technological shift that may change many aspects of academia and society. We need to continue discussions about the ethical use of AI. Just as important, we need to work at building trust with our students.”

  • Embed AI literacy: Teach responsible AI use through intellectual transparency and digital literacy skills. We need to have frank conversations about the possibilities and pitfalls of AI, and work together to establish norms for its ethical use in learning.

  • Rethink assessment: A key theme reported in the study “Accused: How students respond to allegations of using ChatGPT on assessments” is a need to rethink assessment. I share in the AI-mazing Modern Assessments: Assessing Learning with AI in the Classroom blog post one idea for assigning multimodal projects that showcase creativity and critical thinking across mediums.

  • Shift focus to process over product: Building on the need to rethink assessment, we need to shift towards emphasizing process over product by having students reflect on their learning journey. The camera and microphone of your students’ devices are the most powerful tool in the age of AI. Use a tool such as Flip to have students create videos that explain the learning process. In addition, tools such as SchoolAI and Chat for Schools allow teachers to embrace student use of AI and review the chat transcript (as well as a summary) to get a look into the learning process. The capabilities are truly amazing!

  • Redefine cheating: I make a plea in A Teacher’s Guide to Online Learning to redefine cheating in relation to virtual learning. Another key theme from the “Accused: How students respond to allegations of using ChatGPT on assessments” study is that students agree. It is time to rethink what is considered cheating.

  • Invest in professional learning time: This article from Education Week notes that only 37% of teachers reported that “they have received guidance on what responsible student use of generative AI technologies looks like.” Teachers must have dedicated time to learn, explore, discuss, and adapt to these technologies that are changing the very fabric of our lives.


Invest in the Real Solution

Let's ditch the broken promises of AI detectors and invest in the real solution: humans. At InTECHgrated PD, we are committed to guiding educators through the evolving landscape of education in the AI era. Our expert team offers a range of professional development sessions specifically tailored to explore the instructional implications of AI technology. This is what one educator had to say after a professional learning session focused on AI in education:


This workshop was mind-blowing!

From workshops on integrating AI tools into the curriculum to discussions about ethical AI use in classrooms, we provide the resources and support educators need to excel. If you’re ready to transform your educational approach and harness the potential of AI, let’s start a conversation about how we can support your journey.


