6 things to consider with AI detection tools
By Nik Janos and Zach Justus
When Zach and I talk to colleagues about ChatGPT and other generative AI tools, the conversation always turns to plagiarism and cheating. Students cheat for many different reasons, and instructors and students worry about cheating and plagiarism for many reasons as well. This post isn’t about the politics and best practices of academic integrity. Rather, it is a word of caution about the quest for a technical fix to cheating with AI. Ever since ChatGPT was released in the fall of 2022, individuals and companies have been building AI detection tools, which began to roll out in 2023. This is the arms race that we theorized in our first post in February 2022.
Here are 6 things to know about AI detection tools if you are considering using them this fall.
1. Students cheat. It’s always good to remind ourselves that students cheat, and that they have done so in many different ways and for many different reasons. The pandemic amplified conversations about how much students cheat. Generative AI is simply the latest tool students can use to reduce the time and effort they put into assignments.
2. It is very difficult to determine whether something was written by an AI or by a human. Professional hubris leads some professors to think they can apply a smell test: the “I know it when I see it” response. Others are looking for help.
3. A number of AI detection tools are already on the market, including Turnitin’s built-in AI detection feature (which was quietly enabled at CSU Chico in the spring without any warning or context), GPTZero, and Crossplag.
4. These tools have massive limitations. In a telling sign, OpenAI, the company that runs ChatGPT, recently shut down its own AI detection tool. Google “students accused of cheating with ChatGPT” and you’ll find scores of stories about accusations and false positives. On our YouTube channel, we have videos walking you through these tools and assessing what they can and cannot do. Ethan Mollick offers reasonable caution about using these tools and about the gray area of what constitutes plagiarism.
5. If you do use these tools, be reflexive about their limitations and about the equity and ethical implications of accusing students of cheating. A result from Turnitin does not necessarily mean what we think it means; we are working with black boxes here. Accusing students of cheating on limited evidence is always entangled with racial, class, gender, and ability biases. It is best to use these tools to start a conversation with a student about their work rather than to level an accusation of cheating.
6. It’s time to reconsider the types of assignments we give students and the learning objectives behind them. In his post “AI has changed your class. Panic: No. Re-Build: Yes,” Zach outlines the kind of work everyone will need to begin doing to meet the new challenge of generative AI. It’s worth reading again. There are many other guides available online, and our campus community, at the department, college, and university levels, should work together to develop the myriad changes and strategies for successful pedagogy in the era of generative AI. There is no one-size-fits-all solution, but we should all get to work and share what’s working and what’s not.