New Turnitin tool is sus
Academic honesty has been the most talked-about issue among academics regarding ChatGPT. Will students ever write their own work again? What counts as someone else’s work? What do I do if a student confesses to using the program? These are complicated questions, and Nik is preparing a longer-form piece that addresses them from a first-principles perspective. In this post I want to focus narrowly on the new tool from Turnitin that reports the percentage of a document it judges to have been written by AI.
Turnitin is an educational technology tool that checks student work against published material and against its repository of previously submitted student work, producing a percentage match for professors to evaluate. I have used other parts of the program for years, including pre-scripted feedback you can drag and drop onto papers and a voice thread feature I use to respond to student writing.
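For readers curious what a number like that could mean mechanically, here is a minimal sketch of one common way to compute a source-overlap percentage, using shared word n-grams. Turnitin’s actual algorithm is proprietary, so the 5-gram choice and the function names below are my assumptions for illustration, not the company’s method.

```python
# Minimal sketch of a source-overlap percentage via shared word 5-grams.
# Turnitin's real algorithm is proprietary; this is illustrative only.

def ngrams(text, n=5):
    """All consecutive n-word windows in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def percent_match(submission, sources, n=5):
    """Share of the submission's n-grams found in any source text."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    seen = set().union(*(ngrams(s, n) for s in sources)) if sources else set()
    return 100 * len(sub & seen) / len(sub)
```

The key property of this kind of score is that every matched n-gram points back to a concrete source a professor can open and compare. Keep that in mind, because nothing about the new AI-detection score works this way.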
The company has made some big claims about the new tool and its capabilities, including a low false-positive rate, a 97% success rate, and cutting-edge technology. None of these claims has been subjected to external verification, and Inside Higher Ed aptly called the program a “black box” that we don’t know very much about. My concern is a more practical and immediate one that I don’t have any clear answers for.
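Before getting to that practical concern, it helps to see why the unverified numbers matter on their own. Here is a back-of-the-envelope base-rate calculation: only the 97% figure comes from Turnitin’s marketing, while the false-positive rate and the share of submissions actually written by AI are numbers I made up for illustration.

```python
# Back-of-the-envelope base-rate arithmetic. Only the 97% "success
# rate" is Turnitin's claim; the other two numbers are assumptions.

sensitivity = 0.97          # claimed: flags 97% of AI-written papers
false_positive_rate = 0.01  # assumed: flags 1% of human-written papers
ai_share = 0.10             # assumed: 10% of submissions are AI-written

flagged_ai = sensitivity * ai_share
flagged_human = false_positive_rate * (1 - ai_share)

# Chance that a flagged paper really was AI-written (precision).
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"P(AI-written | flagged) = {precision:.1%}")  # -> 91.5%
```

Even under these generous assumptions, roughly one flagged student in twelve would be falsely accused, and without transparency into the algorithm there is no way to tell which ones.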
First, this percentage match appears alongside the trusted and verifiable percentage match of papers to other sources. Turnitin is leaning on its reputation and history here in a pretty unsettling way. These numbers are not the same. The percentage match to other sources is directly verifiable against online sources or with a request to other professors. If a student turns in something with a high match to an existing source, we can look at that source, meet with the student, and turn it into an opportunity to discuss standards and source citation; it is generally a positive learning experience. There is no analog with the AI match score. Quite simply, there is nothing clear we can do with this number other than fret. We can certainly ask a student what happened and hope they open up, but there is no learning experience built into this because the score comes from an algorithm and we have no idea how it works.
Second, most of us do not have a course or campus infrastructure to deal with this. Our student conduct office has no listed policy on AI writing, and literacy on the topic across our campus is quite low. In some ways we find ourselves in a situation reminiscent of Congress trying to regulate social media companies: we need to create policies about things we don’t understand and probably have not used. So, even if you identify a case of AI writing, and a student discloses that the work is not their own and that they wanted it to appear as though it was…now what? You probably don’t have a syllabus policy to lean on (I don’t, and I’ve been writing about this for months), and there is probably not a campus policy either.
I understand the market pressures that led Turnitin to rush this tool to market. This is the arms race Nik and I projected in our earliest writing. But this looks like the bad outcome of severe market pressure: a product pushed into use without sufficient transparency, with an over-reliance on brand trust, and it will lead to a lot of awkward interactions. I have told my students about the tool because I hope it will keep our spirit of transparency alive and let me keep learning from them about how they use AI tools, but there is no chance I would move forward with an academic honesty case even if I had a prohibition on AI-assisted writing. Turnitin has rushed a detector into its interface and relied on our trust in the brand to make us accept its validity. For me, at least, the effect has been the opposite: I have less trust in their technology and interface in other areas.