ChatGPT: "I know it when I see it" (no you don't)

In early March, before the release of GPT-4, Vice published an especially cringy article about higher education and ChatGPT. The article was titled “ChatGPT is so bad at essays that professors can spot it instantly.” Laurie Clarke quotes several professors (some anonymously) who are content-area experts and have identified ChatGPT writing in their classes. Buried in the article is a more balanced section about how the program is actually quite good at some forms of writing. Overall, the piece speaks to several themes we have noticed in our own discussions, themes we believe are deeply troubling and do not reflect the rapid pace of change in this sector.

“This cannot possibly affect the kind of work I do.” This is a variation of the “ignore” response Nik and I initially theorized, and it is anchored in the hubris we professors have all showcased at times. The Venn diagram of people who assume the program cannot do something and people who have not used the program (especially version 4.0) is just one circle. ChatGPT cannot do everything well, but pretending it cannot write a poem, a research proposal, or a website is absurd. If you find yourself in this camp, take 10 minutes to create an account and try it out. If you don’t want to pay for the premium version, email me what you want it to do and I’ll send you the results.

“I know it when I see it” or “vibes.” The Vice article at one point asserts that “university professors are catching ChatGPT assignments in the wild for a different reason: because the AI-produced essays are garbage” and goes on to provide some examples. This placating stance assures us in education that we are special and have nothing to worry about because of how incredible we are. Later in the article, a professor details his plan to give spot oral exams to students he suspects have used AI tools, again confidently assuming he will know. We will not always know; in fact, elsewhere in this same article Clarke details things the bot does exceptionally well. The danger in the vibe-check response is two-fold.

  1. It introduces a different kind of bias, where professors are likely to target students they don’t think are capable of good work and let other students slide. Historically, this breaks down along the lines of class and race, where poorer and darker students are more likely to be called out based on unscientific vibes.

  2. This mentality treats the program like a static entity. Two weeks after this article was published, version 4.0 was released. Would the same vibe check work? No one knows, because the approach is entirely unscientific. How about the next version? I have an offer for anyone relying on the vibe check. Send me a prompt. I’ll send you my response, a response from ChatGPT I have edited, and unedited drafts from 3.5 and 4.0. I’m truly interested to see if you can identify which is which.

The solution, or perspective, I continue to recommend is simple: go try it out. Earlier this week Nik and I spoke to research librarians, and last week we were able to speak with some of our colleagues from around the campus. The one truth we have learned is this: when people try it for the first time, they are surprised.
