My reflections on one semester with ChatGPT
As Zach wrote last week, one of the most common responses we get from faculty when we show them ChatGPT or talk about it is that we need to move away from analytical assignments toward personal reflection assignments, or at least combine the two. Some faculty think we can outsmart ChatGPT, or that there are truly human things ChatGPT can't produce. As Zach outlined, it is becoming increasingly clear that this is not true.
In my Sociology 375: Global Problems class, I have for years assigned personal reflections that combine analytical history, engagement with concepts, and situating a sociological personal biography within them. Recently, I played around with ChatGPT to see if it could do similar work, and of course it can.
Here are some examples; note that these are pretty crude since I didn't provide much detail in the prompts.
More and more I hear from colleagues that we need to deploy an array of anti-plagiarism tools (see our new YouTube channel to watch us assess them). Alternatively, as mentioned, I hear that we should scrap our legacy pedagogy and assignments and create AI-proof teaching and curriculum. These two strategies will certainly be used, but I foresee uneven results related to equity, assessment validity, time and effort constraints, and lack of resources. We've written a bit about all of these. The third option, the one Zach has written quite a lot about, is the embrace position. His May 17 post is really good and outlines what he's learned about it this semester.
Writing now at the end of the spring semester, I can say that the original thesis Zach and I laid out when we started writing about ChatGPT holds up:
Ignore: I believe faculty and administrators will continue to ignore or deny the ChatGPT disruption for a variety of personal and professional reasons. While this does seem appealing, it won’t take long before the disruption has overtaken individuals and eventually the university system.
Fight: AI detection software has evolved quickly, and my experiences this semester have convinced me that it can be useful for shaping norms, expectations, and ethical practices around ChatGPT and other large language model tools. Reworking our assignments to outsmart or contravene ChatGPT, however, seems like a waste of time, not only for practical and pedagogical reasons but because ChatGPT continues to impress by producing compelling baseline outputs every time we test it. This is less an arms race and more a rat race.
Embrace: Zach attempted this more than I did, but I also discussed ChatGPT with my senior-level students in our capstone class this semester. I saw how they used it, with mixed results. I forbade them from using it to generate their answers and then copying and pasting the results as their own work. And yet a number of students did exactly that, and I was only able to discuss it with them because of Turnitin's new AI detection tool. The embrace position is tantalizing, but it will require a great deal of trial and error to figure out how to use ChatGPT in ways that build students' skills in core university competencies (writing and thinking) while also teaching them how to prompt and use AI as a tool to augment their work.
Trying to outsmart or out-AI ChatGPT and the like will be tempting, sometimes fruitful and other times a failure. More energy and time should be put into a comprehensive reassessment of student learning outcomes and assignment design for the large language model era. What these two things will look like, no one knows yet.
Zach and I have some ideas for longer pieces on the potential for ChatGPT et al. to disrupt the university, and we've just launched a YouTube channel to showcase different kinds of content. Check it out.