Reflecting on a Year of Generative AI: Nailed it!

Written by Nik Janos and Zach Justus

Having spent a year thinking, writing, and speaking publicly about generative AI, we wanted to look back at our first post, “Responses to ChatGPT,” which we published on January 26, 2023. What did we get right? What do we know now that we didn’t know then? And what might the next year of AI in higher education look like?

What did we get right? We’re both humble people but in this case we’ll say “nailed it.”

In “Responses to ChatGPT” we outlined three probable responses to this new technology: ignore, fight, and embrace. In the past year this framework has served us so well in talking to faculty, administrators, staff, students, and the general public about the initial impact of ChatGPT and other AI products on the university. Let’s review.

Ignore

Even in early 2024 we are still encountering people who have spent the last year ignoring it. In January 2023, we wrote, “The default response to all disruptive changes is to ignore them. While this is by far the easiest approach, ChatGPT is here and our students won’t ignore it. Some instructors will ignore it out of misunderstanding or believing it inconsequential or impossible. Yet others will practice willful ignorance.” We are not casting judgment, as we understand the reasons people have ignored it. This motivated us to talk to as many people in academia as possible this past year and to publish our blog. However, we are surprised at how persistent and prominent this response is despite the media, institutional, and interpersonal attention AI has received this past year. The ignore response will continue to be an individual and institutional blind spot, and it will make adapting the university to the ubiquity of AI slow, inconsistent, a missed opportunity, or at worst a failure.

Fight

We were correct about the arms race framing, both in faculty’s desire to pursue it and in its futility. Like a real arms race, don’t count on it ending; new technology may emerge, or laws and ethical constraints may be imposed, but for now any attempt to detect the use of AI is basically useless and can actually cause harm. As we’ve written, AI detection tools are inconsistent and produce too many false positives to be useful. Faculty using these tools have a high likelihood of making false accusations against students. One thing we did not anticipate was how attractive detection programs would be for institutions and professors looking for a quick fix. No doubt several companies have cashed in on that desire.

Embrace

The embrace position is the most perplexing yet promising position. In our estimation it remains a small percentage of people. With only one year of generative AI behind us, there is a lot to learn and a great deal of trial and error ahead before the embrace position has solid, clear recommendations on how to successfully use AI at the university. The embrace position is no panacea, as it often produces more questions than answers. That said, students, faculty, and staff are using AI to do work. Faculty are using it to create assignments, to write letters of recommendation, and to prepare for class, and students are using it to do homework and, let’s be honest, to take exams and write papers. This is the tension at the heart of embrace: in its progressive desire to incorporate this technology, it simultaneously creates the conditions for the widespread use of AI. In low-trust environments, or in environments without strong moral and ethical guardrails, faculty and students will use it however they like. Cheating has always occurred, but a machine that can quite literally do the same work as a human calls into question the point of higher education, among other existential questions. The most interesting conversations are in the embrace camp, and we look forward to participating in them to develop solutions to, and best practices within, the tensions just described.

What do we know now?

One funny and, in hindsight, questionable line we wrote is this: “At minimum, it is a disruption.” Ya think? We certainly set the bar very low for something as massive as disruption. Perhaps faced with the Skynet framing, disruption is the minimum. But what we think we meant, and what played out this past year, is that generative AI is a disruption to every core function of higher education. We could see that the first time we tried ChatGPT, in late December 2022. In that same paragraph we wrote that generative AI might be akin to the Internet, Wikipedia, and Massive Open Online Courses (MOOCs). The introduction of each of these caused a moral panic in higher education, but each was more or less incorporated into traditional education or, in the case of MOOCs, just faded away.

We think generative AI is a bit different. It is a disruption in the classic Clayton Christensen sense and in the dictionary sense of messing things up. Nik wrote about this in late 2023. Generative AI does force faculty, administrators, and above all students to ask the question “what is the meaning and value of higher education?” What is the meaning and value of pedagogy, of assessment, of learning and mastery? These are much deeper reflections than any of the previous technological innovations prompted.

The problem here is that we just don’t know what will happen in the near to long term. AI companies are developing the technology at an incredible pace, and social and political norms are much slower to adapt, let alone get ahead of it. That said, GPT-4 at its 2023 capability has already changed higher education permanently.

One thing we realized a few months into writing about AI in higher education is that universities are very conservative (small c) and slow-moving institutions. Universities have legacy and heritage ways of doing things. There are benefits to this, especially in a society shaped by the mind-bending pace of capitalism’s creative destruction. Even faculty, who often espouse progressive and radical perspectives, are generally slow to change and will resist changes “to the way things are done.”

The university is slow. However, not all universities are, and a few have embraced this technology to build programs at the forefront of AI use in the classroom. Much as with car or cell phone buyers, student enrollment is a zero-sum game, and with the rise of online distance education this is even more true today. Students decide where to spend their time and money based on a number of factors. Slow-moving universities will either have to fully embrace AI or offer a truly compelling vision of why their modes of learning are better than those of universities that do.

We ended our first post last year with what we thought were the knowns, the unknowns, and the unknown unknowns. This post recounts what we knew and what we learned this past year. Now, entering the second year of generative AI in higher education, we both feel strongly that the unknowns and unknown unknowns have many surprises in store for us. Let’s strive to get ahead as best we can.

Nik Janos

Professor of Sociology at California State University, Chico.

greenspacenotes.org