Intens(ive) Reflections

Earlier this month I co-led an AI-retrofit intensive for 25 faculty members at Chico State. This was the interactive version of the asynchronous retrofit I wrote about in December (which I still recommend). My favorite part about work like this is learning from the participants, so I wanted to share a few of those lessons with a broader audience. 

  1. The learning outcome approach works. This intensive grew out of my own reflection and planning on course revision that I published last year. I felt quite validated by the engagement of the faculty in the intensive, who, by design, had to consider learning outcomes first rather than rush into assignment revision. Some of them are recommending changes to colleagues or even accreditation bodies, but everyone can feel more confident that their students are learning the right things and are on the right track. 

  2. Awareness and adoption among faculty remain low. The awesome folks in the intensive self-selected into a one-week program about AI. Even among this group, a significant portion had never opened an LLM window or had only extremely limited exposure. This is not a criticism of them; it reflects a wider trend reported by Inside Higher Ed in the fall. It is an important lesson for those of us knee- or neck-deep in this space: most of the people we are speaking with are still at the starting line. 

  3. Even thoughtful workarounds are imperiled by developments in AI. Math instructors have been developing counter-measures to online tools for years. In the intensive I met one of the most thoughtful math instructors I have ever encountered, with an extremely creative plan to circumvent existing programs and AI. He found during the intensive that the counter-measure no longer works. Again, this is not a criticism of him; it is a reflection of the reality that this landscape changes incredibly fast. 

  4. Different tools for different tasks. Most of my work has been in the OpenAI universe. During the intensive there was a strong desire to look at different tools, and a surprising recognition that for particular use cases related to health and STEM, Bard produced better results than ChatGPT running GPT-4. It is another reminder that even if you spot a trend in one model and think it makes you an expert at detection, there is probably another model you know nothing about. 

  5. Knowledge needs vary widely by discipline. My background is in Communication. Nik is a sociologist. We teach students important things, but the nature of that knowledge is different from that of the folks in the intensive who work as behavioral therapists, or the professor teaching worksite safety to construction management students. So while I may want my students to work with AI to produce more engaging writing that will benefit them in a future recruiting role, a construction management graduate needs to be able to walk onto a worksite and immediately spot danger without consulting an LLM. Solutions to the existence of AI cannot be one size fits all. 

  6. In-class work and question-and-answer remain viable alternatives. The rhetorician in me smiles a bit thinking about a full return to Socratic dialogue in education. Referring back to point 5, this is not a solution for every situation, but asking students to present and respond in real time is a low-barrier way to evaluate understanding and knowledge. 

Several times during the intensive I mentioned to participants or collaborators that “this doesn’t feel like work.” I don’t know if the folks in the intensive felt the same way, but exploring and learning in this space makes me feel like I am opening a web browser for the first time, or walking over to a friend’s house and discovering their parents had up-to-date encyclopedias: I just want to learn more. 
