The Assistant for the Rest of Us
There is a truth about the prolific output of professors at research universities, known as R1 schools, that is invisible to those on the outside. Professors at these universities routinely publish books and countless journal articles, deliver lectures to hundreds of students, and grade student work, all by leveraging the labor of graduate students in their roles as Research Assistants (RAs) and Teaching Assistants (TAs). When I was a grad student at UC Santa Cruz, I saw a TA create a slide deck that the professor walked in and used to deliver a lecture to a class of 150 students. RAs can point to parts of books or articles where hours of their work show up. RAs read articles and books and write literature reviews for the scholars. Most get credit in the acknowledgements, but not always.
That class of 150 students? There were seven sections, each run by a different TA. This is where the teaching and learning really took place. Those sections had 25 or more students each, so not exactly small. TAs do most of the assessment and grading. To be sure, grad students get paid and receive reduced tuition. But on the whole, the R1 is a labor hierarchy, one in which the glory of the scholar is built upon the outsourced labor of the assistant. Graduate student labor is a real advantage and privilege of R1 scholars. But are these advantages changing?
On January 24, 1984, Steve Jobs introduced the original Macintosh. During his introduction, Jobs called the Macintosh “the computer for the rest of us.” It was a nod to the simplicity of the Mac in a computer industry dominated by the juggernaut IBM and its hard-to-use software. The Mac, on the other hand, had a mouse you could simply drag around to point and click. It fundamentally changed our relationship with computers. Generative AI, I argue, is the assistant for the rest of us.
For professors like me who teach at schools where RAs and TAs are rare, generative AI can level the playing field and make our work easier, maybe even better. This post is about my experience leveraging these tools in ways similar to how my R1 professors leveraged theirs. I also want to talk about the reluctance of faculty to admit to using these tools in their professional work.
The examples
I want to provide three examples of how I have been using generative AI as an assistant. Much as I once did for my advisors and professors in grad school, ChatGPT is saving me time, helping me become a better researcher and writer, and injecting new ideas and assignments into my teaching.
Example 1: I have been using NotebookLM, a Google product, to read, summarize, and synthesize sizable amounts of data contained in PDF reports. I am writing a book, and between teaching, service duties, and family life, time to research and write is very limited. NotebookLM does exactly what RAs do for professors at R1s: read, summarize, and synthesize data. I’m here for it.
Example 2: Part of my research involves conducting lots of one- to two-hour in-depth interviews. When I was a graduate student, I was hired to transcribe interviews for my professors. In my current research, I was spending five, six, seven, or more hours transcribing each two-hour interview; the standard expectation is that every hour of recording takes two to three hours to transcribe. Then I learned about Rev, a $10-per-month AI-powered transcription service. I now get a full transcript in minutes. The transcripts sometimes need editing, but I have the video file, so I can use the provided timestamps to hear exactly what was said and quickly edit it myself. Some might say, “You’re losing out on important details and context by not transcribing it yourself.” And to that I say: yeah, but I want to actually write this book.
Example 3: Sometimes a teacher knows they need a new assignment or says to themselves, “You know, based on Tuesday’s class, I really need a new learning activity for Thursday.” Teaching four classes at a state school, while trying to write a book and also have a life, I typically don’t have the time or bandwidth to create bespoke assignments on the fly. I have been developing at-home assignments where ChatGPT acts as a TA to aid student learning. But I have also been using ChatGPT to help me create fun, interactive in-class activities. In the old days, sure, I would have worked for hours creating one myself, borrowed one from a colleague, or gone to the Internet. But with ChatGPT, in a few minutes I have the basis of a new activity. I’ll admit, sometimes, maybe often, ChatGPT surprises and delights with its suggestions.
Opening the conversation
There are lots of other little ways I get help from generative AI during the day, such as editing suggestions for my writing or replacing Google search for some things. There are certainly privacy and intellectual property concerns about feeding data into these tools. For California State University professors, this will be mitigated when ChatGPT Edu rolls out. NotebookLM is pretty good for self-contained kinds of work, like uploading PDFs or web links and engaging with the content.
Why shouldn’t professors at teaching schools leverage these technologies to make their learning, research, teaching, and committee work easier? Why should R1 professors get all the resources and benefits? The rest of us shouldn’t feel shame. Zach and I have written about a tendency at the university to honor labor over the utility of the outcome. Faculty want to believe that the struggle, the revisions, and the hair pulling are the foundation of value in the products we create. But a lot of times, the usefulness of the outcome matters most. Did the activity spark student learning? Did the assessment report get done? Did I finish the chapter? Do we chastise scholars for using the Internet and favor those who only read paper books? Is the Nobel prize winning author less accomplished because they had a team of editors and proofreaders?
Is the manner of production more important than the outcome? If the answer is yes, then most scholarship and teaching at R1s is built on a foundation of exploited graduate labor. But that’s a harsh take. While we should do more to acknowledge the grad students, most people recognize the outcome, the book or the paper. I think the same is true for the use of AI. As it becomes more integrated into everything we do, I think it is OK for teaching professors to acknowledge that there is a new set of tools that can save time and maybe even bring some relief or joy back into our work.
It would be great to create a more open conversation. For example, a colleague recently whispered to me, “Is it OK to use ChatGPT?” for a thing they wanted to do. Yeah, why not? See how it can help you, especially if it gets you where you want to go. I, for one, want to hear how everyone is using these tools.
Postscript
I did have two tangential thoughts while writing this. One, I fear for the future of the grad student RA and TA. Two, generative AI is leveling the playing field between teaching professors and R1 professors, but more fundamentally it is leveling the playing field between everyone else and all professors, while also raising new questions about workload expectations for faculty. But those are topics for another post.