AI. Education. Equity.

The impact of generative artificial intelligence on our goals for diversity, equity, and inclusion remains undertheorized. The goal of this post is to collect what we know so far and theorize about where we might be headed.

  1. AI models are trained on existing data, so they often have bias built in. Years before ChatGPT, Amazon attempted to use AI to help fill its hiring needs and trained a model on its existing hiring practices. The result was a model (never implemented) that systematically discriminated not just against women, but against any mention of “women” in a resume. Went to a “women’s” college? You are out. Captained a “women’s” basketball team? You are never getting promoted, so why hire you? The exercise revealed an existing male bias within the company. More recently, researchers were unable to get image generation platforms to create visuals of Black doctors helping white children, revealing a deep problem with the training data. There are endless examples of algorithmic or dataset bias informing flawed AI models. The takeaway here is that existing bias is built into AI programs. In some cases AI may even interpret bias as a rule and exacerbate existing problems. Using AI in class or for your own productivity means engaging these systems. We are not advocating avoidance, only that we keep this in mind when we engage.

  2. Paywalls create an obvious class bias. ChatGPT costs $20 a month. Gamma Pro is $15. Canva Magic Write is $20, and the list goes on. These are just consumer-facing products; if you want a chatbot designed for you or your institution, the costs quickly escalate into the thousands. For a student who might benefit from these programs, the costs add up quickly, and they come on top of often already exorbitant course material (textbook) costs. Class often overlaps with other equity concerns like race, first-generation status, and immigration. Requiring, encouraging, or even tacitly allowing AI usage in a college classroom runs the substantial risk of exacerbating existing equity issues.

  3. AI opens a new front in the digital divide. Not all students have access to fast and reliable internet. For example, in sociology we serve distance education students within the CSU Chico service area who live in mountainous, rural areas with intermittent internet. Not only do they often rely on slow satellite internet, they also frequently lose power during the winter or during fire season, which impacts their ability to submit work, take tests, and access course material. Creating new technology dependencies with AI could exacerbate this divide. Age could be another vector of the divide. Older students might be less familiar with AI, feel intimidated by it, both in how to use it and in its ethical implications, and be less comfortable signing up for and using technology products than younger students who have been raised on apps and internet culture. Returning to sociology, our distance education program serves a higher number of older students in their 50s, 60s, and even 70s. When requiring or encouraging the use of AI in a class, instructors should be mindful of access inequities.

  4. No post would be complete without criticism of AI writing detectors. These detection tools systematically flag English language learners at much higher rates, and this is a rare opportunity to point to peer-reviewed research on the topic. For the record, Nik and Zach were skeptics of these programs from the start, but we were still learning in the spring. Even as a non-believer, Zach used the tool to initiate conversations with individual students on several occasions: “This came back with a high number. It does not necessarily mean anything, but can you tell me about your writing process for this paper?” This is seemingly innocuous and probably a best-case scenario for actually using these tools, but the fact remains that more of these conversations, with accusation as an undertone, happened with international students and English language learners. There was bias built into the tool, and in using it we replicated that bias. Tough lesson learned.

  5. We want to end on a positive and hopeful note. AI has the potential to provide the kind of powerful one-on-one tutoring that closes equity gaps. This is the position Sal Khan has taken, and it is a compelling one. The argument is simple: one-on-one tutoring is the most effective tool we have for helping students overcome obstacles, but it is really expensive. AI has the potential to substantially lower that cost. Lower does not mean free, so class and related concerns remain, but the potential to make investments in education more effective is substantial. For years institutions have thrown money at addressing equity gaps and other goals, often with little to show for it. This could transform that landscape for a fraction of the cost.

There is real peril in blind trust in AI, and as the personal experience above illustrates, even thoughtful engagement with the technology can be fraught. At the same time, real opportunity exists for institutions to scale up AI to provide students with a customized learning experience that fits them and closes achievement gaps. For individual instructors, our best advice is to be intentional about your engagement. Talk with students about AI usage and the problems with the models, and point them toward powerful and (currently) free access points like Bing or Bard instead of ChatGPT or Office 365 Copilot.

