AI and Assessment: The Missing Partner

Nik Janos and I wrote a piece for Inside Higher Ed about assessment and AI. It created a bit of a stir in several places online and got some discussions going on our campus. Starting the conversation was our goal, but of course we do not have control over how people interact with our writing. We did want to call attention to an argument we made which has not (to the best of our knowledge) gotten much attention.

The claim that assessment is broken was controversial even though it seems rather obvious to us. The call for a pause in the practice of assessment so we can retool how we measure student learning was also received unevenly. The argument we were hoping would generate more traction was that accreditation bodies need to lead this conversation. Here is what we originally wrote:

First, accreditation bodies, whether they are regional or discipline-specific, need to show leadership. They are supposed to be the connective tissue between employers, governments, and higher education. We need leadership, support, and conversation with these partners. The burden of adapting to artificial intelligence has fallen to faculty, but we are not positioned or equipped to lead these conversations across stakeholder groups.

We want to call attention to the need for accreditors to step up and lead.

Looking back through my email archives, I have spent months messaging people who interact with institutional and disciplinary accreditation, asking what they were hearing, and hearing nothing. We assumed leaders at these bodies were aware of the challenges cross-cutting our classrooms and were ready to offer guidance. It has been quite the opposite. A search of the WSCUC (our regional accreditation body) website reveals exactly one mention of artificial intelligence: the bio of the president of Occidental College, whose partner works in an AI lab.

Search results from WSCUC showing one hit

Searching other regional bodies is similarly disappointing, with the exception of the NWCCU, which has robust writing and scholarship on the topic. It is our understanding that a session is scheduled for an upcoming WSCUC meeting, which is good, but if we are being honest with ourselves, also absurd.

Search results from SACSCOC showing no results

We are 18 months into the ChatGPT era, and it has dominated higher education news for nearly that entire time. Yet if we go looking for resources from the institutions that sit at the intersection of government, employers, and student learning, most of these organizations have nothing for us.

Broadly, we continue to treat the disruption of AI as a puzzle to be solved in the classroom, like the integration of other new technologies. This is obviously (to us at least) not accurate. The disruption is to the world, not just to the classroom. Outcomes that mattered before are no longer relevant for job placement. New skills have emerged or been emphasized, revealing gaps in the education system.

We don’t have access to government or industry leaders. I have spoken with some people who typically hire graduates in my area and many of us have done the same, but I don’t have a government response wing of the Faculty Development office and Nik doesn’t have a direct line to the CEOs of the non-profits and private sector organizations that hire his Sociology students. Accreditation bodies exist to connect us together.

The Department of Education lists the following essential functions for accreditation organizations:

  • Assess the quality of academic programs at institutions of higher education

  • Create a culture of continuous improvement of academic quality at colleges and universities and stimulate a general raising of standards among educational institutions

  • Involve faculty and staff comprehensively in institutional evaluation and planning

These areas have all been impacted by generative AI. What makes a quality program is now different, in both the goals we need to achieve and how we achieve them. In terms of academic quality, it is hard to look at outcomes and see continuous improvement happening. As Nik pointed out in our original piece, we have no idea if students are actually improving because we continue to evaluate learning products in the same way. This extends beyond the classroom as well. Our own writing and evaluations are now impacted by AI. If Nik uses generative AI to crank out writing projects and I refuse to do so, how should our work output be compared in the tenure process? These are big problems we are not in a position to solve. So in some ways, the final bullet point is one where accreditation bodies are excelling: given their noted absence from this conversation, we are quite involved in evaluation and planning, more than we should be, and with a missing partner.
