GenAI & Classroom Dynamics: Part I

Gen AI Killjoy sticker designed by Hiba Abdallah, 2025
Introduction: ACM²
Starting in January 2025, the Department of Arts, Culture, and Media at the University of Toronto Scarborough initiated the GenAI & Classroom Dynamics Working Group, a research collaboration with faculty across all nine programs in our department (Studio Arts & Visual Culture, Art History, Media & Communication Studies, Arts Management, New Media, Journalism, Theatre & Performance, Music, and Media Industry & Technology). Importantly, this research collaboration focuses on the ways that a whole suite of GenAI tools1 are eroding the cultures of teaching and learning in post-secondary Arts and Humanities classrooms. In particular, we began to collectively report to each other on classroom dynamics – the ways that students relate to themselves as learners, to each other, and to faculty.
The main thing that we observed over and over again, and that our students self-reported, is that the availability of GenAI in all forms is making it increasingly difficult for students to learn cumulatively, collaboratively with each other, through trial and error, and with the vulnerability, humility, and (perhaps most importantly) shared sense of adventure and humour that characterize learning processes in the Arts and Humanities.
In the Winter of 2025 we conducted research through questionnaires and follow-up Town Hall meetings with undergraduate students, Teaching Assistants (mostly graduate students in programs across U of T’s tri-campus), Sessional Faculty (faculty who teach in our programs often over a long career, but without a longer-term contract), Contract Faculty (faculty teaching with contracts limited to one or a few years) and continuing Teaching Stream and Tenure Stream Faculty. Our focus has been on learning from the expertise of students and the faculty who teach the most.
In Fall 2025 we hosted a series of pedagogical & research praxis sessions through a project that we call ACM²: Arts, Culture & Media Applied Co-Mentoring. The topics were: 1) The Anti-Gaslighting Episode, in which we discussed the real impacts of GenAI in our classrooms and the ways that institutional failure to acknowledge the crisis in learning culture makes the jobs of faculty close to impossible, if our jobs are to teach, to help students learn, and to assess and evaluate student learning. 2) Yes-ish: reflections from faculty who have been attempting to teach students selective, critical, and accountable use of AI. 3) No-no-no: reflections from faculty who have been attempting to teach students to refuse and avoid AI entirely. 4) Language Learning: reflections from faculty attempting to teach writing skills – including the cultivation of critical perspectives and personal voice – in classrooms where the majority of students are English-language learners.2
For more about Critical/Ethical Engagements with GenAI, here is a Bibliography created by our colleagues in the Critical Futures and Creative Labour (CLCF) research group: WIP Critical AI _ AI Ethics Bibliography
What we have consistently found is that unless faculty take active, laborious, massively time-consuming steps to temper the impacts of GenAI in our classrooms and to encourage and incentivize students to do their own reading, summarizing, brainstorming, outlining, editing/polishing, translating, and practicing unscripted, improvised in-class discussion, the once-lively dynamics in our Arts and Humanities classrooms are – and this is not overstating the case – dead.3
We have observed that students who come to class willing to bring partial answers, to ask questions based on their own in-progress ideas, and to listen to and build on each other’s ideas eventually stop participating in these ways under pressure from the majority of students who will not participate without a scripted, AI-generated answer. The students who do participate simply end up feeling too conspicuous and become self-conscious about the risks they are taking by actively participating in class.
Overwhelmingly, we have observed that the majority of students in our classes never develop the abilities and confidence to identify for themselves the key ideas, themes, or concepts in a research article, a documentary film, a news article, or a piece of art, and that they feel unwilling to take intellectual and creative risks for fear of making mistakes or saying, writing, drawing, or recording something that is not “polished”.
GenAI has a disastrous impact on classroom dynamics in the Arts & Humanities because it eliminates learning processes, offering already stressed-out and time-crunched students the chance to submit work (or script their in-class participation) in ways that perform apparent fluency and apparent comprehension, but with little to no real understanding to back it up. As Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University, puts it, “students’ use of generative AI to perform critical reading, writing, and thinking activities resembles ‘going to the gym and asking a robot to lift weights for you.’”4
GenAI Killjoys
For faculty members who believe in the transformational power of learning, this has been a heartbreaking development that leads to more and faster burnout, malaise, depression, and anxiety for faculty of all ranks, but especially faculty in precarious positions who feel that by exerting regulatory efforts on student GenAI use, they compromise their chances of being rehired due to high failure rates in their classes and the fear/threat of negative student evaluations. This heartbreak has us cohering around Sara Ahmed’s anti-racist and queer feminist figure of the killjoy: those who “disturb the very fantasy that happiness can be found in certain places.”5
We join scholars like Maggie Fernandes, who writes that “Killjoy energy is so stark in the GenAI context because the marketing surrounding GenAI is so optimistic, so cheery” (see Fernandes 2025, “On Being a GenAI Killjoy”). As Martha Kenney and Martha Lincoln show, while the oligopoly of big coercive tech has promised that AI brings endless happiness – “an ‘everything machine’ (Bender and Hanna, 2025, p. 147) that will ‘accelerate research, personalise learning and drive productivity’ (London Business School)” – there is no scholarly or scientific research to support this cheery optimism. In the face of costly university investments in GenAI, or tacit acceptance of its inevitability, faculty and students are effectively working as unpaid researchers for these large companies, charged with finding reasons to use these tools. “We argue that the best strategy to resist the enshittification of higher education by the tech oligarchy is refusal. Refuse to use AI chatbots; refuse to accept the AI contracts the administration makes on our behalf; refuse to do the dirty work of AI firms by finding use cases for AI in our classrooms” (Kenney & Lincoln 2025, np). Given our experience, and the experience of every faculty member who has ever published on the topic, the only happiness that GenAI secures is for tech oligarchs delighting in the deregulated market of data and profit hoarding and university administrators enjoying the opportunity to fire faculty and eliminate programs. Faculty who have built their research and pedagogy expertise in fields shaped by and for social justice refuse to pretend we are happy with “ced[ing] the future of education to the interests of Big Tech, no matter how convincing their sales pitch” (Kenney & Lincoln 2025, np).
We want to ensure that faculty who remain committed to holding students accountable for their own learning, and who put in the work of making their classrooms sites for genuine learning and critical engagement with new technologies like GenAI, are valued for their expertise and efforts. And we want to give students the opportunity and tools to learn for themselves, to experience the satisfaction and confidence that comes with doing the hard work of building their own intellectual-creative practices through effort, error, and enthusiasm. We understand why students would want to take the shortcuts offered by coercive technologies that entice them into accepting the help of GenAI as if it were common sense, to their own detriment. However, learning is not easy, and there are no shortcuts to developing comprehension, creative connections, and exciting new ways of thinking.
While we do not have a full ban on the use of GenAI in ACM courses, we do encourage a severely critical approach to its use. We refuse to accept that our students and faculty will inevitably default to relying on GenAI for intellectual-creative work; we believe that our students and faculty need not concede to the substitution of easy answers for genuine understanding of complex and even contradictory ideas, of fast production for creativity, or of automatically scripted answers for the hard work of building complicated, broken vocabularies and partial answers.
Learning is not efficient!
We challenge all ACM students and faculty to do the hard work of teaching and learning as a set of radical, transformational, relational activities that take practice, patience, trust, and determination; to take risks and make mistakes, and to cultivate classroom dynamics that are intellectual and creative communities-in-formation!
Further reading…
Part II: Approaches to GenAI in ACM Classrooms
Part III: GenAI & Classroom Dynamics: Mythbusters
1. Including large language model chatbots like OpenAI’s ChatGPT and Microsoft’s Copilot (which the University of Toronto has licensed), AI art-making software like Canva, DALL-E 2, and Sora 2, and other pop-up tools like Adobe’s AI Assistant and Google’s Gemini, tools added to research sites like JSTOR, as well as automated translation tools like DeepL and Google Translate.
2. ACM is not an English-language-learning department and does not have the resources to support this necessary skill (i.e. we do not have the budget for small classes, TA-led tutorials, or individualised language development workshops). None of our programs are designed for teaching English language learners, and none of our faculty are hired for their expertise in this specialised field.
3. Our findings are also consistent with every other study of teachers and faculty navigating GenAI. See, for example, Jason Koebler, “Teachers Are Not OK,” 404 Media, June 2, 2025, https://www.404media.co/teachers-are-not-ok-ai-chatgpt/; Ronald Purser, “AI Is Destroying the University and Learning Itself,” Current Affairs, December 1, 2025, https://www.currentaffairs.org/news/ai-is-destroying-the-university-and-learning-itself; Olivia Guest, Marcela Suarez, Barbara Müller, et al., “Against the Uncritical Adoption of ‘AI’ Technologies in Academia,” preprint, Zenodo, September 5, 2025, https://doi.org/10.5281/zenodo.17065099; Martha Kenney and Martha Lincoln, “Let Them Eat Large Language Models: Artificial Intelligence and Austerity in the Neoliberal University,” preprint, SocArXiv, October 25, 2025, https://doi.org/10.31235/osf.io/6z5pa_v1.
4. Quoted in Jason Koebler, “Teachers Are Not OK,” 404 Media, June 2, 2025, https://www.404media.co/teachers-are-not-ok-ai-chatgpt/.
5. Sara Ahmed, “Feminist Killjoys (And Other Willful Subjects),” Scholar and Feminist Online 8, no. 3 (2010), https://sfonline.barnard.edu/polyphonic/print_ahmed.htm. For more, see Sara Ahmed, The Promise of Happiness (Duke University Press, 2010); and Sara Ahmed, Living a Feminist Life (Duke University Press, 2017).