Against AI in the Literature Classroom
in most college classrooms, actually

Dear Friends,
I wrote the following (for myself) in response to some pressure I’ve been getting from UI administration to integrate AI and/or “AI literacy” into the learning outcomes for Interpretation of Literature, a general education course that I’ll be directing starting later this year. Although I’m thinking of the particulars of an English class, I thought this might be useful to faculty across disciplines. If you’re not feeling this pressure yet, wait a little.
Course Focus
I think it’s worth distinguishing between AI as a subject of study and AI as a category of tools that students might use as part of their school work. I’m a lot more comfortable with the former than the latter. There are many disciplines that I bet have a lot to offer students to help them make sense of the newest offerings from our tech overlords. And I could be persuaded that one of the things general education courses should be helping students with is basic “AI literacy”: understanding what AI does and does not do, grappling with the (countless) ethical issues, thinking about environmental impacts, etc.
But AI literacy has no place in the learning outcomes of a course like Interpretation of Literature because it has nothing to do with what the course is supposed to be about. This is a course about literary interpretation—reading, making sense of, and discussing literature. To shoehorn AI literacy into the class makes exactly as much sense as including scientific literacy, musical literacy, media literacy, etc. It is good, of course, for college students to learn how to navigate an increasingly complex world; I would like them to learn how to critically engage with AI technologies (among many others!) as a part of their education. Why would they do so in a course devoted to the basics of literary interpretation?
A semester is not a very long time. Fifteen weeks devoted to the practice of literary interpretation is frankly not enough time. This is the case for every course—we’re scratching the surface, we can’t tackle everything, and so we need to prioritize. This is why the standard best practice for course design is to restrict your course goals to a very few—I’d say no more than five—hoped-for outcomes. To start adding more is to dilute the focus of the course, to make it less likely that students will be able to succeed. A course on the interpretation of literature has its hands full already trying to help students become better readers of literature, better writers about literature, better literary thinkers, better at literary discussion, etc. To add AI literacy on top of that is unwise and uncalled for.
Obviously your mileage is going to vary on this one depending on the course you’re teaching. A course like Rhetoric here at Iowa, which has long counted information literacy as one of its course goals, might naturally look to help students make sense of all the many ways that “AI” (which means a hundred different things right now) is changing the media and information environment. A business course might want to help students develop familiarity with actual software that corporations use, etc. Even in these cases, however, the risk of course goal dilution is real. Instructors need to make sure they’re clear about the three or four most important things they want their students to be able to do after they take the class.
Pedagogy and the Purpose of Higher Education
Here’s where we get to AI as a category of tools that students might use—that instructors actually encourage them to use.
I might argue that practice—deliberate, supported, guided, social practice—is the fundamental undertaking of a college education.
For the most part, what students should be doing in college is practicing. When a student writes an essay on Giovanni’s Room in my class, she is only pretending that she is writing a literary argument about a famous novel to a reading public interested in her ideas; in fact, she is writing the essay to me, who is really only interested in her ideas insofar as they show she’s learned something. It’s not the game; it’s practice. Yes, when I grade that essay, I am trying to assess how much she’s learned. But even more important than what she’s learned about this particular book and the task of writing this particular essay, I think, is that she’s practiced reading, interpreting, and criticizing a work of literature through an assignment specially designed to make that practice maximally educational. We assign essays because we believe that doing the work of interpreting literature is the best way—perhaps the only real way—of learning what the field of literary studies has to offer.
The same logic applies to a student writing a lab report in a biology class, going into the archives for a history assignment, building a simple program in computer science. In each case, the point is that the student practices completing these tasks, in an environment where that practice is deliberate, productive, and appropriate for where the student’s at.
The point of an essay assignment in a literature class is not to write the best possible essay; it’s to learn how to write a great essay, so that in the future the student can use that skill however they want. We ask them to write essays, not because we need more student essays, but because we need more students who can write. Writing the essay gives the student the practice she needs to get good at it.
Generative AI tools—at least the ones that college students use—are essentially labor-saving devices, and the labor they save is precisely the work we want students to do in our classes. ChatGPT promises to save students from having to do the thinking, planning, drafting, and revising necessary to come up with a piece of writing. Those acts are what students come to college to do and to learn how to do better. Any technology that helps students avoid doing this practice is impeding their education. As I tell my students on my syllabi, “The only path that takes you to the learning that this course is designed to help you achieve is through the ‘wasted time’ that AI tools would help you avoid.”
The Ethics of Endorsement
Finally, AI, as an umbrella term for a number of technologies, is a danger to our students. It doesn’t just rob them of precious opportunities for practice, opportunities that are essential for much of the learning they do in their courses. It robs them of much else. It robs them of their personal data and intellectual property. It robs them of their privacy. It robs them of their confidence in themselves as capable educated adults—people who can read, and write, and think for themselves. It is robbing them of career options, weakening or even eliminating whole classes of jobs they might pursue. It is robbing them, as we speak, of their democracy. And the massive data centers that power the technology that seems so appealing, so personal, so easy—just put in a prompt and let the magic machine do the work—are helping to rob them of an inhabitable planet.
This is not just an argument to protect our students from this technology in the contexts where it is most dangerous to their development and well-being. It’s also an argument that to endorse—or seem to be endorsing—this technology as a learning tool is a dereliction of duty that serves to confuse students further about those dangers. We have a chance to show students that, in contrast to a culture that tells them there’s no point in developing as thinkers, writers, and scholars, we believe in their capacity to learn and grow and contribute to bettering their world. What good is a university if it doesn’t do that?
More on this soon, I suspect.