On June 26th, Hollis Robbins and Anastasia Berg appeared at a salon hosted by Anna Gát to debate the place of artificial intelligence in higher education. Robbins, a professor of English and special advisor for humanities diplomacy at the University of Utah, offers a defense of the use of AI in gen-ed instruction, while Berg, an editor at The Point and a philosophy professor at UC Irvine, argues that the costs of allowing AI into higher education clearly outweigh the benefits. Below you will find an edited and condensed transcript of their conversation. To watch the original debate on YouTube, including an extended Q&A with the salon attendees, click here.

Anna Gát: Today we’re talking about potentially automating the humanities: so, literary history, literature, poetry, philosophy, public philosophy, argumentation, rhetoric—what many humans think of as the core elements of the human intellect. We will challenge this from multiple angles. For instance, maybe at some point doing data calculations by hand counted as a core element of the human intellect, and now that we have Excel spreadsheets we’re happy we don’t have to do it, most of us. Hollis, the floor is yours. So, tell us: you’re arguing on the side of a positive scenario where AI actually makes academic life better, more efficient, cheaper, more egalitarian, and humans somehow smarter. What do you have in mind when you’re writing these very hopeful things?
Hollis Robbins: We probably have a lot of agreement about what AI can do and what it can’t do in terms of our human flourishing. But in order to make this more of a debate, I’ll just say what my proposal would be: within two years, I think AI will be delivering the general education curriculum for all public universities. And what I mean by general education is writing, critical thinking, some of the baseline classes that most state mandates have decided are a requirement for all college students in public universities around the country. This is delivering information that is already known, and it’s being delivered by a human professor in a classroom, usually an overworked grad student or an underpaid adjunct. I have spent the past seven years as dean at two different places, reading course evaluations, and it is not being done well. So when I say, what is it that AI can bring to higher education, let’s start by asking: Are we doing a good job right now, human-to-human, teaching writing, teaching literature, teaching math, teaching critical thinking, teaching intro to philosophy?
You know, I ran into a young woman when I had a flat tire. I had to bring my car into the garage, and there was a young woman at the counter, and when I asked her if she was a student at the University of Utah, she said, “Yes, but I’m taking a year off because I’m in debt.” And I asked her, “Where are you in your journey?” And she said, “Oh, I’ve just taken my general education.” I said, “Well, how do you feel about being $8,000 in debt for your general education?” She said, “It was a complete waste of time. It didn’t teach me anything that I didn’t know. I was expecting to go to university and have strong, good relationships with professors, to work with experts.” She’s going to be a neuroscientist. She already wanted to take classes that would get her toward her neuroscience degree, and the gen-ed curriculum was just this layer of information and box-checking. I think AI has to be part of a university education if states are going to mandate taking money for what is already known. AI would be much better at delivering this material. We can talk a little bit about why, but I’ll stop there, so we can at least start our conversation.
Anastasia Berg: My time is brief, but I’m happy to say my argument is very simple, and what I want to claim and convince you of is that any purported use or alleged benefit of the introduction of AI into university—that includes the possibility that it could somehow make general education more effective or cheaper—needs to be really carefully weighed against what I think is already the undeniable cost of AI use, especially by students. And I’m going to put it a little bit provocatively by saying that the main cost, for me, is that it is rendering our students subcognitive. I’m going to try and convince you of why that is the case.
Anna Gát: Sorry to interrupt. Subcognitive... does that mean dumb?
Anastasia Berg: It doesn’t mean smart… but it doesn’t mean just dumb. So let me say what exactly I mean. I want to really emphasize first that this argument is not going to turn on denying any and all possible uses of AI in academic settings. It’s not going to even turn on the idea that none of them can produce a marginally better result within STEM. This is my claim: I am skeptical about some of these purported uses and benefits and successes of AI—I don’t think they often actually hold up to scrutiny—but my argument is that whatever these benefits may be, they need to outweigh the very real cost, and they need to be doing so by a comfortable margin, because this cost, and this is the second key claim I want to make, it’s not what we often hear. It’s not academic integrity. It’s not pedagogical quality. It’s not some vague and bespoke and kind of sentimentally conceived humanistic value, important as all these might be, and they certainly are to me—I’m a philosophy professor. The cost of relying on AI is a degradation of the most basic, fundamental, non-artisanal, non-specialized cognitive capacities. Using AI to perform intellectual tasks is destroying our students, as thinkers and as humans, and it’s already happening now as we have the first generation of students who are having full years of their college education being completely infiltrated by AI.
I want us to be clear about what I mean when I say destroying our students’ capacity to think. I’m talking about the capacity to take in information. And I’m not just talking about complex information. I’m talking basic information. I’m talking about being able to understand what words mean. I’m talking about being able to understand what sentences mean individually and taken together. It’s about assessing the significance of that information, drawing conclusions from it, communicating our thoughts: how we convey our beliefs, also our worries and our questions—effectively, persuasively, but even just coherently—to others. That’s what I think is at stake right now.
I’m sure everyone who’s interested enough to come to this debate has seen the reports on the recent MIT Media Lab study that showed that the use of LLMs in the composition of essays degraded essay writing capacities. When researchers forced ChatGPT users, after prompting them multiple times to write essays with AI, to then write without AI, they performed worse than people who never used it, and they performed worse than their counterparts at all levels on neurolinguistic scoring. I think one of the most kind of shocking bits of reporting was that 83 percent of ChatGPT users couldn’t quote from the essays they wrote minutes earlier. And likewise, we have a study, which I actually think is just as interesting and as important, because it goes even more toward this question of the potential benefits of AI in things like gen ed: we have a study at the University of Pennsylvania about the effects of the use of AI math tutoring at the high school level. (We’re sort of artificially separating university from high school, but this is a somewhat artificial separation.) They found that unadulterated access to GPT-4 led students to perform significantly worse when access was taken away, and while the specialized GPT “tutor”—so, something with guardrails and teacher-manufactured prompts and answers—didn’t harm students, it didn’t help them either, and it did affect their sense of learning significantly. Namely, it seeded a lot of false confidence, so the students were certain they knew a lot more than the students who didn’t have access to that AI tutor, which, I think, from a pedagogical perspective, is a danger of its own.
This is hard empirical evidence that’s coming out. It’s beginning to establish that reliance on AI will not just harm capacities for some in-depth analysis or creativity or adaptability, as many have argued, but is going to degrade the most basic capacity that it’s trying directly to enhance. And as significant as empirical studies might be, I want to end on the student testimonies, because these are the things that really have haunted me. We now have students who’ve used AI for so long in their studies that they can reflect on that experience. And I’m just going to share with you two quotes that I found personally heartbreaking. One student said to a Chronicle of Higher Education reporter, “I’ve become lazier. AI slowly causes my brain to lose the ability to think critically or understand every word.” And that’s really important, because thinking critically can sound like something very complicated, but they’re talking about understanding the meaning of every word that they’re reading and then using. And on the possibility of using AI to produce summaries, another student said, “Sometimes I don’t even understand what the text is trying to tell me.”
This is an assault on our very humanity, as a lot of people say. But the problem isn’t that AI poses a risk to some special value or some very kind of romantic way of looking at ourselves. The problem is that it is making students dumber, and it’s making them cruder, and even if it makes their life longer by improving the results of radiology analyses, it’s going to make them nastier and more brutish.
And if this is not enough, I want us to not forget what’s at stake here: this sense of humanity—our basic cognitive capacities—is the foundation of our most basic ethical and political convictions. Subcognitive beings do not require us to treat them with respect. More alarmingly, subcognitive beings aren’t fit for self-rule. People who don’t understand words and sentences and can’t read a short passage from the newspaper—I think we can credibly start questioning the thought that they should be allowed to make decisions about their own private lives, but they certainly shouldn’t be making decisions about our collective decisions and our shared experience. So they might not be fit for democracy, and that’s where I’ll end.
Hollis Robbins: Well, my first thought was: this actually goes back to 1635, around the time of the founding of Harvard. If you went to Harvard in the 1630s, ’40s, ’50s, actually right up until the turn of the eighteenth century, when your professor assigned a book, it meant you had to write it down. You actually had to write down Principia Mathematica, Liber Secundus, all the physics books, all the geography books, all the Latin books. After a little while, you know, as Harvard got a little more flush and they actually got books, the complaint was exactly the complaint that you just made: How do you mean the students are just going to have to read it? They’re not going to have to write down the whole book? They’re just going to sit and read it? How are they going to learn? How is that going to create a habit of mind? That complaint is a technological complaint, that students can get an education simply by taking it in, and not by actually writing it out by hand. You can go to Harvard Library and just see copies that the students made all of those years. I bring that up to say that we have had this conversation about a new technology.
My second thought is that the study you referenced was deeply, deeply flawed, both methodologically and technologically. It was unclear what was being measured. If you look at the study, with some people doing the tasks with AI and others not using AI at all, the EEG could be measuring a million things, and it was not necessarily measuring the thing that the study said it was measuring. And I think, you know, just as a matter of methodology and experimental design, it was designed to meet everybody’s priors, and it did.
But back to this “subcognitive” question—I think that’s a good phrase. I come at this as a dean, as somebody who’s been making policy. I look at what happens in the classroom at many universities where I have been, and that interaction between faculty and students, or between students, is not always the healthiest or the best or the most productive relationship in terms of instilling habits of mind. And so when I think about what AI can do—I’ve spent more time on AI platforms than others—I think about the ways that it has elicited things from me that have never been elicited from me, in terms of thoughts and ideas. Somebody at a conference I went to said that working with a really good model of AI is like browsing the stacks in a really good library. Things will come to you. And again, I’m talking strictly about the pro models. I’m not disagreeing with you that there is a certain danger—you know, my entire life is reading, grappling with words and thoughts and ideas and concepts and all those things. It is really, really important, but it’s not happening now. It’s not happening now.
Anastasia Berg: So actually, this doesn’t start in 1635. The earliest—or one early—articulation of the fear about technology and the written word comes in about the fourth century BCE, with Plato’s Phaedrus.
Now let me say something about Harvard. When I was at Harvard, we had books. You said the worry with the books was that we would be “simply taking it in.” But actually, I didn’t simply “take” anything “in”—because I had to not just take in things, but to get things out. And those things were essays. And the environment, the circumstances in which I was having to submit said essays was one in which there was an institutional, public, zero tolerance for even the mildest transgression against standards of academic integrity. And that meant that in that environment, despite many incentives to not do my work, I was forced again and again and again to write my essays, even though I had real books that I didn’t copy by hand.
When we’re talking about what is happening right now in my classrooms, it is not that students are “simply taking it in,” but that when they are required to submit their own work, they’re submitting the work of something else. They’re not even “simply taking it in.” They don’t need to take anything in. The only thing they need to take in is the prompt for our assignments, during the second that it takes to copy-paste it into the LLM. So I find the comparisons, which we see everywhere, to books, to the calculator, to Wikipedia—and I’m so sorry, and you know, I’m so grateful to be here, and I respect you Hollis—but that is sophistry. It’s sophistry to compare the use of LLMs to these other technologies, not because the other technologies did not also come at cost. Again, I’m open: there are many potential benefits to the use of AI, maybe even in education. But we’ve never faced anything like what’s happening today before.
One thing that’s unique about AI is that not even the degraded source that was a Wikipedia article—nor a calculator, nor a book, nor a Stanford Encyclopedia of Philosophy article that our students think is an adequate resource—none of them will supply you with a ready-made, structured response to any question, including my request to reflect on their personal experience. Even with Wikipedia summaries, the students still had to, in order to do well, in order to say anything, they had to take it in. They had to analyze and then they had to bring themselves into the work.
Now, Hollis, you said something about the MIT study. You said the study is not perfect. But no matter what the flaws of the study are, isn’t it intuitive that by not performing a task, we will not get better at performing that task, and we will score worse? We will be performing that task worse than those people who have been performing the task again and again and again. So, I’m open to arguments about “specialized applications” of AI and those kinds of discussions, but the thought of attacking the MIT study that just said students who never write essays will not get better at writing essays, I find that surprising.
Hollis Robbins: Well, I didn’t bring it up, but I just said that I think it was flawed. But here’s a question, because you’re at Irvine. For the first two years of the gen-ed program, before students get to you, or before they even get to your classes, they’re taking general education classes that you are not teaching, right?
Anastasia Berg: Oh, I do teach general education classes.
Hollis Robbins: Which ones? Which ones do you teach?
Anastasia Berg: The philosophy department at the University of California, Irvine teaches many humanities general-education classes. I teach Introduction to Contemporary Moral Problems. And I cannot be replaced by an AI.
Hollis Robbins: I’m just saying if you take this suite of gen-ed courses, you can take them at a community college. You can take them at the Cal State. The state has already said that they’re exactly the same. The state of California that pays your salary has already said this, that the faculty member who is teaching these classes does not matter for the transferability of these classes across institutions. Now, that is the baseline.
I know that we’re talking about two different things, because I agree with you. You are articulating the ideal, the platonic ideal of a classroom situation, of a student who really wants to grapple with texts and is being asked to write wonderful essays. That is not happening in public higher ed, at least. And so when I’m sitting here saying that AI could be having some engagements with students, I am saying that we’ve got states like Texas and others that are saying none of this matters, we just have to train you for the workforce, and states like California that says the faculty member does not even matter. That is the context in which I am saying, yes, AI has a role.
Anastasia Berg: So when we’re talking about general education in the humanities—unlike the possible applications of AI in math and STEM teaching more generally—I have not seen anything that an AI interacting with my students can do to replace the benefit that a student has from being forced by the structure of incentives at a university to grapple with texts of a certain length and complexity. My students, until the introduction of AI, were required in my classes to write two essays. Those essays were read and remarked upon by very thoughtful, wonderful graduate students of mine and by me. Those students came to my class and we would discuss the material in very close engagement with the text. I have not found anything that even remotely suggests that any of this work could be transferred to AI. One of the things that I am most surprised about is the kind of confidence that gen-ed in humanities can be performed by AI. I guess that’s just an invitation to say more about how you think that is going to happen.
Hollis Robbins: I want to hear from others, but let me just say, right now, when you are a student—
Anastasia Berg: But you’re debating me.
Hollis Robbins: No, no… You enter the system, and you have to take all these credits in critical thinking, right? Philosophy, also English, and you can pass it in all these ways. And in a large, forty-person class without TAs, with a faculty member who’s teaching five sections of this class, you’re not going to have the personalized feedback that you’re talking about. So when I’m talking about AI intervention, again, I’m just talking about in places where it’s currently being delivered, in a place where the student isn’t getting feedback.
Anastasia Berg: Let me say something about feedback, because here I think I have something to contribute. Part of my delinquent college career involved the fact that… I mean, I must have read the instructor comments on my papers—but I don’t remember any of them. And I don’t think that anything that any TA or professor wrote, with the exception of one—Helen Vendler, may she rest in peace—has ever made much of a difference to my capacity to grow intellectually. But somehow, at the end of that college career, I got into a grad program of my choice. So how did that happen? That happened not because of the personalized comments I got. It happened because I had to perform that task again and again and again. I was reading and writing, reading and writing, and in that context, I just ended up improving. Maybe, probably, the encouragement of a human being played a role—there was a tremendous element of my intellectual development that was what you, Hollis, would call the “human touch” component. So we agree on that. But I really want to talk about the repetition of the task of producing these assignments. You know, I have friends who studied in the kind of utopian Oxbridge tutorial system, where they’re getting that one-on-one contact. And even then, it is not the particular comments about how you mistook what Burke said here, or why, how that word was misplaced there, that made the big difference. It was really this necessity of reading and writing and reading and writing and reading and writing that then matures our students to a place where they can really be intellectual interlocutors to us.
Hollis Robbins: I’m 100 percent in agreement with you on that. I’m just saying when a state system does not have the money to have that kind of attention…
Anastasia Berg: But this doesn’t require any attention.
Hollis Robbins: And you were an exceptional student, right? We’ve got students who phone it in, who don’t want to be there, who don’t want to be in their classes, who are required to take these classes, who put their prompt in, and there’s no disciplinary structure to make somebody do something over and over again that they do not want to do. So it is easy for anybody to teach the exceptional student. The bored, harried, overworked, doesn’t-want-to-be-there student? Teaching that student is the challenge, and it’s certainly the challenge in a public institution. And when I was talking about the young woman that I saw when I was getting my tire changed, she didn’t want to take any of those classes. I didn’t ask her explicitly whether she had used AI—I kind of suspect she had, because she had better things to do with her time. So I mean, I’m dealing with the reality of the situation. And I get it. I totally agree with you that you did it because you were self-motivated. Many students aren’t.
Anastasia Berg: I want to separate two things. One is the structure of general education, and in many places it’s deeply flawed. So we can have a conversation about what an ideal gen-ed environment would look like, or what kind of content it would have. I am a firm believer, however, that we need to have general education.
What differentiates the American system if not that general education? What gives it the claim to the title of the liberal arts, as opposed to the specialized European-style education system where once you graduate high school, you don’t need to ever study anything—unless you’re quote-unquote internally motivated—other than what you’ve chosen to concentrate in? It could be vocational training, even for our best and brightest.
I want to separate that question from what you’re raising, Hollis, which is this idea that there is no way for us to administer gen-ed education at cost. Now, if we’re comparing the cost of gen-ed with firing all gen-ed educators and giving students ChatGPT, I don’t know if I can compete with that, especially because right now it’s all subsidized by the AI companies. But I can say the following—what my colleagues and I who teach gen-ed are struggling with as we design our classes is this: There are ways of having assessments that are not open to the use of AI. We all know them. They include in-class assignments. They include peer review. They include an emphasis on providing our students with opportunities to be tech-free. And my students—a lot of them are STEM students—are sometimes very reluctant to talk about contemporary moral problems. But, to be concrete, if we were spending a fraction of our resources—the time and money and personnel resources—to think together about how to protect our students from the absolutely degrading effects of their constant use of the technology, as opposed to sending them daily emails encouraging them to use the models that I and my students have access to—they’re paid-only and they’re subsidized by my university—I think we would be at least striking a middle ground between the pessimism of “we have nothing to offer these people except AI access” and some fantasy—one that I don’t accept—in which all public education could only be adequately done by an Oxbridge tutorial system.
Jen Frey: I’ve been an administrator for two years now. Prior to being at the University of Tulsa, I was at large state flagship schools, and in fact, went to a large state flagship school as an undergrad. So it’s a space that I understand exceedingly well from both sides. And I think the thing that really bothers me, in a deep existential way, is how the administrative class just likes to double down on failure. So it’s kind of like, Oh, we all know gen-ed’s a failure. Rather than fix that, which we could do—we could absolutely do it—let’s just hand them over to the robots. To me, that’s negligence and I am very bothered by it. General education, for most of our students today, is the only chance they have at anything remotely like a liberal education. And a liberal education isn’t worth a damn if it’s not actually a kind of formation, if it’s not forming habits of mind, habits of speech, habits of being. So I think the argument really has to come down to whether or not you think those habits need to be formed in a human context, or whether you think they could sort of be outsourced to increasingly non-human AI.
All of this is happening so much more quickly than higher education has any capacity to deal with, which is very terrifying. For me, the thing that we always have to be fighting for is this idea that, one, general education is actually extremely important, and not just a weird side thing that, for some reason, is there, and no one can figure out why. And two, we really have to take seriously what those practices in the classroom are that actually lead to the formation of these habits. You just don’t get habits without the activities. And I don’t really think that AI is a very good substitute for a dialectical partner.
And another pain point for me is that we tend to think of students as these isolated learning units—as if the classroom shouldn’t ideally be a space of community, where what we’re trying to do is have students learn from one another. At least that’s what I’m always trying to do. Maybe if I taught calculus, I would feel differently, although I’m not sure that I would. And so for me, part of the existential threat of AI is that we’re taking the communal element out of it. And I think that that pairs really well with this instrumentalization of general education, where it’s just something you have to do for some reason that nobody ever really explains. None of it makes sense. Students are like, “Why do I have to do this?” And it’s like, “Okay, you just have to jump through these hoops. Here’s a robot to help you.” To me, again, it’s doubling down on failure rather than looking at the problem square in the face, which is that general education at most of our universities is a joke and we need to fix it.
Hollis Robbins: Can I just jump in to say, I agree with everything you’re saying, except for the fact that we have no baseline for the human teachers. I mean, there are no studies of who is an excellent instructor or not, and one of the things that one gets from being a dean is reading everybody’s course evaluations and seeing trends over time of who the great teachers are and which ones aren’t, where teaching is happening, where habits of mind are being produced, and where they’re not. And again, if everybody was as awesome as you, or as awesome as you, Anastasia, or as awesome as I am—I’ve won several teaching awards—then I wouldn’t be sitting here having this conversation. But I know it really well, and I’m not just complaining: it’s bad. But one of the ways to fix things is at least to have a baseline of what that engagement is. And I’m telling you that what I’ve seen is that a really good pro model is not as good as the best professor at all, but is a lot better than most.
Anastasia Berg: I take seriously what students have to say, because we have to take their motivation and engagement into account. However, I think we have to be careful about relying on student evaluations as we’re thinking about the quality of instruction. One of the things that’s come up recently and made the viral rounds was from a professor who tried to AI-proof an assignment and whose student responded by saying, “You’re interfering with my learning style.” Now, you know that’s not going to be a student who’s going to leave a good student evaluation. Does that suggest that that instructor is any worse?
To make it a little bit more concrete, we know that in general, and not just when it comes to the use of AI, in educational experiences, students often mistake entertainment value and ease for learning. So when we look at student evaluations, they will rank their own performance higher—having absorbed more material, being able to do more at the end of a course—if they had more fun, if a professor was more entertaining, and if it was easier for them. Evaluations of their own learning outcomes are often dissociated from reality, especially as students have become service consumers. So I don’t know, in some context, maybe a robot is better than the absolutely worst professor, but I want us to hold that information about student evaluations squarely in mind as we come to evaluate that.
We’re talking about education. Education is a very intricate and complicated and subtle thing, but there’s something about it that’s also incredibly simple. It’s giving students an opportunity to read a text they might otherwise not read. It is longer than what they might read by themselves. It is more complicated than what they would persevere with, and I want them to ask themselves questions, forcing themselves to attend to that text, to recognize the functions of the argument. I’m hearing professors already tell me their students can’t tell what’s an example and what’s a claim and what’s an objection. They’re just reading everything as a mass. And we work so hard to give them that opportunity. We do our pedagogy workshops, and I give them a lecture, and I give them homework. But what we’re talking about is this: make them read, make them think, and then make them communicate that to someone else. And I just have not seen how an LLM could replace the work of providing that opportunity, that incentive structure that makes somebody do something that is not fun and doesn’t feel good and is not entertaining in the moment, and whose benefit they may theoretically take in, but are not feeling, while they would prefer to do absolutely anything else—not because they’re TikTok addicts, although half of them are, but because they have so many pressures—vocational, financial, social—that they would rather attend to instead.
From the archive
Rory O’Connell, “Intelligent Life” (2023)
“What would it be to approach the question ‘Can machines think?’ in a different way? Forget about machines for a moment. Instead, just think about thinking.”
“A Matter of Words” (2025)
“Keeping up with advances in AI technology is not the biggest challenge we face. To come up with a good AI policy for a university, a department or even a household, one first has to have an idea of which skills and formative experiences they are prepared to lose for the sake of AI use, and which ones they will fight to retain. And it’s here that we have discovered that consensus is most importantly lacking.”
Chad Wellmon, “Degrees of Anxiety” (2021)
“Four years ago, I thought I knew what a university was. I was leading a sweeping reform of the undergraduate general education program and it had not yet collapsed into acrimony. I was on the Arts and Science Budget and Planning Committee and Faculty Steering Committee, and I had read and written a lot about universities from Paris to Baltimore. But none of this prepared me for the other half of the university: college as lived by the three hundred undergraduate students in the residential institution I assumed leadership of in August 2017.”
“College Life” (2021)
No symposium about what college is for would be complete without the perspectives of those for whom the question is most immediate: college students…
The Editors, “The New Humanities” (2014)
“There might have been a time when the humanities offered a counterweight within the university to the sciences’ relentless optimism and obsession with ‘progress,’ but since at least the 1970s—perhaps not incidentally when the enrollment numbers began to decline—only the heretics have stood up for anything resembling tradition. Today’s humanities professors speak of nothing but ‘new research opportunities,’ nothing but ‘progress,’ nothing but the gross injustice of the ‘way things have always been done.’”