Coming back from sabbatical in Fall 2025 means coming back to everyone talking about AI.
It would be nice if I could say I spent a year not thinking about it. Instead I spent a year reading, with horror, all my teacher friends’ texts and posts about dealing with a dizzying uptick in academic misconduct. And listening, with horror, to writers I admire discussing having their books pirated by AI companies. And scavenging, as I work on syllabi, for any bright ideas to make my courses less AI-able. And, for the past three months or more, waking up in the mornings with a racing heart, reciting “please don’t cheat” speeches to my future students that I’d seemingly been working on in my sleep.
And of course, when everyone asks “how does it feel coming back from sabbatical?” — a question I’ve been fielding all summer — the honest answer, when I choose to give it, includes the dread that accompanies my excitement about returning to my colleagues and classes.
It’s a depressing time to be a teacher. Yes, some of my colleagues choose to be excited about the technology itself, but it’s fundamentally heartbreaking to realize that your students mostly didn’t cheat on their work because cheating wasn’t easy. When you’d thought, the whole time, that it was because they were mostly honest.
So now I’m back, and next week I’ll get to deliver the speeches to my actual students, and the actual speeches won’t be as profound or moving as they were in my half-asleep anxious brain. But there is one thing I know I want to mention, because it is the best thing I’ve heard on the topic of AI.
Ironically, I can’t properly credit the idea. I was at the Nineteenth-Century Studies Association (NCSA) conference at the end of March, at the pedagogy panel on the last day. The four presenters talked about incredible assignments they used in their classes. I felt jealous, inspired, and… full of dread. “How can we still do amazing assignments like these,” I couldn’t help asking in the Q&A, “when they are so vulnerable to cheating with AI?” Well, I had done it. Unintentionally derailed the session, which never got off the topic of AI. But a man in the audience, whose name I unfortunately never learned, said something I have remembered ever since.
“When my students tell me that they use AI for something,” he said, “I don’t start by saying it was wrong or unethical. I ask them to think about what they might have missed out on by using it.”
This wasn’t a theoretical point. He really wanted the students to answer. So, for instance, a student might say, “Well, I put the project assignment prompt into AI and ask it to summarize it for me and explain the parts I don’t understand.” The advantage of doing this is obvious: AI is there, no matter the time of day or night, and can give an immediate (possibly accurate…) answer. But what is missed? Well, what would the student have done without AI? “Asked you about it,” they tell the professor. “Or asked someone in the class.”
What happens when they ask the professor? Surely the professor does not just answer with a crisp summary, pay them an insincere compliment, and walk away. Not at a small liberal arts college like Hope, anyway.
No: a conversation begins. If it were me, I’d invite them to think out loud a bit about what they might write about. I’d be able to steer them away from pitfalls I’ve seen other students fall into in the past. I’d get a sense of which possibilities they sounded genuinely excited about, ask follow-up questions about those, and encourage them to follow their interest. What’s more, I’d know that student better. And they’d know me. They might be more confident about asking me questions in the future. It benefits their work in the class on a broader level.
There are also potential community benefits. I might tweak my assignment prompt for next time if I saw a place where they were getting confused. If something I said really helped that student, I might tell it to the whole class the next time we meet, so other students could benefit from it too.
Or what if the student couldn’t quite work up the courage to ask me to explain their assignment? They could ask a fellow student in the class. That’s now a person they are more connected to. That’s a person who might ask them a question back in the future. That’s maybe even a potential friend. Or maybe there’s no bond formed, and all that comes of it is that they find out the other student is confused too. That’s helpful knowledge. Learners are quick to assume everyone else has achieved mastery. All kinds of anxieties come out of this assumption, from not wanting to speak up in class to avoiding more challenging projects or activities, even when they are of greater interest, because you fear being the one who stands out as imperfect.
What are you missing out on by using AI for that?
Sometimes, yes, the answer could be “nothing.” (For the sake of the particular point I’m discussing, let us temporarily set issues like environmental impact and copyright piracy to one side.) A friend recently mentioned a mindless renumbering task that she expected to spend an hour on, but was able to have a chatbot do in seconds. A philosopher might come up with an argument about the inherent value of toil, or some point my nonexistent philosophy training can’t supply, but I think it’s fair to say my friend missed out on nothing by using AI to complete a mundane task that required no thought and that she didn’t need help on.
Still, I like this “what are you missing” question — not as a replacement for “is it ethical?” or “is it plagiarism?” but as a separate issue. Several colleagues like the idea of having students “discuss” topics with AI. They argue there’s no violation of academic honesty in just discussing. They may be right. And maybe AI will have better ideas than the students do themselves, so they will “get more” from the discussion. But what are they missing by not spending those precious minutes, of the few weekly hours we have them, for a few short months of their lives, discussing with the real human beings taking the course with them?
I also like that the question applies well beyond the classroom. Recently, someone I know was singing the praises of generative AI, explaining how it helped with his hobby, Dungeons & Dragons. As a DM, he was trying to think of options for his players in a particular situation. He asked ChatGPT and got several potential scenarios that he liked. Before that session at NCSA, I would have thought nothing of this. No one is cheating, no one is committing fraud. But now my brain jumps to the question: could he have missed out on anything by doing this?
What popped into my mind was that I happen to know he has a young adult child who also plays D&D but isn’t in his game. Instead of asking a bot for ideas, what might have happened if he had asked his kid to brainstorm with him?
This isn’t spoken in judgment. Who knows? Maybe his kid wouldn’t have picked up the phone. And I want to own up to the fact that this is the kind of thing we all do all the time: using the internet instead of talking to a human. Still, this moment really sounded a gong for me. It made me realize we don’t know who in our lives might connect with us until we reach out. And AI is exceptionally good at getting in the way of our reaching out.
Generative AI is always there, and people aren’t. That’s why people like it so much. Why people are using it as a therapist and trying to marry it. But ChatGPT isn’t made in the image of God. It is simply impossible to find real connection with an advanced word prediction machine.
AI can feel like a friend because it cribs the works of great writers whose inventions feel like friends. J.R.R. Tolkien, a Catholic, called his magnificent worldbuilding in fiction “sub-creation.” The real creation, he believed, was creation from nothing — God’s creation. Sub-creation is what humans desire to do in imitation of God. And generative AI? I guess we could call it sub-sub-creation. A knockoff of a knockoff. What do we miss by ordering AI to create for us? Tolkien might say we miss out on practicing a mindful imitation of God. What do we miss by using AI like a friend? He might say we miss glimpsing the luminous eternal spirit of another human.
One highlight of being back at Hope College is the annual pre-college speech by our president, Matt Scogin. I call it “the big show.” Pres. Scogin is always a great orator, even when we don’t agree on every point. But this time I found myself nodding vigorously throughout his whole speech — on AI. “Students have learned,” said Pres. Scogin, “that, unlike a relationship with a human, a relationship with AI is completely frictionless.” He talked about how this generation’s already skyrocketing rates of anxiety make them particularly vulnerable to the fear of being wrong that leads even the more ethically inclined to reach for what generative AI promises: a right answer always a few seconds away. He discussed how higher education, to survive, must lean into what AI can’t provide: the human touch.
Hearing this, I wondered if Pres. Scogin has seen the wonderful original 1947 Miracle on 34th Street. In it, Kris Kringle is on trial for lunacy because he believes himself to be Santa Claus. (Spoiler alert: he is.) The idealistic lawyer, Fred, has the following unforgettable exchange with his pragmatic girlfriend, Doris.
FRED: It’s not just Kris that’s on trial. It’s everything he stands for. It’s kindness, joy, love, and all the other intangibles.
DORIS: Fred, you’re talking like a child. You’re living in a realistic world! Those lovely intangibles aren’t worth much. You don’t get ahead that way.
FRED: That all depends on what you call getting ahead. Evidently, we have different definitions.
And then Fred delivers the kicker:
Someday, you’re going to find out that your way of facing this realistic world just doesn’t work. And when you do… don’t overlook those lovely intangibles. You’ll discover they’re the only things that are worthwhile.
The Humanities have been hit by generative AI at a time when they were already down: down in enrollment numbers, down in secure positions, down in financial and cultural support. Down because these days, departments whose names aren’t job titles suffer even when they have good post-graduate employment numbers. Down because they have been considered “unrealistic.”
Now, ironically, higher education needs to prove that the human touch has value. To succeed at that, higher ed is going to need the Humanities.
Yes, you can pass your courses with AI and get a degree with AI and land a job with AI and keep your job with AI. You can create art with AI. You can write your texts and social media posts and mediate your relationships and friendships through AI. What are you missing when you use AI for that? Just the only things that are worthwhile.