Writing is Hard Because Thinking is Hard: Why AI Can Never Replace Human Educators
By: Christina Bieber Lake
Christina Bieber Lake is professor of English emerita at Wheaton College. She is author most recently of Beyond the Story: American Literary Fiction and the Limits of Materialism. She writes weekly essays on reading literature for spiritual transformation at Art & Soul, where she also leads an online book group. She frequently leads faculty development seminars and retreats.
After more than thirty years of teaching English in the ever-changing landscape of higher education, I am completely convinced that one thing will never change. Teaching writing will always be the most important and challenging work an educator can do, and AI will never be able to do it.
That’s not to say that educators teach writing the way we should, because, let’s face it, we haven’t always done so—especially at the lower levels. Too many teachers are pressured to teach to the test: the AP exam that looks for a five-paragraph essay with a thesis that says something. Believe me, I understand. I taught high school English myself way back when, and even before smartphones, Covid, and AI, we were elated to receive a single paper that sustained half an original thought organized in any discernible way. It was with good reason that we held our department meetings at the Tropicana, a tiny bar in the one-light town of Cade, Louisiana. Those were the days.
When I started teaching first-year college writing, I learned why the jump from high school to college is—and should be—large. College teachers, no matter the discipline, are supposed to demand real thinking of their students, not just the production of an intelligible essay or the regurgitation of the contents of the periodic table. The only way to learn how to think is to learn how to write. The only way to learn how to write is to read—a lot. And then write—a lot. And finally, revise until the cows come home and you begin to truly think, and can eliminate all the tired thoughts that live in clichés like “until the cows come home.” When I discovered this truth, I was able to distill my first-year writing classes into one mantra that I repeated until my students said it as soon as they saw me: “Writing is hard because thinking is hard.”
AI will never replace human educators because it cannot teach thinking. AI cannot teach thinking because AI cannot think. This common confusion—that what AI is doing can be called thinking—is causing most of our pain at this crucial juncture in higher education. Clearing up this confusion is more important than it seems. Simply telling students not to use AI without explaining why feels to them like being told to wash dishes by hand when the dishwasher is right there. Understandably, it makes no sense to them.
Why Claude is not really thinking
Arguing that what AI does cannot be called thinking is more complicated than it should be. To do it properly, I would need to draw from several disciplines, including linguistics, philosophy of mind, cognitive neuroscience, and computer science. Obviously, I can’t do that here.
What I can do is borrow from Brian Cantwell Smith in his book The Promise of Artificial Intelligence. Smith argues that machine learning has not yet approached the complexity of human learning. Thinking, it turns out, is not what Descartes thought it was—the mind’s grasp of “clear and distinct” ideas, as if clarity were “out there” in the form of discrete facts of which we make simple representations in our minds, like photographs. Recent research in cognitive neuroscience has shown that it’s not that simple. For one thing, the world “out there” is so complex that it always exceeds any attempt to understand it. To recognize anything at all, an intelligence must impose schemas on the facts in view, and those schemas are always based on human judgments of value and significance. Those judgments—especially the judgments about what to attend to and what to ignore—create a context that, by definition, cannot originate within AI.
This means that an AI can only ever show you connections that some human being has already made and deemed to be meaningful. Without the original human input, any new connections an AI makes can only be guesses, shots in the dark. AI is not really thinking. It is gathering information.

What it means to think
So what is thinking? And what separates human learning from machine learning? Here are three aspects of thinking that can help us understand the difference.
1. Thinking requires personal context.
The theorist Wolfgang Iser used a helpful metaphor. He reminded us that literary interpretation works like the creation of constellations in the night sky. The stars represent the facts in the text; the constellations are the reader’s interpretation of those facts based on what she finds meaningful. There are many (though not infinitely many) valid interpretations, but the question of validity cannot be answered from within the text (the sky) itself. It requires outside context.
In the case of “reading the sky” to name constellations, the required context is what human beings find to be meaningful or memorable. So the ancients drew on mythology to “read” Orion, Cassiopeia, and Ursa Major into the sky. The facts of the stars themselves do not and cannot give this information. With literary texts, the context for interpretation is more complicated but no less constructed by human desires, insights, and longings. The human reader of, say, The Great Gatsby takes most of that context for granted, including the classroom in which the text was assigned, the fact of the novel’s canonicity (someone else thought it important), the historical time in which it is set, and much more. All these specific choices are made by humans and must be fed into a machine for it to even understand what it’s talking about.
2. Thinking requires value judgments.
I call this having “skin in the game.” An AI cannot know what you value, why you value it, or why you are writing what you are writing (unless you tell it). Value judgments separate us from machines. They are also part of what separates good writing from poor writing—good writers figure out what they think and why it matters during the writing process itself; bad writers deliver what they already know (or got from AI) to get a grade. Bad writers are only ever jumping through hoops. Writing with AI is not like using a calculator to perform a sum or to solve a simple equation. To solve a simple equation, a computer can iterate through candidate values, refining its guesses until the one variable is pinned down. In contrast, a piece of writing has almost infinitely many variables in play, especially if the student is trying to interpret a literary text or solve what interdisciplinary studies calls a wicked problem—a fundamentally irresolvable one like “How can we attain world peace?” Problems like these are often less immediately practical than they are fundamentally connected to the human experience.
Writing, then, involves selecting from among thousands of different approaches, and the selection always involves value judgments. Thinking about a literary text (for example) always starts with why (its meaning for us, why we should read it) and ends with how (analysis). The student of literature is not trying to impress people at a cocktail party with her knowledge of when The Great Gatsby was written and what F. Scott and Zelda had going on at the time. The student of literature must discover, through her writing, what makes The Great Gatsby worth thinking about and why those thoughts are worth sharing with other readers. Why this text? Why now? What’s important about what Fitzgerald accomplished and how he accomplished it?
3. Thinking answers questions by asking new ones.
The philosopher Hans-Georg Gadamer insists that thinking requires asking questions to which we don’t know the answer. Open questions drive thinking, which always creates more questions. “The art of questioning is the art of questioning even further—i.e., the art of thinking,” he writes. Furthermore, thinking is always about learning how to ask better questions, since answers are necessarily limited by the question being asked.
When I teach this to my students, I ask them to consider a man who is trying to end his pornography addiction. The man can begin by asking: “How can I mortify my flesh and stop using pornography?” But this will only give him answers that involve methods of self-discipline and restraint. If he learns to recognize that a better question yields a better answer, the question might become: “How can I learn to love God so much that my desire for pornography pales in comparison?” This question, which carries different values, assumptions, and beliefs, will lead him down a very different path. An AI can suggest questions the man might ask, but it can’t (at least for now) help him recognize the difference between these two questions, or why or how to value one over the other. The process of thinking and writing itself is essential to that recognition and, ultimately, to the self-discovery the man desires. There is no shortcut.
Put together, these three aspects of thinking reveal that reliance on AI—especially among Gen Z and younger—will increasingly dupe students into believing that they are thinking when they are doing nothing of the sort. When it comes to writing, the mistake we make—that AI can think—is the result of an earlier confusion that is propagating like a weed right now. The confusion is this: we say “thinking” or “writing” when what we mean is “research” or “information gathering.” Our culture increasingly reinforces the idea that we read primarily for information (rather than for understanding or transformation), that solving problems means choosing the best information, and that writing means “producing content.” First-year writing students increasingly believe that when professors require a research paper, they are asking them to find information, process it, and regurgitate it in some inoffensive form. But that is not writing because it is not thinking. And now, thanks to Claude and ChatGPT and Gemini, we know exactly why. Shame on us for passing papers that could have been written by an AI even before the age of AI.
Writing is hard because thinking is hard. You can, and should, instruct a computer to perform a series of computations or collect some research on a topic just like you can, and should, put dirty dishes in a dishwasher. By all means, call on Claude to do your grunt work if your grunt work involves responding to emails requesting information, summarizing responses to a survey, or determining whether your team should go for it on fourth down. But I hate (and love) to be the one to break it to you: thinking is just not that kind of work.
The Raised Hand is a project of the Consortium of Christian Study Centers and serves its mission to catalyze and empower thoughtful Christian presence and practice at colleges and universities around the world, in service of the common good. To learn more visit cscmovement.org.