Artificial, Not Intelligent: How Meeting Educational Goals Requires Embracing our Humanity
By: Paul Gestwicki
Paul Gestwicki is a Professor of Computer Science at Ball State University in Muncie, Indiana. He is an award-winning game designer and regularly teaches in his department's Game Design & Development concentration. Dr. Gestwicki's research lies at the intersection of agile software development, games, and education.
Each semester, a few of my students earn the Detox achievement by going 24 hours without using screen-based devices. Their written reflections have been delightful. One student described a family road trip in which everyone talked and laughed instead of staring at screens. Another described how, without his headphones, he was able to really listen to the birds on campus. One more described how his frat buddies encouraged him to have a beer, watch a movie, and lie about the whole thing in his reflection. My feedback complimented him on his honesty.
I treasured reading students’ Detox reflections until last spring, when I was struck by how many were painfully dull, especially those submitted late in the semester. It was the blandness of my own responses that alerted me: instead of reacting to interesting particulars, I was writing, “I am glad you tried it.” Not until the semester ended did I realize that these uninspiring, platitudinous stories, with their clinically perfect spelling and grammar, had to have been AI-generated. My heart fell.
What does it mean that a student would use AI on an assignment about rejecting technology? Many students see no issue with submitting AI-generated work as their own, arguing that it is just a natural evolution of spellcheck or grammar correction—or they consider it deceitful and do it anyway in the name of expediency. This mistakes the output for the outcome, as if the goal of an essay were its submission and not its writing. It is the same confusion that leads legislators to think that awarding more degrees means that the population is better educated.
Education is about more than certifying participation. Since ancient times, it has been understood that the aims of education are the formation of character and the good of society. Pope Paul VI echoed this in Gravissimum Educationis, where he described education as forming a person in the pursuit of their ultimate end and preparing them to carry the responsibilities of adulthood. Fulfilling our responsibilities requires both general knowledge and disciplinary skill, but our ultimate end is something far greater. It is what Aristotle called eudaimonia—a life of purpose and excellence, pursuing the greatest good. It is what Christians call holiness.
Students have always looked for shortcuts, and in a moment of stress, one might be tempted to submit another’s work as their own. Before the Internet, you had to know the right people. Then you had to know the right websites. Now, generative AI can do it all—no one else has to get involved. The AI chat looks like talking to a friend; it mimics the digital messaging systems that have mediated this generation’s formative interactions. This tricks us into treating the machine as if it were human, manifesting what French essayist Georges Bernanos observed: “The danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed from their childhood to desire only what machines can give.”1
Generative AI systems constantly tempt students against virtue. There is no need to persevere through difficulty if AI can instantly supply an answer, no need to reflect when AI has already suggested the next step. The AI is purely utilitarian, sycophantically supporting a user’s every whim. Interacting with it carries no risk of judgment, no need for empathy, no compromise for a better future together. It feeds the worst habits of the socially and emotionally stunted, frustrating the educational goal of character development.
Conversely, I am the human educator, and I am terrifying. I have a power beyond all machines: I have a will, and I can will the good of my students. I can congratulate them on their successes, and I can reprimand them when they should have known better. I can point them toward greater challenges, and I can pivot when they are overwhelmed. I can swell with pride at their successes, and when they fail, I can tell them about my own mistakes.
Human educators make moral judgments for the good of their students. These judgments require prudence, which is why effective teachers must be virtuous as well as knowledgeable. Flannery O’Connor understood the role of moral judgment in the humanities when she wrote, “When anybody asks what a story is about, the only proper thing is to tell him to read the story. The meaning of fiction is not abstract meaning but experienced meaning.”2 If you ask an AI to summarize O’Connor’s story “Good Country People,” it will do so obediently, regurgitating the abstractions it has gleaned from processing mountains of text. If you ask me to summarize the story, I will refuse, because I want you to read it yourself.
A machine cannot make moral judgments, but this does not mean that it is morally neutral. All technology reflects the culture that created it and impacts the moral decision space. Computing technologies embed the values of homogenization and reductionism, replacing wholes with their parts. When using a computer, everything and everyone become data to be processed. Modern generative AI systems are the culmination of this tendency, dissolving human achievements into mathematical abstractions. They cannot witness the beauty of a painting; they only know that it is made of pixels. Human educators must understand this dehumanization so that we can teach students to distinguish between what they can do and what they should do with AI technology.

The corporations behind this technology are also not neutral. They are engaged in a concerted effort to win minds and markets, and their systems are designed for habituation. They carry no legal liability for the real and imminent risks their systems incur. These risks include widespread disinformation, large-scale manipulation, and systemic human rights violations, according to the United Nations’ Global Call for AI Red Lines. Consider, for example, the recent MIT study that showed that chatbots outperform political ads at influencing political opinions. This will not be ignored by those with totalitarian inclinations.
Artificial intelligence systems are unlike anything in the natural world, and they defy explanation without resorting to computer science jargon. Even the term “artificial intelligence” itself is fraught with complications. It refers to systems that are artificial but not intelligent. We call them “intelligent” because they can do some things that a human can do—not because they think, as the metaphor implies. Conflating literal intelligence with metaphorical intelligence is not a new problem for AI developers. In 1976, Drew McDermott addressed this problem in a brilliantly titled article, “Artificial intelligence meets natural stupidity.” He critiqued the self-deception of AI researchers who confused the metaphor for the actual phenomenon, and this confusion has only compounded fifty years later. Pundits talk about “machine learning” without acknowledging that the term is just a convenient label for complex algorithmic processes. Any similarity to human learning is superficial. Meanwhile, an underperforming AI system can be reset, but a human who reads AI-generated nonsense cannot un-learn it.
Generative AI systems generate output by exploiting statistical relationships in their training data and by following heuristics provided by their programmers. They appear intelligent only because human intelligence created their training data, human intelligence programmed them, and human intelligence interprets their outputs. We notice when their output does not match external reality and call these “hallucinations,” but this mistakes the locus of the error. Nothing a generative AI produces has any intrinsic relationship to reality. The AI merely reports the output of a nondeterministic computational transformation. When a human thinks the machine recognizes truth or exercises wisdom, it is the human who hallucinates.
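To make this concrete, a toy word-level Markov sampler shows how fluent-seeming text can arise purely from frequency counts over training data, with no model of truth anywhere in the process. This is a deliberately simplified sketch for illustration, not how production systems are built; the corpus and function names are invented:

```python
import random

# Toy "training data": the only knowledge the model will ever have.
corpus = ("education forms character and character forms society "
          "and society forms education").split()

# Record which words follow which: a table of statistical relationships.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Emit words by sampling likely successors; no meaning, only frequency."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        # Pick any recorded successor; fall back to the corpus at dead ends.
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate("education", 8))
```

The output is grammatical-looking because the training text was, not because the sampler understands anything; nothing in the loop checks its words against reality.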
Well-meaning people are falling for AI hype without considering its implications. Misunderstandings about AI technology are promoted by bad metaphors and billion-dollar marketing efforts. I fear its most lasting impact will be in the hearts of the next generation: the amateur artist gives up after comparing their attempts to effortless machine output; the nascent writer settles for algorithmic text and never feels the texture of language. These young people are advancing through the formal education pipeline, but they will require increasing remediation if they are to shoulder their adult responsibilities and pursue their ultimate end.
Education succeeds when students grow in knowledge and virtue, and the human educator is both a guide and guardian of this process. The traditional aims of education are not served by a technology that is ambivalent about truth and encourages vice. We improve education by being more human, not by putting more technology between us. Only by embracing our humanity can we give students something that a machine cannot give: a glimpse of eudaimonia.
The Raised Hand is a project of the Consortium of Christian Study Centers and serves its mission to catalyze and empower thoughtful Christian presence and practice at colleges and universities around the world, in service of the common good. To learn more visit cscmovement.org.
Quoted in Dicastery for the Doctrine of the Faith, Antiqua et nova: Note on the relationship between artificial intelligence and human intelligence (28 January 2025), The Holy See, para. 112.
Flannery O’Connor, Mystery and Manners: Occasional Prose (Farrar, Straus and Giroux, 1969), 96.


