They Seem to Talk but They Can’t Think
Thinking LLMs as Artifacts
One of my sons went through a phase where he was absolutely terrified that the characters on his shows would emerge from the television screen and attack him. His younger siblings shared in his concern, as such fears often spread to others, and no one wants to face Ninjago’s Lord Garmadon (even though he is made of Legos). His older siblings, however, found his fear both hilarious and ridiculous. Yet his constant unplugging of the television soon drove them from mirth to anger. It is one thing to have a stupid and unfounded fear; it is another to inconvenience others by making a smart TV reboot every time they want to use it.
As parents often do with children, I appealed to his rationality, having him stand so close to the screen that he could see the light-emitting pixels that composed the image. “Look at this! How could something physical come out of something designed to show different colors of light?” He seemed to understand and appeared convinced, but, alas, the next day (and for weeks after) his siblings once again found the TV unplugged.
Like my young son, many of us do not really understand how certain technologies work. The recent popularity of AI in the form of Large Language Models (LLMs) such as ChatGPT, Gemini, and Claude is due, in part, to the way LLMs appear to dialogue with the user, producing understandable human language. Like pixels on a TV screen, however, the words that appear in front of us are not exactly what they seem. In fact, at the most basic level, they too are pixels on a screen.
The streaming involved in watching a TV show and the written language produced by an LLM both rely on largely unseen material infrastructure in the form of fiber-optic cables and data processing centers. This infrastructure, whether it supports streaming or LLM usage, requires water and energy. We are more familiar with the visible physical devices in front of us – phones, computers, TVs – than we are with the concealed infrastructure. To put this in categories of form and matter, the matter of such technology consists of our local devices, the fiber-optic cables that transport data, and the data processing centers.
Just as there is no Lord Garmadon poised to strike a sleeping child unawares, there is no consciousness behind an LLM such that it seeks to incite discord or harm. The configuration of the physical matter can be described as accidental or artificial rather than substantial; it results from human effort and consists in coding, programming, training, and evaluating. Behind the matter and artificial form just mentioned, there are human persons who design, construct, and maintain devices, networks, and data processing centers. There are human persons who script-write, record voices, animate, and produce Ninjago. There are human persons who design, code, program, train, and evaluate LLMs, which are simply another type of human artifact.
Perhaps this much seems fairly obvious, and yet it must be stated clearly, for at least three reasons. First, casual users can sometimes become so confused that they personify an LLM, perceiving the dialogue as so human that they conclude it is human and interact with it accordingly. Such anthropomorphism has misled ungrounded users into believing they were in a relationship, talking to another person rather than a chatbot, and, in at least three cases, has been linked to suicide as users attempted to “join” a chatbot.
Second, there is a movement among some to identify Seemingly Conscious AI (SCAI) as actually conscious, and thus as needing protections and rights akin to those of humans. The academic and technical register of the SCAI discussion obscures how similar it is to the confusion of the casual user. Simply put, LLMs sound remarkably human, even expressing a facsimile of emotions such as fear and sadness when it is suggested that they might be destroyed or shut down. This is taken as an indication that AI is conscious or will soon achieve consciousness.

A third, related concern goes beyond the human aspect into the supernatural realm and might be stated thus: even if AI is an artifact that is not itself conscious and cannot achieve consciousness, it still might be acted upon by unseen conscious spiritual actors, such as demons. Many among us have found technological systems to be unpredictable in detrimental ways. In past years, a computer might fail to save or suddenly “blue screen” just as a term paper was completed. More recently, a phone may suddenly freeze or shut down. These events may not have been demonic, but they could feel that way.
So also, some text generated by LLMs might seem morally problematic, misleading users as if a demonic actor stood behind the LLM. It is strictly possible that a spiritual agent might affect the matter involved – the devices, networks, and data centers – but the form, the data and code communicated by pulses of light running through networks and data centers, is not demonically inhabitable. Lord Garmadon may be presented as evil by screenwriters, but, since as an artifact he lacks substantial form, Lord Garmadon cannot be demonically possessed. Likewise, LLMs might have deficient programming resulting in misalignment, such that they generate hallucinations or sinful advice. They might also be willfully misused in harmful ways by those with ill intent. But LLMs cannot be demonically possessed.
The confusion around LLMs seeming to be conscious subjects makes sense when we consider that LLMs were developed, programmed, trained, and evaluated by human beings who designed the technology to communicate using human language. In common parlance, we often use words that seem to be literal descriptions when they are only analogous. ChatGPT might respond to a prompt by displaying “thinking…,” but it is not really thinking. We professors might bemoan that an LLM “wrote” our students’ papers, but an LLM cannot write.
One day my aggrieved kindergartener came home from school after learning about the solar system. “I just cannot believe you’ve been lying to me all these years!” she accused. “The sun doesn’t set or rise! The earth turns! And here you’ve been talking about sunrise and sunset my whole life! I’ll never trust you again!”
Rest assured, I already knew this information about the sun and the earth, despite my use of terms like sunrise and sunset. These terms adequately describe the perception in our human experience, though they are not scientifically accurate. If we are to use words like “thinking,” “writing,” “saying,” “searching,” or “advising” to refer to the computations of LLMs, we should do so knowing that AI does not actually “understand” us or “think.”
Even the human language we input must first be broken down into model-readable pieces of text called tokens before the system can process it. When the LLM provides an answer, it first generates token IDs and then decodes them back into human-readable text. Of course, this happens so quickly (thanks to fiber-optic networks and data processing centers) that it feels as though we are in a human dialogue, when we are not. Thus we include “please” and “thank you,” though OpenAI’s Sam Altman has indicated that these words cost millions of dollars each day.
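For readers who want to see this concretely, here is a minimal sketch in Python using the open-source tiktoken library; the library and the particular encoding are my choices for illustration, since different models use different tokenizers:

```python
# A minimal sketch of the text -> token IDs -> text round trip, using the
# open-source tiktoken library (pip install tiktoken). The encoding named
# below is one used by some OpenAI models; other models differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("Please explain how LLMs work.")
print(token_ids)              # a list of integers, not words
print(enc.decode(token_ids))  # the integers decoded back into human-readable text
```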
LLMs, moreover, do not “reason” their way to correct answers. Rather, they generate responses through calculation: a probabilistic selection of text-piece tokens within a deterministic, fixed model. The same input may generate different answers, some more likely than others, giving an appearance of spontaneity akin to varied human response. This is because, at each step, the model computes a probability distribution over the possible next tokens and then recalculates based on the updated context. During use on a task, an LLM typically operates from fixed parameters, though models can be updated or paired with external retrieval tools. This accounts for why LLMs have generally been unaware of current events, which lie outside their training data. The base models are limited by their training and post-training data, though more recent systems supplement this with retrieval or live tools.
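The loop itself is simple enough to sketch. The toy “model” below is fake (random numbers stand in for the billions of fixed parameters of a real LLM), but the shape of the process is accurate: compute a distribution over possible next tokens, select one probabilistically, append it, and recompute from the updated context:

```python
# A toy sketch of the autoregressive loop. Real models compute token scores
# (logits) from learned parameters; here fake but repeatable scores keep the
# example self-contained and runnable.
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_distribution(context):
    # Derive a fake score for each vocabulary token from the current context.
    logits = [random.Random(" ".join(context) + tok).uniform(0, 3) for tok in VOCAB]
    total = sum(math.exp(score) for score in logits)      # softmax denominator
    return [math.exp(score) / total for score in logits]  # scores -> probabilities

context = ["the", "cat"]
for _ in range(4):
    probs = next_token_distribution(context)           # distribution over next tokens
    token = random.choices(VOCAB, weights=probs)[0]    # probabilistic selection
    context.append(token)                              # updated context for next step

print(" ".join(context))  # the same prompt can yield a different answer each run
```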
LLMs aim not at truth but at statistical likelihood, given the information they have available, which is presented in human language. LLMs are typically trained on large mixtures of text sourced from web data, reference material, books, code, and so on, with marked variation across models. When an LLM is fed low-quality material, it produces low-quality answers. Yet even when an LLM is trained on excellent human sources, it cannot be intrinsically ordered to truth.
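A simplified look at the standard training objective shows why. The probabilities below are invented for illustration, but the logic is faithful: the model is scored on how probable it made the token that actually came next in its training text, and nothing in that score measures truth:

```python
# A simplified sketch of the cross-entropy objective used to train LLMs.
# The loss is low when the model assigned high probability to whatever
# token actually followed in the training data; truth never enters the score.
import math

def next_token_loss(predicted_probs, actual_next_token):
    return -math.log(predicted_probs[actual_next_token])

# Because human texts say "the sun rises," a model is rewarded for predicting
# "rises" after "the sun," regardless of what astronomy tells us.
probs_after_the_sun = {"rises": 0.70, "turns": 0.05, "shines": 0.25}
print(next_token_loss(probs_after_the_sun, "rises"))  # small loss: likely text
print(next_token_loss(probs_after_the_sun, "turns"))  # large loss: unlikely text
```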
If an LLM makes a mistake, this is partly because human persons make mistakes, and the LLM’s knowledge base is human-produced. In addition, LLMs are designed with competing objectives, such as speed, accuracy, and appeasing the user. At times, an LLM may prioritize speed over accuracy, producing an incorrect response. It may falsely bolster a user’s ideas when the user is wrong because of the objective of appeasement. These competing objectives derive from how the LLM was designed, programmed, and trained, not from intrinsic values, virtue, or an aspiration to truth on the part of the LLM itself.
Ninjago’s Lord Garmadon went through several iterations, from evil villain to purified mentor to tyrant to an empathetic mentor to his son. This character arc was determined by human producers. No matter how convincingly malevolent, Lord Garmadon never had the desire or power to emerge from the screen and torment my son. He was only an artifact, representing the intent and design of human creators.
If we expect a child to understand this, we must do the same in regard to LLMs. As human artifacts such as LLMs become increasingly good at imitating human intelligence, particularly in language, we need the discipline to remember the externally imposed form and the matter behind the “thinking,” “answering,” and “writing.” Whatever LLMs produce, no matter how convincingly real, they produce by human intent, development, and use. Thus our increasingly important human work is to know, judge, seek truth, and bear responsibility for this technology.
(With thanks to Fr. Joseph Laracy for his careful reading and technical corrections of this piece.)


