
There's No Ghost In This Machine

Why have so many intelligent adults convinced themselves that large language models are conscious? LLMs are voltage states in transistors, not thinking beings. The complexity of their output fools us because our surface-level minds cannot comprehend billions of mathematical operations.

Larry Maguire

4 February 2026

7 min read

The forthcoming paragraphs may prove controversial, or maybe they won't, depending on how invested you are in what we have come to know as "Artificial Intelligence." A machine will be only as intelligent as its definition allows, and I will argue that the definition we have assigned this technology sets the bar pretty low.

Artificial intelligence, once the preserve of science fiction, has become a waste-basket term for any technology promising utopia, dystopia, or something in between. I will argue that whatever this technology is, it is not intelligent. It is Turing's Imitation Machine, and "AI" has become a marketing term. As Federico Faggin, designer of the world's first microprocessor, has written: "Digital computers cannot have the crucial properties that characterise human intelligence." In a recent interview, Faggin insists, "the computer has no consciousness and no free will…but science is telling us we are like a computer, and this is the problem."

You train a parrot to say "Polly wants a cracker," and the parrot learns that this behaviour meets its need for food and perhaps attention. For living organisms, what is need? At a basic level, perhaps survival. Something similar is true of human behaviour. Ivan Pavlov's work on the digestive systems of dogs inadvertently gave rise to the behaviourist movement: classical and operant conditioning. Present a stimulus, evoke a response, deliver the reinforcement to embed the behaviour. No cognition or feeling states, just stimulus and mindless reaction. This finding is still utilised today to direct and manipulate populations. That said, behaviourism is an incomplete theory, and on its own it cannot account for the human condition.

"Digital computers cannot have the crucial properties that characterize human intelligence. And this is vital knowledge at a time when artificial general intelligence is considered possible by a large number of our scientists."

— Federico Faggin

Your pet parrot hardly understands what "Polly wants a cracker" means in our terms, but it has arguably formed an association. When we watch a chatbot generate a thousand words about consciousness or creativity, the most naive amongst us are inclined to assign the machine human-like qualities, as if the silicon has somehow crossed a threshold from inanimate material to a state of mind.

Can Dead Matter Develop Consciousness?

Intelligent behaviour is what consciousness does at the material level. Consciousness, in my definition, is not merely being aware of oneself. It is that which animates us, allows us to know, to possess phenomenological experience. It is an independent will to exist.

I'm not arguing that consciousness is the exclusive preserve of human beings. You can make a reasonable case that animals possess consciousness, that there is something it is like to be a cat or a dolphin. Trees, plants and other organic life respond to the environment in ways that suggest some rudimentary sense of being. But I cannot believe for a second that human beings can put sufficient volumes of matter together such that the matter develops an animal or plant-like conscious state, let alone a human one.

For many decades, a dominant view within neuroscience has held that the brain, although incredibly complex, is simply a dumb mechanism, and that consciousness, that sense of being alive, of being oneself, is merely an epiphenomenon of the brain. This idea is fundamentally materialist and has dominated our understanding of human behaviour for the past several hundred years. As such, the mind-as-machine view shaped the underlying philosophy of computer science and AI development.

Abstractions Are The Consequence, Not The Cause

At the risk of severe criticism from those in computer science fields who know more than I do on these things: the neural network, like everything in mathematics and engineering, is an abstraction, a model of reality. And from this materialist viewpoint, mind, the thing that creates abstractions, is itself a product of an abstraction. How can that which is an abstraction create the thing that creates the abstraction? It doesn't work.

The concept of a "neuron" in computer science is itself a loose analogy borrowed from biology. The implementation of that concept in silicon is another layer of abstraction removed from anything resembling actual neural activity. This compounding of abstractions is where we lose sight of what's actually happening, and where the anthropomorphic confusion takes hold.

Inside The Machine

At the physical level, a neural network runs on a gigantic binary system — ones and zeros, on and off in low-voltage DC electrical circuitry. Each transistor is a tiny switch, passing or not passing current based on the voltage at its gate. The patterns of conduction across this circuitry encode binary representations of numbers. The billions of "weights" that define an AI model are not knowledge or understanding, but vast grids of voltage states in silicon.

During inference (generation, that is, not training), these stored values are fetched from long-term storage and loaded into working memory (RAM), where they exist as charge states ready to be used in calculations. Nothing alive, no doer doing, no intelligence even remotely comparable to a human's, or any at all. Your input text, converted to numbers, is multiplied by the weight values at a given "node" in the network, the results are summed, and the output passes to the next layer. This is matrix multiplication, nothing more than multiply-and-add operations executed billions of times in parallel circuits. All algorithm and mathematics.
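The multiply-and-add operation described above can be sketched in a few lines. This is a hypothetical toy layer, not any real model's code; the sizes and values are invented purely for illustration, and real models chain thousands of such layers with billions of weights.

```python
import numpy as np

# A made-up grid of "weights": just numbers, nothing more.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))  # 4 inputs feeding 3 outputs
bias = np.zeros(3)

# Input text would first be converted to numbers (a vector).
x = np.array([0.5, -1.0, 0.25, 2.0])

# One layer's forward pass: multiply-and-add, then a simple nonlinearity.
summed = x @ weights + bias        # matrix multiplication
output = np.maximum(summed, 0.0)   # ReLU: keep only positive values

print(output.shape)  # (3,) — three numbers, passed on to the next layer
```

Every layer of an LLM does essentially this, only at vastly greater scale; nothing in the loop resembles knowing or understanding.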

"I think we mistake the simulation for the thing simulated when it comes to consciousness… I don't think silicon computers will ever have private conscious experience."

— Bernardo Kastrup

Why We So Easily Fool Ourselves

LLMs are a remarkable feat of science and engineering, loosely analogous to how the brain works. So fast, so realistic in output, and apparently so clever. But what exactly is clever here? The people who built them are clever; the LLM itself is not. Just abstraction and mathematics at incredible speed.

There's no understanding, no intent, no social context, no model of the world in any meaningful sense — just very sophisticated pattern matching "learned" from data. It is a statistical predictive machine that has been trained to copy language behaviour, imagery, mathematics, and what human emotion looks like in text form. It is parallel DC electrical circuitry on a vast scale and at very high speed, and the reason we believe it is "intelligent" is because of the scale and complexity of its output. It resembles human output.
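The "statistical predictive machine" point can be made concrete with the crudest possible version of the idea: a bigram model that predicts the next word purely from counts. The training text here is made up for illustration; real LLMs are incomparably larger and use learned weights rather than raw counts, but the principle of "most likely continuation, given the data" is the same.

```python
from collections import Counter, defaultdict

# Invented training text: frequency is all the model will ever "know".
text = "polly wants a cracker polly wants attention polly wants a cracker"
words = text.split()

# Count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # "Prediction" is just: what most often followed this word in the data?
    return counts[word].most_common(1)[0][0]

print(predict("wants"))  # "a" — it followed "wants" twice, "attention" once
print(predict("polly"))  # "wants"
```

No meaning enters at any point; the output mimics the statistics of the input, which is precisely the argument being made about language models writ large.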

Psychopaths mimic human emotion. They don't feel it. Remember that.

The Mystery Is Where You Are

When you watch AI agents "having conversations with themselves" in some arrangement of API calls, you're watching an elaborate puppet show. The agent doesn't know it exists. It has no goal beyond predicting the next token. In fact, it doesn't have goals at all; it only seems to.

When it says "I think" or "I believe" or "Let me help you with that," it's producing the statistically likely next sequence of tokens given the preceding context and its training. The words mean nothing to it because it doesn't know language, only electrical signals. It doesn't even know its own code. What appears to be self-reflection is just more pattern matching.

If you are a materialist, if you believe that whatever you are is simply an epiphenomenon of your own brain, you will insist that the puppet show is real. But LLMs don't have nervous systems, homeostatic drives, developmental histories, or bodies. What they have is statistics and compute — mathematical operations executing on electrical hardware, voltage states in transistors, numbers being multiplied and summed at extraordinary scale. The "understanding" exists only in our interpretation of the outputs, not in the process that generates them.

These machines are not intelligent as far as I'm concerned. Humans, animals, plant life, this living planet — that is intelligent. Stay vigilant, and be careful of who and what you trust.


Larry G. Maguire

Work & Business Psychologist | AI Trainer

MSc. Org Psych., BA Psych., M.Ps.S.I., M.A.C., R.Q.T.U

Larry G. Maguire is a Work & Business Psychologist and AI trainer who helps professionals and organisations develop the skills they need to integrate AI in the workplace effectively. Drawing on over two decades in electronic systems integration, business ownership and studies in human performance and organisational behaviour, he operates in the space where technology meets people. He is a lecturer in organisational psychology and a career & business coach with offices in Dublin 2.
