LLMs on Turing Machine Architectures Cannot Be Conscious
1. The Necessary Condition: The Intrinsic Difference Condition
Claim (Intrinsic Difference Condition): For a system to have conscious experience, there must be state differences that exist for the system itself: that is, differences that are intrinsic to the system’s physical reality, not merely distinctions imposed by an external observer.
Why this matters:
Experience is something that happens to a system. A person who is in pain is aware of their pain independently of an external observer labeling it for them. Some distinctions must matter to the system itself, not just to us. If there is no fact of the matter (for the system) about which state it is in, then there is no fact of the matter about what it is experiencing. “No fact of the matter about experience” is equivalent to “no experience”.
A useful way to make this requirement precise is the following: if the same total physical state of a system is compatible with multiple, incompatible attributions of conscious experience, without any physical differences, then there is no fact of the matter about what the system is experiencing. Consciousness cannot depend on distinctions that float free of the system’s physical reality.
This is a necessary, not sufficient, condition. Many systems may satisfy this condition without being conscious. The argument does not claim to solve the hard problem of consciousness; it identifies one of many structural hurdles a system must clear in order to plausibly be considered conscious. Crucially, this hurdle rules out the class of LLMs implemented on Turing machine architectures.
Roadmap
The argument proceeds as follows: First, I clarify what makes a state distinction “intrinsic” rather than interpretation-dependent (Section 2). Then I show why Turing machines (TMs) fail to have intrinsic computational states (Section 3), with particular attention to why computational role and causal structure cannot ground intrinsic differences in TM architectures. I explain why this applies to all TM implementations (Section 4), contrast this with biological brains (Section 5), and consider simple physical systems (Section 6). Next, I address common objections (Section 7). The formal argument is presented in Section 8, followed by a discussion of what is lost if one rejects intrinsic consciousness (Section 9).
2. What Makes a State Distinction “Intrinsic”
A state distinction is intrinsic (it exists for the system itself) when it is physically real, rather than an interpretation imposed on a physically real substrate.
By “physically real,” I mean:
The distinction corresponds to objective physical properties or configurations
It exists independent of any observer’s description or interpretive framework
It cannot be arbitrarily relabeled, regrouped, or re-encoded while preserving the same total physical facts
A simple test: if one can maintain identical physics while swapping what the states “mean”, then the distinction is interpretation-dependent, not intrinsic.
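To make the test concrete, here is a minimal sketch in Python (the substrate states, interpretation maps, and state labels are all hypothetical): a fixed physical transition rule is run once, and two incompatible interpretive mappings are laid over the very same trajectory. The physics never changes; only the overlaid labels do.

```python
# Minimal illustration of the relabeling test (hypothetical states and mappings).
# The "physics": a fixed transition rule over substrate states, with no reference
# to any computational labels.
PHYSICS = {"low_voltage": "high_voltage", "high_voltage": "low_voltage"}

def run_physics(state, steps):
    """Evolve the substrate; note that no interpretation appears anywhere here."""
    trajectory = [state]
    for _ in range(steps):
        state = PHYSICS[state]
        trajectory.append(state)
    return trajectory

# Two incompatible interpretive mappings laid over the same substrate states.
INTERPRETATION_A = {"low_voltage": "Q7 (program P: counter at 0)",
                    "high_voltage": "Q8 (program P: counter at 1)"}
INTERPRETATION_B = {"low_voltage": "Q942 (program P': flag set)",
                    "high_voltage": "Q941 (program P': flag cleared)"}

trajectory = run_physics("low_voltage", steps=4)
for substrate_state in trajectory:
    print(substrate_state,
          "| reading A:", INTERPRETATION_A[substrate_state],
          "| reading B:", INTERPRETATION_B[substrate_state])
# The substrate column is identical under both readings; only the overlaid labels
# differ, so the A/B distinction is not intrinsic to the system.
```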
It is important to emphasize that this argument concerns ontology, not semantics. The issue is not whether we can describe a system in different ways, or whether different explanatory vocabularies are useful. Multiple descriptions of the same physical reality are unproblematic. The issue is whether there is an objective, stance-independent fact about which state the system itself is in. Conscious experience must be grounded in facts, not merely in descriptions.
Why intrinsic differences must be physically grounded
An intrinsic difference is one that exists for the system itself, not merely for an external observer. For a difference to exist for a system, it must be capable of making a difference to the system’s own internal causal dynamics: how it evolves, responds, and interacts with itself. The only differences that can play such a role are differences in the system’s physical organization. A system has no access to external interpretations, semantic mappings, or observer-relative descriptions; it is sensitive only to its own physical states and transitions. Therefore, if two putative states differ only in how they are described or interpreted, but not in any physical respect, then there is no intrinsic difference for the system. Any difference in experience, if it exists at all, must be grounded in a corresponding physical difference.
3. Why Turing Machines Fail the Intrinsic Difference Condition
3.1 The Structure of Turing Machine Implementation
A Turing machine, by definition:
Is specified by formal states and transition rules
Is implemented on some physical substrate (transistors, gears, optical components, water pipes, etc.)
Has no physically privileged mapping between substrate states and TM states
Every physical fact about a TM implementation is compatible with infinitely many incompatible TM descriptions.
3.2 Two Levels: Substrate vs. Computation
Substrate level (physically real):
Voltage patterns in transistors
Positions of gears
Flow of water or light
Timing and causal interactions
These states are physically real and causally efficacious.
Computational level (interpretation-dependent):
“The system is in state A”
“The system is computing X”
“The system represents Y”
These descriptions depend on an interpretive mapping from physical states to formal states.
What makes something an interpretation rather than a physical property?
The key distinction lies in supervenience relations. Even properties we typically think of as “high-level” physical properties, such as temperature, pressure, or elasticity, are not interpretation-dependent in the way that computational states are.
Temperature is, roughly, mean molecular kinetic energy; it supervenes on physical arrangements with nomological necessity. Given the complete physical state, temperature is fixed. There is no additional interpretive step needed. The same physical state cannot simultaneously realize both high temperature and low temperature under different interpretations.
But “being in state Q7 of program P” does not supervene on the physical substrate in this way. The same physical arrangement (the same voltages, the same transistor states, the same causal structure) can realize infinitely many different computational states under different interpretive mappings. One mapping might say the system is in state Q7; another might say it’s in state Q942; a third might say it’s not running program P at all, but some entirely different computation. All these interpretations are compatible with identical physics. Generally, some interpretation exists in the programmer’s mind, or in the user’s mind (through a display), but these interpretations are not intrinsic to the physical configuration that implements it.
This is the crucial asymmetry: physical properties like temperature are determined by physics alone, while computational properties require both physics and an interpretive scheme that maps physical states to formal computational states.
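One way to picture the asymmetry is as a difference in what each kind of property needs as input. The sketch below (Python; the toy numbers, the crude kinetic-energy proxy, and the mapping names are all my own illustrative choices) treats a temperature-like quantity as a function of the microstate alone, while a computational-state attribution additionally requires a freely chosen mapping.

```python
# Sketch of the asymmetry: "temperature" is fixed by the microstate alone, while a
# computational-state attribution needs an extra, freely chosen mapping.
# Numbers and mapping names are illustrative only.
microstate = {"particle_speeds": [1.2, 0.8, 1.1, 0.9],   # toy kinetic description
              "register_voltages": (0, 1, 1, 0)}          # toy substrate description

def temperature_like(ms):
    """A physics-only property: determined by the microstate, no mapping needed."""
    speeds = ms["particle_speeds"]
    return sum(v * v for v in speeds) / len(speeds)        # crude mean-kinetic-energy proxy

def computational_state(ms, interpretation):
    """A computational attribution: requires the microstate *and* a mapping."""
    return interpretation[ms["register_voltages"]]

mapping_1 = {(0, 1, 1, 0): "Q7 of program P"}
mapping_2 = {(0, 1, 1, 0): "Q942 of a different program"}

print(temperature_like(microstate))                # fixed by physics alone
print(computational_state(microstate, mapping_1))  # "Q7 of program P"
print(computational_state(microstate, mapping_2))  # "Q942 of a different program"
# Same microstate: one temperature-like value, two incompatible computational readings.
```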
3.3 The Fatal Gap: Computational Role Is Not Fixed by Physics
The same physical substrate evolution can be interpreted as:
Running a chess program
Running a simulation of the weather on an alien planet
Computing random or meaningless transitions
Nothing in the physics uniquely determines which interpretation is correct.
Consider a laptop running a web browser. We can describe this system at the application level as running Chrome and rendering webpages, at the operating system level as executing kernel operations and managing memory, at the machine code level as performing millions of arithmetic and logical operations per second, or at the gate level as executing billions of primitive operations on transistor states. But these are merely the conventional levels of description; nothing in the physics prevents us from adopting radically different mappings. We could group every trillion gate operations into macro-states and describe the system as simulating a fictional stock market. We could partition the transistors arbitrarily, claiming that every other transistor participates in one computation while the remainder performs an entirely different calculation. The standard assumption is that there exists some privileged level of description, that the laptop is “really” running Chrome rather than “really” executing gate-level operations, or “really” doing anything else. But what physical fact could establish this privilege? Not the designer’s intentions, for intentions reside in minds rather than in physical systems themselves. Not causal structure alone, since causal relationships can be redescribed at different scales and granularities. Not functional organization, since functionality is always relative to observer interests and purposes.
What counts as “state A,” “state B,” or “transition T” is therefore not fixed by physics alone, but by an external interpretive scheme. Computational role itself is not an intrinsic physical property of the system. Unlike higher-level physical properties such as temperature or pressure, which are grounded in observer-independent regularities and invariant under redescriptions, computational states in a TM lack a privileged physical realization. The physics underdetermines the computation. This indeterminacy is not a bug of poor engineering; it is the defining feature of Turing machines. Substrate independence is exactly what makes them computationally powerful, and exactly what prevents their computational states from being intrinsic.
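To see how little the trace itself constrains the carving, here is a deliberately silly sketch (Python; the block sizes and the “render step” / “price up” labels are arbitrary inventions of mine): the same low-level trace is grouped into macro-states of two different sizes and relabeled as two unrelated computations.

```python
# Toy illustration of regrouping a low-level trace into macro-states (arbitrary choices).
import itertools

# A stand-in for a gate-level trace: just a deterministic stream of bits.
gate_level_trace = [(i * 37) % 2 for i in range(24)]

def macro_states(trace, block_size):
    """Group consecutive low-level steps into macro-states of a chosen size."""
    it = iter(trace)
    return [tuple(itertools.islice(it, block_size))
            for _ in range(len(trace) // block_size)]

# The same trace, carved up two different ways...
browser_reading = macro_states(gate_level_trace, block_size=3)
market_reading = macro_states(gate_level_trace, block_size=8)

# ...and relabeled under two different "computations".
print(["render step" if sum(m) % 2 else "idle" for m in browser_reading])
print(["price up" if sum(m) % 2 else "price down" for m in market_reading])
# Both descriptions fit the identical underlying trace; the trace itself does not
# single out a privileged block size or labeling.
```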
3.4 Why Functional Role Cannot Ground Intrinsic Differences
A sophisticated objector might argue: “Computational states are physically grounded: they’re grounded in functional or causal role. State A just is whatever physical state plays role R in the system’s causal organization. The functional role individuates the state, and functional roles are objective features of the system’s causal structure.”
This objection fails for two reasons.
First, the individuation of “roles” itself depends on which computational description we adopt. Consider a physical state P that causes state P’ under certain conditions. Under one computational interpretation, we might say P plays the role of “incrementing a counter.” Under another interpretation, the same causal transition (P → P’) might play the role of “moving to the next instruction in a different program.” Under yet another interpretation, it might play the role of “toggling a flag.” The physical causation is fixed, but which functional role the state plays depends entirely on the computational description we overlay onto that causation.
More fundamentally, the same causal structure can realize different computational descriptions with different role-assignments. The objection assumes that functional roles are intrinsic to the causal structure, but in Turing machines they are not. Two observers can agree completely about every causal relation in the system (which states cause which other states, under what conditions) yet disagree about which computational roles those states play, because they’re interpreting the system as running different programs.
The problem is not that functional roles are causally inert; it’s that in a TM architecture, the mapping from causal structure to computational roles is not unique. The same causal architecture can play infinitely many different sets of computational roles, depending on interpretation. Therefore, functional role cannot ground intrinsic computational state identity in Turing machines.
3.5 Why Causal Powers Cannot Rescue Computational States
Someone might object: “But computational states do have causal powers. They cause the next computational state, they cause outputs, they control behavior. These causal powers make them real.”
This objection conflates substrate causation with computational causation. Yes, the physical substrate has causal powers. A voltage pattern in a register causes certain transistors to switch, which causes other voltage patterns to emerge. These are genuine physical causes.
But the question is whether the computational state (e.g. being in state Q7 of program P) has causal powers distinct from the substrate’s causal powers, and here the answer is no. The substrate state would cause exactly what it causes regardless of which computational interpretation we assign to it.
Consider two physically identical systems in the same substrate state. We interpret one as running program P (and being in state Q7), and the other as running program P’ (and being in state Q942). Both systems will undergo identical physical transitions and produce identical physical outputs, because the substrate causation is identical. The supposed difference in computational state makes no causal difference whatsoever.
This reveals the fundamental problem: computational states in TMs don’t do any causal work beyond what the substrate already does. Two physically identical states that receive different computational interpretations have identical causal profiles. The causation runs entirely through the physics; the computational description is explanatory overlay, not causal bedrock.
For conscious experience to exist, experiential states must be grounded in something causally real, something that makes a difference to what happens. But in TM architectures, computational states are causally idle with respect to the substrate. They inherit their causal powers entirely from the physical states that realize them, and those physical states would cause what they cause regardless of computational interpretation.
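The causal-idleness point can also be made concrete. In the sketch below (Python; the class, state encoding, and labels are hypothetical), two systems start in the same substrate state and carry different computational interpretations, but the update rule never consults the interpretation, so their outputs are necessarily identical.

```python
# Sketch of causal idleness (hypothetical names): two physically identical systems
# carrying different computational interpretations produce identical outputs, because
# the dynamics never consults the interpretation.
from dataclasses import dataclass, field

@dataclass
class SubstrateSystem:
    voltages: tuple                      # the physically real state
    interpretation: str                  # an overlaid label; never read by the dynamics
    outputs: list = field(default_factory=list)

    def physical_step(self, input_bit):
        """The update rule depends only on voltages and input, never on interpretation."""
        self.voltages = tuple(v ^ input_bit for v in self.voltages)
        self.outputs.append(sum(self.voltages) % 2)

system_1 = SubstrateSystem(voltages=(0, 1, 1, 0),
                           interpretation="in state Q7 of program P")
system_2 = SubstrateSystem(voltages=(0, 1, 1, 0),
                           interpretation="in state Q942 of program P'")

for bit in [1, 0, 1, 1]:
    system_1.physical_step(bit)
    system_2.physical_step(bit)

assert system_1.outputs == system_2.outputs  # interpretations differ, causal profile identical
print(system_1.outputs, system_2.outputs)
```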
3.6 Where This Leaves Consciousness
For a Turing machine to be conscious, its computational states would need to matter in the right way, e.g. one computational state corresponding to “experiencing red” and another to “experiencing blue”. But, since the physical substrate does not uniquely determine which computational state the system is in, the same total physical state of the system is compatible with incompatible experiential attributions. The same physical state could be considered to be “experiencing red” or “experiencing blue” depending on the interpretation.
By the Intrinsic Difference Condition, this implies there is no fact of the matter (for the system) about what it is experiencing. And where there is no fact of the matter about experience, there is no experience. The machine does things, but nothing happens for it, at least not at the computational level.
4. Why This Applies to All TM Implementations
This argument is not specific to electronic computers. It applies to any implementation of a Turing machine:
Electronic circuits
Mechanical gears
Water flowing through pipes
Trained crabs on a beach
Conway’s Game of Life patterns
In every case, physical causation occurs at the substrate level, computational meaning exists at the interpretation level, the mapping between them is arbitrary, and no substrate privileges any particular computational interpretation. This is intentional and the defining architectural feature of Turing machines.
5. Why Brains (Plausibly) Satisfy the Condition
Brains, on the other hand, plausibly have physically privileged states, such as:
Oscillatory modes and harmonics (gamma/beta/alpha rhythms)
Phase synchrony across regions
Neuromodulatory state
Field effects
Topological geometry
These patterns are physically real and observer-independent, causally efficacious through physical mechanisms, and cannot be arbitrarily re-encoded while preserving the same neurophysics.
Why These Features Are Non-Arbitrary
The crucial difference is that these neural properties are individuated by their physical character, not by what they “compute” or “represent”. A 40 Hz oscillation is 40 Hz regardless of interpretation. You cannot reinterpret it as an 80 Hz oscillation without changing the physical facts. Phase synchrony between two brain regions cannot be swapped with asynchrony while keeping the same dynamics. A particular neuromodulatory state involves specific concentrations of dopamine, serotonin, or other molecules, and these are objective physical facts. A brain oscillating at 40 Hz has different causal consequences than the same brain oscillating at 80 Hz (different neural populations will entrain, different computations will be enabled, and different behavioral outputs will result). These physical differences in oscillatory dynamics constrain what the state is, independent of any observer’s description.
In contrast, whether a voltage pattern in a computer “is” state Q7 or state Q942 depends entirely on which program we say it’s running. The physical state (e.g. the voltages, the transistor configurations) remains identical under both interpretations.
6. Why Simple Physical Systems Also Clear the Hurdle
A thermostat has physically real states (e.g. bimetallic strip bent vs. unbent). These states are causally efficacious and cannot be arbitrarily reinterpreted while preserving the same physics.
This does not imply that thermostats are conscious. The Intrinsic Difference Condition is necessary, not sufficient. Most systems that satisfy it lack other requirements for consciousness.
7. Addressing Potential Objections
Objection 1: “The Type of Information Processed Grounds Intrinsicness”
One might object that even if Turing machine states are formally abstract, a system processing visual information (rather than auditory or numerical information) thereby possesses intrinsically visual states. The content of the information (what it is about) grounds intrinsic experiential character.
Reply: This objection conflates informational description with intrinsic physical differentiation. The “type of information” a system is said to process is fixed by an external semantic mapping between physical states and representational content. But the same physical system, in the same physical state, can be equally well interpreted as processing visual information, auditory information, or arbitrary symbolic tokens, depending on the chosen encoding. Nothing in the system’s intrinsic physical organization uniquely privileges one interpretation over another.
Crucially, the system itself has no access to the semantic category “visual.” It does not encounter photons as photons or pixels as pixels; it encounters only internal physical states and transitions. If two states are physically identical, then from the system’s internal perspective they are the same state, regardless of what kind of information an observer says they encode. Therefore, informational type cannot ground intrinsic difference unless it corresponds to a physically individuated distinction within the system itself. Absent such grounding, informational content remains observer-relative rather than intrinsic.
Objection 2: “The System Can Interpret Itself”
Another objection holds that even if external interpretation is insufficient, a sufficiently advanced Turing machine could interpret itself. By modeling its own states and treating them as meaningful, it could internally fix semantic content and thereby ground intrinsic experiential differences.
Reply: Self-interpretation does not resolve the problem; it merely relocates it. Any internal act of “interpretation” must itself be realized by some physical process within the system. If the system’s physical state does not already contain an intrinsic difference, then no amount of self-modeling can generate one. A system cannot create intrinsic distinctions by re-describing a single undifferentiated physical state in multiple ways.
More formally, self-interpretation presupposes a distinction between interpreting states and interpreted states. But, unless these are physically distinct in a way that matters to the system’s causal organization, the interpretation is causally idle. A self-description that does not alter the system’s physical dynamics adds no new fact for the system itself. Thus, self-interpretation cannot ground intrinsic experiential differences unless it is backed by physically individuated states, and if such states exist, it is their physical character, not their interpreted meaning, that does the grounding work.
Objection 3: The Emergence Objection
Some readers will object: “But consciousness might emerge from computation in ways we don’t understand! Complex systems exhibit emergent properties all the time, why couldn’t consciousness emerge from sufficiently complex computation?”
Reply: This objection misunderstands the nature of emergence. Genuine emergence, like liquidity emerging from H₂O molecules, or temperature emerging from molecular kinetics, involves new causal powers or organizational properties arising from lower-level physical interactions. However, these emergent properties are still grounded in physical facts about the system. Liquidity supervenes on molecular arrangements and intermolecular forces. Temperature supervenes on kinetic energy distributions. The emergent properties are observer-independent consequences of the underlying physics.
Computational states in Turing machines, by contrast, have no organizational properties independent of interpretation. There’s nothing there for consciousness to emerge from in the relevant sense. The substrate has physical organization, certainly; transistors connected in specific ways, voltages changing according to physical laws. But the computational organization (which states the system is “in,” which program it’s “running,” which information it’s “processing”) exists only relative to an interpretive mapping.
If consciousness could emerge from this kind of interpretation-dependent organization, it would mean that the same physical system, in the same physical state, could be conscious or non-conscious depending on how we choose to describe it. Appeals to emergence cannot resolve this because emergence still requires something physically real to emerge from. In a TM, the computational level lacks the kind of physical grounding that could support emergent phenomenology.
8. The Formal Argument
P1: Consciousness requires state differences that exist for the system itself (Intrinsic Difference Condition)
P2: For consciousness to exist, there must be a fact of the matter about what the system is experiencing
P3: Intrinsic differences require that the relevant state distinctions be physically real, not interpretation-dependent
P4: In Turing machines, computational state distinctions are interpretation-dependent, not physically real
P5: The same total physical state of a TM implementation is compatible with arbitrarily many incompatible computational descriptions
P6: Therefore, there is no fact of the matter (for the system) about which computational state it is in
P7: Therefore, there is no fact of the matter about what the system is experiencing in virtue of its TM computational states
C1: Therefore, Turing machine computational states cannot ground conscious experience
P8: Current LLMs are computational systems implemented on TM-equivalent architectures, and any experiential states they had would be computational states of this kind
C2: Therefore, current LLMs cannot be conscious
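For readers who want the logical skeleton made explicit, the following is a minimal Lean 4 sketch of the derivation. The predicate names are labels I introduce here, “Conscious s” abbreviates “s has conscious experience grounded in its computational states,” and P2 together with P5 through P7 is compressed into two supervenience-style premises; it shows the shape of the inference, not a full formalization of every step.

```lean
-- Minimal propositional sketch of the Section 8 argument (labels and predicates are mine).

variable {System : Type}

/-- Granting P1, P3, and the compressed premise P4, no TM-implemented system is conscious
    in virtue of its computational states (C1). -/
theorem C1 {Conscious IntrinsicDiff PhysReal TM : System → Prop}
    (hP1 : ∀ s, Conscious s → IntrinsicDiff s)  -- P1: consciousness requires intrinsic differences
    (hP3 : ∀ s, IntrinsicDiff s → PhysReal s)   -- P3: intrinsic differences must be physically real
    (hP4 : ∀ s, TM s → ¬ PhysReal s)            -- P4 through P7, compressed: TM state distinctions are not physically real
    : ∀ s, TM s → ¬ Conscious s :=
  fun s hTM hConscious => hP4 s hTM (hP3 s (hP1 s hConscious))

/-- Adding P8 (current LLMs are TM implementations) extends C1 to current LLMs (C2). -/
theorem C2 {Conscious IntrinsicDiff PhysReal TM LLM : System → Prop}
    (hP1 : ∀ s, Conscious s → IntrinsicDiff s)
    (hP3 : ∀ s, IntrinsicDiff s → PhysReal s)
    (hP4 : ∀ s, TM s → ¬ PhysReal s)
    (hP8 : ∀ s, LLM s → TM s)                   -- P8: current LLMs run on TM-equivalent architectures
    : ∀ s, LLM s → ¬ Conscious s :=
  fun s hLLM => C1 hP1 hP3 hP4 s (hP8 s hLLM)
```

The logic is just two applications of modus tollens; the philosophical work lies entirely in motivating the premises.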
9. What Is Lost If Consciousness Is Stance-Relative
One possible response to this argument is to reject the Intrinsic Difference Condition and accept that consciousness is stance-relative: that whether a system is conscious depends on an interpretive or explanatory stance we adopt, rather than on intrinsic facts about the system itself.
Accepting this position carries substantial consequences. It entails that there is no observer-independent fact about experience at all. If consciousness is fixed only by an observer’s stance, then there is no fact of the matter about what the system itself is experiencing. Furthermore, one cannot even say that there is “something it is like” for us to regard the system as conscious, because the notion of what it is like already presupposes genuine phenomenology. Appealing to phenomenology at the level of the interpreter either reintroduces intrinsic consciousness at that level or generates an infinite regress of stances about stances. The only coherent stance-relative position is therefore fully deflationary: there is nothing it is like for the system, and nothing it is like for anyone in virtue of the system. Accepting stance-relativity does not relocate consciousness; it eliminates it. If one wishes to preserve consciousness as an intrinsic phenomenon (something that exists for a system itself), then intrinsic physical differences must matter, and architectures whose state distinctions are purely interpretive cannot suffice.
10. Conclusion
If consciousness requires differences that exist for a system itself, and if Turing machine computational states exist only relative to interpretation, then no Turing machine–based system can be conscious in virtue of its computational states, no matter how sophisticated those computations become. A Turing machine’s computational states are like the meaning of words on a page. The ink patterns are physically real, but what they mean depends entirely on an interpretive framework we bring to them. Likewise, a TM’s substrate states are physically real, but what they compute depends entirely on how we choose to interpret them.
And if conscious states must be intrinsic facts rather than interpretive overlays, then Turing machine computation, by its very architecture, cannot give rise to consciousness.
