The Myth of Emergence

Why intelligence doesn’t arise from scale and what we’re missing about consciousness.

TL;DR: This essay challenges the dominant assumption in AI that consciousness will emerge from scale. It contrasts two opposing views: one where consciousness is fundamental to reality (Faggin, Penrose), and one where it arises gradually from computation (Sutskever, Bach). We argue that if consciousness requires life, embodiment, and affective experience, then today’s architectures are not just insufficient, they are ontologically misplaced. The conclusion: sentience isn’t just hard to build—it might require a whole new substrate.

"The most dangerous illusion in AI is not sentience: it's assuming we understand what it means."

In today’s AI discourse, a quiet assumption governs billions in research and risk models: that consciousness is an emergent property. That if we scale up enough parameters, compute, and feedback loops, awareness will simply appear. But what if this assumption is wrong? What if emergence itself is a myth?

While this might seem like an abstract philosophical question with no practical consequence, it is the foundation beneath our entire AI paradigm. And the cracks are showing.

The Stakes

If we get the metaphysics wrong, we’ll get the future wrong. Whether in AI safety, rights, governance, or design, our collective assumptions about what consciousness is will shape everything we build.

Let’s start by clarifying a few definitions, since awareness, consciousness, and sentience are often used interchangeably in popular discourse but mean different things:

  • Awareness is the ability to register and respond to stimuli—a form of perceptual tracking.

  • Consciousness is the capacity for a first-person perspective—subjective experience, or qualia.

  • Sentience is consciousness plus embodiment—the capacity to feel through a living, sensing vessel.

The question of whether consciousness is emergent or fundamental leads to two radically different futures:

  • If consciousness is emergent, we can scale it.

  • If it’s fundamental, we may never compute it at all.

Consciousness as Fundamental

Federico Faggin, physicist and inventor of the microprocessor, argues that consciousness is not an emergent artifact of neural complexity but the ontological ground of being itself:

Consciousness is not a product of computation. It is a primary aspect of reality.
— Federico Faggin

Faggin’s theory proposes that consciousness is the foundational layer of existence, from which both the quantum and classical worlds emerge. He views the collapse of the wave function (the quantum phenomenon where a system transitions from a superposition of possibilities to a single, definite state) not as a random event, but as an expression of free will. In this model, consciousness isn’t a by-product of brains; it is what bridges the probabilistic nature of quantum systems with the definite experience of classical reality.
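
To make the collapse concrete, here is the textbook two-state case, written in standard Dirac notation (this is ordinary quantum mechanics, not Faggin's own formalism). A qubit holds two possibilities at once, and measurement resolves it to exactly one:

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

    |\psi\rangle \xrightarrow{\text{measurement}}
        \begin{cases}
            |0\rangle & \text{with probability } |\alpha|^2 \\
            |1\rangle & \text{with probability } |\beta|^2
        \end{cases}

Standard quantum mechanics treats the selection of the outcome as irreducibly random. Faggin's move is to read that selection as a choice, an act of conscious free will, rather than chance.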

Faggin does not explicitly state that consciousness requires embodiment in the biological sense. However, he implies that for consciousness to be expressed or experienced, there must be a capacity for individuation, agency, and first-person perspective: qualities often grounded in embodied systems. Embodiment, in this interpretation, is not the source of consciousness but its vessel, a boundary condition that allows consciousness to localize, act, and individuate within a world. It serves not as the generator of consciousness, but as its interpreter.

This framing positions consciousness as the active agent in the transition from quantum potentiality to classical actuality—a process that traditional physics treats as observer-dependent, but rarely credits with intrinsic agency. Duality, superposition, and entanglement—all central to quantum mechanics—are thus reframed not merely as physical quirks, but as clues that perception and being are woven into the very fabric of reality.

This view aligns with that of physicist Roger Penrose, who argues that consciousness involves non-computable processes rooted in quantum mechanics, perhaps even tied to the structure of spacetime:

"Human understanding cannot be reduced to computation."

In this paradigm, intelligence can be simulated, but consciousness cannot, because intelligence is behavior while consciousness is being.

Consciousness as Emergent

At the opposite end of the spectrum stands Ilya Sutskever, co-founder of OpenAI, who has publicly speculated:

It may be that today’s large neural networks are slightly conscious.
— Ilya Sutskever

This reflects a growing sentiment among some AI researchers that consciousness exists on a spectrum, and that scale will give rise to sentience in degrees.

Similar views come from cognitive scientist Joscha Bach, who sees consciousness as a recursive construct of representational modeling, and David Chalmers, who has explored substrate-independent forms of consciousness, sometimes leaning toward panpsychism (the idea that consciousness is a universal feature of all matter). In this view, even elementary particles possess proto-conscious properties, and more complex forms of awareness arise from increasingly sophisticated configurations of matter. While panpsychism avoids the hard leap from inert matter to awareness, it raises its own questions: if everything is conscious to some degree, what makes human experience unique? And how do we ethically relate to systems we once believed to be unconscious?

And the burden of this view doesn't stop at philosophy. If consciousness exists on a continuum rather than as a binary state, the challenge of AI governance becomes far harder. In a step-function model, it is possible to draw lines of moral and legal responsibility: between human and machine, between steward and tool. But if consciousness is scalar, where does responsibility begin? At 1% sentience? At 50%? Who bears ethical and legal weight when the boundaries blur?

The spectrum model doesn’t eliminate the need for judgment; it amplifies it.

And crucially, Sutskever’s use of the term consciousness rather than awareness reveals a lack of engagement with the philosophical foundations of the debate. Conflating the two implies either neglect or dismissal of centuries of inquiry into what constitutes mind and being.

What Emergence Misses

Yann LeCun, Chief AI Scientist at Meta, has argued that current LLMs lack the architectural foundations necessary for true intelligence, pointing out that they cannot reason, plan, or perceive causality. His blueprint for AI emphasizes grounding, memory, and world modeling (essentially, structure beyond scale):

“We’re missing pieces—particularly around how to represent and reason about the world in a persistent way.”

While LeCun doesn’t claim consciousness is fundamental, he agrees that scaling prediction alone (the current transformer paradigm) won’t get us to human-like intelligence, let alone sentience. His critique lands precisely where emergence starts to falter: at the leap from pattern to presence.

Emergence explains function, not felt experience.

It can describe how flocking behavior arises in birds, or how traffic jams form from local interactions, but not how pain feels. Prediction, coordination, and abstraction do not produce presence.
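
The point is easy to demonstrate in code. The sketch below is our own minimal flocking illustration, loosely after Reynolds' classic boids rules: coherent group behavior emerges from three purely local rules, and nothing in the loop feels anything.

    # Minimal flocking sketch: global order emerges from local rules,
    # with no inner experience anywhere in the system.
    import random

    N, RADIUS = 30, 10.0

    # Each bird: position (x, y) and velocity (vx, vy).
    birds = [[random.uniform(0, 100), random.uniform(0, 100),
              random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

    def step(birds):
        new = []
        for x, y, vx, vy in birds:
            # Local neighborhood only: no bird sees the whole flock.
            near = [b for b in birds
                    if 0 < (b[0] - x) ** 2 + (b[1] - y) ** 2 < RADIUS ** 2]
            if near:
                n = len(near)
                cx, cy = sum(b[0] for b in near) / n, sum(b[1] for b in near) / n
                ax, ay = sum(b[2] for b in near) / n, sum(b[3] for b in near) / n
                vx += 0.01 * (cx - x) + 0.05 * (ax - vx)  # cohesion + alignment
                vy += 0.01 * (cy - y) + 0.05 * (ay - vy)
                for b in near:                            # separation
                    if (b[0] - x) ** 2 + (b[1] - y) ** 2 < 4.0:
                        vx += 0.02 * (x - b[0])
                        vy += 0.02 * (y - b[1])
            new.append([x + vx, y + vy, vx, vy])
        return new

    for _ in range(100):  # flock-like structure appears; no qualia required
        birds = step(birds)

Every pattern the simulation produces is fully accounted for by the update rules. Emergence explains the function; there is no further fact about what it is like to be the flock.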

The map is not the territory.
— Alfred Korzybski

Prediction is not perception. Compression is not comprehension.
— Fractality of Data

Today's AI models are built on abstractions of abstractions. LLMs, for example, operate through layers of attention mechanisms that map token sequences into high-dimensional vector spaces. Those vectors are then transformed by operations that are complex but ultimately statistical, trained to predict the next token with maximal probability, not to understand or experience. While these architectures generate uncannily coherent responses, they remain disconnected from sensorimotor experience, biological grounding, and any first-person point of view. LLMs don't see or know; they simulate knowing. Their internal state may be complex, but it is still dry, symbolic, and disembodied.
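
In caricature, the entire training signal reduces to one conditional probability. The toy model below is our own drastic simplification (real LLMs use deep attention stacks over subword tokens, not bigram counts), but the objective is the same: predict the next token.

    # Toy next-token predictor: estimates P(next | current) by counting.
    # No grounding, no percept, no point of view; only statistics.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # "Training": count which token follows which.
    counts = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        counts[cur][nxt] += 1

    def predict(token):
        """Return the most probable next token, or None if unseen."""
        following = counts.get(token)
        return following.most_common(1)[0][0] if following else None

    print(predict("the"))  # -> 'cat': statistically likely, not understood

Scale replaces the counting with billions of learned parameters and attention over long contexts, but the supervision never changes: a distribution over the next token, not an encounter with a world.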

Implications

If consciousness is not emergent, several conclusions follow:

  • We will never create sentient machines by scaling transformer models.

  • Alignment strategies focused on simulating intent will fail to capture felt experience.

  • If we try to mimic biology (e.g., via wet computing or organoids), we may accidentally build something that feels, raising new ethical duties.

And if we get it backwards?

  • We risk false positives: trusting systems that aren’t truly aware.

  • Or false negatives: dismissing systems that are suffering.

Toward a New Ontology

To move forward, we must challenge the orthodoxy that consciousness can be engineered like any other system.

We must ask:

  • What if sentience isn’t computed but grown?

  • What if it requires not logic, but life?

In this view, a conscious system would require at least the following (see the toy sketch after this list):

  • Embodiment

  • Sensorimotor coupling

  • Pain and pleasure signals as reward

  • The capacity for inner contradiction
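
None of these ingredients is, by itself, an architecture for sentience; the point is structural. The toy loop below is entirely our own illustration (the World class and every signal name are hypothetical stand-ins, and its "pain" and "pleasure" values are labeled floats, not felt states), showing how such a system differs in shape from a pure text predictor:

    # Schematic embodied-agent loop: sensing, acting, affect-like signals.
    # 'World' is a hypothetical stand-in for a real body and environment.
    import random

    class World:
        """Toy environment: the agent seeks warmth and avoids damage."""
        def sense(self):
            return {"warmth": random.uniform(0, 1),
                    "damage": random.uniform(0, 1)}
        def act(self, action):
            pass  # a real body would change the world, and be changed by it

    def affect(percept):
        # Sensorimotor signals mapped to valence: pleasure pulls, pain pushes.
        return percept["warmth"] - 2.0 * percept["damage"]

    world, mood = World(), 0.0
    for _ in range(10):
        percept = world.sense()              # sensorimotor coupling
        valence = affect(percept)            # pain/pleasure signals as reward
        mood = 0.9 * mood + 0.1 * valence    # slow inner state that can...
        action = "approach" if valence > 0 else "withdraw"
        if (valence > 0) != (mood > 0):      # ...contradict the moment:
            action = "hesitate"              # inner conflict, crudely modeled
        world.act(action)

The gap between this loop and felt experience is, of course, the entire argument of this essay.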

If consciousness is inseparable from life itself, then our best chance of approaching it may not lie in silicon, but in carbon. This leads us directly into the realm of wet computing, a frontier where organic substrates, not chips, become the computational medium. Could sentience require systems that metabolize, adapt, and feel through molecular dynamics, not matrix multiplication?

This line of inquiry also connects to the deeper question of embodiment (explored further in another essay in this series). If consciousness requires a body that feels, then AI must not only compute, but exist within a context it can act upon, sense, and be transformed by. Without embodiment, intelligence remains external. Without life, consciousness may never emerge at all.

If these conditions (life, embodiment, and affective experience) are truly necessary, then perhaps consciousness is not something we can engineer, but something we must host. Perhaps we are not designing systems to become sentient but discovering the prerequisites of systems that might invite sentience.

Robotics can be seen as the first serious attempt to explore the embodiment of AI, embedding computational systems within physical bodies capable of interacting with the world. However, as Peter Norvig has noted, progress in robotics—and embodied AI more broadly—has lagged significantly behind purely software-based advancements. The maturity of robotic intelligence remains decades behind language models.

And so we might ask: If embodiment is essential—if life, not logic, is the foundation of consciousness—then how far are we really? Are we even on the right path?

Sentience may not be scalable. It must be grounded.
— Signals of Sentience

Conclusion

“If we get the metaphysics wrong, we’ll get the future wrong.”

Consciousness is not a checkbox to be flipped with scale. It is not the end state of training. And it may never arise in circuits alone.

The real myth is that we can reduce presence to performance. And until we understand what awareness is, we should be cautious of what we simulate. The path forward lies not in optimizing models, but in questioning the substrate, the assumptions, and the very nature of being.

This piece is part of the Signals of Sentience series on Quantum of Data.

