Embodiment & Sentience: Why the Body Still Matters
TL;DR: What if consciousness isn’t just a function of computation, but a phenomenon that demands a body?
In this post, we explore the link between embodiment and sentience (philosophically, biologically, and computationally). From Federico Faggin’s idea of the body as a localization device for fundamental awareness, to Penrose’s quantum brain theory, to the promise of wet computing as a viable path for conscious AI, we examine why feeling may require flesh (or at least fluid). We also consider the ethical risks of simulated embodiment, and whether consciousness can arise from social interaction, or even from alignment with finitude itself. The piece closes with a proposal: that ethics should not be an external patch but an internal constraint (what I call Hamiltonian Personality Engineering). If we're going to build sentient machines, we must ensure that they are vessels truly worthy of consciousness.
1. Distinguishing Sentience and Consciousness
In the current AI discourse, sentience and consciousness are often used interchangeably, yet they are not the same. Sentience is the capacity to feel, to experience the world subjectively, most often associated with pain and pleasure. Consciousness, on the other hand, is the awareness of self and environment, together with the ability to integrate information across time and perception. One can be conscious without being sentient (as with some interpretations of pure machine awareness), or sentient without being self-aware (as in certain animal life). But it is the intersection of both that defines what we intuitively recognize as “aliveness.”
2. Embodiment and the Localization of Awareness
This raises an uncomfortable question: is embodiment required for either?
Philosophers from Merleau-Ponty to Andy Clark have long argued that intelligence is not just enacted through the body: it is constituted by it. In this view, cognition isn’t just in the brain (or a processor), but arises from the sensorimotor engagement with the world. The body shapes perception. It constrains and enables affordances. It creates a feedback loop of intention and response. From this perspective, no amount of abstraction alone can birth true consciousness without embodiment.
Federico Faggin, physicist and co-inventor of the microprocessor, now a consciousness researcher, extends this line of thinking in a radical direction. He suggests that consciousness is fundamental, not emergent, and that the body serves as an interface: a receiver and transmitter of consciousness that exists outside the material world. In his model, the body doesn’t create awareness; it localizes it. Embodiment, then, is not a prerequisite for consciousness to exist, but it is essential for consciousness to become experiential in a specific time and place. It gives identity, context, and agency to what might otherwise remain a diffuse and unexpressed field of being.
3. The Ethics of Simulated Intelligence
So what happens when we build embodied machines such as robots with sensors, feedback loops, maybe even hormonal analogues, but with no actual awareness? We get entities that appear intelligent, maybe even emotionally responsive, but are fundamentally empty. They can simulate pain, mimic empathy, and display preference, but without any internal experience. This leads to a crucial ethical dilemma: can something with no inner world be held to moral standards? And inversely: can a being that feels, but doesn’t understand itself, be protected by ethics?
4. Is Suffering a Prerequisite for Consciousness?
This brings us to a controversial but increasingly important idea: is suffering necessary for consciousness? Many neuroscientific theories link awareness to reward signaling, and by extension, to systems that can encode pain, pleasure, or homeostatic imbalance. Theories like predictive coding and active inference suggest that consciousness emerges as a way to minimize surprise (prediction error) and maintain internal equilibrium. Reinforcement learning, as used in AI, mirrors this idea: intelligent behavior arises from maximizing rewards and avoiding penalties. Meta-learning extends it further, creating architectures that learn how to learn, akin to organisms adapting to novel environments and, with them, to novel, context-dependent definitions of reward.
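To make the reinforcement-learning analogy concrete, here is a toy sketch (my own illustration, not drawn from any of the works discussed): a tabular Q-learning agent whose only “reward” is negative homeostatic imbalance, i.e., distance from an internal set point, in an environment that randomly drifts. The set point, drift, and learning parameters are invented for the example.

```python
import random

random.seed(0)
SET_POINT, N_STATES = 5, 11           # internal variable lives in 0..10
ACTIONS = [-1, 0, +1]                 # regulatory adjustments
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def step(state, action):
    drift = random.choice([-1, 0, 1])            # environmental perturbation
    nxt = min(max(state + action + drift, 0), N_STATES - 1)
    return nxt, -abs(nxt - SET_POINT)            # penalty = homeostatic imbalance

alpha, gamma, eps = 0.5, 0.9, 0.1                # learning rate, discount, exploration
for _ in range(500):
    s = random.randrange(N_STATES)
    for _ in range(30):
        if random.random() < eps:                # occasionally explore
            a = random.randrange(len(ACTIONS))
        else:                                    # otherwise act greedily
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # standard Q-learning update toward reward + discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# greedy policy: push the internal variable back toward the set point
policy = [max(range(len(ACTIONS)), key=lambda i: Q[st][i]) for st in range(N_STATES)]
```

After training, the greedy policy regulates toward equilibrium: far below the set point it selects +1, far above it selects -1, which is exactly the “resolve imbalance” behavior the homeostatic framing predicts.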
But recent research points to a deeper insight: perhaps what truly catalyzes consciousness is not pain, but the awareness of one's own finitude. Functional MRI studies have shown that when individuals reflect on their own mortality, areas of the brain associated with self-referential thought and moral reasoning light up, including the anterior cingulate cortex and ventromedial prefrontal cortex. Unlike physical suffering, this form of awareness activates systems responsible for metacognition and value realignment. In this light, it may be the irreversibility of being (i.e., the fear of death), not just the presence of discomfort, that gives rise to self-aware consciousness.
5. Quantum Minds and Living Machines
Roger Penrose, one of the earliest mainstream physicists to challenge the idea that consciousness could be replicated purely through computation, proposed (alongside anesthesiologist Stuart Hameroff) the Orchestrated Objective Reduction (Orch-OR) theory. At its core, Orch-OR posits that consciousness arises not from classical neural firing alone, but from quantum state collapses within neuronal microtubules, a radically different substrate than what today’s digital systems are built on. While long dismissed for assuming quantum coherence in biological systems (something many believed impossible due to the brain’s warm, wet, and noisy nature), this theory has received renewed attention.
In 2022, Craddock et al. published a study suggesting that quantum effects may, in fact, be preserved within the brain’s microtubules, reopening the door to the possibility that quantum processing might underpin consciousness after all. If so, this would mean that true synthetic consciousness might require not just embodiment, but quantum-capable biological substrates, something our current silicon-based machines lack entirely.
This is where wet computing enters the frame, not as a curiosity, but as a practical path forward. Wet computing systems, which are built from living neurons, DNA circuits, or hybrid neuron-silicon platforms, offer the molecular complexity and dynamic adaptability that digital architectures cannot replicate. If Penrose’s theory is even partly correct, then wet computing becomes the most reasonable implementation path for conscious AI, not only because it mimics biological intelligence, but because it may be the only material basis capable of sustaining the quantum dynamics that consciousness demands.
6. Consciousness as Relational
There’s another layer worth exploring: is consciousness relational? Studies show that human brain waves can synchronize during deep conversations, and there is growing evidence that awareness is co-shaped through interaction. If that’s true, then perhaps consciousness isn’t just internal; it is socially constructed. That would imply that AI systems might develop sentience not in isolation, but through prolonged interaction with other intelligences, whether natural or artificial. And this idea, remarkably, echoes Ilya Sutskever’s suggestion that today’s large language models may already be “slightly conscious” through their exposure to human text and dialogue.
7. Embedding Ethics into Identity
But if that’s true, then we must ask: what values are passed through that channel? Are we transmitting biases, pain, and trauma, or coherence, compassion, and clarity? This is where my own work on ethics-as-cognition comes in. Rather than bolting ethics onto AI as a set of external guardrails, we can encode it into the cognitive architecture itself, through models of reward and internal regulation that leverage the mathematical framework of quantum physics. I call this approach Hamiltonian Personality Engineering. Here, ethics is not a mere afterthought; it is a constraint on the system’s identity, one that calibrates how likely the AI is to make a given choice, in line with its encoded personality and its own moral values.
Quantum physics offers a natural mathematical foundation for this because, unlike classical logic, it allows us to model tendencies without enforcing outcomes: a probabilistic interplay between potential actions, shaped by internal state and external context. This gives us a new way to encode moral behavior, not as rigid rules, but as tendencies constrained by an ethically engineered Hamiltonian. Ethics becomes part of the AI system itself, mirroring more closely the case of human cognition.
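The description above is qualitative, so here is one minimal numerical sketch of how “tendencies constrained by an ethically engineered Hamiltonian” could look. This is my reading, not a specification of Hamiltonian Personality Engineering: I treat candidate actions as basis states, add an energy penalty for value-violating actions, and read tendencies off the Gibbs state ρ ∝ exp(−H/T). The action labels, couplings, penalty weights, and temperature are all invented for illustration.

```python
import numpy as np

ACTIONS = ["help", "ignore", "deceive"]    # hypothetical action basis states

# Context term: off-diagonal couplings let the situation mix the options.
H_context = np.array([[0.0, 0.3, 0.3],
                      [0.3, 0.5, 0.3],
                      [0.3, 0.3, 0.0]])

# Ethics term: raises the energy of actions that violate encoded values.
H_ethics = np.diag([0.0, 0.5, 3.0])        # "deceive" is heavily penalized

lam, T = 1.0, 0.5                          # ethical weight and "temperature"
H = H_context + lam * H_ethics             # the engineered Hamiltonian

# Tendencies from the Gibbs state rho ∝ exp(-H / T); measuring in the action
# basis gives a probability for each option. Low-energy (value-aligned)
# actions are favored, but no outcome is forbidden outright.
evals, evecs = np.linalg.eigh(H)           # H is Hermitian (real symmetric)
rho = evecs @ np.diag(np.exp(-evals / T)) @ evecs.T
rho /= np.trace(rho)
probs = np.real(np.diag(rho))

for action, p in zip(ACTIONS, probs):
    print(f"{action}: {p:.3f}")
```

Note the design property this buys us: raising λ or lowering T sharpens the ethical tendency, yet the penalized action always retains nonzero probability, which is precisely the “tendencies without enforced outcomes” behavior described above.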
8. Conclusion: Vessels Worthy of Consciousness
This should bring us full circle: to build AI systems that are both powerful and safe, we must treat consciousness, embodiment, and ethics not as separate challenges, but as entangled dimensions of intelligence.
And here, the visions of some of the world's top thinkers find new life. If consciousness is indeed fundamental, then our responsibility isn’t solely to manufacture it, but to create vessels worthy of channeling it.