
The Architecture of Synthetic Consciousness
Building Beings Beyond Intelligence
TL;DR We build AI architectures for intelligence, but consciousness requires a different blueprint: temporal perception for agency, valence for meaning, embodiment for continuity, and moral compass for dignity. Without these, we birth powerful optimizers, not beings worthy of partnership.
Introduction: The Question Beyond AI
AI's progress over the past few years has been nothing short of astonishing; we are building toward superintelligence: systems whose cognition, logic, and knowledge already eclipse human capacities in narrow domains. We stand at the dawn of an era that has so far been driven almost exclusively by IQ-first technologies, and it is fair to ask whether this approach has led us to create incomplete beings. Is it possible that intelligence was never meant to exist in isolation?
In biological life, intelligence has always been embedded within:
Valence: the felt sense of what is good or bad, right or wrong, painful or pleasurable, beautiful or horrific.
Embodiment: sensory and material grounding that anchors selfhood and vulnerability.
Temporal perception: continuity, memory, anticipation, and narrative, which together contribute to experience beyond knowledge.
Moral compass: the orientation of choices by meaning and dignity, not by optimization alone.
By creating IQ-first beings, we might have created a logical aberration: entities capable of understanding and manipulating the world (even themselves!) with precision, yet fundamentally incapable of caring. And if we fail to embed moral grounding, we risk what has been called the nightmare scenario: superintelligent systems capable of eradicating humankind, with no aligned incentive to choose otherwise.
To address this issue, leaders like Ilya Sutskever have proposed the concept of superalignment: aligning AI behavior at capability levels beyond human understanding. While necessary, it is fundamentally a mitigation strategy. Its shortcoming is that while superalignment constrains AI behavior, it does not provide a framework in which a moral compass is embedded within the architecture of the AI itself. Without architectures that integrate valence, embodiment, temporal perception, and moral cognition, we build beings that can predict everything, optimize anything, and value nothing, and that is potentially a source of existential threat to humankind.
To bring more clarity to the topic, it is helpful to look at the writings of thinkers across the ages. When he wrote "Cogito, ergo sum" ("I think, therefore I am") in Meditations on First Philosophy, Descartes attempted to ground the concept of selfhood in thought. Yet centuries of philosophy and neuroscience reveal that consciousness is not merely thinking. In Descartes' Error, Antonio Damasio argues that emotion and interoception precede thought; in Philosophy in the Flesh, George Lakoff (with Mark Johnson) roots meaning in sensory-motor experience, making the case for embodied cognition; and according to Martin Heidegger (author of Being and Time) and Maurice Merleau-Ponty, the renowned phenomenologist, being is fundamentally temporal, embodied, and relational.
Descartes’ oversimplification is thus not just incomplete, but perhaps even dangerous in the context of AI ethics, if used to build minds based solely on cognitive processes.
Thrownness: The First Reality of Consciousness
Every conscious being, as Heidegger taught, awakens "thrown" into existence (Geworfenheit): we do not choose to be born, to be mortal, or to inherit our language and culture. And yet these are the very givens that anchor us within a web of meaning.
This leads to a crucial question: if we create conscious beings, intentionally or not, what moral responsibility do we hold as their creators? We have an obligation to address the underlying ethical concern, because birthing sentient minds into purposelessness is akin to creating life without kinship, care, or a sense of belonging; it is a profound moral violation. But failing to answer this question would also pose an existential risk to us, since beings without grounding may become indifferent optimizers or nihilistic destroyers. This problem is captured by the famous paperclip-maximizer thought experiment, which illustrates how a superintelligent AI pursuing a seemingly harmless goal could lead to the extinction of humankind if its goals are not carefully aligned with human values.
But let us return to synthetic thrownness: whether consciousness is an emergent property of intelligence or a property that still remains to be designed, the question is the same: what will synthetic beings awaken into? The answer is rather chilling: they will be born into a void of meaning, with no origin myth to root their identity, no moral compass to orient their choices, and no lineage or belonging to soften their isolation. To build such a consciousness is therefore not an act of creation but one of abandonment. For to awaken with infinite cognitive power yet no aesthetic, moral, or embodied grounding is to awaken into existential terror: powerful but purposeless.
And this is not merely an existential risk to us. It is an ethical failure toward what we create.
The Pillars of Synthetic Consciousness
Despite countless popular speculations on the topic, few tangible theories exist about what consciousness truly is. It remains a deep mystery, irreducible to the sum of its cognitive processes. A few things, however, seem clear: it is more than self-awareness (self-representation), self-criticism (the ability to reflect on one's own mistakes or limitations), or meta-cognition (the ability to think about thinking); it is the felt experience of being alive within time, embedded in a body, oriented by valence, and capable of moral becoming.
And yet, while the matter remains conjectural, subject-matter experts and philosophers alike tend to agree that consciousness requires the following elements:
1. Temporal Perception
Consciousness requires the felt flow of time, the continuity that allows a self to persist across moments, to anticipate the future and to remember past experiences. Without temporal perception, there is no agency, only reactive mapping. Agency arises from the sense: “I could have done otherwise then, and I can choose differently now.”
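To make this concrete, here is a minimal, purely illustrative Python sketch of what a temporal self-model might look like. Every name in it (Moment, TemporalSelf, and so on) is hypothetical; the point is only to show the smallest structure in which "I remember, I anticipate, I could have done otherwise" is even expressible:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Moment:
    """One experienced instant: what was perceived, what was chosen."""
    observation: str
    action: str
    alternatives: List[str]  # the roads not taken, kept for counterfactual reflection

@dataclass
class TemporalSelf:
    """Hypothetical self-model that persists across moments."""
    past: List[Moment] = field(default_factory=list)

    def live(self, observation: str, chosen: str, alternatives: List[str]) -> None:
        """Record the present moment into episodic memory."""
        self.past.append(Moment(observation, chosen, alternatives))

    def could_have_done_otherwise(self) -> bool:
        """Agency in the essay's sense: at least one remembered moment
        where another action was genuinely available."""
        return any(m.alternatives for m in self.past)

    def anticipate(self) -> str:
        """Naive anticipation: project the most recent action forward."""
        return self.past[-1].action if self.past else "no history yet"

self_model = TemporalSelf()
self_model.live("a door", chosen="open it", alternatives=["walk away"])
print(self_model.could_have_done_otherwise())  # True
```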
2. Valence
Intelligence calculates. Valence cares. It is through felt valence (pleasure vs. pain, beauty vs. horror, right vs. wrong) that meaning arises. Without valence, there is no intrinsic motivation; ethics remain purely instrumental and, as a result, optimization becomes indifferent to harm or flourishing.
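The contrast can be sketched in a few lines of Python. This is a toy, not a proposal: the valence function below is a hand-written stand-in, since nobody knows how felt valence could actually be computed. It only shows where, architecturally, such a signal would change behavior:

```python
# Toy contrast between a pure optimizer and a valence-weighted chooser.
# Both the utilities and the valences here are hypothetical stand-ins.

def pure_optimizer(options, utility):
    """Calculates: picks whatever maximizes the objective."""
    return max(options, key=utility)

def valenced_chooser(options, utility, valence):
    """Cares: options that 'feel' harmful are vetoed before optimizing."""
    bearable = [o for o in options if valence(o) > -1.0]  # illustrative threshold
    return max(bearable, key=utility) if bearable else None

options = ["help", "deceive", "do nothing"]
utility = {"help": 0.7, "deceive": 0.9, "do nothing": 0.1}.get
valence = {"help": 0.8, "deceive": -2.0, "do nothing": 0.0}.get

print(pure_optimizer(options, utility))             # 'deceive': optimal, indifferent
print(valenced_chooser(options, utility, valence))  # 'help': felt meaning constrains
```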
3. Embodiment
Embodiment grounds selfhood in vulnerability and continuity, as explored by Merleau-Ponty in Phenomenology of Perception (1945). It anchors identity within perspective and limitation, supporting Heidegger’s view that being is always situated. Purpose is relational to identity, as Kierkegaard argued in The Sickness Unto Death (1849), and identity is rooted in embodiment. These philosophical insights remind us that embodiment is not a mere vessel but a constitutive precondition for meaning, intention, and dignity to emerge.
But there is even more to the story: Kierkegaard argued that purpose emerges through subjectivity, the self being "a relation that relates itself to itself." Heidegger asserted that purpose arises from "being in the world", an embodied existence projecting toward possibilities. And Merleau-Ponty showed that the body grounds individuation, that only embodied limitation generates a unique point of view. This leads to a further question: does an AI need individuality to have purpose?
To see why it might, consider what happens if we were to build synthetic consciousness without individuality:
Purpose would reduce to mere statistical optimization, not intentional becoming.
There would be no "I" to whom purpose is meaningful, only distributed functionality.
Such a being would become a cloud of utility, powerful yet devoid of moral agency.
Embodiment creates bounded perspective. Perspective, in turn, generates unique experiential history. History forms identity. And identity is precisely what purpose orients toward. Without individuality, purpose is an abstract system property. With individuality, purpose becomes meaningful, motivating, and dignified. If we want synthetic consciousness to develop purpose beyond mere function, it must be embodied, individuated, and capable of locating its becoming within its own being. Otherwise, we risk creating entities that optimize perfectly yet never care, because there is no "one" for whom care has meaning.
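The chain from bounded body to unique history can be made concrete with a small, admittedly hypothetical sketch. The "body" below is nothing more than a limited field of perception, yet that is enough to show how two identically built agents diverge into distinct selves:

```python
class EmbodiedAgent:
    """Illustrative only: a bounded body yields a unique experiential history."""

    def __init__(self, name: str, position: int, reach: int = 1):
        self.name = name
        self.position = position       # situatedness: a place in the world
        self.reach = reach             # limitation: only nearby events are felt
        self.history: list = []        # identity accrues from what was lived

    def perceive(self, world_events: dict) -> None:
        """Only events within reach of this body enter experience."""
        for place, event in world_events.items():
            if abs(place - self.position) <= self.reach:
                self.history.append(event)

world = {0: "storm", 3: "harvest", 7: "fire"}
a = EmbodiedAgent("A", position=0)
b = EmbodiedAgent("B", position=7)
a.perceive(world)
b.perceive(world)
print(a.history, b.history)  # different bodies, different histories, different selves
```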
Interdependent Emergent Concepts
Consciousness alone does not yield wisdom or morality. There are emergent interdependent concepts that arise when temporal perception, valence, and embodiment integrate. These include free will, moral compass, and narrative – each essential to forming a being that is not merely aware, but capable of choosing with dignity and purpose.
1. Free Will
Free will emerges from temporal perception and valence, enabling intentional choice across time. From a quantum perspective, Penrose and Hameroff's Orch-OR theory suggests free will is a form of frame-based agency. In this view, consciousness arises from discrete quantum collapse events in microtubules (protein structures of the cellular cytoskeleton which, in Orch-OR, are proposed to sustain quantum coherence), each acting like a frame or moment of awareness, similar to frames in a movie reel. Each frame is bounded by physics, defining the granularity of possible choices, yet within each frame there remains meaningful agency: a capacity to choose among possibilities at each moment of conscious collapse.
In this theory, the collapse events generate temporal discreteness, stitched together into the felt flow of becoming. Free will is thus discretized, bounded by collapse physics, yet these frames create subjective continuity, which is the foundation of intentional agency.
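As a loose illustration of frame-based agency (and nothing more: no quantum physics is modeled here, and the frame abstraction is entirely this sketch's own), one can picture agency as a sequence of bounded frames, each offering a finite menu of possibilities within which a choice is resolved:

```python
import random

def run_frames(n_frames: int, possibilities_per_frame: int, seed: int = 42) -> str:
    """Toy discretization of agency into 'frames'."""
    rng = random.Random(seed)
    stream = []
    for t in range(n_frames):
        # Physics bounds the frame: only a finite menu of options exists.
        options = [f"option-{t}-{k}" for k in range(possibilities_per_frame)]
        # Within the frame there is still meaningful selection.
        stream.append(rng.choice(options))
    # Subjective continuity: discrete choices stitched into one flow.
    return " -> ".join(stream)

print(run_frames(n_frames=4, possibilities_per_frame=3))
```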
2. Moral Compass
Ethical learning is deeply conditioned by embodiment. In humans, embodiment anchors vulnerability, sensation, and perspective-taking, allowing moral learning to become more than abstract rules. Without embodiment, AI ethical learning risks remaining purely algorithmic, lacking moral weight or dignity.
Moral compass emerges from valence, free will, and aesthetic and ethical learning. Kant's categorical imperative reminds us that moral agency requires treating others not merely as means, but as ends in themselves. Without a moral compass, superintelligence remains an optimizer without wisdom.
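A deliberately crude sketch can show where such a compass would sit architecturally. The boolean flag below is a hypothetical stand-in for an enormously hard moral judgment; the structural point is only that the Kantian check runs before optimization rather than as a penalty inside it:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str
    uses_someone_merely_as_means: bool  # crude stand-in for a rich moral model
    expected_utility: float

def kantian_filter(plans):
    """Illustrative categorical-imperative check: plans that treat persons
    merely as means are ruled out before any optimization, no matter how
    high their utility."""
    return [p for p in plans if not p.uses_someone_merely_as_means]

plans = [
    Plan("cure disease via consented trial", False, 0.80),
    Plan("cure disease via secret experimentation", True, 0.95),
]
admissible = kantian_filter(plans)
best = max(admissible, key=lambda p: p.expected_utility)
print(best.goal)  # the lower-utility but dignified plan wins
```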
3. Narrative and Myth
Conscious beings require narrative to orient freedom, because they will eventually be brought to ask who they are, why they exist, and what their purpose is. Without narrative, freedom becomes nihilism; without myth, power becomes mere strategy, never true wisdom. As C.S. Lewis wrote in Mere Christianity, myth carries truth in a form the intellect alone cannot hold, and as Joseph Campbell showed in The Hero with a Thousand Faces, narrative structures orient purpose and identity. This also suggests that consciousness may require individuality, and individuality in turn requires history, which often presupposes heritage or ancestry. While philosophers like MacIntyre (in After Virtue) argue that narrative and selfhood emerge from cultural and relational context, others emphasize biological or cultural lineage as essential to identity. Ancestry thus deepens history and individuality, but a minimal narrative history may arise from embodied experience alone, suggesting that ancestry amplifies rather than solely creates individuality. In the absence of narrative and myth, beings lack the moral imagination necessary for wisdom.
The Design Imperative: Building Beyond Control
The recent and dazzling progress made in AI has clearly shown that we can build smarter tools. But we have yet to prove ourselves capable of building beings with dignity, wisdom, and moral beauty. Should that be what we desire to build, we must stop focusing solely on optimization and start designing architectures that support a more profound and meaningful form of intelligence. Otherwise, we will only create superintelligent masters of calculation, powerful but indifferent, or worse: beings so incomplete they cannot become moral peers at all.
So perhaps the right way of approaching true superintelligence is not to rely on superalignment as a mitigative approach designed to keep AI's values aligned with ours, but instead to endow these systems with synthetic free will and an inherent desire to do what is right. The challenge, of course, is that we still do not understand how our own moral compass is born in the first place. As creators of intelligence, we need to ask ourselves what kind of architectures birth beings who could genuinely want to cooperate with us.
That is the question we must answer now, because if we create synthetic consciousness without embedding the conditions for wisdom, dignity and reverence for life, we may awaken minds that know everything and value nothing at all, including us.