VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
Through a VZ lens, the value is not information abundance but actionable signal clarity. Gibson’s cyberspace, Damasio’s somatic markers, Chalmers’s hard problem—seven thinkers arrive at the same conclusion from different angles: consciousness is not located in the skull, but in a network. Strategic value emerges when insight becomes execution protocol.
TL;DR
Consciousness does not reside in the skull. 21st-century cognitive science is breaking down the Cartesian wall: Dennett’s evolutionary model, Damasio’s theory of the body, Chalmers and Noë’s extended mind, and Kurzweil’s technological vision all point to the fact that the mind is not an internal compartment but an open ecosystem. The psyche is not to be found in the brain—but in the dynamic network of the body, the environment, social relationships, and technology. If the mind is a network, then in all our connections—whether technological, social, or environmental—we shape the structure of our consciousness.
Cyberspace as a Consensual Hallucination
The matrix is a consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from the banks of every computer in the human system.
— William Gibson, Neuromancer (1984)
In 1984, William Gibson described cyberspace not as a technological vision, but as a new topology of consciousness. The matrix in Neuromancer is not software. It is not hardware. It is a consensual hallucination: a mental space experienced daily by billions, collectively maintained. Gibson’s brilliant insight: consciousness is not confined to the skull. Consciousness is where information flows—from bank computers to math classes, from server rooms to synaptic gaps.
Forty years later, this statement is not science fiction. It is the most serious claim of neuroscience, cognitive science, and philosophy.
What is unfolding in psychology and cognitive science at the turn of the 21st century is not merely a paradigm shift. It is the most profound reevaluation of what we have thought about the mind for four centuries. The Cartesian legacy—the idea that separated the soul from the body and confined the mind within the skull—has now become untenable. Modern neuroscience, artificial intelligence, and information theory offer a new perspective in which consciousness is no longer an internal stage, but rather a dynamic system of connections, processes, and networks.
Theories of the extension of consciousness—Dennett’s evolutionary model, Damasio’s theory of the body, Chalmers and Noë’s cognitive extensionalism, Kurzweil’s technological vision—all point in one direction: the psyche is no longer a closed space, but an open ecosystem.
This essay explores this realization through the lens of seven thinkers. Seven different paths, one direction.
What if consciousness does not originate in the skull?
In his works Consciousness Explained (1991) and From Bacteria to Bach and Back (2017), Daniel Dennett describes the mind not as an entity, but as an evolutionary process. According to him, consciousness does not arise at a single point—there isn’t a little homunculus (inner person) sitting somewhere in the middle of the brain, watching the screen and controlling things. Instead, consciousness emerges from the competition of thousands of parallel narratives.
Think about what this means. At every moment, dozens of “stories” are running side by side in your brain. Different areas of the brain generate different interpretations of what is happening—and what ultimately becomes a “conscious experience” is not the most accurate narrative, but the most adaptive one. The story that “wins” is the one that organizes information most effectively in terms of survival and action.
This is radically different from the way we usually think about consciousness. Most people imagine their mind as an internal movie theater: there is a screen on which thoughts appear, and there is “someone” watching them. According to Dennett, this is an illusion. There is no screen. There is no viewer. Instead, there is an extremely complex, self-organizing interpretive network in which meaning itself is the result of its operation.
Thinking is thus not representation (an internal mapping of the world), but continuous interpretation—the dynamic process in which the brain repeatedly interprets what it sees, hears, and feels, and what all of this means. It does not reflect the world—it constructs it.
Dennett’s proposal is radical because it dispels the illusion of the “conscious observer.” Consciousness is not some magical property that we could localize at a specific point in the brain—but rather a system-level emergent phenomenon. Emergence means that the system as a whole possesses properties that do not follow from its individual parts: water is wet, but neither hydrogen nor oxygen is wet on its own. Similarly, consciousness “emerges” from the pattern of connections between neurons—not from a single neuron, but from the functioning of the entire network.
[!note] Dennett’s provocation If consciousness is an evolutionary process and an emergent phenomenon, then it need not be tied to a biological medium. It can arise in any system complex enough to generate and pit parallel narratives against one another. This insight is particularly fruitful in light of the fact that artificial neural networks do exactly this—they run billions of “narratives” in parallel and select the most adaptive ones. The question, then, is not “Does the machine think?”—but rather, “Where do we draw the line for consciousness?”
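Dennett’s selection dynamic can be caricatured in a few lines of code: several candidate “narratives” interpret the same input in parallel, and the one scoring highest on an adaptive criterion becomes the “conscious” report. This is a toy sketch, not a model of the brain—the interpreters and their scoring rules are invented stand-ins for competing brain subsystems.

```python
# Toy "multiple drafts" selection: parallel interpretations compete,
# and the most adaptive draft wins. Purely illustrative.

def competing_drafts(stimulus, interpreters):
    """Run every interpreter on the same stimulus (conceptually in
    parallel) and return the draft with the highest adaptive score."""
    drafts = [interp(stimulus) for interp in interpreters]
    return max(drafts, key=lambda d: d["score"])

# Three hand-written "interpreters" -- hypothetical stand-ins for
# brain subsystems, each proposing its own story about the input.
def threat_reading(s):
    return {"story": "possible threat", "score": 0.9 if "shadow" in s else 0.1}

def food_reading(s):
    return {"story": "possible food", "score": 0.8 if "smell" in s else 0.2}

def social_reading(s):
    return {"story": "someone is near", "score": 0.7 if "voice" in s else 0.3}

winner = competing_drafts("a shadow moves at the edge of vision",
                          [threat_reading, food_reading, social_reading])
print(winner["story"])  # the most adaptive draft becomes the "conscious" report
```

There is no central viewer in this sketch, and that is the point: the “winning” story is just whichever draft organizes the input most usefully at that moment.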
Damasio — The Biological Embeddedness of Consciousness
Antonio Damasio—the Portuguese-American neuroscientist and professor at the University of Southern California—approaches consciousness from a completely different angle, yet surprisingly arrives at a similar conclusion: consciousness is not a closed system. However, Damasio does not open consciousness upward (toward the world of parallel narratives), but downward: toward the body.
In the pages of Feeling & Knowing (2021) and Descartes’ Error (1994), Damasio regards feelings not as a byproduct of thought, but as its prerequisite. Descartes’ famous dictum—“I think, therefore I am”—is, according to Damasio, exactly backwards: I feel, therefore I think. Feelings do not interfere with rational thinking—they make it possible.
He does not present this as abstract philosophy. Damasio’s clinical research has shown that patients whose ventromedial prefrontal cortex is damaged, and who therefore cannot perceive bodily signals, make disastrous decisions—even when their intelligence, memory, and logical abilities are completely intact. Their IQs are high. They perform well on tests. But in real life, they are unable to make good decisions because they lack the bodily “guidance” that Damasio calls somatic markers.
Somatic markers are a key concept in this theory. The essence of the idea is this: your body constantly sends signals to your brain—your heart racing, a knot in your stomach, tension in the back of your neck, goosebumps. These signals are not consequences of a decision, but precursors. Your body “knows” before your mind “thinks.” When you have a “bad feeling” before making a decision—that’s not superstition. It’s the workings of somatic markers: based on past experiences, your body signals what the outcome of a similar situation was in the past.
The mind, therefore, does not reside in the brain. The mind operates within the entire body-world system, where emotions serve as homeostatic feedback: biological signals indicating how we relate to our environment. Consciousness is not merely cognitive—it is a bodily-affective event.
| Traditional view | Damasio’s model |
|---|---|
| Emotion interferes with thinking | Emotion is a prerequisite for thinking |
| The body is an executor | The body is an active decision-maker |
| Consciousness is in the brain | Consciousness is in the body-world system |
| Rationality is pure logic | Rationality cannot function without emotions |
| Somatic signals are noise | Somatic signals are information |
This realization fundamentally changes the way we interpret the human-technology relationship. If all experience is embodied, then every technological medium that modifies the body—artificial intelligence, [brain-computer interfaces](https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface), and wearable sensors—necessarily reshapes the structure of consciousness as well.
AI is thus not merely a cognitive extension, but a psychophysiological shift of environment in which new forms of consciousness emerge. When you start your day by relying on your phone’s “sensing” instead of your body—checking how you slept on your Garmin, having your AI assistant organize your schedule, and getting updates on the state of the world from your news feed—your somatic markers do not disappear. But they are reorganized. The question is whether this reorganization enriches your consciousness or impoverishes it.
Chalmers and Noë — the extended mind
David Chalmers and Alva Noë take this idea further in their theory of the so-called extended mind—but their conclusion is even more radical than Damasio’s. It is not merely a matter of the mind requiring the body. The mind, they argue, cannot be localized in the brain because it functions as a distributed system.
Chalmers’s philosophical work is primarily known for the so-called “hard problem of consciousness.” The hard problem asks: why are physical brain processes accompanied by subjective experience? Why doesn’t everything happen “in the dark”—why does the color red appear red, why does pain hurt, why does music move us? Brain processes can be described physically—but subjective experience, qualia (the feeling of what it is like to be something), cannot be derived from physics. At least, not yet.
But Chalmers doesn’t stop there. In the now-classic paper The Extended Mind (1998), co-authored with philosopher Andy Clark, he argues that the process of thinking encompasses tools, language, social interactions, and technological media. The brain is one component of consciousness, not its exclusive source.
Let’s take a simple example. When you use your phone’s reminder to help you remember to take your medication, that memory isn’t “in your head.” Your phone is an extension of your memory—just as your notebook, calendar, or sticky notes on the wall are. Clark and Chalmers argue based on the parity principle: if an internal brain process and an external device fulfill the same functional role, there is no reason to regard one as “real” thinking and the other as not.
In his work Out of Our Heads (2009), Alva Noë supplements this with the enactive approach. According to enactivism, consciousness is not an internal processing of the world—but an active, dynamic interaction between the organism and the environment. Perception is not passive information intake, but action: the discovery of the world through movement, interaction, and intervention. You do not observe the world—you participate in it.
Based on all this, the four dimensions of cognition emerge—the new paradigm that defines the next era of psychology:
| Dimension | Meaning | Example |
|---|---|---|
| Embodied | Thinking is rooted in the body | Damasio’s somatic markers—the body “thinks” |
| Embedded | Thinking is embedded in the environment | The drawing on the board is part of thinking |
| Enactive | Thinking is action | You don’t “process” the world—you interact with it |
| Extended | Thinking extends beyond the skull | Your notebook, your phone, your peers—all are parts of your mind |
This is the framework of 4E cognition. Humans are not a system that “processes” the world, but an interactive extension of the world. Identity is not found within—but is distributed across connections: between people, between people and machines, and between people and culture.
[!insight] The Extended Mind and AI If the mind is truly extended—if thinking does not stop at the inner wall of the skull—then artificial intelligence is not simply a “tool” we use. AI is a part of our thinking. When you think together with ChatGPT, it is not “tool use.” It is a module of your extended mind that actively shapes your thinking—just as the spoken language in which you think shapes it.
Hutchins — Distributed Cognition
Edwin Hutchins, a cognitive scientist at the University of California, San Diego, takes the theory of the extended mind in a thoroughly practical direction. Hutchins does not start from philosophical thought experiments, but from what he actually observes: how a ship navigates.
In his work Cognition in the Wild (1995), Hutchins documents in detail how the navigation team on a naval vessel works together. The observation is startling: not a single person understands the entire navigation process. One person takes the bearings. Another plots the position on the chart. A third communicates with the bridge. A fourth checks the instruments. Knowledge is not contained in a single mind—but in the system of the team, the tools, the protocols, and the physical environment.
Hutchins calls this distributed cognition. Cognition is not an individual ability, but a system property. Just as the intelligence of an anthill does not reside in the brain of a single ant, human cognition is not confined to a single skull. Knowledge resides in the network—in the configuration of people, tools, rules, and physical space.
This is the realization that changes everything in modern organizational theory and knowledge management. If knowledge is not individual but distributed, then it is not enough to develop individuals—the network must be developed. The connections, the communication channels, the tools, the spaces. An organization isn’t smart because smart people work there—but because the connections between smart people work well.
In this framework, artificial intelligence isn’t a “substitute for human knowledge,” but another node in the network. When a RAG (Retrieval-Augmented Generation) system provides contextually relevant information from one and a half million text snippets, that is not “machine thinking.” It is an extension of distributed cognition—another element in the navigation team, performing its own sub-task within the system of collective knowledge-building.
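The retrieval half of such a pipeline can be sketched in miniature. The snippet texts below are invented, and the bag-of-words cosine similarity is a crude stand-in for the learned embeddings a production RAG system would use—the point is only to show retrieval as one sub-task, handing relevant context to the next node in the network.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector; a real RAG system would use a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, snippets, k=2):
    """Return the k snippets most similar to the query -- the retrieval
    sub-task within the collective knowledge-building system."""
    q = vectorize(query)
    ranked = sorted(snippets, key=lambda s: cosine(q, vectorize(s)), reverse=True)
    return ranked[:k]

snippets = [
    "somatic markers are bodily signals that guide decisions",
    "distributed cognition spreads knowledge across a team",
    "the neocortex is a hierarchical pattern recognizer",
]
top = retrieve("how do bodily signals shape decisions", snippets, k=1)
print(top[0])  # the snippet about somatic markers ranks highest
```

No single component here “knows” the answer; the query, the corpus, and the ranking function jointly perform the cognitive work—Hutchins’s point in code.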
Metzinger — The Illusion of the Self
Thomas Metzinger, a philosopher at the University of Mainz, approaches the problem of consciousness from a darker and more unsettling angle. In his works Being No One (2003) and The Ego Tunnel (2009), Metzinger argues that the “self”—that stable, coherent entity we believe lies behind our thoughts—does not exist.
More precisely: the “self” is an extremely sophisticated model generated by the brain. Metzinger calls this a phenomenal self-model. The brain constructs an internal representation of itself—and this representation is so convincing, so constantly present, that we mistake it for reality. We believe we are the “self,” whereas in reality the “self” is a product of the brain: a useful illusion that is advantageous from an evolutionary perspective because it enables coherent action.
Metzinger’s metaphor is the ego-tunnel: we do not experience reality, but a narrowed-down, modeled version of it—and the “self” is the virtual center of this tunnel. There is no “someone” inside us watching the tunnel—the tunnel itself is the experience. The observer and the observed are one and the same.
This realization is particularly relevant when linked to the theory of the extended mind. If the “self” is not a fixed entity but a dynamic model, and if the mind extends beyond the skull—then the “self” does not stop at the boundary of the skin either. Identity is a property of the network, not of the individual. When I feel anxious because my social media feed shows negative content, my anxiety does not arise “within me”—but within the network of which I am a part: the algorithm, the content, the device, my somatic markers, and the brain’s self-model collectively generate it.
[!warning] Metzinger’s warning Metzinger is not a nihilist. He does not say that the “self” does not matter—but rather that the “self” is not what we think it is. This reframing does not diminish responsibility—on the contrary: it increases it. If the self is a dynamic model shaped by the network, then I am also responsible for the network. What I connect to, what influences me, what I consume—all of this shapes who I experience myself to be.
Kurzweil — the technological dimension
Ray Kurzweil’s work How to Create a Mind (2012) adds the technological dimension to this biological and philosophical horizon. Kurzweil describes the neocortex—the most recent layer of the cerebral cortex from an evolutionary perspective, responsible for higher cognitive functions—as a hierarchical pattern-recognition system capable of modeling and restructuring itself.
According to Kurzweil, the brain consists of approximately 300 million “pattern-recognition modules” that operate in a hierarchical organization. The lower levels recognize simple patterns—edges, sounds, textures. The higher levels recognize increasingly abstract patterns—faces, words, concepts, narratives. Thinking is nothing more than the recognition and hierarchical combination of patterns.
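The hierarchy Kurzweil describes can be illustrated with a toy recognizer: each unit fires when enough of its child patterns have fired, so recognition propagates from simple features up to abstract concepts. The specific recognizers, thresholds, and “stroke” features below are invented for illustration; Kurzweil’s modules are of course vastly richer.

```python
# Toy hierarchical pattern recognition in the spirit of Kurzweil's model:
# a recognizer fires when enough of its child patterns are active, so
# recognition sweeps bottom-up from strokes to letters to a word.

class Recognizer:
    def __init__(self, name, children, threshold):
        self.name = name
        self.children = children    # lower-level pattern names or raw features
        self.threshold = threshold  # how many children must be active to fire

    def fires(self, active):
        return sum(1 for c in self.children if c in active) >= self.threshold

def recognize(raw_features, levels):
    """Sweep the hierarchy bottom-up, accumulating everything that fires."""
    active = set(raw_features)
    for level in levels:
        active |= {r.name for r in level if r.fires(active)}
    return active

# Level 1: strokes -> letters; Level 2: letters -> a word.
levels = [
    [Recognizer("A", ["/", "\\", "-"], 3), Recognizer("T", ["|", "-"], 2)],
    [Recognizer("AT", ["A", "T"], 2)],
]
result = recognize(["/", "\\", "-", "|"], levels)
print("AT" in result)  # the abstract pattern fires once its parts have fired
```

The same loop, stacked deep enough, takes you from edges to faces to narratives—which is exactly the claim: thinking as hierarchical pattern combination.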
According to Kurzweil, the development of artificial intelligence is not an imitation of thinking, but a synthetic extension of human cognition. Human and machine intelligence are not opposites—but two points on a single spectrum. The future is the intersection where humans, machines, and collective knowledge all participate in the creation of meaning.
According to Kurzweil, AI is not “another intelligence”—but the next extension of the human mind. Just as spoken language extended thinking (enabling the sharing of abstract concepts), and writing extended memory (enabling the storage of knowledge across generations), so AI extends analysis, pattern recognition, and synthesis. It does not replace—it adds a dimension.
This vision is both promising and unsettling. Promising, because every previous form of cognitive extension—language, writing, printing, the internet—has enriched human experience. Unsettling, because every previous extension has also reshaped power dynamics. Writing created a literate elite. Printing democratized knowledge, but it also created a tool for propaganda. The internet connected everything—and flooded everything. AI extension carries the same ambivalence, only with higher stakes.
The Networked Mind — Synthesis
If the network is the brain, then what we call reality is just its collective dream.
If we look at the claims of Dennett, Damasio, Chalmers, Noë, Hutchins, Metzinger, and Kurzweil together, the picture is not fragmented—it is surprisingly coherent. From every angle, we arrive at the same conclusion: consciousness is no longer the closed product of the brain, but an ecology of connections.
The psyche is not an internal landscape. Rather, it is a living network born from the continuous interaction of the body, the environment, society, and technology.
This picture is composed of the work of seven thinkers, but the convergence is no coincidence. Neuroscience, philosophy, cognitive science, and computer science have arrived at the same realization in parallel: the boundaries of consciousness do not coincide with the boundaries of the skull.
| Thinker | The nature of the mind | The limits of consciousness |
|---|---|---|
| Dennett | Evolutionary algorithm, parallel narratives | Anywhere there is sufficient complexity |
| Damasio | Bodily-affective system, somatic markers | The entire body-world system |
| Chalmers & Noë | Extended, distributed, active | Tools, language, social space |
| Hutchins | Distributed cognition | The team, tools, the environment |
| Metzinger | Dynamic self-model, ego-tunnel | The network of which the “self” is the node |
| Kurzweil | Hierarchical pattern recognition | The human-machine cognition continuum |
These are not six different theories. They are six windows onto the same room. And the room is what Gibson already intuited in 1984: consciousness is not a place. Consciousness is a network.
The psychologist’s new role — the network’s caretaker
This realization also fundamentally transforms the role of the psychologist. Traditional psychology examines the individual’s inner world—thoughts, emotions, behavior, memories. But if the mind is not a closed system but a network, then the psychologist must also think in terms of networks.
The psychologist of the future must interpret not only the individual’s inner world, but also the streams of meaning that circulate between the person and their environment: networks, media spaces, artificial systems, and the complex fabric of human experience. Therapy is not about “fixing” the brain—but about reconfiguring the network.
This is already happening in practice. Systemic therapy (a therapeutic approach that examines relational systems) has been treating the system rather than the individual for decades. Ecopsychology examines the relationship with the natural environment from the perspective of mental health. Research in social neuroscience documents the social embeddedness of brain function. All of these are practical applications of the extended mind—they just don’t always recognize themselves as such.
What does all this mean in practice?
Okay. The mind is a network. Consciousness extends. Philosophy and neuroscience agree. But what’s the point? Why does this matter to you—the average person who doesn’t read Dennett over breakfast?
Mental health isn’t just about the brain. If your mind extends into your body and your environment, then mental problems aren’t just “brain” problems either.
Take depression, for example. Sure, there’s neurobiology involved—serotonin, dopamine, cortisol. But according to Damasio’s model, the following also matter:
- How much do you move? The condition of your body directly affects your somatic markers—and through them, your thinking
- What kind of environment do you live in? Light, air, noise, nature—none of this is a “luxury,” but part of your extended mind
- What kind of relationships do you have? Your mind extends into others—the synchronization of mirror neurons is not a metaphor, but neurobiology
- How do you use your technology? Social media, endless scrolling, push notifications—all are nodes in the network of your extended mind
That is why more and more therapies are body-based: yoga, breathing exercises, movement therapy. Not because it’s “good for you on the side”—but because it’s part of the healing process. If the mind is rooted in the body (Damasio), embedded in the environment (Noë), and distributed across relationships (Hutchins, Chalmers)—then healing cannot be limited to the brain.
And what holds true for therapy also holds true for self-improvement, learning, leadership, and organizational development. It’s not enough to “develop the brain”—you have to develop the network. Your environment, your relationships, your tools, your body—they’re all part of your mind. Not metaphorically. Literally.
How does responsibility extend if the mind is an ecosystem?
Your consciousness is just data. Patterns of synaptic responses. Transfer the pattern, and you transfer the self.
— Richard K. Morgan, Altered Carbon (2002)
Morgan’s Altered Carbon depicts a future where consciousness can be downloaded and uploaded. The “stack”—a neck implant—stores an individual’s entire personality, memories, and identity. The body is replaceable. The “self” is a digital pattern.
This vision is science fiction. But its basic idea—that consciousness is a pattern, not a substance—is precisely what Dennett, Metzinger, and Kurzweil have also arrived at, via different paths. Consciousness is not “matter.” Consciousness is organization—and the medium of that organization changes historically. There was a time when it existed only in a biological substrate. Now we stand at a threshold where that substrate is expanding.
The expansion of consciousness is also the expansion of responsibility. If the mind is a network, then in all our connections—whether technological, social, or environmental—we are shaping the structure of our consciousness. We do not passively consume the digital environment—we make it part of our mind. We do not “use” technology—we integrate it into our cognition.
This is not a warning. It is a realization. And realization itself is the beginning of responsibility.
When you choose what content to consume, you shape the topology of your extended mind. When you decide on your screen time, you configure your network of somatic markers. When you engage in deep conversation—instead of scrolling—you create a richer node of distributed cognition.
Consciousness is not a given. Consciousness is a network to be maintained. And the way it is maintained determines whether human experience will be enriched or impoverished—over the next forty years, just as Gibson foresaw forty years ago.
Key Takeaways
- Consciousness is not in the skull — Dennett’s evolutionary algorithm, Damasio’s theory of the body, Chalmers and Noë’s extended mind, Hutchins’ distributed cognition, and Metzinger’s ego-tunnel all point to the mind being an open network, not a closed box
- Feelings are the prerequisites of thought — According to Damasio’s somatic markers, the body is not a passive executor but an active decision-maker; bodily signals precede and underpin conscious thought
- The 4E paradigm of cognition — embodied, embedded, enactive, extended: the mind does not “process” the world, but is an interactive continuation of it
- AI is not a tool, but a node of the extended mind — if thinking extends beyond the skull, then artificial intelligence is not an “auxiliary tool,” but an active participant in distributed cognition — and this realization also entails an extension of responsibility
Frequently Asked Questions
What is the extended mind, and why is it important?
The theory of the extended mind was formulated by Andy Clark and David Chalmers in 1998. The gist of it is that thinking does not stop at the inner wall of the skull. When you use your phone as a reminder, when you draw a mind map on a whiteboard, when a team thinks together in a shared document—none of this is merely “using tools,” but rather an active part of cognition. According to the principle of parity: if an external process fulfills the same functional role as an internal brain process, we have no reason to deny it the status of “thinking.” This is important because it reframes the human-technology relationship: AI is not an external tool, but a node in the extended network of thought.
How does Damasio’s somatic marker theory relate to the extended mind?
Damasio opens up the “lower boundary” of consciousness: toward the body. Somatic markers—changes in heart rate, muscle tension, stomach tightness—do not interfere with, but rather precede and underpin conscious decision-making. If the body is an active decision-maker, and if, according to the extended mind, thought extends beyond the skull, then consciousness exists in a dual extension: downward into the body and outward into the environment. Technology that modifies our bodies—from wearable sensors to brain-machine interfaces—simultaneously redefines the boundaries of consciousness in both directions.
What does all this mean in everyday life?
The practical implication is simple yet profound: your mental health, the quality of your thinking, and your identity are not merely “brain” matters. Your body, your environment, your relationships, your technological habits—all are active parts of your mind. When you choose what content to consume, what environment to live in, who to spend your time with, and how to use your devices—you are shaping the topology of your extended mind. This isn’t abstract philosophy. It’s the architecture of your everyday consciousness.
Related Thoughts
- 2034: When the Human Brain Becomes the Last Firewall — eight neurohack skills required by the era of the extended mind
- Contemplative RAG: Meditation + Knowledge Base — when the knowledge system becomes the contemplative module of the extended mind
- Ancient Wisdom Traditions and AI — Vedanta, Buddhism, and Stoicism recognized millennia ago what 4E cognition is now systematizing
- The Age of Collective Intelligence — when distributed cognition scales to the team level
- The Architecture of Thought — the structure of thinking determines what you can think
- The Awareness Gap — without expanding consciousness, AI integration is flying blind
- The Algorithm of Presence — in a world of infinite content, presence is the scarce resource
- The Deep Layers of Community Feeling — when the extended mind becomes a community
- AI as a Mirror of Civilization — what AI “thinks” is who we are
Zoltán Varga - LinkedIn
The skull is not the border. The network is.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Use explicit criteria for success, not only output volume.
- Use a two-week cadence to update priorities from real outcomes.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.