VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
In VZ framing, the point is not novelty but decision quality under uncertainty. There are three levels of tacit knowledge, and AI is already breaking through the first two. The third—knowledge embedded in the body—will not be destroyed by AI, but by our inaction. The real leverage is in explicit sequencing, ownership, and measurable iteration.
TL;DR
There are three levels of tacit knowledge: unspoken rules, community knowledge, and embodied knowledge. AI replaces the first, erodes the second, and cannot reach the third. But if we don’t practice, it is not AI that will kill the third; it is our own inaction. This article digs deeper: it examines how technology is not merely a tool for change but an environment that fundamentally transforms how we acquire, preserve, and pass on knowledge that cannot be put into words. The question is one of survival: not survival against AI, but the survival of the values inherent in our own human capabilities.
The Window of the Shelter
The wooden beams creak in the cold. The embers in the fireplace are now just red-hot, and the air in the room is sharp and clear. Outside, in the darkness of dawn, the line of the mountain ridge stands out, white with snow. I hold the mug in my hands; rings of steam from the hot tea rise into the air. I watch them disperse. There is a knowingness in my body as I hold the mug—the careful distance from the heat, the precise pressure of my fingers. No one taught me this, yet it is there. I sit by the fire and think about all that dwells in this silence, in this movement, which I have never put into words. It is like the mountain itself: unspoken, yet present.
The Pastry Chef’s Hand: An Introduction to the Three Dimensions of Tacit Knowledge
Tacit knowledge can be broken down into three levels: unspoken rules, communal knowledge, and embodied knowledge. AI makes the first level explicit, atomizes the second, and cannot reach the third—but a lack of practice can. The question is not what AI knows, but what we still do by hand.
In the Gerbeaud confectionery workshop, a master is tempering chocolate. The thermometer reads 31.2 degrees, but the master isn’t looking at the thermometer. He’s watching the surface of the chocolate. He touches it with his finger. “Not yet,” he says. Two minutes later: “Now.”
What changed in those two minutes? Nothing a sensor could measure. Something forty years of experience can sense. This is the pinnacle of embodied, somatic tacit knowledge. As Collins (2010) expanded on Polanyi’s (1966) seminal work, there are three distinct types of tacit knowledge:
- Relational tacit knowledge: knowledge that could be articulated if someone asked (e.g., an organization’s internal rules). This is the least tacit kind.
- Collective (community) tacit knowledge: common sense and knowledge of how the social world works (e.g., knowing how to drive in traffic).
- Somatic (bodily) tacit knowledge: embodied knowledge that lives in the body itself (e.g., riding a bicycle).
Nonaka and Takeuchi, pioneers of Japanese knowledge management, called the shared space in which such knowledge flows “ba”: a context where knowledge is transferred without ever being put into words. The master does not teach the apprentice. The apprentice stands beside the master and slowly absorbs what cannot be spoken. This process, however, takes place within a physical and social context, in a shared practice. What happens when that context is broken down and atomized?
What is the first level, and how does AI replace it?
In most organizations, there’s a layer of knowledge that’s never been written down, but everyone follows. How to speak to your boss differently than to a client. In what order to present the elements of a proposal. When not to reply to an email immediately.
LLMs extract these patterns from billions of text samples. What was previously tacit is now explicit. This first level has been broken—and that’s not necessarily a bad thing. Articulating unspoken rules democratizes knowledge. A junior employee can instantly access behavioral patterns that would previously have taken years to internalize within the organizational culture.
But here lies a deeper paradox. As one observation puts it: “Today’s chess algorithms are taught only the basic rules of the game. They learn everything else on their own… AI is not stupid…” AI, therefore, not only reads the unspoken rules between the lines but can also learn new patterns on its own, and even generate new rules. The question is: as we make this relational tacit knowledge explicit, do we lose the subtle context in which it gained meaning? The rules of email etiquette differ between a startup and a multinational corporation. AI can state the rule, but it cannot necessarily convey the rule’s meaning, the underlying cultural and social understanding. That understanding belongs to the next level: communal knowledge.
How does AI atomize communal knowledge, and why is this dangerous?
The second level is what no single person can grasp—only the group. The coffee-break conversation after a design meeting, where the real ideas are born. The code review, where the senior developer isn’t fixing the code, but the thinking. The bedside consultation, where tone of voice speaks louder than words.
AI fragments this level. When everyone individually asks the AI about their own problem, community interaction—and the knowledge embedded within it—disappears. Not intentionally. It’s simply no longer necessary to get together to get an answer. The fundamental process of building collective tacit knowledge—what Collins calls “common sense” or “knowledge of how the world works”—is disrupted.
Imagine a design studio where people used to debate, sketch, and build on each other’s ideas at the whiteboard. Today, everyone works on their own computer, optimizing their own tasks with the help of their own AI assistant. The end result may be more efficient, but where is the collective “aha!” moment, when the group as a whole figures out something that no one could have arrived at individually? Collective knowledge is not merely the transfer of information; it is built through shared practice, shared challenges, and coordinated action. AI replaces this process rather than enriching it. A second observation sharpens the warning: “AI is already capable of independently creating works of art and making scientific discoveries.” If scientific discovery increasingly becomes the work of individual AI agents, then the debates around the lab bench, the eye contact, the desperate enthusiasm, all the things that fuel collective creativity, will disappear.
What remains unstoppably human: the third level of knowledge inscribed in the body
The pastry chef’s touch. The surgeon’s hand. The pilot’s reflexes. This is the kind of knowledge that can only be acquired through action, and can only be retained through action. Collins calls this somatic tacit knowledge: a kinesthetic memory encoded in our bodies. This is when your hand “knows” how to correct the balance of a bicycle without you consciously calculating the laws of physics.
AI cannot achieve this—because it has no body. It has no sense of touch, no kinesthetic memory, no decades of movement patterns. No matter how advanced a robot is, the sensitivity of the human body, its fine motor skills, and its complex, analog interaction with the environment are currently beyond its reach. This level is safe. At least from AI.
But not from a lack of practice. A pilot who hasn’t flown manually for years loses their feel for the controls. The pastry chef who relies solely on machines for tempering forgets the knowledge of the fingers. The surgeon who performs more and more robot-assisted operations risks dulling the finest senses of their own manual dexterity. It is not AI that takes this knowledge away; it is inaction. We fall into the trap of convenience, efficiency, and short-term savings. A related idea: “Baby algorithms must learn to doubt themselves, to signal their uncertainty.” Yet one of the greatest strengths of knowledge embedded in the body is precisely its ability to handle uncertainty through the body’s intuitive reactions. Only constant practice keeps that ability alive.
What else do you practice by hand in the age of AI? The philosophy of practice
Three levels. The first is broken, and that’s okay. We’re losing the second one now, and that’s going to hurt. The third, AI cannot touch; only our own inaction can.
So the question shifts. It’s not what AI can do. The question is: what do you still practice by hand? This isn’t a nostalgic backlash against technology, but strategic self-defense. Preserving knowledge embedded in the body is like practicing a language: if you don’t use it, you lose it. But how can we practice intentionally in a world that wants everything to be as fast and automated as possible?
- Intentional Slowing Down: Choose an element of your work that AI could do, but that you do by hand. This could be a hand-drawn summary sketch, estimating a complex calculation in your head, or even drafting an important email with pen and paper.
- Seeking physical crafts: Find an activity that uses your body and hands: pottery, woodworking or metalworking, gardening, learning a musical instrument. These are not hobbies, but practices for reestablishing our cognitive-experiential connection.
- Protecting collaborative practices: Fight to ensure that certain decisions and creative processes are based not on individual AI consultations, but on genuine, in-person collaboration. Bring back the whiteboard, physical engineering models, and joint problem-solving sessions.
We can, as one formulation suggests, speak of “the goals and decisions of computers, algorithms, and chatbots” without attributing consciousness to them. But the purpose and meaning of human action often lie within the action itself, in the dialogue between the body and the world. If we abandon this dialogue, we lose more than a skill. We lose the connection that defines who we are. Ultimately, the question is not technological but existential.
Key Takeaways
- There are three types of tacit knowledge: relational (unspoken rules), collective (community knowledge), and somatic (embodied knowledge)—according to Collins’s (2010) classification.
- AI replaces and democratizes relational knowledge, atomizes and weakens collective knowledge in the absence of shared practice, and cannot reach somatic knowledge, because it has no body.
- The greatest threat to somatic (embodied) knowledge is not AI, but the lack of practice, the passivity lulled by comfort.
- The most important question from the perspective of personal and organizational strategies: What else do you intentionally practice with your hands and body to preserve the knowledge that cannot be digitized?
Frequently Asked Questions
What are the three types of tacit knowledge?
The three types, following Collins’s (2010) classification: (1) Relational knowledge: what you know from years of experience and could explain if asked (internal rules, mental models). (2) Collective (social) knowledge: what you know from social situations and group practice (the nuances of conversation, collaborative problem-solving). (3) Somatic (bodily) knowledge: what your body knows but you cannot describe (cycling, playing a musical instrument, manual crafts). What this article calls “expert intuition” stems mostly from the intertwining of embodied and communal knowledge.
Which does AI know and which does it not?
AI can simulate and make explicit a significant portion of relational knowledge through pattern recognition. It atomizes the components of collective knowledge, disrupting the shared practice that is a prerequisite for its transmission. Somatic (bodily) knowledge remains inaccessible to it for now, because it has no experience of a physical body. The most important human abilities therefore remain those that require physical presence and communal practice.
How can we preserve communal knowledge in the age of AI?
We must intentionally create space for non-digital, personal collaboration. Examples include: in-person brainstorming, mentoring relationships based not only on the exchange of information but also of experience, and team exercises where AI is a tool rather than the central participant. The goal is to preserve the “ba”—the space of shared presence.
Is the loss of embodied knowledge a threat only to skilled workers?
Not at all. We all possess embodied knowledge: handwriting (which carries cognitive benefits), reading on paper (which engages spatial memory), even the communication of complex thoughts through body language. These are subtle but fundamental connections between our thinking and our bodies. If we do everything through digital, passive interfaces, these connections weaken.
Related Thoughts
- The Polanyi Paradox: Tacit Knowledge
- Tacit Knowledge of Coding (SECI)
- The Meaning of Friction: Learning in the Age of AI
Zoltán Varga · LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG Architect • PKM & AI Ecosystems | Neural Awareness • Consciousness & Leadership
Not all latent space is conscious space.
Strategic Synthesis
- Convert the main claim into one concrete 30-day execution commitment.
- Set a lightweight review loop to detect drift early.
- Review results after one cycle and tighten the next decision sequence.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.