VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
From the VZ perspective, this topic matters only when translated into execution architecture. In his 1966 book The Tacit Dimension, the Hungarian scholar Mihály Polányi argued that we can know more than we can tell. Nearly sixty years later, AI is testing the boundaries of precisely this concept. The real leverage is in explicit sequencing, ownership, and measurable iteration.
TL;DR
The concept of tacit knowledge was coined by a Hungarian scholar, Mihály Polányi. Ironically, there is almost complete silence in the Hungarian-speaking world regarding how artificial intelligence breaks down and transforms this form of knowledge, which, by definition, is difficult to put into words. This article digs deeper: it presents the three interrelated levels of tacit knowledge and explains how large language models (LLMs) break down two of them, while the third, deepest layer is endangered by another, more insidious threat: the displacement of direct practice. The real paradox lies not in the power of AI, but in the fact that we fail to realize which level of knowledge we are losing—and while visible, measurable results improve, our invisible knowledge base is leaking away unnoticed.
Grandma Can’t Dictate a Recipe—and That’s the Point
My grandmother was rolling out dough. I asked her how she knew when it was thin enough. She looked at me as if I'd asked a pointless question. "I can feel it," she said. That was it.
She wasn’t being secretive. She really couldn’t explain it. The knowledge was in her fingertips, in the rhythm of her movements, in the subtle changes in the dough’s resistance. Decades of experience that were never put into words—because that’s not where they belong. This knowledge resides in action, in the body, and can only be “accessed” from there. If you try to extract a recipe from it, it’s like asking a poet to write down the chemical formula for emotions. You can try, but the essence—the experience—is always missing.
Mihály Polányi described exactly this in his 1966 work, The Tacit Dimension: "we can know more than we can tell." The concept of tacit knowledge was thus introduced to the world by a Hungarian scientist. Polányi was not merely a philosopher but a physicist and chemist who, drawing on his practical work in science, argued for the limitations of formal, expressible knowledge.
It is ironic that this topic is virtually a blind spot in Hungarian discourse, even as the technological revolution—which dominates the realm of formal knowledge—begins to undermine precisely this invisible foundation. What happens when a tool that feeds exclusively on the world of sayable things encounters that which is unsayable?
What Are the Three Types of Tacit Knowledge? A Layered Model
Tacit knowledge is not a homogeneous mass. It has three distinct, interrelated levels, and AI treats each of them qualitatively differently, with varying degrees of depth. As one source in the corpus puts it: “tacit knowledge consists of three constituting aspects: the phenomenal, the semantic, and the ontological aspects” [CORPUS]. We can translate this philosophical division into more practical categories in the world of work.
1. First Level: Unspoken Rules – What AI “Learns”
What is this? These are the practical patterns, heuristics, habits, and rules that we have never put into writing but follow almost automatically. How do we draft a sensitive email? In what order do we arrange the slides in a PowerPoint presentation to achieve the greatest impact? How do we broach a difficult conversation? These are patterns we’ve internalized from our environment, our successes, and our mistakes. They often seem algorithmic, but we haven’t written them down.
How does AI handle this? Large language models excel at this. In the vast ocean of training data, they seek out and consolidate precisely these unspoken yet common patterns. An LLM can easily compile a protocol for "effective client outreach" or a template for "constructive code review," because it has seen countless examples of both. As the corpus quotes: "knowledge engineers—whether they'd read Polanyi or not—had long recognized that professional skills are largely unstated. Indeed, the interview techniques they'd established had been developed precisely in order to uncover normally hidden inferences and assumptions" [CORPUS]. AI has essentially become an infinitely patient and comprehensive "knowledge engineer" that extracts these hidden rules.

Consequence: this level has effectively been broken. AI not only replaces it but often summarizes it more coherently and consistently than any human could. The loss here is imperceptible; it even looks like a gain: we get access to an optimized, "extracted" body of knowledge. But something disappears: the context of the rules and the personal experience of handling exceptions. AI provides a general template, but it does not convey when to break it.
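To make the "knowledge engineer" role concrete, here is a minimal sketch of this kind of rule extraction, assuming the OpenAI Python client; the model name, prompt, and example emails are illustrative placeholders, not a method this article prescribes.

```python
# Minimal sketch: an LLM acting as a "knowledge engineer" that turns
# first-level tacit patterns into explicit rules. Assumes the `openai`
# Python client (v1+) and an OPENAI_API_KEY in the environment; the
# model name, prompt, and example emails are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# A few worked examples stand in for the practitioner's unarticulated habits.
example_emails = [
    "Hi Anna, thanks for flagging this early. Before we commit, could we ...",
    "Hi Tom, I see where you're coming from. One constraint worth naming ...",
]

prompt = (
    "Below are emails by an experienced colleague handling a sensitive "
    "topic. Extract the implicit rules they appear to follow as a "
    "numbered checklist, and for each rule note one situation in which "
    "breaking it might be the better choice.\n\n---\n"
    + "\n---\n".join(example_emails)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)

# The output is an explicit, "extracted" version of the unspoken rules:
# useful, but stripped of the context the article says gets lost.
print(response.choices[0].message.content)
```

Asking for the exception cases makes the extraction's limits visible: the model can list plausible exceptions, but it cannot supply the lived judgment of when one actually applies.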
2. Second Level: Collective Knowledge – AI Atomizes It
What is this? It is the knowledge that exists not within an individual, but in the dynamic interactions, debates, and collaboration of a team or community. A controversial question on Stack Overflow, where the dialogue between answers and comments leads to the real solution. Pair programming, where deeper logic is shared through questions like “why not this way?” The quiet conversations in a lab during an experiment. This knowledge is created and updated through the process of exchange and social interaction.
How does AI handle this? AI does not reach this level; it replaces it and thereby atomizes it. Why go to Stack Overflow if ChatGPT instantly provides a ready-made code snippet? Why discuss an approach with a colleague if AI can generate three alternatives in seconds? Community-based, interactive knowledge sharing is being replaced by individual, asynchronous consultation. This high-bandwidth mode of transmission appears in a quote from the corpus: "The third strategy, the interactive, implies a far higher bandwidth than purely formal learning… the intelligently designed experimental apparatus enables the learner to alter his simulation, his perception" [CORPUS]. AI can serve as an "intelligent experimental apparatus," but it removes human interaction from the process, the very element that provides most of the context, the nonverbal cues, and the immediate adaptation.

Consequence: this level erodes. Imperceptibly in the short term, catastrophically in the long term. An entire generation of software developers will never take part in a heated Stack Overflow debate that would have taught them deep architectural knowledge. The building blocks of collective knowledge (dialogue, debate, collaborative problem-solving) fade away, replaced by a transactional question-and-answer dynamic. The knowledge remains, but the social process that maintains, refines, and recontextualizes it withers.
3. Third Level: Knowledge Embedded in the Body – AI Cannot Reach It, but It Clears the Path
What is this? My grandmother's fingertips. The dexterity of a surgeon who "feels" the tissue beyond what the surgical plan shows. The instinctive correction of an experienced pilot hitting an unexpected air pocket. A musician's improvisation. This knowledge has completely merged with the action. It did not arise from a set of rules; it is the body and mind's direct response to a complex situation. According to Polányi, this is the kind of knowledge through which we attend to something else ("we attend from something to something else"). While riding a bike, you don't focus on hand and body positions; you attend from that knowledge to the direction of travel.

How does AI handle this? This level is practically unreachable for it. AI has no body, no sensorimotor experience, and no physical interaction with the world. It cannot extract the knowledge of rolling out dough, because that knowledge does not exist in digital form anywhere. The AlphaGo example illustrates the same point: "Even in hindsight, Suleyman and his team could not determine how AlphaGo arrived at the decision to make that move in order to win" [CORPUS]. Even the AI's own decision-making processes, once they become tacit, remain inexplicable.

Consequence: here the paradox peaks. AI cannot steal this knowledge. But it can cause a lack of practice, which also leads to its loss. If pilots fly manually less and less because the systems are automated, their manual dexterity deteriorates. If young surgeons practice only with robotic assistants, their own fine motor skills never develop properly. AI can be a great tool, but if the tool becomes a crutch we rely on entirely, the knowledge embedded in our bodies will never develop, or will fade away.
Why don’t we realize which level of knowledge we are losing? The Polányi Trap
The real paradox is not that AI kills tacit knowledge. The original Polányi paradox concerns the inarticulability of knowledge. The paradox of our time is that we do not realize which level of knowledge AI is killing or replacing. This distortion is what we can call the "Polányi trap": we overvalue what is easy to measure and automate (the first level), and we underestimate what is hard to measure and lives in silence, in practice, and in community (the second and third levels), until its loss becomes a critical problem.
- The first level is gone, and we don’t miss it. AI perfectly replaces the collection of unspoken rules. The increase in efficiency is visible; the loss of knowledge is invisible. It’s like a master chef writing down a recipe: the recipe remains, but the “know-how” of his hands does not.
- The second level is now eroding, and it doesn’t hurt yet. Community platforms are quieter, meetings can be shorter because “we have the answer.” But the community’s collective intelligence, its ability to solve problems together, is weakening. As the corpus quotes the Polányi paradox: “we know more than we can tell… it’s this paradox that has, until recently, kept anyone from creating software that could play the game Go as well as the top human practitioners can” [CORPUS]. AlphaGo defeated the best human players, but it would not have been created without learning the deep strategies generated and passed down by the Go community. If we cut the machine off from the community, the source of future development will dry up.
- The third level is threatened by a lack of practice. This is the most insidious. Pilots who haven't flown manually for years lose their feel for the controls. Programmers who only review and modify AI-generated code never develop the deep debugging instinct that lets them deduce a root cause from strange behavior. It is not AI itself that takes away embodied knowledge; it is AI dependency that removes the path to acquiring and maintaining it.
How can you preserve your tacit knowledge? A conscious defense strategy
Ask yourself: what tacit knowledge do you use in your daily work? At what level does it exist? And if tomorrow an AI takes over half of your tasks—which of your skills will wither away because you no longer practice them?
- Protect the ecosystem of collective knowledge. Use AI not as a responder, but as a discussion partner. Present the ideas and code it generates to the team for discussion. Say no to working completely alone. Look for tasks that require working in pairs or groups, where AI is just one of the participants, not the center.
- Build “practice routines” for embodied knowledge. Identify the skills that stem from physical or deep cognitive practice. Schedule regular “hands-on sessions”: write code from scratch occasionally, sketch architectures by hand, and participate in training sessions that require manual dexterity. Use AI to build upon the fundamentals, not to skip learning them.
- Be aware of the Polányi Trap. When weighing your options, don’t just look at efficiency gains. Ask yourself: “If AI does this entirely, what will I never learn or will I forget?” Evaluate factors that are difficult to measure: team cohesion, the development of your own intuition, and your professional self-esteem.
Key Takeaways
- The Hungarian scholar Mihály Polányi coined the concept of tacit knowledge ("we can know more than we can tell"), yet Hungarian discourse hardly addresses the challenges this knowledge faces in the 21st century.
- Tacit knowledge has three distinct levels: (1) unspoken rules and patterns, (2) collective knowledge realized through community interactions, (3) practical skills embedded in the body and action.
- AI breaks down the first level (extracts and optimizes it), erodes the second (atomizes and replaces it), and cannot grasp the third, but the path leading to it (practice) is at risk of being lost.
- The greatest danger is the Polányi trap: we overvalue easily measurable, automatable knowledge while undervaluing the ineffable, collective, and embodied kind, until its loss becomes irreversible.
Frequently Asked Questions
What is the Polányi paradox? Mihály Polányi, a Hungarian philosopher of science and natural scientist, put it this way: "we can know more than we can tell." Tacit knowledge is a form of knowledge that cannot be fully put into words, formalized, or turned into algorithms, because it is embedded in experience, the body, and action. A classic example is riding a bicycle: you know how to do it, but describing exactly how you keep your balance is extremely difficult. The corpus states: "'we can know more than we can tell'. Economists called this constraint on automation 'Polanyi's Paradox'" [CORPUS].
Why is tacit knowledge important in the age of AI? Because AI primarily works with explicit knowledge, and its impact on the different levels of tacit knowledge varies radically. (1) It learns unspoken rules and reproduces them effectively, which is useful, but the context is lost. (2) It atomizes the transfer of collective knowledge (e.g., online forums, teamwork), which weakens the capacity for community learning in the long run. (3) It cannot take away embodied knowledge, but it can let the path to it (regular practice) wither if we rely on AI completely. The paradox: the greatest damage occurs where we notice it the least.
How can I preserve my tacit knowledge?
- Conscious practice: Identify the parts of your work that require deep, ingrained skills (planning, critical thinking, fine motor tasks), and schedule time for regular practice when you turn off the AI.
- Cultivate collective knowledge: Be an active member of learning or professional communities. Teach others, whether formally or informally. Use AI in a team setting for discussion, not just for generating individual answers.
- Watch out for the Polányi trap: Avoid focusing your performance evaluation solely on results that are easily imitated and measured by AI. Value and strive to preserve the processes (collaboration, creative exploration, trial and error), not just the end result.
Related Thoughts
- The Three Types of Tacit Knowledge
- Tacit Knowledge in Coding (SECI Model)
- What the vibe coder doesn’t know they don’t know
- The three types of tacit knowledge in the age of AI
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Use explicit criteria for success, not only output volume.
- Use a two-week cadence to update priorities from real outcomes.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline measurement and a prioritized sequence of improvements.