VZ editorial frame
Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
In VZ framing, the point is not novelty but decision quality under uncertainty. Code generation speed can mask the erosion of tacit engineering competence. Teams need explicit learning loops to keep capability compounding. The practical edge comes from turning this into repeatable decision rhythms.
TL;DR
Vibe coding—writing code “by feel,” using AI, without testing or understanding—is currently at its peak. But decades ago, Mihály Polányi described why tacit knowledge cannot be bypassed: because you don’t know what you don’t know. AI generates code. But why it works—or why it doesn’t—is the layer that vibe coding doesn’t see.
A Keyboard’s Memory: The Two Faces of Knowledge
Szentendre Artists’ Colony, an autumn morning. I watch a sculptor’s hands as he shapes the clay. He doesn’t think about the movements—his hand “knows” what to do. He’s been doing this for thirty years. The touch, the pressure, the formation of the shape don’t stem from theory, but from knowledge embedded in the body, gathered over decades. This knowledge is so deeply ingrained that it’s almost impossible to put into words. The sculptor knows, but cannot teach it in a single description. It can only be passed on directly, through example, through practice.
On the train home, I recall a post by Andrej Karpathy. Karpathy, a founding member of OpenAI and former director of AI at Tesla, introduced the concept of “vibe coding”: coding where you “forget the code exists” and simply let the AI handle the implementation. “Vibe-based coding,” he says. “I don’t even write tests; if something doesn’t work, I copy the error message, and it usually fixes itself.”
This is where the contrast becomes sharp. There is a crucial, almost metaphysical difference between the sculptor’s hand and the vibe coder’s hand. The sculptor knows what he knows: he possesses an internal map drawn by experience, even if he cannot fully describe its legend. The vibe coder’s situation is the opposite: they don’t know what they don’t know, and they don’t even suspect it. They feel no sense of lack, because AI immediately fills the apparent gap with a functioning (or seemingly functioning) piece of code. One lives with the uncertainty of whether their knowledge is complete; the other lives in the false security of not knowing what is missing.
This phenomenon is not new. One quote from the corpus offers a perfect parallel: “The Getty Museum in Los Angeles invited leading experts in Greek sculpture to examine a kouros… The experts consistently reacted with what is known as an ‘intuitive aversion’—the strong feeling that the kouros was not 2,500 years old, but a modern forgery. None of them could immediately say why they thought the statue was a forgery.” [UNVERIFIED] The experts, like the sculptor, relied on their deep, tacit knowledge. The vibe coder, by contrast, is like an art dealer who checks the statue with a sophisticated scanner: the scanner signals “genuine,” but the dealer lacks the subtle intuition that would raise suspicion of a forgery. Machine validation is convenient, but the underlying, inarticulable sense of certainty is missing.
Why Can’t We Skip Over Tacit Knowledge? The Deep Layers of the Polányi Paradox
Mihály Polányi (known in English as Michael Polanyi), a Hungarian-born philosopher, articulated his fundamental paradox in 1966: “We know more than we can say.” This is tacit knowledge. But what is it, really? It is not merely a “feeling.” It is a complex cognitive infrastructure woven from experience, pattern recognition, bodily memory, and contextual understanding. Riding a bicycle, recognizing a face, making a diagnosis: these are all forms of knowledge that we use but cannot break down into explicit rules without losing their essence.
In software development, this tacit knowledge does not lie in a knowledge of syntax, but in the infinitely complex network of decisions that exists in the mind of an experienced developer. Nicholas Carr cites the Polányi paradox in The Glass Cage: “Since a software program is essentially a precise, written sequence of instructions—do this, then that, then this—we assumed that computers could replicate skills based on explicit knowledge, but would struggle with tacit knowledge.”
Herein lies the fundamental fallacy of vibe coding. It assumes that code = explicit knowledge. Write down the desired behavior (the “vibe”) in natural language, and the AI spits out the implementation with the correct syntax. It’s like believing that a chef’s recipe is equivalent to their culinary artistry. The recipe specifies the ingredients and steps (explicit knowledge), but it doesn’t convey a sense of cooking time, the intuitive amount of salt, or the knowledge of when to deviate from the recipe (tacit knowledge).
An experienced programmer’s knowledge lies not in what they write down, but in what they don’t write down: why they choose this particular architecture for a given scalability problem; why they avoid a certain design pattern in a given context; what compromises lie behind a seemingly simple solution; how they “sniff out” potential errors from the code’s structure. This knowledge cannot be fully algorithmized, because it involves a vast number of context-dependent variables.
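To make the "what they don't write down" point concrete, here is a hedged Python sketch: a few-line retry helper whose every line encodes a tacit decision. The function name and parameter values are illustrative assumptions for this article, not a reference implementation.

```python
import random
import time

def retry(op, attempts=5, base=0.1, cap=2.0):
    """Call op() until it succeeds or attempts run out.

    The code is short, but each choice below is tacit knowledge made visible:
    why a retry cap, why exponential growth, why jitter.
    """
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # surface the final failure; never swallow it silently
            # Exponential backoff *with jitter*: without jitter, many clients
            # retry in lockstep and hammer a service that is trying to recover.
            # The cap keeps worst-case waits bounded.
            time.sleep(min(cap, base * 2 ** i) * random.random())
```

None of these decisions appear in a one-line prompt like "retry this call", yet each one is exactly the kind of compromise an experienced developer makes without deliberation.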
Herbert Simon, the Nobel Prize-winning cognitive psychologist, points precisely to this in his theory of intuition. The corpus quotes: “There is a code in the situation; with the help of the code, the expert finds the information stored in their memory, and this information provides the answer. Intuition is nothing more and nothing less than recognition…” [UNVERIFIED] The experienced developer’s brain contains thousands upon thousands of “codes” linked to past problems, solutions, and consequences. When faced with a new problem, instead of reasoning linearly like a beginner, their brain immediately recognizes patterns and accesses these mental codes. Vibe coding attempts to bypass this process, but in doing so, it deprives the developer of the opportunity to build these mental connections.
Why Have Developers Become 19% Slower? The Anatomy of Cognitive Friction
A 2025 randomized study by METR produced a startling yet illuminating result: experienced open-source developers who used AI coding tools (primarily agent-style assistants) took 19% longer to complete tasks than when they worked without them. At the same time, these same developers estimated that AI had sped them up by roughly 20%.
This is not merely a statistical anomaly—it is a living, measurable embodiment of the Polányi paradox in workplace performance. How is this possible?
- The Hidden Burden of Validation: A vibe coder doesn’t write code; instead, they primarily evaluate and validate. Checking every single line of AI-generated code, ensuring it fits the context, and searching for errors places a massive cognitive burden on the developer. They don’t build; they check. This process is slower than writing code based on one’s own, fully thought-out mental model because it requires constant context switching: from my own thoughts to the generated code, back to the requirements, and back to the code again.
- Lack of a Mental Model: When you write the code yourself, a mental model of how the system works develops in parallel. This model enables quick debugging and future expansions. With AI-generated code, this model is shallow or absent. The developer uses a “black box” that they do not fully understand. When something goes wrong, there is no deep understanding to fall back on, so the fix takes us back into the coding cycle: another prompt, another validation.
- The Illusion of False Fluency: AI’s instant responses create a false sense of knowledge: the machine is fast, so I must be fast too. This misunderstanding stems from confusing knowledge with access. A Google search is also quick access to information, but it doesn’t make you an expert on the subject. Because the machine responds instantly to every request, the developer feels effective at the level of tacit experience. But system-level, architectural decisions, which are precisely the ones guided by tacit knowledge, are made more slowly and more uncertainly, because there is no deep foundation to rely on.
A line from Merleau-Ponty’s phenomenology comes to mind: “Embodied intelligence—tacit knowledge—arises from the close connection between perception and action.” Vibe coding severs exactly this link: it outsources the action (code generation), while the duty of perception (understanding) remains with the human. This separation opens a small cognitive gap every single time, and it is the sum of these gaps that accounts for the 19% slowdown.
Another fascinating example from the corpus highlights the consequences of a lack of understanding: “The thirty-seventh move demonstrated the incomprehensibility of AI. Even in hindsight, Suleyman and his team could not determine how AlphaGo arrived at the decision to make this move in order to secure victory.” [UNVERIFIED] The vibe coder is in exactly this situation: they see the code (move 37), it works (leading to victory), but they have no idea why this was the optimal move. If the game board (the requirements) changes, they won’t be able to adapt the strategy.
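The same dynamic appears at a mundane scale. Below is a hedged, invented Python sketch of code that passes a casual check, the kind a vibe coder would accept, yet hides a pitfall an experienced reviewer would feel before they could articulate it.

```python
# A deliberately "AI-plausible" snippet: it looks correct on a one-off check,
# but the default list is created once at function definition and is shared
# across every subsequent call.
def add_tag(tag, tags=[]):            # bug: mutable default argument
    tags.append(tag)
    return tags

print(add_tag("a"))                   # ['a']       -- looks correct
print(add_tag("b"))                   # ['a', 'b']  -- state leaked from the first call

# The fix an experienced developer reaches for without deliberation:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []                     # fresh list per call
    tags.append(tag)
    return tags

print(add_tag_fixed("b"))             # ['b']
```

A single validation run on `add_tag("a")` shows a "working" signal, just like the art dealer's scanner; only accumulated pattern recognition flags the shared state before it corrupts production data.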
Key Takeaways: What You Don’t Know Rules You
- The fundamental misconception of vibe coding is that code is merely explicit knowledge—a translatable specification. In reality, code is the frozen, visible trace of an implicit knowledge process.
- Polányi’s paradox (“We know more than we can say”) is not merely a philosophical observation. In vibe coding, its inverse holds: “We understand less than our code reveals.” The generated code can be a hollow symbol, syntax with no comprehension behind it.
- The 19% slowdown in the METR study is not a flaw in the technology but a reflection of how cognition works. The gap between the illusion of speed (the perceived roughly 20% acceleration) and the measured slowdown is a direct measure of the chasm between tacit and explicit knowledge.
- The ultimate question will never be whether AI writes code. It will be: when the code must face the chaotic complexity of reality, who will be the one to understand why that code was written and how it needs to be transformed? Those who follow only the vibes will find themselves in the middle of a stormy sea without a compass.
Frequently Asked Questions: Further Clarification
What does the vibe coder not know that they don’t know?
According to the Polányi paradox, an experienced professional “knows more than they can say.” The vibe coder sits at the other end of this spectrum: they understand less than what their code shows. AI writes syntactically correct instructions, but does not convey the underlying, context-building, pattern-recognizing tacit knowledge. The vibe coder doesn’t know what is missing from their own mental model, because that model was never built in the first place. As one part of the corpus reminds us: “The first lesson every algorithm should learn is that it can be wrong. Baby algorithms must learn to doubt themselves…” [UNVERIFIED] The vibe coder, too, must learn to doubt their own AI-dependent understanding.
Why is this a problem if the code works?
Because software development isn’t about creating a static product, but about managing the lifecycle of a system that exists in a changing environment. The code works as long as the circumstances match what the AI was trained on. When it breaks—and in real, complex systems, it’s not a matter of “if” but “when”—the vibe coder finds themselves on a foreign planet. They can’t pinpoint the error because they don’t understand the system’s internal logic. They can’t come up with a creative solution because they lack the deep knowledge needed to create new combinations. The fix will once again consist only of prompts and validation, accumulating ever-deeper technical debt.
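A small, invented Python example of exactly this failure mode: code that "works" for the inputs it was sampled against, then breaks the moment reality drifts, and only an understanding of why rescues it.

```python
# Sorting version strings: fine for single-digit versions, which is
# plausibly all the examples the generated code was validated against.
versions = ["1.2", "1.9", "1.10"]

# Lexicographic sort looks right on "1.2" vs "1.9"...
naive = sorted(versions)
print(naive)             # ['1.10', '1.2', '1.9'] -- "1.10" sorts before "1.2"

# ...but versions are tuples of integers, not strings. The fix requires
# knowing *why* the naive version fails, not just that it does.
def version_key(v):
    return tuple(int(part) for part in v.split("."))

correct = sorted(versions, key=version_key)
print(correct)           # ['1.2', '1.9', '1.10']
```

Until release "1.10" ships, the naive code passes every check; the environment changed, the code did not, and without the mental model the only recourse is another prompt.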
How can we use AI without falling into the trap of vibe coding?
The answer lies in deliberate practice and the “high-level augmentation” model. Use AI:
- As a critic, not an author: Ask it to analyze your own code, suggest alternatives, and point out potential errors.
- As a teacher: Have it explain how the generated code works, and ask about the underlying principles. Build your own mental model.
- To automate repetitive tasks, not to outsource critical thinking. Write the architecture and main logic yourself, and let the AI help with boilerplate code or documentation. The goal shouldn’t be to know as little as possible, but to use AI to gain higher-level knowledge—just as a calculator helps you master advanced math, not forget how to do mental arithmetic.
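The "critic, not author" and "teacher" patterns above can be sketched as a simple prompt-building helper. The function name and prompt wording are illustrative assumptions for this article, not any vendor's API.

```python
def build_critique_prompt(code: str) -> str:
    """Wrap code you wrote yourself in a review-oriented prompt.

    The human stays the author; the model is asked to explain, question,
    and compare, never to take over the writing.
    """
    return (
        "Review the following code I wrote myself.\n"
        "1. Explain what it does and why it works.\n"
        "2. Point out potential errors and edge cases.\n"
        "3. Suggest alternatives and name their trade-offs.\n"
        "Do not rewrite it for me.\n\n"
        f"```\n{code}\n```"
    )

prompt = build_critique_prompt("def double(x):\n    return x * 2")
print(prompt.splitlines()[0])
```

The point of the fixed wording is the role reversal: every answer the model gives feeds the developer's own mental model instead of replacing it.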
Related thoughts
- The Polányi Paradox: Tacit Knowledge
- Vibe Coding: The Next Chapter in Deskilling
- Tacit Knowledge of Coding (SECI)
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect • PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
You can’t vibe what you can’t name.
Strategic Synthesis
- Define one owner and one decision checkpoint for the next iteration.
- Track trust and quality signals weekly to validate whether the change is working.
- Iterate in small cycles so learning compounds without operational noise.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.