VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
From a VZ lens, this piece is not for passive trend tracking: it is a strategic decision input. AI projects don’t fail because of resistance. They fail because of competing priorities. Kegan’s model explains what Gartner cannot. Its advantage appears only when converted into concrete operating choices.
TL;DR
In his “Immunity to Change” model, Robert Kegan explains what Gartner’s 80% failure rate does not: AI projects do not fail because of resistance, but because the organization wants to change AND remain unchanged at the same time. An invisible immune system of competing commitments protects the status quo.
The Café Window
I sit behind the wet glass, watching the droplets wind their way down the pane. The steam from my coffee blends with the view; the city is gray and hums softly. On the table next to me, a laptop is open, displaying a blank milestone plan. Outside, a runner dashes through a puddle, and the wet asphalt reflects in the ripples left behind by his shoes. Inside, in the warmth, my thoughts meander in a similar way—not forward, but in circles. Every new endeavor begins this way: with a blank canvas, a promise. Then something invisible begins to shape it, like this rain on the window. It is not resistance but something deeper: the formation of a pattern that lurks there from the very beginning, protecting the familiar, even when the words speak of change.
The Tokyo subway in the morning
I stand at the edge of the platform, beneath the city. The air is thick with the warmth of bodies and the smell of iron. The train arrives, its doors open, and a flood of people pours out. Another flood waits to get in. I stand motionless, surrounded by the flow of the crowd, but it doesn’t carry me away. I see the white shirts, the black suits, the gazes fixed outward. Everyone is moving in one direction, with a clear goal, but there is a tension in the air. It’s as if everyone is in a hurry, yet some invisible force is holding them back. The system works perfectly, and that is precisely the problem. I stand in front of the next door, waiting for the order to let me in. The question isn’t where we’re going. The question is why we can’t move differently within this system when the goal is so clear.
How does the structure prevent us from reaching the goal? The pull-up analogy
Gym, Wednesday night. Next to us, a man is trying to do a pull-up. He can’t do it. It’s not because he’s weak—he’s been training diligently for three months. But pull-ups require strengthening different muscle groups than the ones he’s been training so far. His old workout plan, focused on the chest and biceps, didn’t prepare him for a movement dominated by the back and the elbow flexors. The structure—the sequence of familiar exercises, the weights—implicitly determines which muscles develop. The man genuinely wants to do pull-ups, but the system he’s training in is secretly designed to serve a different goal: building an aesthetically pleasing upper body. His own training system stands in the way of his goal.
This is not just an individual parable, but an organizational one as well. Kegan’s model is precisely about this. The problem isn’t the individual’s shortcomings or lack of willpower. It’s the logic of the system—which carries hidden rules, commitments, and compromises that simultaneously strive forward AND backward. As Kegan and Lahey described in their laboratory: “Our lab is out in the world, in real work settings… where, in every case, courageous leaders—CEOs, senior managers… struggle with a hidden dynamic of countervailing forces that preserves and sustains itself for a very good reason.” [CORPUS].
This dynamic is not pathological; it is a defense mechanism. But when the external environment changes radically (e.g., with the emergence of AI), this internal immune system can become the organization’s greatest obstacle.
What is competing commitment, and why does it dominate organizational thinking?
Kegan and Lahey base their research on a fundamental observation: people (and organizations) do not fail to change because they do not want to. It is far more common for them to simultaneously pursue multiple, structurally conflicting goals without being aware of it. This is not hypocrisy, but a natural response to complex situations. Every important goal is accompanied by a defense mechanism for another important goal.
Let’s consider a typical organizational escalation:
- Executive Level: The CEO consistently wants both innovation (“Let’s be market leaders in AI”) AND risk aversion/control (“Not a single scandal should touch our brand; not a single violation of the rules should occur”). Control is not the enemy of innovation, but its bastion of defense in the leader’s mental map.
- Middle management level: The manager wants team satisfaction and productivity (“Implement AI to make your work easier”) AND the recognition and trust of senior leaders (“Let’s prove that all our processes remain fully transparent and controllable”). The latter often manifests as micromanagement, which makes the former impossible.
- Individual level: The professional wants to simultaneously leverage AI’s efficiency-boosting power AND preserve the social and financial value of their unique expertise and experience. The fear that AI will “eliminate” their job often leads to deliberate underutilization of the tool or criticism, rather than progress.
Every level sincerely believes it wants change. And every level sincerely, yet invisibly, defends the existing structure that provides it with security, meaning, identity, or power. This duality creates a structural tension that gives rise to the 80% failure rate indicated by Gartner.
How is AI different from all previous technologies, and why does it trigger the immune system?
As the corpus quote points out: “My investigations made two things clear. First, AI is substantially different from previous digital transformations… AI feels different. It’s coming fast; you know that. But you are probably much more apprehensive about this technology…” [CORPUS]. Previous digital waves (ERP, CRM, cloud) mainly optimized information flows and processes. AI is different: it simulates decisions, generates creative content, and replaces or supplements human cognitive functions. That is why it affects not only the how, but also the for whom and the why.
The AI project’s immune system is therefore not triggered by a flaw in the technology. The organization’s immune system reacts to disruption—because AI adoption directly threatens those deeply entrenched, stabilizing structures:
- A culture of hierarchy and control: Access to information and decision-making authority are directly linked to power. An AI that prepares a sales proposal or writes a risk analysis could bypass the middle management approval layer. The team leader won’t openly sabotage the process, but as the article’s example shows, they will introduce “additional quality assurance steps,” which restores control and negates the increase in efficiency. The immune system has successfully defended itself.
- The valuation of expertise: In an organization, pay, status, and respect are often tied to accumulated tacit knowledge. If an AI can simulate or document it, this threatens the expert’s identity and sense of security. The hidden commitment might be: “I want the team to be effective (Goal 1), BUT I am also committed to ensuring that my expertise remains indispensable (Goal 2).” This conflict can manifest in the form of “knowledge preservation” projects that require endless documentation, effectively paralyzing AI integration.
- The model of assigning responsibility: Who is responsible if an AI-based decision is wrong? Legal, regulatory, and ethical fears can serve not only as external constraints but also as pretexts for internal immune reactions. The hidden commitment to risk avoidance (“Let’s protect the organization and ourselves from taking responsibility”) can completely paralyze a statement supporting innovation (“Let’s adopt new technologies”).
How can this organizational immune response be identified and managed? Kegan’s methodology
Kegan and Lahey’s method is not about assigning blame, but about uncovering and understanding. The title of the chapter “PART I: UNCOVERING A HIDDEN DYNAMIC…” [CORPUS] also refers to this. The essence of the process is the conscious, collaborative uncovering of hidden competing commitments. This is not a source of shame—but rather a deeper understanding of how the organization functions.
In practice, this involves a group completing a four-step “Immunity Map”:
- The proposed change: E.g., “We will fully integrate the X AI tool into the sales process.”
- Behaviors we do not engage in (what we do not do): E.g., “We do not fully trust the generated proposals; we manually verify everything.” “We do not share all data with the AI model.”
- Conflicting commitments (the concerns behind point 2): E.g., “We are committed to not losing a single customer due to a potential error.” “We are committed to ensuring maximum security for customer data.”
- Underlying assumptions (fears, worldviews): E.g., “We assume that a mistake would be fatal to our relationship.” “We assume that the technology partner is unreliable.” “We assume that if the AI works perfectly, we will no longer need our human salespeople.”
Once the map is complete, the conflict becomes visible. The question is not “how do we force change?” but “which of our hidden commitments are truly important, and how can we safely preserve them while still moving forward?” For example: protecting customer data is critical. Instead of simply saying “we don’t share data,” we can establish strict data protection protocols and controls around the AI, enabling its safe use. This conscious decision resolves the immune response.
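For readers who want to operationalize the exercise, the four columns can be captured as a simple data structure. This is a minimal sketch under assumptions of my own: the class name, field names, and example entries are illustrative, not part of Kegan and Lahey’s formal method.

```python
from dataclasses import dataclass, field


@dataclass
class ImmunityMap:
    # Column 1: the proposed change / improvement goal
    commitment: str
    # Column 2: behaviors that work against the commitment
    counter_behaviors: list = field(default_factory=list)
    # Column 3: hidden competing commitments behind column 2
    competing_commitments: list = field(default_factory=list)
    # Column 4: big assumptions that hold the competing commitments in place
    big_assumptions: list = field(default_factory=list)

    def hidden_conflicts(self):
        """Pair each competing commitment with each underlying assumption,
        making the structural tension explicit and discussable."""
        return [(c, a)
                for c in self.competing_commitments
                for a in self.big_assumptions]


sales_ai = ImmunityMap(
    commitment="Fully integrate the X AI tool into the sales process",
    counter_behaviors=[
        "Manually verify every generated proposal",
        "Withhold parts of the data from the AI model",
    ],
    competing_commitments=[
        "Never lose a single customer to a potential error",
        "Guarantee maximum security for customer data",
    ],
    big_assumptions=[
        "A single mistake would be fatal to the relationship",
        "The technology partner is unreliable",
    ],
)

for commitment, assumption in sales_ai.hidden_conflicts():
    print(f"{commitment}  <=  {assumption}")
```

Making the pairs explicit turns vague “resistance” into a finite list of named tensions a group can examine one by one.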
The Cascade: The Chain Reaction of an AI Project’s Failure in the Real World
The impact of these immune reactions is not limited to the project level. They spread like a cascade throughout the organization and beyond:
- Digital/Manufacturing Level: The AI proof-of-concept (PoC) works, but scaling up is stalled.
- Corporate Level: Middle managers, whose performance is measured based on risk avoidance and control, unconsciously slow down or derail projects. Senior management is disappointed because they see no ROI.
- Workplace/Community Level: Employees’ anxiety over the loss of meaning in their work leads to low adoption and resistance. Experienced experts resist or leave. Knowledge sharing ceases.
- Local/Regional Economic Level: If this happens at many companies, the region falls behind in technological adoption, productivity growth stalls, and a long-term competitive disadvantage emerges. As another quote from the corpus clearly states: “The greatest lesson of history is that much of what we believe to be natural and eternal is in fact man-made and changeable. However, we must not be content with merely accepting this…” [CORPUS]. Failure can become a self-reinforcing cycle.
What should be done? Practically overcoming resistance to change
The success of an AI project therefore depends less on the chosen model and much more on the organization’s ability to resolve its own internal contradictions.
- Change the question: Don’t start with “How do we implement this?” Start with “What are we unwilling to give up for this change?” This question uncovers hidden commitments.
- Create an immunity map with key groups: Do this BEFORE the proof-of-concept, or at least in parallel with it. Technical testing and organizational self-awareness must go hand in hand.
- Plan experiments to test fears: The fourth column (“assumptions”) consists of hypotheses that need to be tested. For example, you can test the assumption “We assume that losing a customer due to an error is catastrophic” on a small, low-risk customer base where you can measure the actual reaction. These safe experiments break down inhibitions.
- Embrace the value of the immune system: Don’t view it as an enemy. This system has protected the organization’s stability, identity, and past success. The task is not to destroy it, but to develop and recalibrate it to suit a new era.
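As a sketch of what “testing a fear” can look like in practice, each fourth-column assumption can be tied to one measurable metric and one pre-agreed threshold before the pilot runs. Everything here is a hedged illustration: the class, the churn metric, and the 5% threshold are assumptions of this example, not prescriptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AssumptionTest:
    # The "big assumption" (column 4) being tested
    assumption: str
    # One concrete metric observed in a small, low-risk pilot
    metric: str
    # Pre-agreed threshold above which the fear counts as confirmed
    threshold: float
    # Measured value after the pilot; None until the experiment runs
    observed: Optional[float] = None

    def verdict(self) -> str:
        if self.observed is None:
            return "not yet run"
        if self.observed > self.threshold:
            return "fear confirmed: redesign safeguards"
        return "fear overstated: safe to expand the pilot"


churn_test = AssumptionTest(
    assumption="Losing a customer to an AI error is catastrophic",
    metric="churn rate in a low-risk pilot segment",
    threshold=0.05,   # hypothetical: the fear stands only above 5% churn
    observed=0.01,    # hypothetical 30-day pilot measurement
)
print(churn_test.verdict())  # -> fear overstated: safe to expand the pilot
```

Agreeing on the threshold before the experiment matters: it prevents the immune system from moving the goalposts after the data arrives.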
Key Takeaways
- The real reason most AI projects fail is not technological, but psychosocial and organizational: the clash of competing commitments.
- The organization wants innovation AND control, stability AND change, all at once. This is not bad faith, but a structural tension arising from the functioning of the immune system.
- The “immune system” is not an abstract metaphor. It protects concrete, hidden structures: decision-making hierarchies, the monopoly on expertise, and the customary distribution of responsibility.
- The solution is NOT coercive manipulation or persuasion. The solution lies in the collaborative exploration of competing commitments and the safe testing of the fears underlying them during the early stages of implementation.
- AI triggers a particularly strong immune response because it simulates not only processes but also human decisions, creativity, and value creation, and therefore affects the organization’s identity more deeply.
Frequently Asked Questions
What is Immunity to Change and how does it relate to AI?
Kegan and Lahey’s Immunity to Change model is a psychological framework that describes why individuals and organizations fail to change even when they consciously and sincerely want to. AI projects are one of the most transformative forces in the history of technology, which is why they are particularly effective at triggering this defense mechanism. The model shows that the cause of failure is often an internal system of conflicting goals, rather than external obstacles.
Why do AI projects fail if the technology works?
Precisely because the cause of failure is almost never technological. A working proof-of-concept answers a technical question. For real-world adoption, however, the organization must change: power dynamics, work methods, and areas of responsibility. Hidden assumptions (e.g., “AI will take our jobs,” “We don’t trust machine decisions”) and the power structures that defend them hinder real, widespread implementation. Technology is a statue; its implementation is like moving that statue to another, living city, where new rules and interests prevail.
What is the difference between “resistance” and “immunity”?
Resistance is an active, often conscious obstruction or opposition. Immunity is a property of a system: an invisible, automatic, self-protective reaction aimed at maintaining the status quo. While resistance can be argued against or fought, immunity can only be worked with—we must understand its cause and value before we can transform it.
Can this failure rate be avoided?
Not entirely, but it can be radically reduced. The key lies in expanding the project definition. An AI project cannot be merely a “technology implementation” project. It must simultaneously be an organizational development and change management project that dedicates time and resources to mapping teams’ immunity and resolving hidden conflicts.
Related Thoughts
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The system defends what the system denies.
Strategic Synthesis
- Convert the main claim into one concrete 30-day execution commitment.
- Set a lightweight review loop to detect drift early.
- Close the loop with one retrospective and one execution adjustment.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.