
Why do 90% of IT/AI projects fail?

90% of IT/AI projects fail not because of the code, but because an organization is a living system, not a blueprint. AI doesn’t bring order—it amplifies chaos.

VZ Lens

Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost. In the VZ framing, the point is not novelty but decision quality under uncertainty; the real leverage lies in explicit sequencing, ownership, and measurable iteration.

TL;DR

IT/AI projects don’t fail because of the code—they fail because the organization doesn’t respond as planned. An organization is a living system: it has its own reflexes, survival strategies, taboos, and reward mechanisms. AI doesn’t create order; it amplifies what is already there—both order and chaos—and it does so quickly, on a large scale, and with convincing confidence. As long as a company’s most critical integration layer is someone putting things together in Excel, every AI agent project is nothing but automated vulnerability.


The Terrace, Before Sunrise

I sit on the cold marble bench, in the silence of the Greek island. The scent of last night’s rosemary still lingers in the air. Before me, the sea is a dark, leaden-gray expanse, but a thin, orange streak is already brightening on the horizon. I hear the steady, slow pounding of the waves beneath the rocks. The light of a ship flashes in the distance, a tiny, moving dot in the infinite. I stroke the cold surface of the bench with my palm. Everything seems calm and predictable. Then the first ray of sunlight touches the water, and the entire landscape suddenly transforms—lights, shadows, movements I hadn’t anticipated. This is the moment between the plan and the living system: when something new kicks in, and everything starts moving differently.

Between the plan and the living system

Most IT/AI projects fail not for technological reasons, but because the organization reacts as a living system: with its own immune response, survival reflexes, and unspoken rules. AI does not improve this functioning; it amplifies it—both order and chaos.

Launching an IT or AI project appears from the outside to be a technological decision. In reality, it rewrites behavior. It enforces new rules, establishes new concepts, draws new lines of responsibility, builds new feedback loops—and in the process assumes that the organization will react exactly as outlined in the plan.

Most failures begin with the fact that the organization is not a “plan,” but a living system. With its own reflexes. Survival pathways. Taboos. Reward mechanisms. And a living system has its own immune response to anything that threatens its internal balance—be it new software, a new KPI, or a workflow supported by artificial intelligence.

The great paradox of digital transformation is that while AI technology is advancing rapidly, the majority of IT and AI projects still fail. The problem is rarely in the code. It is much more often in the operations.

AI as an Amplifier — Not a Savior

Many people envision AI as a smart layer that will “fix” operations. In reality, AI is an amplifier. It amplifies patterns, definitions, exceptions, workarounds, and unspoken rules—because it learns from them, optimizes for them, and automates around them.

If, within a company, “ready” means to development that it’s been developed, to sales that it’s usable, to operations that it’s monitorable, and to finance that it’s billable—then AI won’t create a unified meaning. The system will do what the organization actually does. Not what it says it does.

That is why the end result often seems “strange”: because the model reflects the company’s internal contradictions with ruthless precision. And it does so with a magnifying glass.

The Two Scenarios: in the case of order, AI scales order. In the case of chaos, it scales chaos—and it doesn’t do so gently, but quickly, on a large scale, and with convincing confidence. The difference between the two isn’t the technology, but what it builds upon.

What new failure modes does the introduction of an IT/AI system bring?

Introducing a new IT or AI system doesn’t just add new features. It creates new interactions and gives rise to new failure modes.

Most of these new failure modes don’t look like “something broke.” They look like two things that each work on their own but, together, don’t do what we’d expect. In this collaboration, factors that were previously masked by human buffers become critical: timing, ordering, delay, permissions, state management, exception handling. Who sees the data, and when. When something is considered complete. The same date can be a creation date, a completion date, or a posting date. The same status means something different to different teams.
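
A minimal sketch of this ambiguity in code; the record fields and the two team-specific readings of “done” are illustrative assumptions, not a reconstruction of any particular system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    status: str               # the shared label everyone uses: "done"
    created_at: date          # three different dates live on one record
    completed_at: date | None
    posted_at: date | None

def is_done_for_development(order: Order) -> bool:
    # Development: "done" means the work was finished.
    return order.completed_at is not None

def is_done_for_finance(order: Order) -> bool:
    # Finance: "done" means it has been posted and is billable.
    return order.posted_at is not None

order = Order("done", date(2024, 3, 1), date(2024, 3, 5), None)
assert is_done_for_development(order)   # development says yes
assert not is_done_for_finance(order)   # finance says no
# Same record, same label, two contradictory answers. A human buffer used
# to reconcile this quietly; an automated step has to pick one meaning.
```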

In the old way of operating, these differences were resolved by someone manually correcting them—flexibly, often without a word. In the new system, the machine has to decide, and this doesn’t make the system dumber, but more consistent. The organization, however, is often inconsistent.

In such cases, the implementation doesn’t just add something; it also takes away human leeway. That can be good, because it reduces chaos. But reality is messy, full of exceptions and of micro-decisions that people have been making quietly until now. If the project doesn’t account for that, the exceptions don’t disappear—they just move elsewhere. They move into emails, chats, Excel sheets, separate lists, and “I’ll handle it” workarounds.

In this case, the new system becomes the official reality. Alongside it, however, exists another reality that drives actual operations. This is the most costly state: everyone is doing double the work, and in the meantime, they even have to argue about which one is the real one.

The organization doesn’t react the way it was planned

Every company has an official process and a real process.

The official process is what’s in the documents. The real process is what the company actually does—when the house is on fire, when decisions need to be made quickly, when no one wants conflict, when “we’ll handle it this way for now,” when “we’ll fix it later.”

An IT/AI implementation lands on the actual process. On the actual reward system. On the actual fears. On the actual taboos. On the actual survival strategies. That’s why it happens time and again that the project team thinks they’re implementing a system—but the organization is actually acquiring a new tool for its old way of operating.

If heroic firefighting is what earns glory at the company, then the new system will also become a firefighting tool, and building long-term order will take a back seat. If mistakes are punished, they don’t disappear—they just hide—and the project doesn’t receive the most important information in time. If “green status” is what counts, then the dashboard becomes a status badge, and appearances become more important than the actual state.

In this case, it is not the technology that fails, but the organization’s own operational logic that swallows up the implementation.

AI holds up an even sharper mirror here. Because AI doesn’t just execute—it learns patterns. The workarounds, the unspoken rules, the “that’s just how we do things here” moves. If these become automated, the poor functioning doesn’t just persist—it scales.

The system fights back

The system typically reacts against any intervention. Not out of malice, but because it is protecting its own equilibrium.

You introduce a control—and a culture of circumventing that control emerges. You introduce a KPI—and KPI-gaming emerges, where the numbers look better while reality deteriorates. You introduce a new tool—and the organization figures out how to carry its old habits over into it.

An implementation is mature when this doesn’t surprise you, because you have factored it in. Backlash is not the exception, but the norm. If there’s no plan for the backlash, then what’s happening isn’t an implementation, but wishful thinking.

Excel, the Smoke Machine, and the Power of Information

One of the most fundamental questions of digital transformation is: where is the true data, and who can say why it is true?

When the answer to this is “Pisti puts it together in Excel,” then the company doesn’t have a data backbone—it has a smoke machine. Excel itself isn’t the problem. The problem arises when Excel isn’t a tool but a makeshift system. When the company’s most critical integration layer is someone manually piecing together reality.

In this case, the company doesn’t have a data system; it has stories. Three files, three “finals,” three different numbers. The filenames themselves are a diagnosis: final, végleges (Hungarian for “final”), mostmartényleg (“now it really is”), final 2. The names mean there is no single source, no single set of concepts, no single state model—and therefore reality is filled in by people.

This is the point where Excel becomes more than just heroism and operational debt: it becomes a source of informational power. Whoever puts Excel together isn’t just administering; they’re manufacturing reality. Whoever manufactures reality influences decisions. No malicious intent is required. It is enough that they see the exceptions, they know the loopholes, they know what the real number is.

This dependency has a very distinctive flavor: the report is not a query, but a request. The organization cannot query itself—only through people. The debate over numbers is not about which one is correct, but about whom we believe. This is no longer a data-driven decision, but a decision based on trust. And in this context, trust is not an abstract concept, but a concrete internal channel of power.

Without a data backbone, AI agents are deadly weapons

Many people now treat AI agents as if they were a smart workforce. But an agent doesn’t just make suggestions—it acts. It intervenes in processes. It changes statuses. It opens tickets. It writes to customers. It initiates orders. It approves. It automates.

Without a data backbone, there is nothing to build upon. An agent does not need “data,” but rather stable concepts, events, statuses, decision points, and exception handling. It must know what counts as a valid source. What is the baseline. What is an exception. What is an escalation. What is authorization. What is the definition of “done.” If these aren’t defined, the agent won’t help—it will spread the error on an industrial scale.
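
What “defined decision points and exception handling” can look like in practice is sketched below; the function and its four checks are hypothetical, chosen only to mirror the questions in the paragraph above:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()    # the agent may act on its own
    ESCALATE = auto()   # route to a named human owner
    REJECT = auto()     # refuse: no valid basis for acting

@dataclass
class AgentDecision:
    action: Action
    reason: str

def authorize(source_is_canonical: bool,
              status_is_defined: bool,
              within_authorization: bool,
              is_known_exception: bool) -> AgentDecision:
    # The agent acts only when the organization has answered the
    # questions above in machine-checkable form.
    if not source_is_canonical:
        return AgentDecision(Action.REJECT, "no valid source of record")
    if not status_is_defined:
        return AgentDecision(Action.REJECT, "'done' has no agreed definition")
    if is_known_exception or not within_authorization:
        return AgentDecision(Action.ESCALATE, "outside the agent's mandate")
    return AgentDecision(Action.PROCEED, "baseline case, authorized")

print(authorize(True, False, True, False))  # REJECT: undefined status
```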

Until now, malfunctions spread slowly, because humans were the bottleneck. The agent eliminates the bottleneck—and that is what scales the chaos.

The most dangerous self-delusion is that the model will “fix” reality. It won’t. The model learns the distorted reality and will consistently reproduce it. That is what makes it convincing. And that is what makes it dangerous.

What is a semantic bible, and why is it indispensable for AI?

Without a shared meaning, there is no stable measurement, no stable reporting, no stable automation—and in the case of AI, no stable learning.

The “semantic bible” is not just for show. It is infrastructure. A shared thesaurus and a shared semantic agreement. It defines what an order, inventory, reservation, shipment, closure, ready status, good data, bad data, responsible party, and owner mean. It defines what an event and a status are, what a decision point and administration are. It defines what counts as an error and what counts as a deviation.
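
One way to make such an agreement machine-readable rather than a shelf document is sketched below; the entry for “ready” and all field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Definition:
    term: str                  # the shared word, e.g. "ready"
    meaning: str               # the single agreed meaning
    owner: str                 # who may change this definition
    counts_as_error: list[str] = field(default_factory=list)
    counts_as_deviation: list[str] = field(default_factory=list)

GLOSSARY = {
    "ready": Definition(
        term="ready",
        meaning="Deployed, monitorable, and billable; not merely developed.",
        owner="operations",
        counts_as_error=["marked ready without monitoring in place"],
        counts_as_deviation=["ready, but billing deferred by agreement"],
    ),
}

# Every report, KPI, and agent resolves terms here, not in people's heads.
print(GLOSSARY["ready"].meaning)
```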

Until this is established, every KPI is a battle of interpretation, every report is a debate, and every IT/AI implementation is a political minefield. In such cases, the project is actually a language and process project; it is merely being pushed through under the guise of technology.

Why Are Feedback Loops the Nervous System of Operations?

Systems shouldn’t be managed; they should be kept alive through feedback.

An IT/AI project fails where feedback is slow, noisy, or inconsequential. The most expensive mistakes are typically not expensive because they are complex, but because they are discovered too late. If the organization makes a mistake today and notices it three months later, the correction is no longer a decision but a new project.

Just because something is called feedback doesn’t mean it’s actually been fed back. Feedback occurs when a signal from reality actually rewrites the next step.

Two bad extremes tend to emerge. One is fixing things too early: KPIs are chosen simply because something is needed—and from that point on, the organization optimizes them, even if they’re wrong. The other is drifting: “it’s still evolving,” so there’s no measurement—and the narrative becomes reality.

The mature solution is two-tiered feedback. You need a stable operational loop that tells the truth even when the goal is still changing—observability, safety of change, ability to roll back, decision turnaround time, number of dependencies, key person exposure, and the ability to localize errors. And you need a value loop, which is initially hypothesis-driven learning. The question here is not what the ROI is over six months, but what we have learned that will shape the next decision.
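
A compact sketch of the two tiers as concrete checks; the metric names and thresholds are assumptions for illustration, not prescriptions:

```python
from dataclasses import dataclass

@dataclass
class OperationalLoop:
    """Tells the truth even while the goal is still moving."""
    rollback_possible: bool
    decision_turnaround_days: float
    key_person_dependencies: int

    def healthy(self) -> bool:
        return (self.rollback_possible
                and self.decision_turnaround_days <= 5
                and self.key_person_dependencies <= 1)

@dataclass
class ValueLoop:
    """Hypothesis-driven learning, not a six-month ROI claim."""
    hypothesis: str
    observed: str
    next_decision_change: str  # what we will do differently next

ops = OperationalLoop(rollback_possible=True,
                      decision_turnaround_days=3.0,
                      key_person_dependencies=2)
print("operational loop healthy:", ops.healthy())  # False: key-person risk
```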

Key Takeaways

  • An organization is a living system, not a plan — it reacts to every intervention with its own reflexes, taboos, and survival logic, regardless of how elegant the plan may be
  • AI is an amplifier, not a savior — it scales order or chaos, depending on what it is built upon
  • Official and actual processes are not the same — implementation occurs within the actual process, and the organization creates a new tool for its old way of operating
  • Excel dependency creates informational power — whoever assembles reality doesn’t just administer it; they influence decisions
  • AI agents are dangerous without a data backbone — they spread errors on an industrial scale because they eliminate the human bottleneck
  • The semantic bible is not a decoration, but infrastructure — without a shared vocabulary, every IT/AI implementation is a political minefield
  • Feedback is only feedback if it has consequences — otherwise, it’s just another layer on top of the old way of operating


Frequently Asked Questions

Why do 90% of IT/AI projects fail if the technology is getting better and better?

Because the cause of failure is rarely the code—it’s almost always the operations. An organization is not a “plan” but a living system that protects its own equilibrium. When an AI implementation touches the organization’s actual processes—the actual reward system, the actual taboos, the actual survival strategies—the organization does not adapt to the system, but rather adapts the new tool to the old way of operating. AI reinforces this because it learns patterns: the workarounds, the unspoken rules, and the exceptions as well. The malfunction doesn’t just persist; it scales.

What is the semantic bible, and why is it essential for AI implementation?

The semantic bible is a shared glossary and semantic convention—it defines what “order,” “ready” status, “error,” “responsible party,” and “owner” mean within the organization. As long as “ready” means something different to development, sales, operations, and finance, AI will not be able to establish a unified meaning. The model will do what the organization actually does—not what it says it does. Without a shared meaning, there is no stable measurement, no stable reporting, and no stable automation.

Why are AI agents more dangerous than traditional automation?

Because the agent doesn’t just make suggestions—it takes action. It opens a ticket, writes to a customer, initiates an order, and approves it. If there isn’t a stable conceptual framework, clear decision points, and exception handling behind it, then it spreads malfunctions on an industrial scale. In traditional automation, humans were the bottleneck that slowed down the spread of errors. The agent eliminates this bottleneck—and as a result, chaos doesn’t slow down; it accelerates.



Varga Zoltán - LinkedIn


Your org chart is a map. Your culture is the territory. AI reads terrain, not legend.

