VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
Through a VZ lens, the value is not information abundance but actionable signal clarity. The use of AI has doubled, and the failure rate has climbed with it. Companies are spending more on AI while getting less value in return. This paradox points to deep-seated structural causes. The business impact begins when this lens becomes a weekly operating discipline.
TL;DR
AI adoption among companies has risen from 22% to 40% in one year. During the same period, the failure rate of AI projects has risen to 80–96%. 56% of CEOs report zero ROI. The paradox: more AI = worse results. But AI isn’t to blame.
Dawn on the Danube, under the bridge
I sit on the cold stone ledge on the Danube bank at dawn. The water is still, its surface almost glassy, reflecting the glowing arches of Margaret Bridge. The city is still asleep; only the occasional car rumbles across the bridge, a vibration in the air that then fades away. A warm mug steams in my hand, its steam mingling with the river's mist. I look at the bridge. A massive, stable structure whose sole purpose is to let something pass over it: people, cars, cargo. But now, in this silence, it just stands. Its usefulness lies not in being constantly used, but in being there when it is needed. A question forms in my mind: what if we build something perfectly, but never carry across what it was meant for? And how much do we build simply because we are afraid that while we are not building, someone else already is?
Why does the boardroom presentation show zero?
At the quarterly board meeting, the CTO is presenting. Twelve slides on AI integration. Nice graphs. At the end, a number: the return on investment is zero. The board is silent. The CEO asks, “Then why are we doing this?” The CTO replies, “Because if we don’t, we’ll fall behind.”
This sentence is the core of the paradox. It’s not about a strategic choice, but a compulsion. The presentation isn’t about the value achieved, but about risk avoidance. The message hidden among the slides: “We don’t know exactly what we’re doing, but at least we’re trying.” This is learned helplessness at the highest level.
The Numbers: The Widening Gap
Data is not opinion. According to Gartner, 80% of generative AI projects fail to deliver meaningful business results. 89% of agentic AI projects (AI systems that act autonomously) never make it to production. 96% of autonomous AI experiments fail to deliver on their promise.
Meanwhile, AI adoption is soaring. 40% of companies are actively using AI—double the figure from a year ago. Investment is growing, but results are not. The gap is widening. This widening is not a technological phenomenon, but an organizational and strategic one. As one quote from the corpus notes: “My investigations made two things clear. First, AI is substantially different from previous digital transformations… you are probably much more apprehensive about this technology than you were about previous…” [UNVERIFIED]. Fear moves faster than understanding.
The Anatomy of Fear-Driven Adoption
The conventional explanation: “The technology isn’t mature yet” or “It’s not being implemented properly.” Maybe. But there’s a deeper reason.
AI adoption in most organizations is fear-driven, not value-driven. Companies don’t implement AI because they’ve calculated the return on investment. They do it because they’re afraid their competitors will. The corpus provides a perfect example: “In his vision, customers could virtually scan the retailer’s product line… With help from his IT department, he had prepared a sales pitch for his shareholders and the board on why investments in AI were appealing and why, frankly speaking, the company had little choice given that their competitors were already heavily involved…” [UNVERIFIED].
Fear is not a bad motivator—but it is a poor planner. Characteristics of fear-driven implementation: quick decisions, superficial assessments, half-baked solutions, and a lack of measurable results. Then, at the next quarterly meeting, another slide, another AI tool, another zero ROI.
What is the real problem behind these failures?
The root of the problem is often that leaders underestimate the complexity of the transformation. According to the corpus: “Business leaders underestimate the complexity of translating the decision to adopt AI into an execution that effectively gets the entire organization involved. They expect that AI engineers on the ground will carry forward the AI adoption and that their own leadership job at that point is largely done.” [UNVERIFIED]. After the decision is made, leaders often step back, assuming that the technicians will handle the rest. This is a fatal mistake.
The result is often organizational chaos: siloed teams that do not communicate, resulting in a lack of quality data to train the models. The same corpus continues: “seldom exchanged any feedback, so there was insufficient data to train the AI models effectively. Because the teams working with data were not briefed on best practices for data privacy, there were violations of legal and regulatory requirements…” [UNVERIFIED]. The project collapses both technically and organizationally.
The Reverse Jevons Effect: The Illusion of Productivity
The Jevons paradox states that increased efficiency leads to increased total consumption: more efficient steam engines made each unit of work cheaper, so total coal consumption rose. The adoption-ROI paradox is the reverse: an increase in total consumption decreases the value per unit.
Consider a treadmill. If every office worker gets one, the company’s “treadmill adoption” will be 100%. But how many use it actively and regularly, and how many achieve real health improvements? The adoption numbers are impressive; the health results are often dismal. The same thing happens with AI.
When an organization deploys AI everywhere, it deploys it nowhere deeply. Marketing uses a text generator, HR uses a resume filter, finance uses a data analyst. All superficial, isolated uses. Superficial adoption yields superficial results. But the board isn’t shown the depth—they’re shown the reach. “Twelve teams are using AI” sounds better than “One team is really benefiting from it because we transformed the customer service process, which reduced costs by 30%.”
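The breadth-versus-depth trade-off described above can be sketched with toy numbers. Everything here is a hypothetical assumption for illustration, not measured data:

```python
# Hypothetical comparison: 12 shallow pilots vs. 1 deep integration.
# All figures are assumed for illustration only.
shallow_value_each = 10_000   # small, isolated gains per pilot (assumed)
shallow_cost_each = 15_000    # licences, onboarding, maintenance (assumed)
deep_value = 300_000          # e.g. a 30% cost cut in one core process (assumed)
deep_cost = 180_000           # focused build: data, integration, training (assumed)

breadth_roi = 12 * (shallow_value_each - shallow_cost_each)   # -60_000
depth_roi = deep_value - deep_cost                            # 120_000

print(breadth_roi, depth_roi)  # -60000 120000
```

With these assumptions, twelve pilots look great on a slide ("twelve teams are using AI") while destroying value in aggregate, and the single deep integration is the only net-positive line.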
At what percentage does AI deliver more value than it costs? Asking the right question
The question isn’t what percentage of people use AI. It’s: For what percentage of people does AI deliver more value than it costs? If you can’t answer that, your next quarterly meeting will look exactly the same.
This question forces us to talk about value flow, not tools. AI isn’t a goal; it’s a tool. The goal could be: shorter time-to-market, higher customer value, lower operational risk. AI is only worth more than it costs if it tangibly advances these goals.
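The question above ("for what percentage does AI deliver more value than it costs?") reduces to a simple per-user ledger. The team names and numbers below are illustrative assumptions:

```python
# Hypothetical per-user monthly ledger: value created vs. fully loaded cost
# (licence + integration + training share). All names and figures are assumed.
users = [
    {"team": "support",   "value": 900, "cost": 250},
    {"team": "marketing", "value": 180, "cost": 250},
    {"team": "finance",   "value": 400, "cost": 250},
    {"team": "hr",        "value": 120, "cost": 250},
]

# The only relevant metric: who generates more value than the AI costs?
net_positive = [u for u in users if u["value"] > u["cost"]]
share = len(net_positive) / len(users)
print(f"{share:.0%} of users generate more value than AI costs")  # 50%
```

A board that sees "100% adoption" and a board that sees "50% net-positive users" are looking at the same organization through very different lenses; only the second number answers the ROI question.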
The “Illusion of Success” and Optimistic Bias
Leaders often turn to AI with overconfidence, partly due to the “optimistic bias” discussed in the corpus: “81% of entrepreneurs rated their chances of personal success at 7 or higher on a 10-point scale, and 33% said they saw no chance of failure.” [UNVERIFIED]. The same bias applies to AI projects: everyone believes their own project will be among the successful 20%, which cannot be true for all of them. This bias fuels fear-based, unevaluated investments.
How to Break Out of the Zero-ROI Spiral? A Three-Level Change
Level 1: The Language of Leadership – From Fear to Value
The CEO and the board must shift the “we’re falling behind” narrative to a “we’re creating value here” narrative. This means that at the next board meeting, the first question should not be “How many teams are using it?” but rather “In which business process did we manage to improve performance by 10% with the help of AI, and exactly how?” The corpus highlights the lack of personal commitment from leaders: “Now leaders never used the tools, thus missing out on the opportunity to demonstrate to the workforce that AI was meaningful… employees felt no sense of ownership of the tools.” [UNVERIFIED]. A leader must first be a learner, then a role model.
Level 2: The Focus of Strategy – From Breadth to Depth
Instead of launching 12 small pilot projects, choose a single, critical business process (e.g., optimizing the procurement cycle, the product development funnel). Put everything into it: the best people, high-quality data, and integrate it deeply into the systems. The goal should not be “AI project success,” but rather “the procurement cycle is 15% faster.” This depth creates measurable value. As another part of the corpus suggests, the successful minority operates this way: “If 15–25 percent of the world’s many projects deploy, that’s far from nothing. Predictive models positively impact our lives on a daily basis…” [UNVERIFIED].
Level 3: The Philosophy of Measurement – From Output to Impact
AI ROI cannot be measured using traditional, linear financial models. A new measurement framework is needed that focuses on four areas:
- Time Transformation: How did employees reinvest the time they saved? (E.g., not only did they gain 10 hours of free time, but they used it to develop three new client concepts.)
- Decision Quality: How much more accurate have the forecasts become? How much has risk decreased?
- Freed Capacity: What new activities was the organization able to take on that were previously impossible?
- Flexibility: How much faster can the organization adapt to market changes?
We can summarize this three-level change with an analogy: Fear-driven adoption is like a farmer randomly scattering seeds across his entire field, hoping that something will sprout. Value-driven adoption, on the other hand, is like selecting a plot of fertile land, tilling it deeply, providing it with the right nutrients, and planting just one or two varieties of seeds deep in the ground, ones you know will yield a valuable harvest. The first approach looks far more impressive; the second is far more productive.
Key Takeaways
- AI usage has doubled (22%→40%), while the failure rate has risen to 80–96% — this widening gap is a clear sign of fear-driven, superficial adoption.
- 56% of CEOs report zero ROI — this is not a failure of the technology, but of organizations’ ability to deeply integrate it and create value with it.
- Adoption is fear-driven, not value-driven — fear of competition drives quick decisions, but it is a poor planner, leading to the illusion of widespread adoption and a lack of depth.
- The question is not how many people use it, but how many benefit from it. The only relevant metric: does the value generated by AI exceed the total cost of investment (integration, training, data management) in a given business process?
- The three steps to breakthrough: Changing the language of leadership (value), narrowing the focus of strategy (depth), and reshaping the philosophy of measurement (impact).
Frequently Asked Questions
What is the adoption-ROI paradox in AI? The adoption-ROI paradox means that the more people implement AI superficially out of fear, the harder it is to demonstrate a unique competitive advantage and positive return on investment. If everyone uses it superficially, no one gains a significant relative advantage—yet those who don’t use it at all fall behind. This is a modern prisoner’s dilemma fueled by collective fear.
How can the return on investment in AI be measured? Traditional, linear financial ROI calculations often don’t work because the impact of AI cannot be isolated and is non-linear. Effective measurement requires a systems approach that evaluates the following holistically: time transformation (how the time saved is converted into higher-value work), improved decision quality (fewer errors, more accurate forecasts), capacity freed up (seizing new opportunities), and increased flexibility. The goal is not “savings,” but “reinvested capacity.”
Who is responsible for the success of AI projects? The CTO/CIO is responsible for the technical teams and implementation. However, the line manager (e.g., the sales director, the product development manager), and ultimately the CEO, are responsible for the project’s business success—that is, for creating value. The message is clear: a leader’s role does not end at the moment of decision-making; on the contrary, that is where it begins. The key to success is leadership commitment and communication and engagement that encompass the entire organization.
Related Thoughts
- Why Do 90% of AI Projects Fail?
- The Jevons Paradox: Why We Work More with AI
- Immunity to Change: Why AI Projects Fail
Varga Zoltán - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
What you measure changes what you build.
Strategic Synthesis
- Identify which current workflow this insight should upgrade first.
- Use explicit criteria for success, not only output volume.
- Iterate in small cycles so learning compounds without operational noise.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.