VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
Through a VZ lens, the value is not information abundance but actionable signal clarity. Ferdman’s framework names the problem: the modern workplace is hostile to one’s potential. The problem isn’t with the employee; it’s with the system in which they work. The business impact begins when this diagnosis becomes a weekly operating discipline.
TL;DR
The “capacity-hostile environment” is not the employee’s fault. Ferdman’s framework articulates what many people feel: the structure of the system prevents capabilities from developing. AI does not solve this problem; it institutionalizes it structurally. This is an emergent property: the interaction of the system’s parts produces a result that none of the individual components intended. The question is not who is to blame, but what institutions we build alongside AI so that the space for deep thinking is not lost.
Why Can’t You Read on a Crowded Subway?
Morning rush hour, Metro Line 4, Budapest. Two hundred of us are standing in the car. I’m trying to read on my phone. My eyes stop every three sentences—someone brushes against me, the car jolts, a notification flashes. I read one paragraph in ten minutes.
Not because I can’t read. But because the environment won’t let me.
Ferdman’s framework applies this exact insight to workplaces. Consider: the subway doesn’t ban reading. No conductor reprimands you for it. The designed physical and attentional space simply permits only one behavior: surviving, taking up space, enduring the stimuli. Your capacity (the cognitive resources you meant to spend on reading) is consumed by the environment before it can be put to use. Our digital workplace does the same to deep thinking.
What exactly is a capacity-hostile environment?
A “capacity-hostile environment” is one where the structure prevents the development of capabilities. It’s not a lack of motivation. It’s not a lack of talent. It’s a lack of conditions.
A crowded subway is capacity-hostile to reading. It doesn’t punish you—it simply makes it impossible.
The modern AI-supported workplace is capacity-hostile to deep thinking. It doesn’t prohibit it—it structurally makes it impossible.
It’s worth taking this a step further. It’s not just that the environment hinders the development of the ability; the system is also unintentionally designed to maximize other goals, and it consumes capacity as a side effect. Nick Bostrom’s famous thought experiment, the paperclip-making machine, is a fitting analogy: “The machine set out to transform the entire physical universe into paperclips, even if that meant destroying human civilization.” (Nick Bostrom, Artificial Intelligence: A Guide for Humanity). No company sets the goal of “consuming the mental capacity of its employees.” But if the system is tuned to maximize output, speed, and apparent efficiency, the emergent result will be precisely a capacity-hostile environment, just like the paperclip-producing universe.
How does AI make the workday capacity-hostile?
Characteristics of the AI workflow: continuous output monitoring (reviewing the AI’s responses), constant context switching (between different AI tasks), and a series of micro-decisions (accept/modify/discard). Each of these is rational on its own. Together: the enemy of deep work.
Attention researcher Gloria Mark has shown that it takes roughly 23 minutes to regain full focus after a context switch; Cal Newport’s work on deep work builds on exactly this kind of finding. If using AI causes you to switch contexts five times an hour, you’ll have zero deep work left in your day.
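To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 23-minute recovery cost comes from the research above; the eight-hour day, the switch rates, and the simplification that the full cost is paid on every switch are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope model of how context switches consume deep-work time.
# Assumptions (illustrative, not from the source): an 8-hour day and a fixed
# 23-minute refocus cost, paid in full after every switch.

RECOVERY_MIN = 23  # minutes to regain full focus after one context switch

def deep_work_left(workday_hours: float, switches_per_hour: float) -> float:
    """Minutes of fully focused time left after paying the refocus cost."""
    total_min = workday_hours * 60
    recovery_cost = workday_hours * switches_per_hour * RECOVERY_MIN
    return max(0.0, total_min - recovery_cost)

# Five AI-driven switches per hour: 8 * 5 * 23 = 920 minutes of recovery
# "owed" against a 480-minute day, so deep work collapses to zero.
print(deep_work_left(8, 5))   # 0.0
print(deep_work_left(8, 1))   # 296.0 -- under five hours even at one switch/hour
```

Under these assumptions, the day is insolvent well before the fifth switch per hour; the model only sharpens the point in the paragraph above.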
But let’s take a closer look at this “chain of micro-decisions.” In Thinking, Fast and Slow, Daniel Kahneman describes decision-making as two systems: System 1 (fast, intuitive, automatic) and System 2 (slow, logical, effortful). Collaborating with AI taxes both at once. We must quickly triage the AI’s responses with System 1: good or not? But because the subject matter is complex, we must also keep activating System 2 to check quality. Kahneman notes: “System 2 would be too slow and inefficient to replace System 1 in the decisions that arise during daily routines.” (Daniel Kahneman, Thinking, Fast and Slow). The AI workflow creates precisely this impossible situation: we must make countless micro-decisions while our System 2 is long since overloaded, so decision quality deteriorates, yet the responsibility stays with us.
It’s not that you’re lazy. The system is designed to make deep, System 2-driven thinking impossible.
ManpowerGroup Data: The Anatomy of the Spiral
ManpowerGroup 2026 data: AI usage increased by 13%, while employees’ confidence in their own abilities decreased by 18%. Together, the two figures are a symptom of a capacity-hostile environment.
Confidence isn’t declining because people know less. It’s because the environment doesn’t allow them to practice what they know. The musical analogy is perfect: if a talented pianist practices exclusively on a digital piano that automatically corrects off-key notes, they will never develop their true ear or their subtle sense of touch. Using AI is similar: it removes the necessary, difficult element of practice, but along with it, the sense of competence that comes from practice. The result is a self-perpetuating spiral:
- Lack of practice → decreased self-confidence.
- Decreased self-confidence → increasing reliance on AI as a “crutch.”
- Greater reliance → further decline in practice.
- Back to point 1.
This spiral is not a personal failure, but a predictable consequence of the system’s dynamics when technology is built to maximize short-term ease.
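To see why the loop ratchets rather than settles, here is a minimal simulation sketch of the spiral. Every coefficient (the decay rates, the reliance step, the starting values) is an illustrative assumption, not an estimate derived from the ManpowerGroup data.

```python
# Toy model of the competence spiral: practice builds confidence, low
# confidence drives AI reliance, and reliance crowds out practice.
# All coefficients are illustrative assumptions, not empirical estimates.

def simulate_spiral(weeks: int = 10, confidence: float = 0.8, reliance: float = 0.2):
    for week in range(weeks):
        practice = max(0.0, 1.0 - reliance)               # reliance crowds out practice
        confidence = 0.7 * confidence + 0.3 * practice    # confidence tracks practice
        reliance = min(1.0, reliance + 0.15 * (1.0 - confidence))  # low confidence -> more "crutch"
        print(f"week {week + 1}: practice={practice:.2f} "
              f"confidence={confidence:.2f} reliance={reliance:.2f}")

simulate_spiral()
# Confidence and practice decay together while reliance climbs toward 1.0:
# the spiral is a property of the loop, not of any single step.
```

Because reliance rises whenever confidence sits below its ceiling, the only resting point of this toy loop is full reliance and zero practice, which is exactly the one-way ratchet the list above describes.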
The historical agent: when algorithms shape our capacity
A capacity-hostile environment is not malicious. No one intentionally decided that office work should be unsuitable for deep thinking. It is an emergent property—the combined effect of the system’s parts.
However, there is an even more serious layer. When such systems (large corporate AI systems, say, or social media algorithms) become so deeply embedded in society that they influence our decisions, they become historical agents. Bostrom writes: “Computer errors only become potentially catastrophic when the computer becomes a historical agent.” (Nick Bostrom, Artificial Intelligence: A Guide for Humanity). In this context, “capacity-hostility” means that these systems consume not only our individual capacity but also our collective decision-making ability and social dialogue. Not to mention the algorithms’ “active role in bringing certain human emotions to the fore and silencing others” (Nick Bostrom, Artificial Intelligence: A Guide for Humanity [UNVERIFIED]), which shapes the capacity of our public discourse as well.
The Coordination Problem: The Age of Paperclip-Making Machines
AI, therefore, does not solve this capacity-hostility. It is itself the newest and a particularly powerful element of it. It is not worse, but it is not better either. It is different.
The crux is a coordination problem. The quoted passage puts it precisely: “There is no technological solution to this problem. It is a political challenge.” (Nick Bostrom, Artificial Intelligence: A Guide for Humanity [UNVERIFIED]). Every team, department, and company rationally maximizes its own efficiency with AI tools. The emergent result, however, just as with the paperclip-making machines, may be a system in which the capacity for deep work and long-term strategic thinking is systematically squeezed out. No one wanted it, yet everyone contributed to it. So the question is not whether AI helps with a given task, but what conditions and institutions you create for deep work alongside AI, at a coordinated, more deliberate level.
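A minimal game-theoretic sketch of that coordination failure, under stated assumptions: each team gains privately by maximizing its own AI throughput, but every optimizer drains a shared pool of deep-work capacity. The payoff numbers (N_TEAMS, PRIVATE_GAIN, SHARED_COST) are invented for illustration, not taken from the source.

```python
# Toy coordination game: each team chooses local optimization, which adds
# private output but taxes a shared pool of deep-work capacity.
# All payoff values are illustrative assumptions.

N_TEAMS = 10
PRIVATE_GAIN = 3.0   # output a team gains by maximizing its own AI throughput
SHARED_COST = 0.5    # deep-work capacity drained from EVERY team per optimizer

def team_payoff(optimizes: bool, n_optimizers: int) -> float:
    gain = PRIVATE_GAIN if optimizes else 0.0
    return gain - SHARED_COST * n_optimizers  # everyone pays the shared drain

# Optimizing dominates for each team in isolation: the private gain is yours,
# while the shared cost is paid either way. So every team "rationally" opts in...
all_in = team_payoff(True, N_TEAMS)   # 3.0 - 0.5 * 10 = -2.0 per team
none_in = team_payoff(False, 0)       # 0.0 -- everyone better off than at -2.0
print(all_in, none_in)
```

When all ten teams optimize, every team ends up worse off than if none had, even though each individual choice was locally rational. That is the paperclip logic at organizational scale, and it is why the fix has to be coordinated rather than individual.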
What is the practical way out of the capacity-hostile spiral?
If the problem is emergent and structural, the solution must be as well. It’s not enough to use individual time management techniques if every part of the system is working against you. Here are a few guidelines:
- Planned Slow Zones: Based on Kahneman’s findings, we need time blocks shielded from the context switches and micro-decisions that AI generates. This is not “free time,” but protected workshop time designed for System 2. It is best coordinated at the team or department level so the blocks are not interrupted.
- AI Usage Protocols: Just as aviation has its checklists, there could be a “pre-deep-work checklist” to help with the transition. For example: “Before using AI, did I define the problem in my own words? After using it, will I have 30 minutes to process the information on my own without initiating another request?” (A minimal code sketch of such a checklist follows this list.)
- Skill-Maintenance Exercises: We must consciously incorporate tasks into our work where AI is only an aid, not a complete solution. This corresponds to a “pianist practicing on a real piano.” For example: if AI writes the draft of a report, the final synthesis and the most difficult paragraphs are prepared by a human.
- Institutional Considerations: Finally, at the highest level, we need corporate and even societal institutions that recognize this coordination problem. “We need institutions that can spot not only human weaknesses like greed and hatred, but also radically alien-like errors.” (Nick Bostrom, Artificial Intelligence: A Guide for Humanity [UNVERIFIED]). Capacity-hostility is such an “alien-like flaw” at the system level.
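To make the protocol idea tangible, here is a minimal sketch of the “pre-deep-work checklist” as code. The questions mirror the examples in the list above; the structure and the names (PRE_AI_CHECKLIST, run_checklist) are hypothetical illustrations, not an existing tool.

```python
# Minimal "pre-deep-work checklist" sketch, modeled on an aviation checklist.
# The items mirror the protocol examples above; nothing here is a standard tool.

PRE_AI_CHECKLIST = [
    "Have I defined the problem in my own words before prompting?",
    "Do I know what a good answer would look like?",
    "Have I reserved 30 minutes after this session to process the output alone?",
]

def run_checklist(items: list[str]) -> bool:
    """Walk the checklist; return True only if every item is confirmed."""
    for item in items:
        answer = input(f"{item} [y/n] ").strip().lower()
        if answer != "y":
            print("Checklist failed -- postpone the AI session.")
            return False
    print("Checklist complete -- proceed.")
    return True

if __name__ == "__main__":
    run_checklist(PRE_AI_CHECKLIST)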
Key Takeaways
- Capacity-hostile: The structure of the environment hinders the development of capabilities—it is not the worker’s fault. An emergent property, like Bostrom’s paperclip-making machine.
- AI as a capacity-drainer: The AI workflow requires constant context switching and micro-decisions, exhausting both of Kahneman’s systems, fast System 1 and effortful System 2; it is the structural enemy of deep work.
- The competence spiral: Data from ManpowerGroup (AI usage +13%, self-confidence -18%) points to a self-perpetuating spiral: lack of practice undermines self-confidence, which increases AI dependence.
- Historical agent: If our systems become historical agents, capacity-hostility also affects our collective decision-making ability.
- The solution is institutional: The issue is not technological, but organizational and political. It is not just about whether AI helps, but about what conditions and institutions we create for deep work in the age of AI.
Frequently Asked Questions
What does the term “Capacity-Hostile” mean?
A Capacity-Hostile system is an environment that intentionally or unintentionally exhausts the decision-making capacity of those working within it. It’s not that you’re lazy—the system is designed to consume your attention. Analogy: a machine whose goal is to maximize paperclip production, and in the process accidentally destroys the environment.
What is the difference between decision fatigue and decision paralysis?
Decision fatigue is gradual: you make increasingly poor decisions because your decision-making capacity (Kahneman’s System 2) is depleted. Decision paralysis is sudden: you simply stop because the choices overwhelm you. The AI environment triggers both: the constant stream of micro-decisions (accept/modify?) consumes your capacity, and when a more complex decision then arises, there is nothing left to respond with.
What is the “coordination problem” in relation to AI?
This is a situation where every individual participant (team, company) rationally maximizes their own efficiency using AI, but the aggregate of these individual rational decisions creates an emergent system that benefits no one (e.g., total capacity exhaustion). Therefore, there is no purely technological solution; organizational and institutional coordination is required.
How can we create a “capacity-friendly” environment alongside AI?
- Planned slowness: Protected time blocks for deep work, coordinated with the team.
- Protocols: Checklists before and after AI use to ensure capacity is maintained.
- Sustaining practices: Retaining tasks in the workflow that exclusively strengthen human capabilities.
- Institutional awareness: Recognizing the problem at the corporate and societal levels to facilitate coordination.
Related thoughts
- The stress cycle that never ends
- The Fear Cascade: AI Decision-Making
- AI Brain Fry: This Is Not Burnout
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
Where Awareness Meets Intelligence.
Strategic Synthesis
- Map where AI-driven context switching and micro-decisions drain capacity before scaling further.
- Measure both speed and reliability so local optimization does not degrade deep-work quality.
- Close the loop with one retrospective and one execution adjustment per cycle.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.