VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
In VZ framing, the point is not novelty but decision quality under uncertainty. The radish experiment (Baumeister’s work, popularized by Kahneman) and the Nagoski sisters’ stress cycle model together explain why the AI workday never ends: with over 100 micro-decisions a day, the work is never “done.” The real leverage is in explicit sequencing, ownership, and measurable iteration.
TL;DR
Working with AI combines two well-documented mechanisms: ego depletion (decision-making depletes willpower, as Kahneman describes following Baumeister) and the Nagoski sisters’ stress cycle model (an unresolved stress response makes you sick). AI output isn’t a solution; it’s just another decision point. And decision points never end.
A bowl of radishes and a Copilot session
Margitsziget running track, Tuesday evening. Three laps in, and my mind is still spinning from this morning’s Copilot session. Eleven code suggestions. For each one, I had to decide: accept, modify, or reject. Eleven micro-decisions, and not one of them left me feeling “done.”
Daniel Kahneman describes the mechanism in Thinking, Fast and Slow: “Both self-control and cognitive effort are forms of mental work. Numerous psychological studies have shown that people who are simultaneously burdened with a demanding cognitive task are more likely to give in to temptation.”
One of the most elegant experiments is the radish experiment. One group of participants had to eat radishes while freshly baked chocolate cookies sat within reach, resisting the temptation the whole time. Afterward, they were given a persistence task. The radish eaters gave up sooner: not because they were weaker, but because their willpower was already exhausted.
The corpus quote expands on this: “Their experiments involved sequential rather than simultaneous tasks. Baumeister’s team repeatedly found that exercising willpower or self-control is exhausting; if we force ourselves to do something, we become less willing or able to exercise self-discipline when it comes time for the next task. We call this phenomenon ego depletion.” [CORPUS]
This is not merely mental exhaustion. Another experiment, as the corpus mentions, showed that as a result of ego depletion, “people’s ability to endure pain caused by sustained muscle tension” also decreases, and “when faced with a difficult cognitive task, these individuals give up the struggle sooner than usual.” [CORPUS]
Working with AI repeats the radish experiment a hundred times a day. Every prompt, every generated response is another radish next to a chocolate chip cookie. “Resistance” here refers to active, critical evaluation: Is this code good? Is the text correct? Is the logic consistent? Every time we reach for the radish instead of the cookie—that is, when we check and modify the generated content—we deplete our finite mental resources.
Kahneman’s Two Systems: AI as the Accelerator of System 1
To understand why this process is so exhausting, we must return to Kahneman’s fundamental model. In his book, the author “described the workings of the mind as the bumpy collaboration between two imaginary figures: the automatic System 1 and the effortful System 2.” [CORPUS]
- System 1 is fast, automatic, associative, and operates with minimal effort. It is like a well-trained artificial intelligence: it recognizes patterns and provides quick answers.
- System 2 is slow, sequential, logical, and requires concentration and conscious attention. It is the critical thinker, the monitor.
AI, such as ChatGPT or Copilot, is essentially an external, hyperactive System 1 cache. It offers the results of rapid pattern recognition: a code snippet, a text excerpt, a summary. The problem is that this rapid output is not a final answer. For it to be usable, our own System 2 must take over: it must evaluate, contextualize, verify, and often rewrite.
The corpus quote sharply characterizes System 2: “A defining feature of System 2 is that any of its activities requires effort, and furthermore, one of its main traits is laziness, since it intends to invest no more energy in anything than is absolutely necessary.” [CORPUS] When working with AI, we constantly force this lazy yet effort-demanding system to work. Writing the prompt is System 2 work. Evaluating the output is System 2 work. Deciding whether to accept it is System 2 work. This triple whammy leads to rapid ego depletion.
Why Does the AI Workday Stress Cycle Never End?
Emily and Amelia Nagoski distinguish between the stressor (what causes stress) and the stress response (what your body does) in Burnout: The Secret to Unlocking the Stress Cycle. The key insight: removing the stressor does not end the stress response. You have to complete the stress cycle, through movement, crying, laughter, or physical contact.
But working with AI never completes the cycle.
Writing an email used to look like this: thinking → drafting → proofreading → sending → done. This was a complete stress cycle. The stressor (the email) appeared, the response (writing it) took place, and the cycle closed with sending, which was a clear, definitive boundary. Our body and mind received a signal: “It’s okay to relax. This is done.”
Now: prompt → AI output → review → revision → regeneration → review → approval → … and in the meantime, three other AI outputs arrived, which are also waiting. The work is constantly shifting, “perfectible,” regenerable. Where is the end? When do we send that email if the “I suggest fine-tuning the text” button is always flashing? The stressor isn’t the amount of work. The stressor is that it never ends. There is no physically perceptible moment when you put down your pen, close your notebook, or send the email. The cycle remains open, and the body is in a constant state of readiness—which leads to chronic stress.
The Biochemical Reality of Decisions: Glucose and the Brain
Ego depletion is not merely a psychological metaphor. It has a physiological basis that explains the connection between the radish experiment and AI fatigue. The corpus quote points out: “The most surprising discovery made by Baumeister’s team highlights—as he puts it—that the idea of mental energy is more than just a metaphor. The nervous system consumes more glucose than most parts of our body, and mentally demanding activities are particularly costly for glucose metabolism. When we are actively engaged in a difficult cognitive operation or a task requiring self-discipline, our blood sugar levels drop.” [CORPUS]
Decision-making, critical thinking, self-discipline: these are all brain functions that require glucose. When we activate our System 2 eleven times during a Copilot session, we are essentially tapping into our finite glucose reserves eleven times. According to the corpus, in one experiment, “participants who drank glucose-sweetened lemonade showed no signs of ego depletion,” because “restoring the brain’s sugar levels prevented a decline in performance.” [CORPUS]
This provides a powerful insight: working with AI is not only mentally but also biochemically exhausting. The decline in decision-making quality in the afternoon is not merely “fatigue,” but the result of a measurable energy deficit. The more AI decisions we make, the less fuel remains for the next ones—and this is not a psychological but a physical limit.
How does the spiral of decision fatigue work?
Rolf Dobelli writes in The Art of Thinking Clearly: “Decision-making is exhausting. Even a few seconds’ interruption—the time it takes to switch to your email program—can double your mistakes.”
Working with AI doesn’t reduce the number of decisions. It increases them. Every AI output is a decision point. Every decision point consumes the willpower (and glucose) described by ego depletion. After every exhausted decision, the next decision is worse. This is the spiral.
The Israeli parole study cited by the corpus perfectly illustrates this process. The baseline decision was to deny parole, and the researchers “observed that this rate peaks after every meal, when 65% of applications are approved. In the roughly two hours remaining until the next meal, the approval rate steadily declines, and before the break it is nearly zero” [CORPUS], i.e., it returns to the default “no.”
The judges aren’t becoming more unfair. They’re suffering from ego depletion. Their initial decision-making energy runs out, and they switch to the path requiring the least effort: sticking to the default “no.” Exactly the same thing happens with your Copilot session. In the morning, “after breakfast,” with a fresh mind, you thoroughly review the first few lines of generated code. By the afternoon, as your energy levels drop, the “accept” button becomes the new default—because you no longer have the glucose or mental capacity for complex review. The spiral comes at the expense of quality.
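The spiral described above can be made concrete with a toy model. This is an illustrative sketch only, not an empirical model: the numbers (starting energy, review cost, threshold) are arbitrary assumptions chosen to show the mechanism by which careful review gives way to default acceptance once the energy budget runs low.

```python
# Toy model of the decision-fatigue spiral (illustrative assumptions, not data).
# Each evaluated AI output costs "energy"; below a threshold, the reviewer
# stops evaluating and falls back to the path of least effort: default-accept.

def simulate_reviews(n_outputs: int, start_energy: float = 10.0,
                     review_cost: float = 1.0, threshold: float = 3.0) -> list[str]:
    """Return one decision per output: 'reviewed' while energy lasts, then 'default-accept'."""
    energy = start_energy
    decisions = []
    for _ in range(n_outputs):
        if energy >= threshold:
            decisions.append("reviewed")       # System 2 does its job...
            energy -= review_cost              # ...and consumes the budget
        else:
            decisions.append("default-accept") # the judges' afternoon "no"
    return decisions

decisions = simulate_reviews(11)  # the eleven Copilot suggestions from the intro
print(decisions.count("reviewed"), decisions.count("default-accept"))  # prints: 8 3
```

The point of the sketch: nothing about the outputs changed between the morning and the afternoon. Only the remaining budget did, and that alone flips the default.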
Breaking the spiral: conscious cycle closure in practice
If the problem is an unfinished stress cycle and an exhausted ego, then the solution must consciously address these mechanisms. Not at the end of the work, but in the middle. Here are some practical strategies:
- Timed decision blocks and closing rituals: Work in 25–50-minute sprints using a single AI tool (e.g., code generation only). When the time is up, immediately perform an end-of-cycle ritual. Stand up, stretch, walk down the hall for 5 minutes, drink a glass of water. This physically signals to your body: “This cycle is over.” Don’t just move on to the next task.
- The “three-generation” rule: Make it a rule that after every third AI output, you must take a mandatory break from decision-making. This prevents decision fatigue from spiraling out of control. During the break, engage in an activity that is clearly not a decision (e.g., simple physical tidying up).
- Energy management in judge mode: Recognize that your decision-making ability fluctuates. Schedule the evaluation of the most critical, most complex AI outputs (where the most System 2 work is required) for your peak energy times (usually in the morning, after a meal). Leave routine tasks that require less scrutiny (e.g., reviewing formatting suggestions) for the afternoon.
- Pre-mortem on the prompt: Before you hit the “Generate” button, spend 60 seconds imagining: what could go wrong? What kind of errors might you expect in the response? This activates critical thinking before generation and, by engaging System 2, results in a more precise prompt, which reduces the number of subsequent rounds of editing (and decision-making).
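The “three-generation” rule above is simple enough to enforce mechanically. Below is a minimal sketch of such a counter; the class and method names are illustrative, not part of any existing tool.

```python
# Minimal sketch of the "three-generation rule": after every third evaluated
# AI output, signal a mandatory non-decision break. Names are hypothetical.

class ThreeGenerationRule:
    def __init__(self, limit: int = 3):
        self.limit = limit
        self.count = 0

    def record_output(self) -> bool:
        """Register one evaluated AI output; return True when a break is due."""
        self.count += 1
        if self.count >= self.limit:
            self.count = 0  # the break closes the cycle and resets the counter
            return True
        return False

rule = ThreeGenerationRule()
signals = [rule.record_output() for _ in range(7)]
print(signals)  # → [False, False, True, False, False, True, False]
```

The reset on each break mirrors the article’s cycle-closure idea: the counter does not carry residue forward, just as the ritual is meant to leave no open loop behind.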
Broader implications: the workplace and society
This phenomenon is not just an individual problem. If a growing share of knowledge workers carries this small but constant decision fatigue, the damage compounds at the organizational level as well.
- Decline in quality: Just as judges tend to issue less favorable rulings at the end of the day, developers, content creators, and analysts also make poorer decisions regarding the generated material, leading to bugs, misinformation, and bad business decisions.
- A culture of default “acceptance”: Exhausted decision-makers increasingly choose the path of least resistance: they accept AI recommendations without critical thinking. This can result in organizational “AI dependency” and the atrophy of critical thinking.
- Redefining performance and capacity: The question is not “how many tasks have you automated with AI?” but “how much decision-making capacity did the automation consume?” True efficiency depends on preserving the energy for critical thinking.
As the corpus illustrates through the example of a high-level executive: “I have too many other decisions to make… You need to routinize yourself. You can’t be going through the day distracted by trivia.” [CORPUS] AI often does not reduce the number of decisions we make, but rather burdens us with trivialities, diverting our attention from the decisions that truly matter.
Key Takeaways
- Ego depletion (Baumeister’s term, central to Kahneman’s account): Evaluating every AI output is a decision point that draws on the finite resource of self-control (and glucose); the radish experiment repeats a hundred times a day.
- Dynamics of the two systems: AI functions as an external System 1, but using it forces our lazy yet effort-intensive System 2 into constant work, leading to rapid exhaustion.
- Nagoski stress cycle: The structure of AI work (endless fine-tuning, regeneration) prevents the natural closure of the stress cycle, causing a chronic state of readiness.
- The spiral of decision fatigue: Decision quality declines along with energy levels (see: judicial research). Good decisions in the morning; by afternoon, the “accept” button becomes the energy-saving default.
- Biochemical reality: Burnout has a physiological basis (drop in blood sugar), which explains the physically exhausting effect of AI work.
- Solution: Conscious cycle closure and capacity management: Break the spiral not at the end of the day, but in the middle, in timed blocks, with rituals, and by scheduling critical tasks for peak energy times.
Frequently Asked Questions
What does the stress cycle have to do with AI?
According to Emily and Amelia Nagoski’s stress cycle model, stress does not go away simply because the stressor disappears. For the body’s stress response to end, it must receive a physical signal (e.g., movement, laughter, a sense of release). In work involving AI, the stress cycle never closes because the nature of the work (endlessly refinable, regenerable) offers no clear, perceptible end. There is always one more output to check, one sentence to improve, one “Generate again” option.
How does this relate to Kahneman’s work?
Kahneman’s two systems (System 1: fast and automatic; System 2: slow and effortful) are key to understanding this. When using AI, the external AI functions as a hyperactive System 1, but to use the content it generates, we must engage our own System 2 for evaluation, contextualization, and decision-making. This continuous System 2 activity is the direct cause of exhaustion.
Is it true that decision-making actually lowers blood sugar levels?
Yes, numerous studies, including those by Roy Baumeister, which Kahneman also cites, suggest that tasks requiring self-control and complex cognitive activity are associated with measurable glucose consumption in the brain. In experiments, participants who replenished their energy levels with glucose did not show the typical signs of ego depletion. This provides a clue: healthy eating and taking breaks while working intensively with AI are not a luxury, but necessary for maintaining cognitive capacity.
What can I do if I feel myself getting caught in this spiral in the middle of the day?
The immediate first step: Take a break. Step away from the computer. If possible, move around a bit (even just a short walk around the premises). Drink a glass of water or eat a healthy, slow-release carbohydrate (e.g., a piece of fruit). Your goal should not be to dive back into work, but to signal to your body and brain that the previous work cycle has ended. This helps partially restore your decision-making capacity and break the cycle of ongoing stress.
Related Thoughts
- AI Brain Fry: This Is Not Burnout
- Capacity-Hostile: It’s Not That You’re Lazy
- FOBO: Fear of Becoming Obsolete
Varga Zoltán (LinkedIn) • Knowledge Systems Architect | Enterprise RAG Architect, PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The loop never closes. You have to close it.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Monitor one outcome metric and one quality metric in parallel.
- Review results after one cycle and tighten the next decision sequence.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.