VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
From a VZ lens, this piece is not passive trend tracking; it is a strategic decision input. The real risk of AI isn’t job loss. It’s that your work will lose its meaning. AIRD (AI Replacement Dysfunction) now has a clinical description. Strategic value emerges when insight becomes execution protocol.
TL;DR
Debates about AI center on job losses. But the real risk runs deeper: the loss of meaning in work. Viktor Frankl, John Vervaeke, and the recent clinical description of AIRD all point to this. Those who lose the meaning of their work are not unemployed; they are simply empty. This loss of meaning is more systemic and harder to remedy than the loss of a job, because it reaches the deepest layers of our identification with work.
What did the potter say about the machine?
A potter is shaping clay at the market. I ask: wouldn’t it be faster with a machine? “Faster, yes,” he says. “But that’s not what it’s about.”
He isn’t talking about the product. He’s talking about the process. About how the clay reacts to his hands. About how every plate is a little different. About how work and the person are inseparable.
AI cuts precisely this thread.
But let’s think about it: what exactly is happening? The potter isn’t just making an object; they’re engaging in a dialogue with the material, the form, and their own limitations. Design intent and chance work together in the creative act. When a machine, say a 3D clay printer, takes over this process, the product may become more perfect, but the dialogue itself ceases. The software optimizes predefined parameters rather than discovering them. The ceramist’s issue isn’t speed; it’s that the machine turns a dialogic process into a purely transactional one. The goal isn’t the plate, but the journey that leads to the plate. It is this journey that gives meaning.
Viktor Frankl’s three sources of value: which one does AI threaten the most?
Viktor Frankl identified three sources of value in human existence: the value of action (what we do), the value of experience (what we go through), and the value of attitude (how we relate to the unchangeable).
AI primarily threatens the first source of value, the value of action. If what I do can also be done by a machine—and better—then the value of the action diminishes. Not because the machine is bad. But because the meaning of the action lay not in the result—but in the intention, in taking responsibility, and in the struggle experienced.
The potter’s plate is no better than the factory-made one. But the potter’s plate is someone’s plate. According to Frankl’s logotherapy, humans are fundamentally oriented not toward happiness, but toward the search for meaning. Work can be one of the primary sources of meaning. When AI strips away the value of action, it does not merely automate an activity—it dismantles one of the pillars of meaning. The other two sources of value, experience and attitude, are not necessarily capable of filling this void in the world of work. Where do we gain experience if there is nothing to go through? How do we develop attitude if there is no unchangeable challenge?
And here comes the deeper layer: Frankl’s theory emerged from the horrors of Nazi concentration camps. He observed that those who had a purpose in life, a meaning that gave them reason to endure, had a greater chance of survival. The modern workplace “meaning vacuum” is not equivalent to death camps, but the psychological mechanism is the same: the experience of meaninglessness erodes mental resilience. AI can accelerate the spread of this vacuum—not as a destructive physical force, but as a slow yet effective meaning-siphon.
How does this relate to John Vervaeke’s “meaning crisis”?
John Vervaeke — a researcher of the “meaning crisis” — argues that the modern world is systematically eroding the structures that once gave life meaning. Religion, community, and craftsmanship—all are eroding.
AI does not cause the meaning crisis. But it accelerates and deepens it. Vervaeke emphasizes that meaning is not merely a feeling, but a cognitive-relational process. We feel our lives are meaningful when we are able to interact with the world in a relevant, understandable, and connected way. Traditional trades and crafts have often provided precisely this participatory realism: a carpenter does not merely make a chair, but becomes part of the wood’s transformation, understands its properties, and engages with the client’s needs.
When AI takes over creative work—writing, designing, analyzing—the remaining human work becomes increasingly supervisory, administrative, and monitoring in nature. This type of work is functional but rarely meaningful, because it diminishes participatory realism. We are not part of the process; we are merely its overseers. As a corpus quote indicates: “The resulting turmoil will take on political, economic, and social dimensions, but it will also be intensely personal… A regular paycheck has become a way not just of…” [UNVERIFIED]. A paycheck alone is not enough. AI can accelerate the process by which work loses this personal, identity-based meaning and is reduced to a mere transaction.
What is AIRD, and why is it clinically relevant?
In February 2026, the journal Cureus published the first clinical description of AIRD (AI Replacement Dysfunction). This is not psychological speculation. It is a clinical observation: people in whom AI replacement manifests as a cluster of symptoms. Anxiety, loss of identity, lack of motivation: not because they lost their jobs, but because their jobs lost their meaning.
AIRD is not about the unemployed. It is about those who work—but don’t know why.
Imagine a software engineer who, for years, enjoyed the skill-building, problem-solving process that was programming. Then advanced code-generating AI is introduced. His job suddenly changes: he no longer designs and codes, but writes prompts, checks, and corrects AI-generated code. The output is the same, or even better. The pay is the same. But the internal logic that gave meaning to his work—understanding complex systems, seeking creative solutions—has disappeared. The work has become mechanical. This disconnect may cause the symptoms of AIRD, which often resemble burnout, but whose root cause is not overload, but the loss of meaning.
Why might the loss of meaning be psychologically more severe than the loss of a job?
AI debates rarely touch on this layer. The discourse is binary: AI takes away jobs (fear) or AI frees us from routine (optimism). Both focus on the job itself, on whether it remains or disappears.
But people don’t work for the outcome. People work for meaning. If the outcome remains—but the meaning disappears—that’s worse than losing a job. Why?
- Lower social awareness: Unemployment is a clear, measurable economic and social problem. The loss of meaning is a personal, invisible psychological crisis that is easily downplayed (“at least you have a job”).
- Loss of identity: Losing a job is often attributed to external circumstances. Losing the meaning of your work, however, strikes at the core of your self-image. As the corpus quote illustrates: “Drifting farther away from the shore, he begins to panic… ‘Help, my son, the doctor, is drowning!’” [UNVERIFIED]. The mother does not cry out that “my son is drowning,” but that “my son, the doctor, is drowning.” The profession merges with identity. When AI strips work of its meaning, it breaks down this fusion, causing an identity crisis.
- Lack of closure: Job loss has a beginning and (hopefully) an end. Loss of meaning can be a diffuse, persistent state, a kind of professional melancholy in which the individual “mourns an activity that technically still exists.”
Another part of the corpus points out: “The risk is not something external that exists independently of our minds and our culture… Humans invented the concept of risk to help them understand the dangers and uncertainties of life” [UNVERIFIED]. The risk of meaning loss caused by AI is similar: it is not an absolute, objective danger, but a subjective, existential risk that affects a fundamental layer of human experience. Dealing with this is far more complex than finding a new job.
How can the loss of meaning be transformed into the creation of meaning?
The outlook is not necessarily tragic. The crisis also holds an opportunity. The quote from the corpus refers to this: “But there is another path, an opportunity to use artificial intelligence to double down on what truly brings meaning to a person’s life: sharing love with those around us.” [UNVERIFIED]. This idea warrants further exploration.
If AI takes over transactional, technical tasks, human resources and attention are freed up for activities more deeply connected to Frankl’s other two sources of value and Vervaeke’s “participatory realism”:
- The value of experience: AI can serve as a tool for deeper understanding. A doctor from whom AI takes over the task of extracting information from medical records can devote more time to empathetic, deep conversations with the patient—which creates genuine experience and connection.
- The value of attitude: When AI takes over tasks that previously defined our value, we are forced to confront the question: “Who am I if not my work?” This can compel a radical shift in attitude, where we prioritize inner values over external performance.
- Rediscovering the craft: Staying with the example of the ceramist: AI can free the creative process from market pressures. The potter can use the machine as a tool for mass production, while focusing on meaningful, handcrafted works—not for a living, but for the joy of the craft.
The challenge, therefore, is not technological, but cultural and psychological. We must build social structures that support not the maximization of performance, but participatory realism and the capacity for meaning, and integrate AI into this process as a useful tool. As the quote from the corpus reminds us: “According to Slovic, ‘both sides must respect each other’s insights and wisdom.’” [UNVERIFIED]. Both AI designers and those seeking meaning in work must take each other’s perspectives into account so that the narrative of opportunity, rather than risk, prevails.
Key Takeaways
- The real risk of AI is not job loss, but the loss of the meaning of work, which causes damage deep within our identity.
- AIRD — AI Replacement Dysfunction — received its first clinical description in February 2026 and refers to people who suffer a vacuum of meaning while still holding a functioning position.
- According to Frankl’s logotherapy, AI primarily attacks the source of value in action—intention, responsibility, and struggle, not just the result.
- According to John Vervaeke’s theory of the “crisis of meaning,” AI accelerates the erosion of traditional meaning-making structures (e.g., craftsmanship), reducing “participatory realism.”
- The loss of meaning is often more severe than the loss of a job because it is invisible, undermines identity consistency, and is harder to come to terms with.
- The answer is not to stop AI, but to engage in culture-shaping work that uses AI as a tool to strengthen the sources of value in experience and attitude, reinterpreting what it means to “work.”
Frequently Asked Questions
Why is the loss of meaning the real risk of AI?
Most AI risk analyses focus on job losses. But the deeper risk is when people lose their sense of meaning in their work—even if they keep their jobs. The task remains, but the meaning has vanished. This leads to psychological impoverishment, which in the long run can be more socially dangerous than the short-term economic effects of unemployment, because it undermines individuals’ resilience and zest for life.
Who is most affected?
Those who have built their identities on their expertise: writers, programmers, researchers, consultants, artists. When AI takes over the essence of their work—creative generation, complex problem-solving, deep analysis—they do not lose their jobs, but rather their self-image and the participatory realism that gave meaning to their daily tasks. However, the threat is gradually seeping downward: any profession where work requires not merely physical but cognitive and/or creative engagement is vulnerable.
Is there a way to defend against this on an individual level?
Yes, but it’s difficult. Deliberate meaning-making and diversifying one’s identity are key. This means:
- Redefining the meaning of work: For example, seeking meaning not in writing code, but in understanding human needs and collaborating with AI.
- Reallocating resources: Investing the time and energy freed up by AI in activities that nourish Frankl’s other two sources of value: building relationships (experience) or accepting and processing personal challenges (attitude).
- Craft as practice: Maintaining or taking up non-economic activities (cooking, gardening, music) that are done for their own sake, for the experience of the process, thereby reinforcing the experience of participatory realism.
Related Thoughts
- FOBO: When You Don’t Lose Your Job
- The Consciousness Economy: What Comes After the Attention Economy
- AI as a mirror of civilization
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
Signal over noise. Always.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Monitor one outcome metric and one quality metric in parallel.
- Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.
Next step
If you want your brand represented in AI systems with contextual quality and citation strength, start with a practical baseline and a prioritized sequence of actions.