
Zuboff 1988: What the Smart Machine Predicted

In 1988, Shoshana Zuboff made seven predictions about the effects of automation. By 2026, six of them had come true. The seventh—automate vs. informate—is now being decided by AI.

VZ editorial frame

Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.

VZ Lens

Through a VZ lens, this analysis is not content volume; it is operating intelligence for leaders. Its advantage appears only when converted into concrete operating choices.

TL;DR

In her book “In the Age of the Smart Machine” (1988), Shoshana Zuboff made seven predictions about the human impact of automation. Thirty-eight years later, six have come true. The seventh—that humans can consciously choose the path of “informating”—is now being decided.


Morning Tea on an Indian Train

Foggy spots on the windowpane; behind them, the graying fields drift by. The thin porcelain cup in my hand is hot; the tea’s aroma is spicy and sweet. The rhythmic jolting of the train beneath my body, the shadows of figures jostling in the aisle. Outside, in the dawn light, the blurred silhouette of a village. Here I sit, in this moving, breathing space, feeling the warmth of the cup in my palm. My thoughts slowly take shape, like on the fogged-up window. Something about change. About how we, too, sit in a moving vehicle and watch as our world—our work, our knowledge—slips away before our eyes.

The Factory Visit: The Present That Is Already the Past

I’m visiting a factory in Debrecen. The production hall is semi-automated. The operators watch monitors; they don’t operate machines. I ask one of them: does he like working here? “I used to do something,” he says. “Now I watch as the machine does it.”

Zuboff documented exactly this—in 1988, at a paper mill. The fact that this statement remains the same even thirty-eight years later is no coincidence. It captures a philosophical state: the transition from activity to passivity, the loss of the immediacy of experience. The operator does not say, “I am now doing a different, more meaningful kind of work,” but rather emphasizes the absence. This absence is the key.

What did Zuboff predict in her seven observations? Layered reality

Zuboff spent eight years researching industrial environments and observed what happens when automation transforms work. She made seven observations that we can now read as predictions. These were not mere forecasts, but an anatomy of systemic change that continues to define the relationship between technology and work to this day.

1. “Automation takes away manual knowledge.” Machine operators, who previously sensed the manufacturing process through touch, smell, and sound, were placed behind monitors. Physical knowledge was transformed into digital data.

Further explanation: Zuboff called this “the alienation of bodily knowledge.” A paper mill worker “read” the state of the process from the hum of the machines, the feel of the paper web, and the humidity in the air. This was implicit, embodied knowledge. Automation broke this analog, continuous signal down into digital, discrete data points: temperature: 147°C, speed: 450 m/min. Knowledge was not merely transferred, but transformed. What the worker knew cannot be fully expressed in numbers. Part of it was lost in the transformation. Yuval Noah Harari describes the transformative effect of information networks similarly in his book Nexus: “A computer is essentially a machine capable of two astonishing things: making its own decisions and coming up with its own ideas” [CORPUS]. The first step in automation is precisely this: shutting down human decision-making and perceptual processes and cramming them into a narrower, data-driven model.
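The information loss described above can be shown in a toy sketch (the signal and every number in it are invented for illustration, not taken from Zuboff): sampling a continuous process signal at a fixed rate and rounding it to a coarse resolution both discard detail that no table of readings can restore.

```python
import math

def process_signal(t):
    """Invented stand-in for a continuous process quantity (e.g. temperature).

    The slow sine is the overall drift; the fast sine is the fine
    "texture" an experienced operator senses but a 4 Hz sampler misses.
    """
    return 147.0 + 2.0 * math.sin(2 * math.pi * t) + 0.4 * math.sin(40 * math.pi * t)

def digitize(signal, t_end, rate_hz, step):
    """Sample at a fixed rate, then round to a coarse resolution.

    Both operations discard information: sampling drops everything
    between ticks, rounding drops everything smaller than `step`.
    """
    n = int(t_end * rate_hz)
    samples = [signal(i / rate_hz) for i in range(n)]
    return [round(s / step) * step for s in samples]

readings = digitize(process_signal, t_end=1.0, rate_hz=4, step=1.0)
print(readings)  # a few coarse numbers standing in for a continuous process
```

The worker’s embodied knowledge corresponds to the full curve; the digitized readings are what reaches the control-room screen.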

2. “People become observers.” They don’t do—they watch. The nature of the work changes: action becomes observation.

Further explanation: This is not merely a change in status. During action, there is feedback: the result of your movement is immediately visible and perceptible. Supervision is a delayed, abstract world. The operator’s reaction time is limited not by their own reflexes, but by the system’s processing speed. The work shifts from a learned physical routine to a state of waiting marked by unpredictable cognitive stress.

3. “The cognitive aspect of supervision is more exhausting.” Monotonous monitoring—when you’re doing nothing but must be ready—is more exhausting than action.

Further explanation: Zuboff described how workers struggled with the “flood of information” that constantly poured onto screens, but without context. Today’s “AI brain fry” phenomenon is the digital descendant of this. Research by the Boston Consulting Group (BCG) has shown that constant interaction with AI tools, uncertainty, and high expectations cause mental exhaustion and indecision. Action releases energy; constant vigilance burns you out.

4. “Management uses automation for control.” The data machines collect not only optimizes processes—it also monitors workers.

Further explanation: This is where Zuboff first introduces the idea that later became a cornerstone of surveillance capitalism. The automated system produces a dual output: the primary product (e.g., paper) and a vast amount of behavioral data. Management realizes that this data can be used not only to optimize machines but also to optimize human behavior. Today’s “shadow AI” (AI used secretly by employees to circumvent the rules) is the reverse of this dynamic: workers are trying to regain control in a system that is fundamentally built on surveillance.

5. “Collective knowledge is becoming atomized.” Group work is turning into individual screen-watching. Informal knowledge transfer is disappearing.

Further explanation: In the old factory, knowledge was social: an experienced worker would show a novice how to delicately “touch” a part of the machine. Know-how spread during breaks and through collaborative problem-solving. Automation breaks down this community network. Everyone is tied to their individual terminal. Knowledge does not flow organically, but is distributed through a formal system, top-down, as documented “training modules.” The organization loses its tacit, collective intelligence.

6. “The organization splits into two layers.” Those who understand the data, and those who do not. A digital divide.

Further explanation: This is no longer the classic worker-trainee or manual-intellectual distinction. A new divide is emerging: between those who are capable of querying, modeling, and interpreting the system, and those who must only execute the limited commands predetermined by the system. This divide manifests not necessarily in pay, but in influence, autonomy, and job security. The corpus quote refers to Norbert Wiener’s warning: “We can be humble and live a good life with the aid of the machines,” he wrote, “or we can be arrogant and die.” [CORPUS]. The two strata are situated precisely between these two different choices.

Six of these have come true. ManpowerGroup’s 2026 data: AI usage increased by 13%, while employee confidence decreased by 18%. The BCG Brain Fry study confirms Zuboff’s third prediction. The phenomenon of shadow AI confirms the fourth.

The seventh prediction: The crossroads where we stand now

7. “There are two paths: automate and informate.” Automate replaces humans. Informate empowers them—it uses data not for control, but for understanding.

Zuboff hoped that organizations would choose the informate path. That automation would not replace, but enrich, human work.

Detailed explanation: This seventh prediction is not a consequence, but an ethical and strategic choice. The automating path is the “logic of replacement.” Goal: as little human intervention as possible; a process that is as cheap and predictable as possible. The informating path is the “logic of empowerment.” Here, the primary goal of technology is to make hidden processes visible and understandable. Data are not tools of surveillance, but tools of understanding, through which workers can gain deeper insight into their own work and its environment, innovate, and prevent problems. In her book, Zuboff writes: “Informating enriches work with intellectual content…” [CORPUS – quote from the book]. This choice is not a technical one, but a managerial and social decision.

This question has been open for thirty-eight years. In the age of AI, the decision is now being made: do we use AI to replace human thinking (deskilling)—or to help us better understand what we’re doing?

Analogy: Imagine two surgical procedures. In the first, the surgeon operates using an AI-controlled robotic arm; the surgeon only monitors the screens and rarely intervenes. In the second, the same robotic arm is used, but the surgeon views the organ’s anatomy, blood pressure trends, and tissue regeneration potential through a real-time, AI-generated 3D model—information that allows for more informed decisions and which the surgeon can interpret independently. The first is automating, the second is informating. The technology is the same, but the philosophy and the end result are fundamentally different.

Why does the operator in Debrecen say the same thing as the paper mill worker in 1988? The spiral of history

Zuboff was not anti-technology. She clearly saw the benefits of automation. But she also saw that technology is not neutral—the decision of how we use it determines the outcome. Technology is a force that demands direction and commitment from the user. The fact that the statements of the worker in Debrecen and the paper mill worker are identical is a clear sign that, in practice, the path of automating has dominated. The reason is not a flaw in the technology, but rather short-term economic rationality, the desire for control, and the inertia of work organization paradigms.

In 2026, the operator at the Debrecen factory says the same thing the paper mill worker said in 1988: “I used to do something.”

The sentence hasn’t changed. Neither has the question. Only the machine has become smarter.

Cascading effects: This statement is not just a personal complaint.

  • At the digital level: human experience is replaced by algorithmic decision-making.
  • At the corporate level: innovation capacity declines because tacit knowledge disappears, and employees become passive users.
  • At the workplace level: employee engagement and satisfaction decline, leading to turnover and further pressure for automation.
  • At the local economic level (e.g., Debrecen): if the task is merely monitoring a screen, that work can easily be outsourced globally or fully automated, perpetuating uncertainty in the region.

The spiral turns downward.

How can we finally choose the informating path? Practical reflections

The fulfillment of the seventh prediction is not automatic. It requires conscious planning and new paradigms.

  • From a planning perspective: Don’t ask, “How can I automate this task?” Instead, ask: “What information does the human operator need to understand this process more deeply, manage it better, and improve it?” Interface design should focus not on replacement but on augmentation.
  • From a management perspective: Use data not for monitoring, but for feedback and learning. Give employees the time and freedom to explore data and ask questions—even legitimizing the use of “shadow AI” within a guided framework.
  • From an education/training perspective: Don’t just teach how to use the software; teach critical interpretation of data and systems-level thinking. Don’t let physical knowledge disappear; document it, record it on video, and incorporate sensory experiences into training wherever possible.
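The first bullet above can be sketched as two versions of the same control step (every name and number here is hypothetical, not from Zuboff or any real system): an automating version that corrects silently, and an informating version that makes the same correction but also surfaces the context the operator needs to understand, question, and eventually improve the process.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    name: str
    value: float
    setpoint: float

def automate_step(reading):
    """Automating style: correct silently; the operator sees nothing."""
    return reading.setpoint - reading.value  # adjustment, no explanation

def informate_step(reading, log):
    """Informating style: the same correction, plus an interpretable trace
    that lets the operator learn what the system is doing and why."""
    adjustment = reading.setpoint - reading.value
    log.append(
        f"{reading.name}: measured {reading.value}, target {reading.setpoint}, "
        f"adjusting by {adjustment:+.1f}"
    )
    return adjustment

log = []
r = Reading("web tension", 96.5, 100.0)
assert automate_step(r) == informate_step(r, log)  # identical control action
print(log[0])  # only the informating version leaves something to learn from
```

The control action is identical in both paths; the design choice lies entirely in whether the data are exposed as a tool of understanding or hidden behind the machine.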

Key Takeaways

  • Six of Zuboff’s seven predictions have come true over the past 38 years, not because she was a visionary, but because she thoroughly analyzed the embedding of technology into the social system.
  • The prediction that “surveillance work is cognitively more exhausting” anticipated the 2026 AI brain fry phenomenon, which confirms that the root of the problem is not AI, but poorly designed human-machine interaction.
  • The seventh prediction—automate vs. informate—is now being decided in the age of AI. This is not a technical choice, but a strategic and moral one.
  • Technology is not neutral: the decision of how we use it determines the outcome. The operator’s sentence in Debrecen is the sad fruit of past decisions.
  • The opportunity exists: we can use AI, just as we previously used automation, to intellectually extend and empower workers. To do this, however, organizations must abandon the paradigm of exclusive control and replacement.

Frequently Asked Questions

What is “In the Age of the Smart Machine” and what did it predict?

Shoshana Zuboff wrote “In the Age of the Smart Machine” in 1988, in which she predicted that automation could go in two directions: automate (replacing human labor) or informate (enriching human work). Most organizations chose the former, which led to the loss of physical skills, the cognitive strain of work, and organizational fragmentation.

How is this relevant in the age of AI?

We face exactly the same choice, only on a much larger scale and at a much faster pace: you can use AI to replace human thinking (automate) or to deepen it (informate). Zuboff said this 38 years ago—and we still haven’t learned from it. AI does not solve 20th-century work organization problems; it either amplifies them or offers an alternative path, if we consciously choose it.

Are the negative effects of automation inevitable?

No. Technological change is inevitable. However, the effects are shaped by our decisions. If replacement and control are the main considerations, the effects will be negative. If empowerment and understanding are the focus, the effects can also be positive. The problem lies not in the machines themselves, but in the short-sighted economic-logical framework into which we embed them.



Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The machine learns. The question is: do you?

Strategic Synthesis

  • Map the key risk assumptions before scaling further.
  • Monitor one outcome metric and one quality metric in parallel.
  • Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.

Next step

If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.