Moral Courage in the Age of Data

Milgram’s experiment still holds true: obedience to algorithmic authority is stronger than obedience to human authority because the machine appears objective. Responsibility cannot be delegated.

VZ editorial frame

Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.

VZ Lens

Through a VZ lens, this analysis is not content volume; it is operating intelligence for leaders. The practical edge comes from turning the Milgram insight above into repeatable decision rhythms.

TL;DR

Behind every algorithm is a person who made a decision—and behind every AI system’s output is a person who should be making a decision. Moral courage isn’t born in a heroic moment, but in daily pull requests, in quiet “no’s,” and in the realization that a technological decision is always a moral decision. Responsibility cannot be delegated—neither to a machine nor to algorithmic consensus.


In Stanley Milgram’s 1963 obedience experiment, 65% of participants administered the maximum electric shock because a person perceived as an authority figure instructed them to do so. A 2025 replication confirmed that the phenomenon endures. In the digital age, the situation is worse: obedience to algorithmic authority is stronger than obedience to human authority because the machine appears “objective.” In the terms of Jean-Paul Sartre’s existentialism, “I was just following the algorithm” is said in just as much bad faith as “I was just following orders.”

The Café Where Ethics Begins

I’m sitting in a café on Ráday Street one Saturday afternoon. Two tables over, a development team is holding a sprint review—laptops open, voices quiet, but the gesturing is intense. Someone says, “This works, but we shouldn’t put it out.” The others are silent. Then someone else says, “But the product owner has already approved it.”

That silence between the two sentences—that is the realm of moral courage.

It’s not in conference rooms that the direction of technology is decided. Not in keynotes, and not in white papers. But in these moments: when someone knows that something is technically flawless but morally questionable—and still dares to say it.

In Roger Zelazny’s Nine Princes in Amber (https://en.wikipedia.org/wiki/Nine_Princes_in_Amber), reality was manipulable: every shadow world could be equally “true,” and power lay in who could choose which one to exist in. In the universes of James Tiptree Jr., identity was fluid, and power resided in hidden mechanisms. Moral courage in the digital age unites both worlds: reality is manipulable (the algorithm filters, ranks, and shapes), and power is hidden (the decisions behind the code are invisible to the end user).

Digital Bushido: A Code Behind the Code

There is an ancient samurai code—the bushido—whose essence lay not in combat techniques, but in the warrior’s inner composure. 21st-century tech professionals don’t wield swords, but the parallel is strikingly accurate. Every day, developers, data scientists, and product managers make decisions that affect millions of people—and these decisions lack a formal ethical framework.

One commit, one honor. No merge, no surrender.

This isn’t a slogan. It’s an operating principle. Every single commit—every single piece of code we push into the system—is a moral act. Not because the code itself is good or bad, but because the code operates within a context: it affects people, shapes decisions, and closes or opens up possibilities.

The question is never “can we code it?” The question is “should we?”

[!note] The Seven Virtues of Bushido and Technological Ethics The seven principles of Bushido—justice (gi), courage (yū), benevolence (jin), respect (rei), sincerity (makoto), honor (meiyo), and loyalty (chūgi)—proved their worth not on the battlefield but in everyday decisions. Precisely where today’s technological ethics should also be practiced: not in the wake of major scandals, but in daily decisions.

Why does the machine amplify the mistakes of the past?

There is a disconcerting moment in every developer’s life. When they realize that the algorithm they wrote consistently makes bad decisions—not because they programmed it poorly, but because the training data is riddled with biases.

Three layers of bias are at work simultaneously:

graph TD
    A["Historical Bias<br/>Biases from the past<br/>embedded in the data"] --> D["Algorithmic<br/>Amplification"]
    B["Representation Bias<br/>Incomplete or distorted<br/>datasets"] --> D
    C["Selection Bias<br/>Who collects the data?<br/>From what perspective?"] --> D
    D --> E["The machine blindly<br/>reproduces and<br/>AMPLIFIES<br/>the mistakes of the past"]
    E --> F["Decisions:<br/>credit, employment, medical,<br/>police, education"]

Research by the MIT Media Lab has shown that some commercial facial recognition systems perform significantly worse with people of darker skin tones—with recognition errors being particularly severe for women of darker skin tones. This is not a software bug in the classical sense. It is representational bias: the system performs worse at recognizing groups that are underrepresented in the training dataset, and this shortcoming has real-world consequences—ranging from wrongful arrests to discriminatory hiring practices.

An analysis of HireVue and similar AI-based recruitment platforms has shown that automated voice and facial expression analysis tools are unable to adequately handle speech variations and non-standard facial expressions, leading to biased evaluations and unjustified exclusions from the hiring process.

According to critical theory, every algorithm is political. But not because we make it so—rather, because even the raw data is political. The method of data collection, the selection of variables, the definition of the target variable—each is a human decision, and each carries the biases of the given era and culture.

[!warning] Bias is not a bug—it’s the default Most algorithmic bias does not stem from the developer’s intent, but from structural flaws in the training data. This means that even systems perceived as neutral are not neutral—and anyone who does not actively examine bias tacitly approves of it.
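
What does “actively examining bias” look like in practice? A minimal starting point, sketched below in Python, is simply measuring positive-decision rates per group on a validation set. The group labels, toy data, and the commonly cited (and contested) 0.8 threshold are assumptions for illustration, not a complete fairness audit.

    # Minimal sketch of an active bias check on model decisions, in pure Python.
    # The group labels, toy decisions, and the 0.8 threshold are illustrative
    # assumptions, not a complete fairness audit.
    from collections import defaultdict

    def selection_rates(records: list[tuple[str, int]]) -> dict[str, float]:
        """Share of positive decisions (1 = approve) per demographic group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, decision in records:
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates: dict[str, float]) -> float:
        """Lowest group rate divided by highest group rate (1.0 = parity)."""
        return min(rates.values()) / max(rates.values())

    # Toy data standing in for a model's decisions on a validation set.
    records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 0)]
    rates = selection_rates(records)
    ratio = disparate_impact_ratio(rates)
    print(rates, f"ratio={ratio:.2f}")
    if ratio < 0.8:  # a common rule of thumb; measuring at all is the real point
        print("Warning: selection rates diverge strongly across groups.")

A single ratio proves nothing on its own; its value is that it forces the question onto the table before the system ships.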

Why is the trolley problem no longer a thought experiment?

Philippa Foot’s original trolley problem was a philosophical abstraction for decades: whom do you save, whom do you sacrifice, and on what grounds? In the age of artificial intelligence, this dilemma has stepped out of the classroom.

The decision-making algorithm of self-driving cars—in the event that an accident is unavoidable—already provides a programmed answer to the question that generations of philosophers have debated. But there is a crucial difference: in the philosophical dilemma, a human makes a decision in the moment, based on their own human limitations and value system. The algorithm decides in advance—based on the programmers’ values, the biases in the training data, and an optimization function whose parameters are hidden.

| Classic dilemma | Algorithmic dilemma |
| --- | --- |
| A person decides in the moment | The code contains a predefined decision |
| Driven by moral intuition and values | Driven by an optimization function and weights |
| The decision is unique and unrepeatable | The decision is reproducible and scalable |
| Responsibility is clear | Responsibility is dispersed (developer, company, data provider, regulator) |
| The context is complete | The context is reduced (the algorithm does not “see” everything) |
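
To make the “optimization function and weights” row concrete, here is a deliberately simplified, hypothetical sketch in Python. The outcome labels, harm scores, and weights are invented for illustration and describe no real system; the only point is that the moral judgment is fixed in the parameters long before any real situation occurs, invisibly to the person affected.

    # Hypothetical, simplified sketch: a predefined cost function for an
    # unavoidable-harm scenario. The labels, scores, and weights are invented;
    # whoever sets these numbers has already made the moral decision.
    COST_WEIGHTS = {
        "harm_to_passenger":  1.0,   # who chose this number, and why?
        "harm_to_pedestrian": 1.0,   # is equality here a value judgment too?
        "property_damage":    0.05,
    }

    def total_cost(predicted_harms: dict[str, float]) -> float:
        """Weighted sum over predicted harms: the decision made in advance."""
        return sum(COST_WEIGHTS[k] * v for k, v in predicted_harms.items())

    def choose_action(options: dict[str, dict[str, float]]) -> str:
        """Pick the maneuver with the lowest predefined cost."""
        return min(options, key=lambda name: total_cost(options[name]))

    # Two hypothetical maneuvers with predicted harm scores between 0 and 1.
    options = {
        "brake_straight": {"harm_to_passenger": 0.2, "harm_to_pedestrian": 0.6, "property_damage": 0.1},
        "swerve_left":    {"harm_to_passenger": 0.5, "harm_to_pedestrian": 0.1, "property_damage": 0.8},
    }
    print(choose_action(options))  # the "answer" was already encoded in COST_WEIGHTS

Change one weight and the “decision” changes with it; that is exactly the reproducible, scalable quality the table above contrasts with the unrepeatable human moment.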

In 2018, MIT’s Moral Machine project collected 40 million decisions from 233 countries and territories and demonstrated that moral preferences vary dramatically across cultures. Yet only a single set of values can be encoded into any given algorithm. Whose value system?

This is not a technical question. It is a political, cultural, and—ultimately—moral question.

Milgram in the server room: the anatomy of digital obedience

Stanley Milgram revealed a simple but shocking fact in his 1963 obedience experiment: the vast majority of people are willing to carry out harmful acts if ordered to do so by a person perceived as an authority figure. In the experiment, 65% of participants administered the maximum 450-volt electric shock to the other person—who was actually an actor—simply because a researcher in a white lab coat told them to continue.

A modern replication published in Scientific Reports in 2025 confirmed that the phenomenon is enduring and universal. In the experiment, conducted under 21st-century conditions using an updated methodology, the rate of obedience remained consistently high.

But what happens if we replace the researcher in the white lab coat with an algorithmic decision-making system?

An analysis by Structural Learning highlights a disturbing parallel: the technology’s perceived neutrality makes compliance even more acceptable than human authority. When an AI system makes a recommendation—whether it’s rejecting a loan application, filtering out a candidate, or moderating content—the decision-maker is more likely to accept it than if a human were to say the same thing. The algorithm appears “objective.” It has no face, no mood, no agenda—at least that’s how we perceive it.

[!note] The digital version of the Milgram effect In Milgram’s original experiment, increasing physical distance reduced obedience—if the participant could see the victim’s suffering, they were less likely to administer the electric shock. In the digital environment, the distance is maximal: the consequences of algorithmic decisions are invisible to the decision-maker. This is the inverse of the “proximity effect”: the farther away the consequence, the easier it is to obey.

This is not a software problem. It is a social-psychological problem mediated by software.

Sociology has long warned that all technology operates within a social context. AI is not neutral either. The training dataset may be biased, and the objective function may convey a hidden ideology. But the deepest problem lies not in the technology itself, but in the reflex by which humans hand over the responsibility for decision-making to a system—just as they handed it over to an authority figure in Milgram’s experiment.

Sartre in the pull request: the existentialist programmer

Jean-Paul Sartre’s existentialism can be condensed into a single sentence: man is condemned to freedom. There is no pre-written essence, no set goal, no exemption. Man is what he makes of himself—and therefore everything he does, he does with the full weight of responsibility.

This idea is radically relevant in the world of technology.

Sartre’s concept of bad faith (mauvaise foi) precisely describes the mechanism by which technology professionals—developers, product managers, data scientists—shirk responsibility:

  • “I was just implementing the specification”—as if the specification absolves one of moral responsibility.
  • “I was just following the algorithm”—a digital version of the Milgram experiment.
  • “The market demands it”—as if market demand were a moral mandate.
  • “This isn’t my responsibility, it’s the regulator’s”—as if legal compliance were equivalent to ethical compliance.

According to Sartre, all of these are forms of bad faith. A person always chooses—even when they decide not to choose. “Not deciding” is also a decision: the tacit approval of the status quo.

Existentialism does not offer comfortable answers. It does not tell you what you should do. It says: whatever you do is your decision, and therefore you are responsible. This radical freedom is not liberation—but a burden. Yet it is precisely this burden that makes a person a moral being.

Albert Camus—though not formally an existentialist—states in The Myth of Sisyphus: even in an absurd world, rebellion makes sense. Preserving human dignity is not tied to the hope of victory. Sisyphus’s happiness lies in knowing that the boulder will roll back, and yet he pushes it.

The developer who knows that the algorithm distorts—and yet speaks out—is performing precisely this Sisyphus-like gesture.

Microethics and Macromorality: Responsibility Does Not Scale

Psychology—particularly research on cognitive biases—has documented for decades that people are prone to the diffusion of responsibility. The more people are present in a situation, the less an individual feels that they should act. This is the bystander effect, often called the Genovese syndrome—and it works just as well in large-scale corporate technology development as it does on a downtown street.

Moral courage is therefore not about following abstract rules. It is about active presence in every single decision-making situation. It is not a question of “what is allowed,” but of “what should be done”—and there is often a gaping chasm between the two.

graph LR
    A["WHAT CAN<br/>be done?<br/>(technical question)"] --> B["WHAT IS ALLOWED<br/>to be done?<br/>(legal question)"]
    B --> C["WHAT SHOULD<br/>be done?<br/>(ethical question)"]
    C --> D["WHAT SHOULD NOT<br/>be done?<br/>(moral question)"]

    style A fill:#4a9,stroke:#333,color:#000
    style B fill:#5ab,stroke:#333,color:#000
    style C fill:#e94,stroke:#333,color:#000
    style D fill:#c33,stroke:#333,color:#fff

At most tech companies, decision-making stops at the first two levels. “Can we code it? Is it legal? Okay, let’s do it.” The third level—“but should we?”—rarely gets a mention in sprint planning. The fourth level—“Are there things we could do but shouldn’t, morally speaking?”—practically doesn’t exist in most organizational cultures.

Microethics means asking the question at every single decision point—in every pull request, every feature specification, every data choice—who does this harm? Who does it benefit? And who is silenced?
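
One way to keep these questions from evaporating under deadline pressure is to bind them to the merge ritual itself. The sketch below is a hypothetical example, not an established tool or CI convention: a tiny pre-merge check that fails until the pull request description explicitly addresses the three microethics questions. The file name and the idea of wiring it into CI are assumptions for illustration.

    # Hypothetical pre-merge check: fails unless the pull request description
    # addresses the three microethics questions. File name and CI wiring are
    # assumptions for illustration.
    import sys

    REQUIRED_QUESTIONS = [
        "Who does this harm?",
        "Who does it benefit?",
        "Who is silenced?",
    ]

    def missing_questions(description: str) -> list[str]:
        """Return the questions the PR description does not mention."""
        text = description.lower()
        return [q for q in REQUIRED_QUESTIONS if q.lower() not in text]

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "PULL_REQUEST.md"
        with open(path, encoding="utf-8") as f:
            missing = missing_questions(f.read())
        if missing:
            print("Merge blocked - unanswered microethics questions:")
            for q in missing:
                print(" -", q)
            sys.exit(1)
        print("Microethics questions addressed.")

Such a check cannot judge the quality of the answers; its only job is to make silence impossible.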

What leadership competencies does the AI era demand?

For decades, leadership competence has rested on three pillars: strategic thinking, financial intelligence, and technological understanding. In the AI era, this trio is supplemented by a fourth pillar: moral intelligence.

The leader of the future is not the one who understands AI best. Rather, it is the one who best understands when NOT to use it. The one who asks not only, “What can we do?” but also, “What should we do?”—and, most difficult of all, “What must we not do, even if it is technologically possible?”

This competence cannot be taught in a three-day workshop. It is not a checklist. It is an inner attitude that must be practiced daily—just as bushido does not begin on the day of battle, but becomes a reflex through daily discipline.

| Old Leader | New Leader |
| --- | --- |
| “How do we automate it?” | “Should we automate it?” |
| “What ROI does AI deliver?” | “What externalities does AI generate?” |
| “Are we compliant with regulations?” | “Does our system respect human dignity?” |
| “What data should we collect?” | “What data should we NOT collect?” |
| “Faster, cheaper, more efficient” | “Fairer, more transparent, more humane” |

The moment you say no

I’m back at the café on Ráday Street. The development team is still arguing at the table. Finally, the one who spoke up first—“this works, but we shouldn’t release it”—doesn’t give up. He doesn’t yell or threaten. He just repeats: “I know it works. But think about who it hurts.”

This is moral courage. Not the heroic, glory-filled moment. But the quiet, uncomfortable, risky statement. The sentence that would be easier not to say.

The future isn’t decided by algorithms. But by the people who are capable of saying no to their own algorithms. Because they know: not everything that is possible is also right.

Change doesn’t start in conference rooms. Change starts where you are right now—with the next pull request, the next feature decision, the next data choice.

Because the future isn’t just in the code we write—it’s in the code we refuse to write.

The revolution isn’t the machines’. The revolution is yours.

Key Takeaways

  • A technological decision is always a moral decision — there is a gap between “we can code it” and “we should code it,” and we must actively bridge that gap
  • The Milgram effect is alive and well in server rooms — obedience to algorithmic authority is stronger than obedience to human authority because the machine appears “objective”
  • Responsibility cannot be delegated — neither to algorithmic consensus, nor to specifications, nor to regulators. In Sartre’s words: man always chooses, and he is always responsible
  • Moral courage is a daily practice — it is not measured in grand moments, but in pull requests, feature decisions, and in that quiet “no” that is harder to say than to remain silent

Frequently Asked Questions

What is algorithmic bias and why is it dangerous?

Algorithmic bias is the phenomenon where an AI system systematically produces unfair results against certain groups. This is not a software bug—it is the result of structural deficiencies in the training data. It is dangerous because the system appears “objective” while reproducing and reinforcing past biases. MIT research has shown that commercial facial recognition systems perform significantly worse on people with darker skin tones, leading to wrongful arrests and labor market discrimination.

What does moral courage mean in the world of technology?

In the context of technology, moral courage means the ability to recognize that our technological decisions have ethical consequences, and the willingness to speak up—even at the risk of our own comfort or career—when a system is harmful. This is not a heroic gesture, but a daily practice: asking the question in every pull request, every data choice, every feature decision: “Who does this harm, and who cannot speak up?”

How can Sartre’s existentialism be applied to technological decision-making?

According to Sartre, humans are “condemned to freedom”—there is no exemption from responsibility. In a technological context, this means that “I was just following the algorithm” or “I was just implementing the spec” are not valid excuses. Every developer, product manager, and data scientist involved in creating a system is responsible for its consequences. Recognizing bad faith is the first step: if someone knows a system is harmful but doesn’t speak up—that’s not neutrality, it’s a decision.



Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
One commit, one conscience. No merge without meaning.

Strategic Synthesis

  • Identify which current workflow this insight should upgrade first.
  • Set a lightweight review loop to detect drift early.
  • Close the loop with one retrospective and one execution adjustment.
