VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
From the VZ perspective, this topic matters only when translated into execution architecture. Ayana Joseph’s research points out that AI feeds don’t merely reflect your identity—they co-create it. The algorithmic self isn’t science fiction—it’s already shaping who you are. The real leverage is in explicit sequencing, ownership, and measurable iteration.
TL;DR
AI recommendation systems, personalized feeds, and generative AI responses collectively shape who you think you are. They don’t lie—but they do filter. And that selection shapes your identity. This article explores how this mechanism works at the micro and macro levels—from personal validation to social polarization—and what practical steps you can take to keep this process under control.
Scrolling in the café
Sunday morning, café, cappuccino, my phone. My feed reflects my interests: AI, PKM, leadership. Every article reinforces what I already think. Every like reinforces what I already love.
I pause for a moment: when was the last time I read something that wasn’t within my sphere of interest? When was I surprised by something? When did my opinion change?
The algorithm doesn’t censor. It optimizes. But the result of that optimization is a narrowed-down version of myself: the comfortable, risk-free version, where identity is not a challenge but a confirmation. In the digital world, it’s easy to believe that endless choice means freedom. In reality, we increasingly choose from the one percent that the algorithm has pre-filtered for us. This is how infinite possibility becomes a self-copy.
What makes this narrowing so dangerous?
The problem isn’t that you’re scrolling through pleasant content at a café. The problem begins when this narrowed, reinforcing environment becomes your sole source of information: when your worldview is shaped not by the complexity of reality, but by a system optimized to keep you on the platform as long as possible. This situation isn’t just comfortable; it actively erodes the flexible, adaptable, learning self. The core of learning is surprise, challenge, and cognitive dissonance. Algorithms filter these out because they do not generate short-term “engagement.” In the long term, however, these are precisely what identity needs in order to grow.
Joseph’s Model: The Algorithmic Self as a Co-creative Force
Ayana Joseph’s concept of the “algorithmic self” holds that AI does not passively reflect our identity; it actively co-creates it. It doesn’t lie about who we are, but it does select. And that selection has consequences.
If your feed always shows you what you liked, you’ll see more and more of that—and less and less of everything else. Your interests narrow. Your identity narrows. It’s not the content that changes—you change.
Joseph’s model goes a step further than the traditional filter bubble theory. It’s not that an existing, stable identity of ours is sealed off in a bubble. Rather, it’s that our identity is constantly co-constructed through interaction with the algorithm. As the quote from the corpus points out: “In harvesting and processing your data self, algorithms make decisions on how to define you, how to classify you, what you should notice, and who should notice you.” [UNVERIFIED]
This process works like a self-fulfilling prophecy. The algorithm makes assumptions about you based on your past behavior, shows you content that reinforces those assumptions, and your response confirms the algorithm’s assumptions in turn. The result is a digital self-identity that fits ever more tightly to the algorithmic profile, yet may drift further and further from the more complex, contradictory real self that remains possible. It is this dual process, definition and fulfillment, that makes the algorithmic self such a powerful and insidious force.
How exactly does the feedback loop that narrows your identity work?
The mechanism is simple—it’s the classic filter bubble. You click → the algorithm notes it → shows you something similar → you click again. Each cycle reinforces the previous one. After ten cycles, the algorithm knows better than you do what you want to see.
But what you “want” to see isn’t what you need. Curiosity, surprise, unsettling thoughts—these don’t generate clicks. That’s why the algorithm filters them out.
The result: a click-optimized, surprise-free version of who you could be.
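To make the loop tangible, here is a minimal simulation sketch in Python. Everything in it is an assumption for illustration: the topic list, the reinforcement weight, and the proportional recommender stand in for a real platform’s far more complex machinery. What it shows is the direction of travel: a simple diversity measure (Shannon entropy) falls as each click feeds the next recommendation.

```python
# A minimal sketch of the click -> note -> recommend -> click loop.
# Illustrative assumptions: seven topics, a recommender that samples
# proportionally to learned weights, and a user who clicks whatever is shown.
import math
import random

TOPICS = ["AI", "PKM", "leadership", "history", "biology", "art", "policy"]


def entropy(weights):
    """Shannon entropy (bits) of the recommender's topic distribution."""
    total = sum(weights.values())
    return -sum((w / total) * math.log2(w / total)
                for w in weights.values() if w > 0)


def simulate(cycles=100, reinforcement=1.0, seed=7):
    random.seed(seed)
    weights = {t: 1.0 for t in TOPICS}  # the recommender starts neutral
    print(f"start: entropy = {entropy(weights):.2f} bits")
    for cycle in range(1, cycles + 1):
        topics, w = zip(*weights.items())
        shown = random.choices(topics, weights=w)[0]  # recommend one topic
        weights[shown] += reinforcement               # the click reinforces it
        if cycle % 25 == 0:
            print(f"cycle {cycle:3d}: entropy = {entropy(weights):.2f} bits")
    return weights


final = simulate()
print("dominant topic:", max(final, key=final.get))
```

Run it with a few different seeds: which topic wins changes, but the fall in entropy does not. That is the structural point of the filter bubble; the loop narrows whatever it happens to start with.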
How does this scale up from the individual level to the societal level?
The narrowing of individual identity is not an isolated phenomenon. When the personal feedback loops of millions of people converge on the same extreme or polarizing content, algorithms play an active role in reinforcing social division. The corpus citation mentions a specific example: “By the early 2020s, algorithms had reached a level where they could create fake news and conspiracy theories on their own.” [UNVERIFIED]
Even more telling is another quote describing the YouTube algorithm’s power to shape activists: “One activist from Niterói told him that he hadn’t been interested in politics at all until the YouTube algorithm played one of Kataguiri’s videos for him. ‘Before that,’ he said, ‘I had no ideology…’” [UNVERIFIED]. This is not about catering to an existing political interest, but about implanting a completely new identity element—the possibility of becoming an activist—into the personality. This goes far beyond personalization; it is the algorithmic redesign of personality.
How does generative AI differ from traditional recommendation systems?
A traditional feed selects from what already exists. Generative AI generates. It writes a personalized response, phrased exactly the way you like to read it. It doesn’t argue. It doesn’t contradict you. It doesn’t embarrass you.
This isn’t problematic because of manipulation, but because of one-sidedness. If your thinking is always validated, you lose the ability to question your own assumptions.
Generative AI represents a quantum leap. Previous systems displayed only a subset of human creations. Generative AI, such as ChatGPT, however, can produce infinite, uniquely tailored content that speaks directly to your language, style, and implicit assumptions. This creates a new kind of intimacy, a “pseudo-intimacy,” which can make it extremely influential. According to the corpus citation: “New types of generative AI, such as ChatGPT… In a 2023 study published in Science Advances, researchers… found that fake news generated by ChatGPT was judged to be twice as credible as that written by humans.” [UNVERIFIED]
This is a twofold threat: on the one hand, it is so personalized that it is difficult to approach critically; on the other hand, it is capable of generating convincing but false content that can undermine the consensus within our immediate environment. While earlier algorithms merely distorted existing social representations, generative AI can now create new, alternative social realities for you.
How can you tell if an algorithm is narrowing your identity? Assessing the symptoms
The algorithmic self is hard to expose because you always feel good while it’s happening. Confirmation provides a chemical reward; avoiding dissonance provides comfort. The antidote isn’t technical. It is behavioral: deliberately seeking out what is uncomfortable, reading what you wouldn’t choose, listening to those with whom you disagree.
But before we turn to the antidote, we must recognize the disease. Here are a few diagnostic questions:
- Information monoculture: Do you get most of your information on a given topic from a single platform (e.g., YouTube, a news site, a Twitter/X feed)? If so, you are most likely consuming not the topic itself, but a platform-specific, algorithmically optimized version of it.
- The lack of surprise: If you scroll back through your content consumption from the past month, how many articles, videos, or ideas surprised you, challenged you, or fundamentally changed your view on a topic? If that number is close to zero, the algorithm is likely working too effectively (a minimal audit sketch follows this list).
- Emotional uniformity: Is the tone of the content you consume consistently similar? Is it always outrage-inducing, always enchanting, always encouraging? Real emotional life fluctuates; constant emotional uniformity suggests an artificial environment.
- The illusion of expertise: Do you find yourself feeling knowledgeable about a topic you’ve learned almost exclusively from algorithmic feeds, while remaining unfamiliar with the broader, academic, or controversial literature on the subject? This “algorithmic expert” status can provide a dangerous false sense of security.
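If you keep even a rough log of what you consume, the first two diagnostics can be checked mechanically rather than by feel. The sketch below is illustrative only: the log format, the 60% monoculture threshold, and the self-reported “surprised me” flag are assumptions to adapt to however you actually track your reading and watching.

```python
# A minimal self-audit over a month of consumption, covering the
# "information monoculture" and "lack of surprise" diagnostics above.
# Assumptions: a hand-kept log of (platform, topic, surprised_me) entries
# and an arbitrary 60% threshold for flagging monoculture.
from collections import Counter

reading_log = [
    ("youtube", "AI", False),
    ("youtube", "AI", False),
    ("youtube", "PKM", False),
    ("newsletter", "leadership", True),
    ("youtube", "AI", False),
]


def audit(log, monoculture_threshold=0.6):
    total = len(log)
    platforms = Counter(platform for platform, _, _ in log)
    dominant, count = platforms.most_common(1)[0]
    share = count / total
    surprises = sum(1 for _, _, surprised in log if surprised)

    print(f"dominant platform: {dominant} ({share:.0%} of items)")
    if share > monoculture_threshold:
        print("warning: information monoculture, diversify your sources")
    print(f"items that surprised you: {surprises}/{total}")
    if surprises == 0:
        print("warning: zero surprise, the filter may be working too well")


audit(reading_log)
```

Even this toy version makes the point of the diagnostics concrete: the question is not whether your feed feels good, but whether its composition would survive a simple count.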
What practical steps can be taken to regulate the algorithmic self?
Stopping the algorithmic co-creation of identity would be both impossible and unnecessary. The goal is conscious regulation and diversification. It’s not about eliminating algorithms, but about reclaiming the right to make decisions about your information ecology.
- The Deliberate Practice of Curiosity: Schedule it for this week. Seek out a source that you consider fundamentally hostile, whether politically or professionally. The goal is not to be convinced, but to understand the structure and fundamentals of the other narrative. As the corpus notes, confrontation “restores the full shape of your identity”.
- Structural Diversification of Information Sources: Think of it as an investment portfolio. Do not rely on a single platform or channel. Seek out different media (e.g., newsletters, academic journals, physical books, professional circles) that operate with different recommendation logics.
- Creating the “Second Self”: At the dawn of the computer age, psychologist Sherry Turkle used the metaphor of the “second self” to describe how we shape ourselves through our interactions with machines. The quote from the corpus expands on this: “When the world of the computer was new, I used the metaphor of a second self… But now there is a parallel and less transparent movement. Now we know that our life online creates a digital double because we took actions (we don’t…)” [UNVERIFIED]. With this in mind, you can actively shape this “digital twin.” Look for content on topics that serve your long-term development, not just your momentary curiosity.
- Incorporating Critical Questions into Generative AI: When working with ChatGPT or a similar tool, don’t just ask for a summary or confirmation. Ask: “What would be the strongest counterarguments against the position you’ve outlined?” or “What are the ethical risks in this approach?” Force it to acknowledge complexity, as sketched below.
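As a sketch of that last item, here is what a scripted “critical pass” could look like. It assumes the official OpenAI Python client (`pip install openai`) and an `OPENAI_API_KEY` in the environment; the model name is a placeholder, and the same pattern applies to any chat-style API, or simply to a prompt you paste by hand.

```python
# A minimal "critical questions" wrapper: instead of asking a generative AI
# to confirm a position, it is instructed to argue against it.
# Assumptions: the OpenAI Python client (openai>=1.0) and a placeholder
# model name; substitute whatever model and provider you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def critical_pass(position: str) -> str:
    """Ask for the strongest counterarguments instead of confirmation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a rigorous devil's advocate. "
                           "Never simply agree with the user.",
            },
            {
                "role": "user",
                "content": f"Position: {position}\n"
                           "Give the three strongest counterarguments and "
                           "the main ethical risks of holding this position.",
            },
        ],
    )
    return response.choices[0].message.content


print(critical_pass("Personalized feeds are harmless convenience."))
```

The design choice matters more than the tooling: by building the counterargument request into the system role, you make disagreement the default behavior of the session rather than a question you have to remember to ask.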
What will happen if we don’t address this? The long-term social consequences
The problem of narrowing individual identity does not remain at the individual level. At the macro level, this algorithmic homogenization, which we call personalization, can lead to a new form of digital hegemony. The corpus warns: “Silicon Valley risks creating a new hegemony of identity through its construction of these personalized spaces for each person. And these spaces are nothing but a new closet to define our identities, expressions, and behaviors.” [UNVERIFIED]
This “closet” is not physical, but cognitive and behavioral. It determines what you perceive as possible, which identity choices seem rational, and which opinions seem socially acceptable. And as the corpus points out, algorithmic biases are extremely difficult to eradicate: “Getting rid of algorithmic bias, however, is just as difficult as getting rid of our own human biases. ‘Unlearning’ a trained algorithm takes a tremendous amount of time and energy.” [UNVERIFIED]
The end result could be a society where collective thinking is not the result of human dialogue, debate, and the search for consensus, but stems from the logic of the recommendation algorithms dominant on most platforms. This threatens one of the cornerstones of democratic discourse: the possibility of shared facts and open exchange of ideas.
Key Takeaways
- AI feeds don’t just reflect your identity—they co-create it through curation. This process often tends toward a self-fulfilling prophecy.
- The feedback loop narrows not only your interests but your identity, and at the macro level it can lead to social polarization and a new form of identity hegemony.
- Generative AI adds a new, dangerous dimension: it not only selects but also creates reality in a personalized way, and is capable of generating more convincing fake news than humans.
- The antidote lies not in technical tools, but in a deliberate shift in behavior: in the systematic search for uncomfortable, contradictory, and surprising information, as well as in diversifying your information ecosystem.
- Ultimately, the challenge is a question of human freedom: who defines who you are? You, based on your own decisions and curiosity, or an algorithm optimized for a few objective functions?
Frequently Asked Questions
What is the algorithmic self?
The algorithmic self is the digital image that algorithms construct of you from your search history, click patterns, and content consumption. This image influences real decisions: what content you see, what job offers you receive. According to Ayana Joseph’s model, this image is not a passive mirror but an active participant in the continuous co-construction of your identity. It determines how you are defined, how you are categorized, what you pay attention to, and who pays attention to you.
Why is digital identity important in terms of attention?
Because algorithms shape your future attention based on your past behavior. If you aren’t mindful of this, the algorithm decides what you pay attention to—not you. This is the fundamental mechanism of the so-called “attention economy.” Attention is not merely a resource; it is the lens through which you understand the world and yourself. When this lens is tailored by an algorithm optimized to maximize time spent on the platform, your worldview inevitably becomes distorted, and your identity passively adapts to fit this distortion.
Couldn’t we simply turn off the algorithms?
On most mainstream platforms this is practically impossible, or the option is so well hidden that it loses its function. In fact, “turning it off” often just means that personalization falls back to a less accurate but still functional default model. The goal, therefore, is not complete removal (which is not even desirable, since filtering relevant information is useful), but conscious collaboration: developing tools and habits that let you control the algorithm rather than the other way around.
Related Thoughts
- Simon 1971: information abundance, attention scarcity
- AI Panopticon: surveillance stress
- FOBO: when you don’t lose your job
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The context window of the self.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Monitor one outcome metric and one quality metric in parallel.
- Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.