VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
Through a VZ lens, this is not content for trend consumption; it is a decision signal. Open-source AI is no longer only a developer movement: it now shapes national capability, supply-chain resilience, and strategic technology independence. The real leverage appears when the insight is translated into explicit operating choices.
TL;DR
Open-source AI is no longer merely a technological or business choice; it has evolved into a geopolitical tool. As these models have become infrastructure, control over them has become a matter of sovereignty, as evidenced by the emergence of national AI models (e.g., France’s Mistral, China’s Qwen, and DeepSeek). Both U.S. chip export restrictions and the EU’s AI Act demonstrate that regulation and hardware access have become overt political weapons.
For a long time, we thought of open-source AI primarily in terms of developers and corporate frameworks.
Cost. Flexibility. Fine-tuning. Avoiding vendor lock-in.
These are all valid arguments—and in a previous article, we analyzed in detail how the “open weights + private data” formula is becoming one of the most important elements of corporate AI strategy. But the picture is bigger now.
Open-source AI has become a geopolitical factor. Models are no longer just products—they are increasingly infrastructure, and control over infrastructure is always a political issue.
What makes AI a geopolitical issue?
The Intertwining of Infrastructure and Power
One of the constant lessons of technology history is that when a technology becomes infrastructure, control over it also becomes a strategic resource.
OPEC and crude oil. SWIFT and banking transactions. Internet backbones and data traffic. Semiconductor production lines and the chip supply chain.
In every case: whoever controls the infrastructure determines who can access it, under what conditions, and at what price.
AI is now entering this category. AI capabilities—from text processing to decision support to code generation—are being integrated into an increasing number of critical processes: healthcare, defense, public services, finance, and education.
When these capabilities are accessible only through the APIs of two or three large corporations, infrastructural dependency arises. This is a business problem, but above all it is a matter of sovereignty.
The Picture of Concentration
AI development is currently extremely concentrated:
- The development of frontier models is concentrated among five or six organizations (OpenAI, Anthropic, Google DeepMind, Meta, Mistral, and a few Chinese players)
- The required compute: NVIDIA GPUs, whose supply chain relies on a few factories in Taiwan and South Korea
- Energy infrastructure: massive data centers, concentrated mainly in the US and China
- ML talent: the research base and engineering expertise are also concentrated within a narrow circle
This concentration means that much of the world (including the EU, emerging markets, and small and medium-sized economies) is becoming dependent on these few players for access to AI capabilities.
In this context, open-source AI becomes a tool of sovereignty.
Why is this important now?
The emergence of national AI models
Over the past two years, national and regional AI initiatives have been emerging one after another—either exclusively or partially based on open-source:
Mistral AI (France). The emergence of Mistral is no coincidence. The EU’s most significant open-source AI player launched with a deliberate strategic positioning: to ensure that Europe does not rely exclusively on American frontier models. The Mistral model family (Mistral 7B, Mixtral 8x7B, Mistral Large) demonstrates that a European player can be competitive on a global scale while also strengthening the EU’s AI sovereignty.
Falcon (UAE, TII). The Emirati Technology Innovation Institute’s open-source Falcon series represents the Arab world’s first major attempt to build AI infrastructure. When Falcon 180B was released, it topped the Hugging Face Open LLM Leaderboard among open models. The message is clear: the oil monarchies do not want to depend solely on U.S. AI infrastructure.
Qwen (Alibaba, China). The Qwen 2.5 series, a model family spanning roughly 0.5B to 72B parameters, is one of the most capable open-source model families today. China’s open-source AI strategy is partly a countermeasure to Western export restrictions: if NVIDIA’s chip exports are restricted, China can still exert influence on the global AI ecosystem by openly publishing its model know-how.
DeepSeek. The performance and open release of DeepSeek-V3 in late 2024 and DeepSeek-R1 in early 2025 came as a real shock to the market. It was not just a technical achievement but a geopolitical message: a Chinese AI lab is performing at the cutting edge and releasing its work as open source. This directly undermines the effectiveness of U.S. chip export restrictions.
EU Open Language Initiative. The European Union is funding the development of its own open-source foundational models. ELSA (European Language and Speech AI) and OpenGPT-X embody the EU’s quest for sovereignty in AI infrastructure.
Regulation as a Geopolitical Weapon
The U.S. chip export restrictions—the ban on exports of NVIDIA A100 and H100 GPUs to China—are one of the clearest moves in AI geopolitics. The message: if you don’t get chips, you can’t develop cutting-edge AI.
DeepSeek’s response was a direct answer to this limitation: if compute is restricted, then efficiency and algorithmic innovation become more valuable. DeepSeek-R1 reached frontier-level performance despite U.S. export restrictions—through cost-effectiveness and efficiency.
The EU AI Act represents a different approach: it turns regulatory requirements, not chip supply, into a weapon. If an AI system must meet strict compliance requirements—which necessitate transparency, auditability, and documentation—then closed frontier models must also comply. Open-source models are inherently more transparent in this regard.
Where has public discourse gone wrong?
“Open = equal access for everyone”
One misconception surrounding open-source AI is that if model weights are public, everyone is on an equal footing.
This is not true. To run and develop open-source models:
- Compute is required: GPUs, which are expensive and have limited availability
- Data is required: the quality of one’s own domain-specific data determines the effectiveness of fine-tuning
- Expertise is required: ML engineering knowledge, evaluation infrastructure, and deployment know-how are not evenly distributed
Open weights are therefore a necessary but not sufficient condition for sovereignty. The full picture: open model + own compute + local expertise + own data + own evaluation.
The EU, for example, has research expertise and data assets, but its compute infrastructure is largely foreign—and this is where European sovereignty falls short.
The Dilemma of Open Source and Security
There is a serious security concern regarding open-source AI: if model weights are public, malicious actors can more easily modify them, create unrestricted versions, or use them for disinformation and manipulation.
This is a genuine concern—but closed models do not offer complete protection either. Jailbreak techniques work on closed models as well. The closed model leaves the enforcement of restrictions to the provider—which is a matter of trust.
In a geopolitical context, the security dilemma runs deeper: a country that relies exclusively on closed, U.S.-based frontier models has no control over what content these models allow, block, or distort. The open-source model at least allows for self-control—even if it does not completely eliminate security risks.
What deeper pattern is emerging?
Infrastructure is always political
When the internet was spreading in the US, many believed that the global communications network would naturally bring about a liberal value system. Then China built its own, controlled internet infrastructure—the Great Firewall.
The geopolitics surrounding AI follow a similar logic. The question is not whether AI is good or bad overall. Rather, it is who controls inference, who sets the safety boundaries, and who decides what the models learn from training data.
Open-source AI intervenes in this dynamic: it partially decentralizes control over AI infrastructure. Not completely (the concentration of compute and chip supply remains), but know-how and model weights can no longer be exclusively controlled.
Small Models and Sovereignty
It is worth connecting this to the discussion of on-device AI: small, efficiently runnable models are geopolitically important precisely because they can run on local infrastructure.
A Qwen2.5-7B or Mistral-7B model can be run on an average server with minimal infrastructure investment. This means that a healthcare institution, a government agency, or a critical infrastructure operator can become independent of cloud APIs—and run AI capabilities in its own, controlled environment.
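The feasibility claim above can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: it counts weight memory alone (real deployments also need headroom for the KV cache and runtime overhead), and the helper function is our own naming, not part of any particular serving stack.

```python
# Rough memory needed just to hold a model's weights at different precisions.
# Assumption: serving memory is dominated by weights; KV cache and
# activations add extra overhead on top of these figures.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB for a parameter count and precision."""
    # params_billions * 1e9 params * (bits/8) bytes / 1e9 bytes-per-GB
    return params_billions * bits_per_param / 8

for precision, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"7B model at {precision}: ~{weight_memory_gb(7, bits):.1f} GB of weights")
```

At 4-bit quantization, a 7B model's weights fit in roughly 3.5 GB, which is why a single mid-range GPU or even a modern laptop can serve it, and why self-hosting is a realistic path for institutions with modest infrastructure.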
This combination of small models and local infrastructure is one of the most practical implementations of sovereignty.
The Relationship Between Data Sovereignty and Models
AI sovereignty applies not only to model weights—but also to data sovereignty.
Anyone who uses only a closed-source model via an API stores their prompts, any fine-tuning data, and their feedback on the provider’s servers. The provider—at least potentially—can use this data.
Running an open-source model on your own infrastructure preserves data sovereignty. This is particularly critical for government data, health data, trade secrets, and legal materials.
What are the strategic implications of this?
The EU and its “digital sovereignty” program
In the EU’s AI strategy, “digital sovereignty” is one of the three pillars (alongside values and competitiveness). This is not an empty slogan: it is backed by a concrete investment program, a regulatory framework, and infrastructure development plans.
Open models are strategic tools in this EU program: the foundation of an AI infrastructure developed in the EU, run in the EU, and GDPR-compliant.
Anyone building an AI business in the EU should understand this: “AI run in the EU, transparent, and open-source-based” is not just an ethical preference—it is a competitive advantage in the regulatory arena.
Localizability as a measure of value
The key to operationalizing AI sovereignty: localizability. To what extent can an AI system be run on one’s own infrastructure, using one’s own data, and under one’s own control?
- Closed API: low localizability — everything depends on the provider
- Open model, cloud infrastructure: moderate localizability — the model can be localized, but the infrastructure cannot
- Open model, own infrastructure: high localizability — full control
From healthcare institutions to the defense sector, from the education system to critical infrastructure: wherever data sensitivity and system criticality are high, localizability is one of the most important selection criteria.
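The three tiers above can be sketched as a simple decision helper. This is a minimal illustration of the rubric as stated in this article; the type, field names, and tier labels are our own assumptions, not an established metric.

```python
# Minimal sketch of the localizability rubric: two yes/no questions
# (are the weights open? do you control the inference infrastructure?)
# map a deployment onto the article's three tiers.

from dataclasses import dataclass

@dataclass
class Deployment:
    open_weights: bool        # can the model itself be localized?
    own_infrastructure: bool  # does inference run on infrastructure you control?

def localizability(d: Deployment) -> str:
    if not d.open_weights:
        return "low"       # closed API: everything depends on the provider
    if not d.own_infrastructure:
        return "moderate"  # open model on rented cloud: model yes, infra no
    return "high"          # open model on own infrastructure: full control

print(localizability(Deployment(open_weights=False, own_infrastructure=False)))
print(localizability(Deployment(open_weights=True, own_infrastructure=False)))
print(localizability(Deployment(open_weights=True, own_infrastructure=True)))
```

In practice such a rubric would gain further dimensions (data residency, evaluation capability, staff expertise), but even this two-question version makes selection conversations concrete.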
What should we be watching now?
The geopolitical battles over open-source AI
One of the most interesting fronts in the AI race between the US and China is unfolding around open-source technology. Meta’s Llama series—on which half the world is building—comes from the US and is freely available. Qwen and DeepSeek come from China, also as open-source projects. The EU is trying to position itself as a competitor through Mistral.
This three-way competition in the open AI space is significant—and could become increasingly politicized in the coming years. Export restrictions, licensing terms, and governance structures are all potential geopolitical tools.
Multilateral AI Governance
The discourse surrounding global AI governance—OECD AI Principles, UN-level negotiations, G7 AI agreements—is increasingly touching on the regulation of open-source AI as well. How should openly available, potentially dual-use AI models be handled? This is a question for which there is no established answer yet.
Conclusion
The fact that open-source AI has become a geopolitical issue does not diminish the value of the technology—rather, it expands the context in which it must be interpreted.
Open models are not just cheaper alternatives. They are increasingly tools of sovereignty—enabling organizations, institutions, and countries to develop, run, and apply AI capabilities under their own control.
In the history of technology, infrastructure always becomes politics. With AI, this is happening at a faster pace than with any previous technology.
Those who understand this will not only make better technological decisions—but also better geopolitical and sovereignty decisions.
Related articles on the blog
- On-device AI and personal sovereignty: when intelligence moves back into your pocket
- Own data, open weights: the new corporate formula for AI
- DeepSeek and the cost shock: when efficiency shakes up the market
- Mistral 7B and the power of architecture: it’s not about the number of parameters
- The entry barrier has fallen: what the democratization of AI really means
Key Takeaways
- AI models have become infrastructure — Because AI capabilities are embedded in critical processes (healthcare, finance), controlling access to them poses strategic dependencies and sovereignty risks.
- Open-source AI as a tool for sovereignty — National models (e.g., Mistral, Falcon, Qwen) aim to reduce geopolitical dependence on large-corporation APIs and centralized compute infrastructure.
- Regulation and hardware as geopolitical weapons — U.S. chip export restrictions and the EU AI Act’s transparency requirements are also aimed at extending or maintaining technological dominance.
- Open weights ≠ full sovereignty — Public models alone are not sufficient; true independence also requires one’s own compute, data, expertise, and evaluation infrastructure.
- The security dilemma takes on a political dimension — Oversight of closed models falls to external actors, which poses a reliability risk, while open-source allows for self-oversight, though it does not eliminate the risk of malicious use.
Strategic Synthesis
- Translate the core idea of “Open-Source AI as a Geopolitical Variable” into one concrete operating decision for the next 30 days.
- Define the trust and quality signals you will monitor weekly to validate progress.
- Run a short feedback loop: measure, refine, and re-prioritize based on real outcomes.
Next step
If you want your brand to be represented in AI systems with strong context quality and citation strength, start with a practical baseline and a prioritized sequence of actions.