VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
Governance is not a blocker to speed; it is the mechanism that keeps speed survivable. In practice, governance quality is the difference between repeatable learning and repeated incident cycles.
TL;DR
AI governance isn’t about banning ChatGPT; it’s about establishing a framework for its use. The four pillars are: Acceptable Use Policy (what is and isn’t allowed), Model Inventory (what AI systems are used for what purposes, and who is responsible), Output Review Protocol (when human oversight is required), and Incident Response (what happens if something goes wrong). The risk categories in the EU AI Act determine which AI applications require mandatory human oversight. Governance doesn’t slow down AI—it makes it sustainable.
Thursday morning at nine o’clock. The legal department sends an email to the entire company: a colleague used ChatGPT to draft the first version of a contract with a client, and the confidentiality clause was literally an AI-generated template—a text sample taken from a competitor’s case study. The compliance lead calls a meeting. The CEO asks questions that no one can answer: who authorized this, who reviewed it, what are the rules?
This scenario is not an exception in 2026. It repeats on a weekly basis in organizations that have started using AI—but haven’t built a governance framework around it. The problem wasn’t the AI. The problem was that there was no order surrounding it.
Why Isn’t an “AI Policy Document” Enough?
Most organizations’ first reaction to the issue of AI governance is to draft a policy, send it out via email, and think they’re done. This one-page document usually contains something about “responsible AI use” and prohibits “uploading confidential data.” Then nothing changes.
AI governance is not a document—it is a functional framework that has:
- Rules that are actually applied (not just written down, but followed)
- People in charge (not the IT department, but identified decision-makers)
- A feedback mechanism (what happens if something goes wrong)
- Regular reviews (because the AI ecosystem is changing)
Governance does not slow down AI—quite the opposite: organizations that provide a framework for AI use adopt it faster because employees are confident about what is allowed and what is not.
The EU AI Act’s risk categories: the mandatory starting point
The EU AI Act entered into force in 2024, and by 2026 the compliance deadlines for most high-risk categories are in effect. The AI Act defines four risk levels:
| Risk Level | Description | Examples |
|---|---|---|
| Unacceptable risk | Prohibited AI applications | Social scoring, subliminal manipulation |
| High risk | Mandatory human oversight and documentation | HR decisions, credit assessment, medical diagnosis, critical infrastructure |
| Limited risk | Transparency obligations | Chatbots (AI use must be disclosed), deepfakes |
| Minimal risk | Freely usable | Spam filters, games, recommendation systems |
From a corporate governance perspective, the most important question is whether a given AI application falls into the high-risk category. If it does, the EU AI Act makes conformity assessment, built-in human oversight, and risk management documentation mandatory.
For example, recruitment screening assisted by ChatGPT at a Hungarian employer falls into the high-risk category—because it influences automated decisions about individuals’ access to employment. This is not an opinion: it follows from Article 6 of and Annex III to the AI Act.
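To make the classification operational inside a governance tool, the risk tiers above can be sketched as a simple lookup. The use-case tags and the mapping below are illustrative assumptions, not legal advice—real classification requires legal review against Annex III.

```python
# Hypothetical mapping of internal use-case tags to EU AI Act risk tiers.
# Tag names and assignments are illustrative assumptions, not legal advice.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "recruitment_screening": "high",
    "credit_assessment": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; unknown cases escalate."""
    # Defaulting to escalation (not "minimal") keeps unlisted tools from
    # silently bypassing review.
    return RISK_TIERS.get(use_case, "needs_legal_review")
```

Note the default: an unknown use case escalates to legal review rather than falling into the lowest tier.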
The 4 Pillars
Pillar 1: Acceptable Use Policy — what is allowed and what is not
The AUP (Acceptable Use Policy) is the most visible component of AI governance. A list of prohibitions isn’t enough—the AUP works when it specifies what is allowed, for what purpose, and with what data.
The AUP addresses five key dimensions:
Data classification: What kind of data can be given to AI? The most common framework: public data (allowed), internal data (with case-specific permission), confidential data (prohibited from being transferred to AI), personal data (subject to GDPR, requires independent assessment).
Categories of AI tools: Which tool is authorized for what purpose? For example: ChatGPT — internal drafts and brainstorming, without customer data. GitHub Copilot — code assistance, without closed-source code. Enterprise RAG — with all internal documents.
Output review: What content can be released after AI processing without review? (Spoiler: very little.) The AUP makes this explicit.
Copyright and liability: AI-generated text is not “owned” by anyone in terms of copyright — but the company is responsible for the content. The AUP establishes this.
Sanctions: What happens if someone violates the AUP? A policy without sanctions is not a policy.
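The data-classification dimension above lends itself to a programmatic gate, for instance in a proxy in front of external AI APIs. A minimal sketch, assuming the four classification labels from the AUP; the rule names and function are hypothetical:

```python
# Minimal sketch of an AUP data-classification gate.
# The labels and rules mirror the four AUP categories described above;
# the function and rule names are illustrative assumptions.
AUP_RULES = {
    "public": "allowed",
    "internal": "case_by_case_approval",
    "confidential": "prohibited",
    "personal": "gdpr_assessment_required",
}

def aup_allows(data_class: str, has_approval: bool = False) -> bool:
    """Return True if data of this class may be sent to an external AI tool."""
    rule = AUP_RULES.get(data_class, "prohibited")
    if rule == "allowed":
        return True
    if rule == "case_by_case_approval":
        return has_approval
    # Confidential, personal, and unknown classes are blocked by default;
    # GDPR-scoped data needs an independent assessment, not a code path.
    return False
```

The deny-by-default branch is the design choice that matters: anything not explicitly classified is treated as prohibited.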
Pillar 2: Model Inventory — Who, What, Why, and Accountability
By 2026, most companies have between 5 and 15 different AI tools running in parallel: ChatGPT, Copilot, internal RAG, AI-assisted analytics, generative imagery, and more. These are typically undocumented, have no designated owner, and lack an update schedule.
The Model Inventory addresses this. The following must be recorded for every AI application:
| Field | Content |
|---|---|
| Tool name | e.g., “Corporate RAG Assistant” |
| Vendor / model | e.g., “Qdrant + Llama 3 70B, on-premise” |
| Purpose | e.g., “Answering internal HR policy questions” |
| Data classification | e.g., “Internal — does not contain personal data” |
| Responsible person | e.g., “IT Director + HR Manager” |
| Risk classification | Based on the EU AI Act: minimal / limited / high |
| Last review | date |
| Next audit | date |
This inventory is not a static document—it must be reviewed at least every six months and updated whenever a new AI tool is introduced. The Model Inventory allows you to answer at any time: “What AI does our company use?”
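The table above maps naturally onto a typed record, which makes the six-month review rule checkable by a script. A minimal sketch; the class and field names are assumptions, not a standard schema:

```python
# One Model Inventory record as a dataclass, mirroring the table above.
# Class and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInventoryEntry:
    tool_name: str
    vendor_model: str
    purpose: str
    data_classification: str
    responsible: str
    risk_tier: str            # EU AI Act: minimal / limited / high
    last_review: date
    next_audit: date

    def review_overdue(self, today: date) -> bool:
        """True if the entry has passed its scheduled audit date."""
        return today >= self.next_audit

entry = ModelInventoryEntry(
    tool_name="Corporate RAG Assistant",
    vendor_model="Qdrant + Llama 3 70B, on-premise",
    purpose="Answering internal HR policy questions",
    data_classification="Internal, no personal data",
    responsible="IT Director + HR Manager",
    risk_tier="limited",
    last_review=date(2026, 1, 15),
    next_audit=date(2026, 7, 15),
)
```

A nightly job iterating over such records and flagging `review_overdue` entries is enough to keep the inventory from going stale.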
Pillar 3: Output Review Protocol — When Human Review Is Required
Not all types of AI outputs carry the same level of risk. The Output Review Protocol (ORP) determines which AI outputs require mandatory human approval, which are automatically approved, and which are prohibited entirely.
Enterprise AI Safety Lanes:
Green lane — automatic approval: Internal drafts, brainstorming aids, code comments, internal summaries — content that does not leave the organization, does not make decisions, and does not touch customer data.
Yellow lane — mandatory human review before release: Communications sent to customers, draft contracts, marketing copy, proposals, and publicly published content. These can be generated by AI, but a human expert reviews and approves them before release.
Red lane — AI cannot make decisions: Hiring and firing decisions, credit assessments, performance evaluations, legal advice, medical diagnoses — any decision classified as high-risk under the AI Act. AI can assist with analysis, but the decision must be made by a human and documented.
Black lane — prohibited: Misleading customers about the nature of AI, subliminal manipulation, deepfakes, unauthorized processing of personal data.
ORP is critical because AI-generated content appears extremely convincing—even when it is incorrect. The human checkpoint is not a lack of trust in AI, but the exercise of professional responsibility.
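The lane routing above can be expressed as a small lookup so that a content pipeline can enforce it automatically. The output-type tags below are illustrative assumptions; the important design choice is real: unknown output types fall into the yellow lane, never green.

```python
# Hypothetical sketch of Output Review Protocol routing.
# Output-type tags are illustrative assumptions.
LANES = {
    "internal_draft": "green",     # auto-approved, stays inside the org
    "customer_email": "yellow",    # human review before release
    "contract_draft": "yellow",
    "hiring_decision": "red",      # AI may assist, a human decides
    "deepfake": "black",           # prohibited outright
}

def route(output_type: str) -> str:
    """Return the safety lane; unclassified outputs default to human review."""
    return LANES.get(output_type, "yellow")
```

Defaulting to yellow means a newly introduced content type costs a reviewer’s time until someone deliberately classifies it, instead of slipping out unreviewed.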
Pillar 4: Incident Response — What Happens When Things Go Wrong
You need a plan to handle AI incidents—and there will be some. Incident Response doesn’t mean you’re expecting a disaster; it means you won’t have to improvise when one occurs.
Typical types of AI incidents in an enterprise environment:
- Data leak: Confidential data enters an AI system (e.g., customer data is uploaded to a cloud API)
- Hallucination: The AI generates false information instead of facts, and this reaches customers or decision-makers
- Copyright / compliance: AI-generated text contains infringing content
- Bias / discrimination: An AI-based decision systematically treats a group unfavorably
- Prompt injection: Malicious input manipulates the AI system to leak internal data
The Incident Response Process:
- Detection and Reporting — Who reports it, and where? There should be a clear channel (e.g., Slack #ai-incident or helpdesk ticket)
- Immediate Isolation — If active damage is occurring (e.g., data breach), the affected AI system must be temporarily shut down
- Investigation — What happened, how, and who is affected? The Model Inventory and audit logs will be indispensable here
- Notification — Should authorities be notified? (Under GDPR, a data breach must be reported to the NAIH, the Hungarian data protection authority, within 72 hours)
- Correction — Fixing the process or tool causing the error, updating policies
- Documentation — Recording the incident for future audits and to update the Model Inventory
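The six steps above imply a minimal incident record: what happened, which tool, when it was reported, and a timestamped trail for the documentation step. A sketch under those assumptions; the field names and helper are not a standard schema:

```python
# A minimal AI incident record mirroring the six-step process above.
# Field names and the deadline helper are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class AIIncident:
    incident_type: str        # e.g. "data_leak", "hallucination", "prompt_injection"
    affected_tool: str        # should match a Model Inventory entry
    reported_at: datetime
    reported_via: str         # e.g. "#ai-incident" channel or helpdesk ticket
    isolated: bool = False    # True once the affected system is shut down
    timeline: List[str] = field(default_factory=list)

    def log(self, step: str) -> None:
        """Append a timestamped step (step 6: documentation for future audits)."""
        self.timeline.append(f"{datetime.now().isoformat()} {step}")

    def gdpr_notification_deadline(self) -> datetime:
        """GDPR requires reporting a personal data breach within 72 hours."""
        return self.reported_at + timedelta(hours=72)
```

Tying `affected_tool` back to the Model Inventory is what makes step 3 (investigation) fast: the record tells you immediately who owns the system and what data class it touches.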
How to Implement It in 90 Days
Governance is not a one-time project—it is an ongoing operation. But the first 90 days determine whether the framework becomes living practice or stays on paper.
Days 1–30: Laying the Groundwork
- Inventory of AI tools (Model Inventory V1)
- Mapping existing AI usage (interviews with department heads)
- First draft of the AUP, with input from legal counsel
- Risk classification based on the EU AI Act
Days 31–60: Structure
- Approval and communication of the AUP
- Development of an Output Review Protocol for relevant processes
- Description of the Incident Response process and designation of responsible parties
- One-hour awareness training for affected employees
Days 61–90: System
- Implementation of audit logs (who, when, which AI, what data)
- Initial review and refinement
- Designation of a governance lead (no need for a new position—can be an existing manager)
- Audit and review schedule for the next six months
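The audit-log item from days 61–90 (who, when, which AI, what data) can start as something very simple, such as one JSON line per event appended to a log file. A minimal sketch; the format and field names are assumptions, not a standard:

```python
# Illustrative sketch of one audit-log event: who, when, which AI, what data.
# The JSON-lines format and field names are assumptions.
import json
from datetime import datetime, timezone

def audit_entry(user: str, tool: str, data_class: str, action: str) -> str:
    """Serialize one audit event as a JSON line for an append-only log file."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "data_classification": data_class,
        "action": action,
    })
```

Even this minimal form is enough for the incident-response investigation step: grep the log by tool name and you have the usage history the Model Inventory points to.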
Key Takeaways
- AI governance is not a compliance document—it is a functional framework with designated responsible parties, rules, feedback mechanisms, and regular reviews
- The risk categories in the EU AI Act (minimal, limited, high, unacceptable) provide a mandatory starting point: human oversight is not optional for high-risk applications
- The 4 pillars (AUP, Model Inventory, Output Review Protocol, Incident Response) together form the minimum required governance framework
- “Corporate AI safety levels” (green/yellow/red/black) provide employees with a practical tool: everyone knows where the checkpoints are
- Governance does not slow down AI adoption—organizations that provide a framework for its use adopt it faster and more securely
Related Thoughts
- Enterprise AI Adoption — How to implement AI at the organizational level, step by step
- 90% of AI project failures are not technological — The organizational and governance reasons behind AI implementations
- AI Governance and RAG Decision Support — How the governance layer is integrated into the RAG architecture
Zoltán Varga (LinkedIn) · Knowledge Systems Architect | Enterprise RAG Architect · PKM & AI Ecosystems
AI governance is not about fear. It is a prerequisite for sustainable and responsible AI use.
Strategic Synthesis
- Define decision rights and escalation paths before expanding AI usage.
- Operationalize policy into weekly review loops, not static documentation.
- Tie governance controls to real workflow risk categories and ownership.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.