
AI Policy: Regulate Capability, Accountability, and Use Context

Policy debates often chase headlines instead of risk mechanics. This piece maps what should be governed first: capability thresholds, responsibility chains, and deployment context.

VZ editorial frame

Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.

TL;DR

Most corporate AI policies are 30-page PDFs that no one reads. A truly useful AI policy answers a single question: how do we decide when and how to use AI? It includes four essential elements: data classification, chain of responsibility, hallucination protocol, and an EU AI Act compliance statement. For SMEs, all of this fits on a single page.


The PDF That Was Left in the Drawer

I worked at a law firm last year. Clients had started asking questions and regulatory pressure was mounting, so the firm asked its senior partner to draft an AI usage policy. The partner hired an outside consultant. Two weeks, three meetings, twenty-eight pages.

The document wasn’t bad, by the way. It covered everything precisely: what’s allowed, what isn’t, how to keep logs, what data can be processed with AI. In the first week, everyone received a printed copy. By the third week, the paralegals were back to using ChatGPT for client agreements, agreements full of confidential client data, because they didn’t remember what the document said, and no one seemed bothered that they didn’t.

The problem wasn’t the content of the policy. The problem was that the policy wasn’t a decision-making tool, but a compliance document. There’s a big difference between the two.

What’s the difference between a compliance document and a decision-making framework?

A compliance document tells you what the rule is. A decision-making framework tells you how to think through the situation and what to refer to when making a decision.

A simple test: if a colleague asks tomorrow, “Can I use AI for this client letter?”, can they find the answer in the document within three minutes? If not, then the document is compliance text, not a usable policy.

A usable AI policy is short, specific, and actionable. It must answer four questions:

  1. What data is allowed to be sent to AI?
  2. Who is responsible for the output generated by AI?
  3. What should be done if the AI generates incorrect results?
  4. How does the EU AI Act affect us?

The first mandatory element: data classification

Data classification is the cornerstone of the policy. It is not enough to simply say, “Confidential data must not be sent to AI”—because everyone has a different understanding of what “confidential” means.

Three levels are sufficient for most SME-sized organizations:

Public data: Industry news, general templates, non-personal marketing materials. Can be processed with any AI tool, including cloud-based solutions.

Internal data: Internal reports, process descriptions, non-personal business data. Only with approved tools and under specific conditions (e.g., the provider must not use the data to train its models).

Confidential data: Personal data (under GDPR), customer contracts, financial data, trade secrets. It may only be fed into the AI if the tool’s data handling terms explicitly cover this and it has been approved by the legal director.

This classification is not a legal document—it is a one-page table containing specific examples: “customer name and email = confidential,” “industry news = public.”
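
In practice, some teams encode this one-page table as a pre-flight check that answers “can I send this to that tool?” before anything is pasted into a prompt. Below is a minimal sketch in Python; the category names, examples, and tool lists are illustrative assumptions, not part of any specific policy.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1        # industry news, general templates
    INTERNAL = 2      # internal reports, non-personal business data
    CONFIDENTIAL = 3  # personal data (GDPR), contracts, trade secrets

# Hypothetical examples-to-level mapping, mirroring the
# "customer name and email = confidential" style of the table.
EXAMPLES = {
    "industry news": DataClass.PUBLIC,
    "process description": DataClass.INTERNAL,
    "customer name and email": DataClass.CONFIDENTIAL,
    "customer contract": DataClass.CONFIDENTIAL,
}

# Hypothetical tool approvals per level. Confidential data has no
# default tools: it needs explicit legal sign-off case by case.
ALLOWED_TOOLS = {
    DataClass.PUBLIC: {"any-cloud-ai"},
    DataClass.INTERNAL: {"approved-tool-no-training"},
    DataClass.CONFIDENTIAL: set(),
}

def may_send(data_kind: str, tool: str) -> bool:
    """Return True if this kind of data may be sent to the given tool."""
    level = EXAMPLES.get(data_kind)
    if level is None:
        return False  # unclassified data is treated as confidential
    return tool in ALLOWED_TOOLS[level]

print(may_send("industry news", "any-cloud-ai"))      # True
print(may_send("customer contract", "any-cloud-ai"))  # False
```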

The second mandatory element: chain of responsibility

AI-generated output is not accountable for itself. But then who is?

This is the question most organizations have never answered explicitly, and it is why AI errors remain unaccounted for.

The policy must provide an explicit answer on two levels:

Operational Responsibility: Anyone who uses AI output in a task is responsible for verifying it. A colleague who uses AI to draft a proposal must review and approve it before sending it. If the proposal contains incorrect information, it is not “the AI’s fault”—it is their fault.

System responsibility: Whoever introduced the AI tool or process into the organization is responsible for ensuring that the tool is suitable for the given task. If a tool not intended for that purpose is used to generate legal documents and produces an incorrect result, the responsibility lies with the person who introduced it.

This dual accountability structure is not punitive. Its purpose is to prevent a vacuum from forming where no one notices the error—because no one knew it was their job to notice it.
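
One lightweight way to keep both owners visible is to record them next to every AI-assisted task. A minimal sketch, with hypothetical field names and values:

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    task: str               # e.g., "draft client proposal"
    tool: str               # which AI tool produced the draft
    operational_owner: str  # who must verify the output before use
    system_owner: str       # who introduced the tool and vouches for its fit
    verified: bool = False

record = AIUsageRecord(
    task="draft client proposal",
    tool="example-llm",
    operational_owner="j.kovacs",
    system_owner="it.lead",
)

# The output may only leave the organization after verification;
# an unverified record is exactly the vacuum the policy must prevent.
record.verified = True
assert record.verified, "Unverified AI output must not be sent out."
```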

The third mandatory element: hallucination protocol

LLMs hallucinate. They invent facts, fabricate sources, and get numbers wrong. This is not a bug that the next model update will fix—it is an intrinsic property of the architecture that can only be mitigated, but not completely eliminated.

Corporate policy must include an explicit protocol for cases where the AI generates false output:

Identification of high-risk areas: What are the types of tasks where the consequences of hallucination are severe? (Legal content, medical advice, financial data, technical specifications.) In these areas, AI output must be verified through mandatory human review—not as an optional step, but as a built-in process step.

Identification of low-risk areas: Where the consequences of hallucination are manageable (e.g., a summary of internal brainstorming), built-in verification would be an excessive burden. These must be explicitly identified so that the team does not treat every AI output with the same level of suspicion.

Incident protocol: If a faulty AI output does slip through, there should be a clear sequence of steps: who to notify, how the output is corrected, and what is documented as lessons learned.
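
The routing rule behind this protocol is simple enough to state directly: high-risk and unclassified tasks always get mandatory human review, while explicitly listed low-risk tasks do not. A minimal sketch with illustrative task lists (your own inventory will differ):

```python
# Illustrative task inventories; a real policy lists its own.
HIGH_RISK = {
    "legal content", "medical advice",
    "financial data", "technical specification",
}
LOW_RISK = {"internal brainstorming summary", "meeting notes"}

def review_required(task_type: str) -> bool:
    """Mandatory human review for high-risk or unclassified tasks."""
    if task_type in HIGH_RISK:
        return True
    if task_type in LOW_RISK:
        return False
    return True  # unclassified tasks default to review until triaged

print(review_required("legal content"))                   # True
print(review_required("internal brainstorming summary"))  # False
```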

The fourth mandatory element: EU AI Act compliance statement

The EU AI Act entered into force in August 2024 and is being phased in gradually. By 2026, most of its provisions will be applicable. SMEs do not need to hire a lawyer—but they must know where they stand.

The AI Act is risk-based: it classifies AI systems into risk categories. Most tools used by SMEs (text generation, data analysis, image generation) fall into the “minimal risk” or “general-purpose AI” categories. This means no special permit is required — but there are some basic requirements:

Transparency: If you use AI in direct communication with a customer (e.g., chatbot, AI-generated email), the customer must know they are communicating with AI.

Human oversight: In high-risk decisions (e.g., credit scoring, hiring decisions), AI cannot make decisions independently—human oversight is required.

Prohibited applications: Social scoring, manipulative techniques, and certain banned biometric identification practices. You must know these prohibitions even if your company has no plans to use such systems.

A one-page statement in the policy is sufficient: “The company’s current use of AI falls into these categories, so these requirements must be met.” No more—but anything less than this is not responsible conduct.
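
That one-page statement can itself be kept as plain data: each current use case mapped to its risk category and the obligations that category triggers. A minimal sketch with assumed use cases and labels; the actual classification must come from your own AI Act assessment, not from this example.

```python
# Assumed use cases and risk labels for illustration only.
USE_CASES = {
    "marketing text generation": "minimal risk",
    "customer-facing chatbot": "limited risk",
    "hiring pre-screen": "high risk",
}

OBLIGATIONS = {
    "minimal risk": [],
    "limited risk": ["disclose to the customer that they are talking to AI"],
    "high risk": ["a human makes the final decision",
                  "document the oversight process"],
}

for use_case, category in USE_CASES.items():
    print(f"{use_case}: {category} -> {OBLIGATIONS[category]}")
```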

One page, not thirty pages

A good AI policy fits on a single A4 page. Not because the topic is simple—but because people won’t follow what they don’t read.

The four mandatory elements:

  1. Data classification table (three levels, with specific examples)
  2. Chain of responsibility (at the operational and system levels)
  3. Hallucination protocol (high/low-risk areas + incident response process)
  4. EU AI Act classification and the resulting specific obligations

These four elements can be put together in a single morning. The next step: don’t just send the document by email. Build it into onboarding and update it, a paragraph at a time, every six months. An AI policy is not a project; it’s a living document that is useful only if everyone knows where it is and what’s in it.

Key Takeaways

  • The compliance document and the decision-making framework are two different things—a usable AI policy is the latter
  • Four mandatory elements: data classification, chain of responsibility, hallucination protocol, EU AI Act compliance statement
  • EU AI Act obligations for SMEs are clear: most general-purpose AI tools fall into the minimal-risk category
  • A good policy fits on one page: what isn’t read won’t be followed


Zoltán Varga (LinkedIn) • Knowledge Systems Architect | Enterprise RAG & PKM Architect | AI Ecosystems | Neural Awareness • Consciousness & Leadership. “What you measure changes what you build.”

Strategic Synthesis

  • Translate the article’s core idea into one concrete operating decision for the next 30 days.
  • Define the trust and quality signals you will monitor weekly to validate progress.
  • Run a short feedback loop: measure, refine, and re-prioritize based on real outcomes.
