Does it apply to your company?

The EU AI Act applies if: (1) your company operates in the EU, or (2) your AI system affects users in the EU, even if you're based elsewhere. It's extraterritorial, similar to GDPR.

Classify your systems

First step: classify each AI system you use into a risk tier. The four tiers: Prohibited (banned outright), High risk (the heaviest obligations), Limited risk (transparency obligations), Minimal risk (no specific obligations).

Practical example: customer chatbot = limited risk (disclosure). CV screening for hiring = high risk. Email writing assistant = minimal risk.
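To make the triage concrete, here is a minimal sketch mirroring the examples above. The system names, the tier enum, and the obligation summaries are all hypothetical simplifications; a real classification follows the Act's criteria (e.g. Annex III use cases), not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned practices, e.g. social scoring
    HIGH = "high"              # e.g. hiring, credit, critical infrastructure
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"        # no specific obligations

# Hypothetical inventory matching the three examples in the text.
INVENTORY = {
    "customer_chatbot": RiskTier.LIMITED,  # must disclose it's AI
    "cv_screening": RiskTier.HIGH,         # hiring is a high-risk use case
    "email_assistant": RiskTier.MINIMAL,   # productivity tool
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of what each tier demands (simplified)."""
    return {
        RiskTier.PROHIBITED: "do not deploy",
        RiskTier.HIGH: "full compliance program",
        RiskTier.LIMITED: "user-facing AI disclosure",
        RiskTier.MINIMAL: "none specific",
    }[tier]
```

A table like this is a starting point for the inventory work, not a legal determination: borderline systems need case-by-case analysis.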

Limited risk: most cases

If your system is limited risk (most enterprise applications are), your only obligation is disclosure: the user must know they're interacting with AI. In practice: an "I'm an AI assistant" message during onboarding, an AI badge in the chat interface, an audio disclosure in voice agents.
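One way to wire the chat disclosure so it can't be skipped is to make session setup responsible for it. This is a sketch with hypothetical names; your chat framework's hooks will differ, and the exact wording should come from legal review.

```python
AI_DISCLOSURE = "I'm an AI assistant."  # placeholder wording

def start_session(history: list[str]) -> list[str]:
    """Ensure the disclosure is the first message the user sees.

    Idempotent: calling it twice doesn't duplicate the notice.
    """
    if not history or history[0] != AI_DISCLOSURE:
        history.insert(0, AI_DISCLOSURE)
    return history
```

Putting the disclosure in session setup, rather than in each prompt or template, gives you one place to audit when a regulator asks how users are informed.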

High risk: heavier obligations

For high-risk systems, you need: (1) A documented risk management system. (2) Data governance: training-data quality, bias checks, representativeness. (3) Technical documentation: which model, how it was trained, evaluation metrics. (4) Human oversight: review processes with real authority to intervene. (5) Conformity assessment: a formal evaluation before deployment.

Real deadlines

February 2025: bans took effect. August 2025: obligations for general-purpose AI models. December 2, 2026: transparency requirements (accelerated). December 2027: high-risk systems (postponed from August 2026). August 2028: AI integrated in products.

Action plan

This month: inventory of all AI systems in your company. Next month: classify by risk tier. Q3 2026: implement transparency in limited-risk systems. 2027: documentation and assessments for high-risk. Continuous: monitoring, updates as systems change.
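The inventory step can start as a spreadsheet; here is a sketch of the fields worth tracking per system. All field names and the 180-day review cycle are suggestions, not requirements of the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    risk_tier: str      # "prohibited" | "high" | "limited" | "minimal"
    owner: str          # the AI risk owner accountable for this system
    last_reviewed: date
    notes: str = ""

# Hypothetical registry entry.
registry = [
    AISystemRecord("customer_chatbot", "Acme AI", "limited",
                   "risk-owner@example.com", date(2026, 1, 15)),
]

def overdue(records, today, max_age_days=180):
    """Flag systems whose last review is older than the review cycle."""
    return [r for r in records if (today - r.last_reviewed).days > max_age_days]
```

Whatever tool you use, the two fields that prevent stagnation are `owner` and `last_reviewed`: they turn "continuous monitoring" into a checkable fact.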

Errors that cost money

(1) Assuming only EU companies are affected: if you have EU users, it applies. (2) Improvising transparency: disclosure must be clear and findable. (3) Underestimating the documentation work: it takes months, not days. (4) Not naming an AI risk owner: without ownership, compliance projects stall.

Tools that help

Specialized vendors: Credo AI (governance platform), OneTrust AI Governance, Holistic AI, IBM watsonx.governance. Generalist consultancies (Deloitte, PwC) have specialized teams. For SMBs, the practical path is an internal AI risk officer plus consulting on demand.

Conclusion

The EU AI Act isn't optional, but it isn't terrifying either. Most companies fall in limited risk and need only transparency. High-risk systems require real work but have a clear playbook. The right strategy: start now, document gradually, and don't improvise when deadlines arrive.