What the EU AI Act is
The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive AI law at global scale. Approved in March 2024 and in force since August 2024, it takes effect gradually, with key deadlines between 2025 and 2028. It classifies AI systems into four risk tiers: prohibited (unacceptable risk), high risk, limited risk, and minimal risk.
Prohibited systems
Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). Social scoring by public authorities. Manipulative techniques that exploit the vulnerabilities of specific groups. Untargeted scraping of facial images to build recognition databases. Emotion recognition in the workplace and in schools.
High risk
Includes AI systems used in education (admissions, grading), employment (CV screening), critical infrastructure, essential public services, law enforcement, migration and border control, and the administration of justice. These systems require: risk assessment, technical documentation, human oversight, data quality controls, transparency, and robustness.
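The obligations listed above lend themselves to structured tracking. A minimal sketch of a compliance record for one high-risk system; the class and field names are illustrative assumptions, not terms from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    """Illustrative compliance record for one high-risk AI system."""
    name: str
    purpose: str                        # intended purpose of the system
    risk_assessment_done: bool = False  # risk management performed?
    technical_docs_url: str = ""        # where the documentation lives
    human_oversight: str = ""           # description of the oversight measure
    data_quality_checks: list = field(default_factory=list)

    def open_gaps(self) -> list:
        """Return the obligations that still lack evidence."""
        gaps = []
        if not self.risk_assessment_done:
            gaps.append("risk assessment")
        if not self.technical_docs_url:
            gaps.append("technical documentation")
        if not self.human_oversight:
            gaps.append("human oversight")
        if not self.data_quality_checks:
            gaps.append("data quality")
        return gaps

record = HighRiskSystemRecord(name="cv-screener", purpose="CV screening for hiring")
print(record.open_gaps())  # every obligation is still open for a fresh record
```

Keeping one such record per system makes the gap list auditable long before a regulator asks for it.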
Limited risk: transparency
Covers chatbots, AI-generated content, and deepfakes. The core requirement is disclosure: users must know they are interacting with an AI system or that the content is AI-generated. Most enterprise applications fall into this tier.
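For a chatbot, the disclosure requirement can be as simple as a wrapper around outgoing replies. A sketch under stated assumptions: the disclosure wording and the first-turn-only policy are illustrative choices, not text mandated by the Act:

```python
# Illustrative disclosure string; actual wording is a legal/UX decision.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def with_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(with_disclosure("How can I help you today?", first_turn=True))
```

The same pattern applies to generated media: attach the disclosure (or a watermark/label) at the point where content leaves your system.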
Real deadlines
The May 2026 agreement postponed obligations for high-risk systems to December 2027, while transparency obligations were brought forward to December 2, 2026. Rules for AI integrated into regulated products apply from August 2028.
Sanctions
Fines are severe: up to €35M or 7% of global annual turnover (whichever is higher) for use of prohibited systems; up to €15M or 3% for non-compliance with high-risk obligations; and up to €7.5M or 1% for supplying incorrect or misleading information to authorities.
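The "whichever is higher" rule means the effective cap scales with company size. A small sketch of the arithmetic (revenue figures are made-up examples):

```python
def max_fine(revenue_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act fines take whichever is higher: the fixed cap
    or a percentage of global annual revenue."""
    return max(fixed_cap_eur, revenue_eur * pct)

# Prohibited-system tier (€35M or 7%) for a company with €1B revenue:
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 7% of €1B = €70M, above the €35M floor

# Same tier for a company with €100M revenue:
print(max_fine(100_000_000, 35_000_000, 0.07))    # 7% is only €7M, so the €35M floor applies
```

For large firms the percentage dominates; for smaller ones the fixed cap is the binding number.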
How to prepare
(1) Inventory: map which AI systems you use and where each falls in the risk tiers. (2) Document: technical specifications, data sources, and evaluation metrics. (3) Implement transparency: disclose AI use where applicable. (4) Establish governance: appoint an AI risk officer and set up periodic review processes.
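Step (1), the inventory, can start as a simple classification pass over your system list. A minimal sketch: the domain-to-tier mapping below is a deliberate simplification of the Act's Annex III categories, and the inventory entries are made up:

```python
# Simplified tier assignment for a first-pass AI inventory.
HIGH_RISK_DOMAINS = {
    "education", "employment", "critical infrastructure",
    "public services", "law enforcement", "migration", "justice",
}
LIMITED_RISK_KINDS = {"chatbot", "generated content", "deepfake"}

def classify(domain: str, kind: str) -> str:
    """Assign a rough risk tier; a real assessment needs legal review."""
    if domain in HIGH_RISK_DOMAINS:
        return "high risk"
    if kind in LIMITED_RISK_KINDS:
        return "limited risk"
    return "minimal risk"

inventory = [
    ("cv-screener", "employment", "classifier"),
    ("support-bot", "customer support", "chatbot"),
    ("log-anomaly-detector", "internal ops", "scorer"),
]
for name, domain, kind in inventory:
    print(f"{name}: {classify(domain, kind)}")
```

The output of a pass like this tells you which systems need the heavy documentation of step (2) and which only need the disclosure of step (3).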
Conclusion
The EU AI Act is the global benchmark: even if your company doesn't operate in Europe, it is the framework other jurisdictions are likely to imitate. Starting documentation and governance work now is the right strategy. Organizations that comply by 2027 will have a structural advantage over those that improvise.