What was agreed on May 7

The provisional agreement between the Council and Parliament includes two main changes: it postpones the deadline for establishing regulatory sandboxes to August 2, 2027, and it reduces the grace period for implementing transparency solutions for AI-generated content from six months to three, setting a new deadline of December 2, 2026.

The most important change: the application timeline for certain high-risk system rules moves to December 2, 2027, and for AI systems integrated into products, to August 2, 2028.

What "high-risk systems" are

The EU AI Act classifies AI systems into four tiers: prohibited, high-risk, limited-risk, and minimal-risk. The high-risk category covers systems used in education (admissions, scoring), employment (CV screening, performance evaluation), critical infrastructure (energy, water, transport), public services (welfare), law enforcement, migration, and justice.
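The tier structure above can be pictured as a simple lookup. This is an illustrative sketch only, with hypothetical domain names drawn from the examples in this article; it is not a legal classification tool.

```python
# Hypothetical mapping of use-case domains to the Act's four risk tiers,
# following the summary above. Real classification depends on the legal text.
RISK_TIERS = {
    "education_admissions": "high",
    "cv_screening": "high",
    "critical_infrastructure": "high",
    "law_enforcement": "high",
    "customer_service_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, or 'unclassified'."""
    return RISK_TIERS.get(use_case, "unclassified")

print(classify("cv_screening"))  # high
```

The point of the sketch: the tier, not the technology, determines which obligations and deadlines apply.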

Original timeline vs. new

Aug 2026: original deadline (now postponed)
Dec 2027: new deadline for high-risk systems
Aug 2028: products with integrated AI

Originally, high-risk system obligations were to take effect on August 2, 2026. The industry, including Microsoft, Google, and several European business associations, pressed for extensions, arguing that the harmonized technical standards (CEN-CENELEC) were not ready.

Why the postponement?

Three public reasons: (1) Technical standards aren't ready: the harmonized CEN-CENELEC norms companies must comply with are still in draft. (2) Regulatory capacity is lacking: the national authorities designated to enforce the Act don't have enough technical staff. (3) Competitive pressure: Europe fears falling behind the US and China if regulation arrives before the productive infrastructure does.

Caution

The postponement is not automatic. The provisional agreement needs formal adoption and publication in the Official Journal before taking effect. Until then, original deadlines remain legally in force.

What accelerates: transparency

While high-risk obligations are postponed, transparency obligations for AI-generated content are accelerated. The grace period was reduced from six months to three. New deadline: December 2, 2026.

This implies: deepfakes must be labeled, AI-generated content must be identifiable, and watermarking becomes mandatory in generative systems. This applies to OpenAI, Google, Anthropic, Stability AI, and any generative system accessible to European users.
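The shape of the labeling obligation can be sketched in a few lines. This is a minimal illustration with hypothetical field names, not a compliance implementation; real-world labeling would use an interoperable provenance standard (such as C2PA) rather than ad hoc JSON.

```python
# Minimal sketch: attach a machine-readable AI-origin disclosure to
# generated text. Field names ("provenance", "ai_generated") are invented
# for illustration; no standard is implied.
import json
from datetime import datetime, timezone

def label_output(text: str, model: str) -> str:
    """Wrap generated text with an AI-origin disclosure record."""
    disclosure = {
        "ai_generated": True,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps({"content": text, "provenance": disclosure})

labeled = label_output("Example summary of a report.", model="demo-model")
```

Whatever the final technical standard looks like, the core requirement is the same: the output must carry a marker that identifies it as AI-generated.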

Comparison with US

The US has no comprehensive federal AI law. It has framework documents, executive orders and a fragmented political landscape. On March 20, 2026, Rep. Beyer introduced the GUARDRAILS Act, intended to repeal Trump's executive order on AI and prevent state-level moratoriums.

At the state level: many laws passed a few years ago have been modified or delayed as 2026 deadlines approached.

Practical implications

For companies operating in Europe with AI: (1) Don't rest: deadlines were postponed, not eliminated. (2) Prioritize transparency now: the AI-generated-content deadline was brought forward. (3) Start risk-management documentation for high-risk systems: the deadlines are long, but the work is substantial.

VuraOS falls into the "limited risk" tier (mandatory transparency, but not high-risk classification), which applies to most chatbots and customer-service agents. The obligation: clearly indicate that the user is interacting with AI, and allow escalation to a human. We already comply.
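The two limited-risk obligations are simple enough to sketch. This is a hypothetical illustration, not the VuraOS implementation: disclose the AI up front, and honor a request to reach a human at any point.

```python
# Minimal sketch of limited-risk transparency: first-turn AI disclosure
# plus a human-escalation path. Names and trigger word are invented.
AI_DISCLOSURE = ("You are chatting with an AI assistant. "
                 "Type 'human' to reach a person.")

def handle_message(message: str, first_turn: bool) -> str:
    """Route one user message: disclose on the first turn, escalate on request."""
    if message.strip().lower() == "human":
        return "Connecting you to a human agent..."
    reply = "(AI reply goes here)"
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply
```

The design choice worth noting: disclosure happens before any substantive reply, and escalation is checked on every turn, not just the first.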

Conclusion

The postponement is a pragmatic victory for the industry, but serious companies shouldn't read it as "we have more time to do nothing". Transparency is accelerating, and the standards will arrive, just later. Those who start documenting and auditing their systems now will have the advantage when 2027 arrives.