EU AI Act Implementation Timeline
The EU AI Act follows a phased enforcement schedule from August 2024 through August 2027. Each milestone introduces new obligations. Here is every key date and what it means for your organisation.
Entry into Force (1 August 2024)
The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024, 20 days after publication in the Official Journal of the EU. The implementation clock started ticking for all provisions.
- Published in the Official Journal on 12 July 2024
- AI Office established within the European Commission
- Member States begin designating national competent authorities
- Voluntary codes of conduct development initiated
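As a quick sanity check, the 20-day entry-into-force rule and the phased application dates above can be laid out in a few lines of Python. The milestone dates below are hard-coded from the Act's published schedule, not derived; only the entry-into-force date is computed:

```python
from datetime import date, timedelta

# Regulation 2024/1689 was published in the Official Journal on
# 12 July 2024 and entered into force 20 days later.
published = date(2024, 7, 12)
entry_into_force = published + timedelta(days=20)
print(entry_into_force)  # 2024-08-01

# Key application dates from the Act's phased schedule.
milestones = {
    "Prohibited practices & AI literacy": date(2025, 2, 2),
    "GPAI model rules": date(2025, 8, 2),
    "High-risk (Annex III) requirements": date(2026, 8, 2),
    "Full enforcement (Annex I products)": date(2027, 8, 2),
}
for name, deadline in milestones.items():
    print(f"{deadline.isoformat()}  {name}")
```

Swapping in `date.today()` lets you compute the days remaining to any milestone, e.g. `(milestones["High-risk (Annex III) requirements"] - date.today()).days`.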
Prohibited Practices & AI Literacy (2 February 2025)
Prohibited AI practices under Article 5 became enforceable. Organisations must have discontinued all banned AI applications. AI literacy obligations under Article 4 also apply.
- Social scoring by public authorities is banned
- Real-time remote biometric identification in public spaces is banned (with narrow exceptions)
- Subliminal manipulation and exploitation of vulnerabilities are banned
- Emotion recognition in workplaces and educational institutions is banned
- Untargeted facial image scraping for databases is banned
- Organisations must ensure staff have sufficient AI literacy
- Penalties for violations: up to €35M or 7% of global annual turnover, whichever is higher
GPAI Model Rules (2 August 2025)
Rules for general-purpose AI models (Articles 51-56) take effect. Providers of foundation models and large language models must comply with transparency, documentation, and copyright obligations.
- All GPAI model providers must prepare technical documentation
- Training data summaries must be published (template from AI Office)
- Copyright Directive compliance required for text and data mining
- Transparency information must be provided to downstream providers
- Systemic risk models face additional obligations (adversarial testing, incident reporting)
- AI Office begins GPAI model oversight and enforcement
- Codes of practice for GPAI providers published
High-Risk AI Requirements (2 August 2026)
The majority of the regulation becomes applicable, including all requirements for high-risk AI systems listed in Annex III. This is the most significant compliance milestone.
- All Annex III high-risk AI systems must meet Articles 9-15 requirements
- Risk management systems must be operational
- Technical documentation (Annex IV) must be completed
- Quality management systems must be implemented
- Conformity assessments must be completed and CE marking affixed
- High-risk systems must be registered in the EU database
- Certain deployers (public bodies and some private operators) must conduct fundamental rights impact assessments
- Post-market monitoring systems must be active
- Penalties for violations: up to €15M or 3% of global annual turnover, whichever is higher
Full Enforcement (2 August 2027)
Complete enforcement of all provisions. This includes obligations for high-risk AI systems embedded in products already regulated under EU harmonisation legislation (Annex I), such as medical devices, machinery, and vehicles.
- Annex I product-integrated high-risk AI systems must comply
- Includes AI in medical devices, machinery, toys, lifts, and vehicles
- Notified bodies fully operational for third-party conformity assessments
- All delegated and implementing acts in effect
- Harmonised standards available for all high-risk categories
- Full penalty framework enforceable across all provisions
Key Takeaway
If you operate high-risk AI systems, your primary compliance deadline is 2 August 2026. We recommend starting at least 12–18 months in advance. GPAI model providers should already be preparing for the August 2025 deadline.
Check your compliance readiness
Don't wait for the deadline
Start your compliance programme today. Haffa.ai helps you prepare for every milestone on the timeline.