EU AI Act Compliance Checklist
A step-by-step checklist covering every requirement for EU AI Act compliance. Work through these 12 items to systematically address your obligations under Regulation (EU) 2024/1689.
1. Inventory All AI Systems
Create a complete register of every AI system your organisation develops, deploys, imports, or distributes. Include internal tools, embedded AI components, third-party solutions, and AI used by contractors. Document the purpose, scope, and data inputs for each system.
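In practice, the register can start as one structured record per system. A minimal Python sketch — the `AISystemRecord` fields are illustrative assumptions, not mandated by the regulation:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organisation-wide AI system register (illustrative)."""
    name: str
    purpose: str                      # intended purpose, in plain language
    role: str                         # "provider", "deployer", "importer", or "distributor"
    data_inputs: list[str] = field(default_factory=list)
    third_party: bool = False         # embedded or contractor-supplied component?
    risk_class: str = "unclassified"  # filled in during step 2

register = [
    AISystemRecord(
        name="CV screening tool",
        purpose="Rank incoming job applications",
        role="deployer",
        data_inputs=["CVs", "application forms"],
        third_party=True,
    ),
]
```

Keeping the register as structured data rather than a spreadsheet makes the later steps — classification, documentation, registration — easier to automate.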
2. Classify Each System by Risk Level
Map each AI system against the four risk categories: prohibited (Article 5), high-risk (Articles 6–7, Annex III), limited-risk (Article 50), and minimal-risk. Use the regulation's definitions, not assumptions. This classification determines all subsequent obligations.
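A first-pass triage can be scripted, though the final classification must follow the legal definitions themselves. A hedged sketch, assuming simplified keyword categories that only loosely mirror Annex III and Article 50:

```python
# Illustrative keyword map from use-case area to AI Act risk class.
# The real classification must follow the legal definitions in
# Article 5, Articles 6-7 / Annex III, and Article 50 -- this sketch
# only shows how a first-pass triage could be structured.
HIGH_RISK_AREAS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice",
}
LIMITED_RISK_AREAS = {"chatbot", "deepfake", "emotion recognition"}

def triage(area: str) -> str:
    """Return a provisional risk class for a use-case area."""
    area = area.lower()
    if area in HIGH_RISK_AREAS:
        return "high-risk (Annex III) -- full Chapter III obligations"
    if area in LIMITED_RISK_AREAS:
        return "limited-risk -- transparency duties under Article 50"
    return "minimal-risk -- but review against Article 5 prohibitions"
```

A triage like this flags systems for legal review; it does not replace it.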
3. Discontinue Any Prohibited Practices
If any systems fall under the Article 5 prohibited practices — social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement, subliminal manipulation, exploitation of vulnerabilities, emotion recognition in the workplace or education, or predicting criminal behaviour based solely on profiling — they must be discontinued immediately. These prohibitions have been enforceable since 2 February 2025.
4. Conduct Fundamental Rights Impact Assessments
Before deploying a high-risk AI system, certain deployers — notably public bodies, private operators providing public services, and deployers of specific Annex III systems — must perform a fundamental rights impact assessment (FRIA) under Article 27. Assess potential impacts on non-discrimination, privacy, freedom of expression, human dignity, access to justice, and other fundamental rights. Document findings and mitigation measures.
5. Establish a Risk Management System
For each high-risk system, implement a continuous risk management process (Article 9) that identifies, analyses, estimates, and evaluates risks throughout the system's lifecycle. Define risk tolerance levels, adopt mitigation measures, and establish processes for ongoing risk monitoring.
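One common way to operationalise risk tolerance is a likelihood-by-severity matrix. A minimal sketch — the scales and the tolerance threshold are organisation-defined assumptions, not values taken from the Act:

```python
# Illustrative three-point scales for a risk matrix.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Combined risk score: likelihood x severity."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def needs_mitigation(likelihood: str, severity: str, tolerance: int = 4) -> bool:
    """True if the score exceeds the organisation's tolerance level.

    The tolerance value is an internal policy choice, not set by the Act.
    """
    return risk_score(likelihood, severity) > tolerance
```

Recording each identified risk with its score and mitigation status gives you the ongoing-monitoring evidence Article 9 expects.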
6. Implement Data Governance
Ensure training, validation, and testing datasets meet the quality criteria specified in Article 10: relevance, representativeness, accuracy, and completeness. Implement bias detection and correction procedures. Document data sources, preparation methods, labelling approaches, and any known gaps or limitations.
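Representativeness checks can be partly automated. A simple sketch that flags groups whose share of the training data deviates from a reference distribution — the group names and the 5% tolerance are illustrative assumptions:

```python
def representation_gaps(counts: dict[str, int],
                        reference: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of the training data deviates from a
    reference distribution by more than `tolerance` (absolute).

    Returns {group: observed_share - expected_share} for flagged groups.
    """
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = observed - expected
    return gaps

# Example: group "A" is heavily over-represented relative to a 50/50 reference.
gaps = representation_gaps({"A": 900, "B": 100}, {"A": 0.5, "B": 0.5})
```

Checks like this surface candidate gaps; deciding what counts as an acceptable deviation remains a documented governance decision.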
7. Prepare Technical Documentation (Annex IV)
Draft comprehensive technical documentation for each high-risk system following the structure defined in Annex IV. Include a general system description, design specifications, development methodology, data governance details, performance metrics, risk assessment results, human oversight measures, and lifecycle management plans.
8. Implement Logging, Transparency & Human Oversight
Build automatic logging capabilities (Article 12) for traceability. Prepare clear instructions for deployers covering capabilities, limitations, and risks (Article 13). Design human oversight mechanisms (Article 14) enabling operators to understand, monitor, and intervene in the system's operation.
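The event-recording requirement can be prototyped with standard structured logging. A minimal sketch using Python's `logging` module — the record fields are illustrative assumptions, not prescribed by Article 12:

```python
import io
import json
import logging
import time

# Article 12 requires high-risk systems to automatically record events
# over their lifetime. This writes one JSON record per event; in
# production the stream would be a durable, append-only store.
buffer = io.StringIO()
audit_log = logging.getLogger("ai_audit")
audit_log.propagate = False
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(message)s"))
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

def log_event(system_id: str, event: str, **details) -> dict:
    """Emit one traceability record as a JSON line and return it."""
    record = {"ts": time.time(), "system": system_id, "event": event, **details}
    audit_log.info(json.dumps(record))
    return record

log_event("cv-screener-v2", "inference", input_ref="app-1042", score=0.87)
```

Logging operator interventions (e.g. an `"override"` event) alongside inferences is what makes the human-oversight measures of Article 14 auditable after the fact.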
9. Validate Accuracy, Robustness & Cybersecurity
Test and document that each high-risk system achieves appropriate accuracy levels (Article 15). Validate resilience against errors and environmental inconsistencies. Implement cybersecurity measures to prevent unauthorised access and manipulation. Maintain test reports as evidence.
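The accuracy check itself reduces to comparing observed performance on held-out data against a declared threshold. A minimal sketch — the threshold value is an illustrative assumption, not a figure from the Act:

```python
def validate_accuracy(predictions: list[int], labels: list[int],
                      threshold: float = 0.95) -> tuple[float, bool]:
    """Compare observed accuracy on a held-out test set against the
    accuracy level declared in the instructions for use.

    Returns (accuracy, passed). The 0.95 default is illustrative.
    """
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy, accuracy >= threshold
```

Storing each run's inputs, threshold, and result gives you the test reports the checklist item calls for.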
10. Build a Quality Management System
Implement a documented QMS (Article 17) covering governance structures, design and development procedures, risk management processes, data management policies, record-keeping systems, reporting procedures, resource management, and accountability mechanisms. This system must be maintained throughout the AI system's lifecycle.
11. Complete Conformity Assessment & Register
Conduct the appropriate conformity assessment (Article 43) — internal control (Annex VI) for most high-risk systems, or third-party assessment by a notified body (Annex VII) for biometric systems (Annex III, point 1) where harmonised standards have not been applied in full. Upon successful completion, affix the CE marking and register the system in the EU database (Article 71).
12. Establish Post-Market Monitoring
Set up a post-market monitoring system (Article 72) proportionate to the system's risks. Monitor ongoing performance, collect deployer feedback, track incidents, and report serious incidents to national authorities within the timeframes set out in Article 73. Update documentation and risk assessments as needed throughout the system's operational life.
Get this checklist as a PDF
Download a printable version with space for notes, responsible persons, and completion dates. We'll also send you updates when the regulation evolves.
No spam. Unsubscribe anytime. We respect your privacy.
Ready to start checking off items?
Our interactive assessment helps you complete steps 1 and 2 in under 10 minutes — classify your AI systems and understand your obligations instantly.
Start Free Assessment
Automate your EU AI Act compliance
Haffa.ai helps you work through this checklist systematically — with automated risk classification, documentation generation, and conformity management.