EU AI Act: Complete Guide to Compliance

Everything you need to know about Regulation (EU) 2024/1689, the world's first comprehensive AI law: risk categories, compliance timeline, high-risk requirements, and a step-by-step guide to achieving compliance before the 2 August 2026 deadline.

Last updated: February 2026 · ~15 min read

What Is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament and the Council of the European Union, it entered into force on 1 August 2024 and will be progressively enforced through 2027.

The regulation takes a risk-based approach, meaning that the obligations imposed on an AI system are proportional to the level of risk it poses to health, safety, and fundamental rights. This approach distinguishes between four risk categories — prohibited, high, limited, and minimal — each carrying different compliance requirements.

The AI Act applies to a wide range of actors in the AI value chain, including:

  • Providers (developers) who place AI systems on the EU market or put them into service
  • Deployers (users) who use AI systems under their authority within the EU
  • Importers who bring AI systems from third countries into the EU
  • Distributors who make AI systems available on the EU market

Crucially, the AI Act has extraterritorial reach: it also applies to providers and deployers located outside the EU if the output produced by their AI system is used within the EU. This means that companies worldwide must assess whether the regulation applies to them.

The AI Act complements existing EU legislation including the GDPR, the Product Liability Directive, and sector-specific regulations. Together, these create a comprehensive governance framework for trustworthy AI in Europe.

Who Does the EU AI Act Apply To?

The EU AI Act casts a wide net over the AI ecosystem. Understanding whether your organisation falls within its scope is the essential first step toward compliance.

Providers (Developers)

A provider is any natural or legal person that develops an AI system or a general-purpose AI model, or that has one developed on its behalf, and places it on the market or puts it into service under its own name or trademark. This includes companies building AI systems in-house for commercial use as well as those developing AI products for third parties.

Deployers (Users)

A deployer is any natural or legal person that uses an AI system under its authority, except where the AI is used for purely personal, non-professional activity. If your organisation purchases or licenses an AI tool and applies it in business operations, you are a deployer and carry specific obligations, particularly for high-risk systems.

Importers and Distributors

Importers bring AI systems developed outside the EU into the single market. Distributors make AI systems available in the supply chain. Both must verify that the AI system bears the required CE marking and is accompanied by proper documentation, and that the provider has completed the applicable conformity assessment.

Extraterritorial Application

The regulation applies to providers and deployers located in third countries where the output produced by the AI system is used within the EU. A company based in the United States that provides an AI-powered hiring tool used by a German enterprise must comply with the AI Act.

Exemptions

The AI Act does not apply to AI systems used exclusively for:

  • Military and defence purposes
  • Purely personal, non-professional activities
  • Research and development before the system is placed on the market or put into service (though transparency obligations may still apply during testing)
  • AI systems released under free and open-source licences, unless they are placed on the market as high-risk systems, fall under the prohibited practices, or trigger Article 50 transparency obligations

The Four Risk Categories

The EU AI Act classifies AI systems into four tiers of risk, each with different regulatory obligations. This risk-based pyramid is the cornerstone of the regulation.

Prohibited AI Practices (Article 5)

Certain AI applications are considered an unacceptable risk to fundamental rights and are outright banned. These include:

  • Social scoring (by public or private actors) that leads to detrimental or unfavourable treatment
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for serious crimes)
  • Subliminal manipulation techniques that cause or are likely to cause harm
  • Exploitation of vulnerabilities of specific groups (age, disability, or a specific social or economic situation)
  • Emotion recognition in workplace and educational settings (with limited exceptions)
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
  • Predictive policing based solely on profiling or personality traits

High-Risk AI Systems (Articles 6–7, Annex III)

AI systems that pose significant risk to health, safety, or fundamental rights are classified as high-risk. These must meet stringent requirements before entering the market. The high-risk areas listed in Annex III include:

  • Biometric identification and categorisation
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment and worker management
  • Access to essential private and public services (e.g. credit scoring, insurance)
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice and democratic processes

Limited-Risk AI Systems (Article 50)

These AI systems carry specific transparency obligations. Users must be informed when they are interacting with an AI system. This category includes:

  • Chatbots and conversational AI (must disclose AI interaction)
  • Deepfakes and AI-generated content (must be labelled as artificially generated)
  • Emotion recognition systems outside prohibited contexts (must inform subjects)
  • Biometric categorisation systems (must inform subjects)

Minimal-Risk AI Systems

All other AI systems — such as AI-enabled spam filters, video games, or inventory management tools — fall into the minimal-risk category. There are no mandatory requirements under the AI Act, though providers are encouraged to voluntarily adopt codes of conduct and best practices.

Not sure where your AI systems fall?

Take our free risk classification assessment — no signup required.

Start free assessment

Implementation Timeline

The EU AI Act follows a phased implementation schedule, giving organisations time to prepare for each set of obligations. Here are the key dates:

  • 1 August 2024 — The AI Act enters into force, twenty days after publication in the Official Journal of the EU.
  • 2 February 2025 — Prohibited AI practices (Article 5) become enforceable. Organisations must cease use of banned AI applications by this date. AI literacy obligations (Article 4) also apply.
  • 2 August 2025 — Rules for general-purpose AI (GPAI) models take effect. Providers of foundation models must comply with transparency, documentation, and copyright obligations. The AI Office begins oversight.
  • 2 August 2026 — The majority of the regulation becomes applicable, including all requirements for high-risk AI systems listed in Annex III. Conformity assessments, technical documentation, and post-market monitoring must be in place.
  • 2 August 2027 — Full enforcement, including obligations for high-risk AI systems that are safety components of products already covered by EU harmonisation legislation (Annex I).

The European Commission has established the AI Office and will continue to publish implementing acts, delegated acts, and harmonised standards throughout this period. Each Member State designates national competent authorities to supervise compliance.

Key takeaway: If you operate high-risk AI systems, your compliance deadline is 2 August 2026. We recommend starting your compliance programme at least 12–18 months in advance to allow time for risk assessment, documentation, and conformity procedures.
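For teams that track these milestones in internal tooling, the schedule above can be encoded directly. A minimal Python sketch (the date-to-obligation mapping simply restates the list above; the function name is illustrative):

```python
from datetime import date

# Key applicability dates of Regulation (EU) 2024/1689, restating the list above.
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibited practices (Art. 5) and AI literacy (Art. 4)",
    date(2025, 8, 2): "GPAI model obligations; AI Office oversight",
    date(2026, 8, 2): "High-risk requirements for Annex III systems",
    date(2027, 8, 2): "High-risk requirements for Annex I safety components",
}

def obligations_in_force(today: date) -> list[str]:
    """Return every obligation set whose applicability date has passed."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(obligations_in_force(date(2026, 3, 1)))
# -> entry into force, prohibited practices, and GPAI obligations apply;
#    Annex III high-risk requirements do not yet.
```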

Key Requirements for High-Risk AI Systems

High-risk AI systems face the most demanding obligations under the EU AI Act. Providers must meet all of the following requirements before placing a system on the market:

Risk Management System (Article 9)

Establish and maintain a continuous, iterative risk management process throughout the AI system's lifecycle. This includes identifying known and foreseeable risks, estimating and evaluating those risks, adopting mitigation measures, and testing for residual risks.

Data Governance (Article 10)

Training, validation, and testing datasets must meet quality criteria including relevance, representativeness, accuracy, and completeness. Providers must address potential biases, especially when special categories of personal data are involved.

Technical Documentation (Article 11, Annex IV)

Comprehensive technical documentation must be prepared before the system enters the market. This includes a general description of the system, design specifications, development process, risk management details, data governance practices, and performance metrics.

Record-Keeping and Logging (Article 12)

High-risk systems must include automatic logging capabilities to ensure traceability. Logs must be retained for a period appropriate to the intended purpose (at least six months under Articles 19 and 26, unless other Union or national law requires longer) and made available to authorities upon request.
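The Act does not prescribe a log format. As an illustration only, automatic logging could be implemented as an append-only JSON-lines event stream with UTC timestamps; every field name below is our assumption, not a regulatory schema:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")  # hypothetical location

def log_event(system_id: str, event_type: str, detail: dict) -> None:
    """Append one timestamped event record for traceability (illustrative schema)."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event_type,  # e.g. "inference", "human_override", "input_anomaly"
        "detail": detail,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("cv-screener-v2", "inference", {"decision": "shortlist", "score": 0.87})
```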

Transparency and Information to Deployers (Article 13)

Providers must supply deployers with clear instructions for use, including the system's capabilities, limitations, intended purpose, accuracy levels, and any known risks.

Human Oversight (Article 14)

High-risk AI systems must be designed to allow effective oversight by natural persons during use. This includes the ability to understand the system's outputs, to decide not to use it, and to override or reverse its outputs.

Accuracy, Robustness, and Cybersecurity (Article 15)

Systems must achieve appropriate levels of accuracy, be resilient to errors and inconsistencies, and be protected against unauthorised access or manipulation.

Quality Management System (Article 17)

Providers must implement a QMS covering risk management, post-market monitoring, data management, record-keeping, and resource management.

Conformity Assessment (Article 43)

Before placing a high-risk AI system on the market, providers must complete a conformity assessment — either through internal control or through a third-party notified body, depending on the system's domain. A CE marking is affixed upon successful completion.

Compliance Steps — What You Need to Do

Achieving EU AI Act compliance is a structured process. Follow these eight steps to systematically address your obligations:

Step 1: Inventory All AI Systems

Create a complete register of every AI system your organisation develops, deploys, imports, or distributes. Include internal tools, third-party solutions, and embedded AI components. Many organisations discover AI systems they were not tracking during this exercise.
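In practice the register can start as a structured list long before any dedicated tooling is in place. A minimal sketch, with illustrative fields rather than an official schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative fields)."""
    name: str
    vendor: str                # "internal" for systems built in-house
    role: str                  # provider, deployer, importer, or distributor
    purpose: str
    embeds_third_party_ai: bool = False

inventory = [
    AISystemRecord("cv-screener-v2", "AcmeHR", "deployer", "rank job applicants"),
    AISystemRecord("support-chatbot", "internal", "provider", "answer customer questions"),
]
```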

Step 2: Classify Each System by Risk Level

Map each system against the four risk categories defined in Articles 5, 6, and Annex III. Determine whether any fall into the prohibited, high-risk, limited-risk, or minimal-risk tiers. This classification drives all subsequent obligations.
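Final classification is a legal judgement against Articles 5 and 6 and Annex III, but a first-pass triage that routes systems to review can be automated. A deliberately simplistic sketch (the tier names mirror the Act; the keyword rules are our own and are no substitute for legal analysis):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5
    HIGH = "high"              # Article 6 and Annex III
    LIMITED = "limited"        # Article 50 transparency obligations
    MINIMAL = "minimal"        # no mandatory requirements

# Illustrative triage hints only; legal review confirms the final tier.
ANNEX_III_HINTS = {"hiring", "credit", "biometric", "education", "policing"}
ARTICLE_50_HINTS = {"chatbot", "deepfake", "generated content"}

def triage(purpose: str) -> RiskTier:
    """First-pass routing of a system description to a candidate risk tier."""
    text = purpose.lower()
    if "social scoring" in text:
        return RiskTier.PROHIBITED
    if any(hint in text for hint in ANNEX_III_HINTS):
        return RiskTier.HIGH
    if any(hint in text for hint in ARTICLE_50_HINTS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("credit scoring for consumer loans"))   # RiskTier.HIGH
print(triage("chatbot for order tracking"))          # RiskTier.LIMITED
```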

Step 3: Conduct Fundamental Rights Impact Assessments

Article 27 requires a fundamental rights impact assessment (FRIA) from certain deployers of high-risk AI systems: bodies governed by public law, private entities providing public services, and deployers of systems used for credit scoring or life and health insurance pricing. The assessment examines how the system may affect individuals' fundamental rights, including non-discrimination, privacy, and access to justice.

Step 4: Prepare Technical Documentation

Draft Annex IV–compliant technical documentation for each high-risk system. This includes system architecture, training data descriptions, performance benchmarks, risk assessments, and intended use documentation.

Step 5: Implement Required Management Systems

Put in place a quality management system (QMS) as specified in Article 17. This covers governance structures, risk management processes, data governance frameworks, incident reporting mechanisms, and post-market monitoring procedures.

Step 6: Complete Conformity Assessment

Conduct the appropriate conformity assessment procedure for each high-risk system under Article 43. For most Annex III systems, an internal assessment suffices; biometric systems require assessment by a third-party notified body unless the provider has applied the relevant harmonised standards in full.
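The routing decision described above can be captured in a small helper. A simplified sketch under the assumptions just stated (real routing also turns on common specifications and other factors; treat this as a starting point, not legal advice):

```python
def conformity_route(annex_iii_area: str, standards_fully_applied: bool) -> str:
    """Choose the Article 43 assessment route for an Annex III system (simplified)."""
    if annex_iii_area == "biometrics" and not standards_fully_applied:
        return "third-party notified body assessment"
    return "internal control"

print(conformity_route("biometrics", standards_fully_applied=False))  # notified body
print(conformity_route("employment", standards_fully_applied=False))  # internal control
```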

Step 7: Register in the EU Database

High-risk AI systems must be registered in the EU-wide database established under Article 71 before being placed on the market. This database is publicly accessible and contributes to transparency and oversight.

Step 8: Set Up Post-Market Monitoring

Establish a post-market monitoring system proportionate to the nature and risks of the AI system (Article 72). This includes monitoring performance, collecting user feedback, reporting serious incidents to national authorities within prescribed timeframes, and updating documentation as needed.
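One building block of post-market monitoring is an automated check of live performance against the accuracy declared in the technical documentation. A minimal sketch; the tolerance value and alert wording are our own:

```python
def performance_drifted(observed_accuracy: float,
                        declared_accuracy: float,
                        tolerance: float = 0.05) -> bool:
    """Flag the system for review when live accuracy falls more than
    `tolerance` below the level declared in the technical documentation."""
    if observed_accuracy < declared_accuracy - tolerance:
        print("ALERT: performance drift detected; trigger internal review and "
              "assess whether a serious incident must be reported.")
        return True
    return False

performance_drifted(observed_accuracy=0.81, declared_accuracy=0.90)  # -> alert
```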

Ready to start your compliance journey?

Our platform automates steps 1–6 above — risk classification, documentation generation, and conformity management.

View pricing

GPAI Model Obligations

The EU AI Act introduces dedicated rules for General-Purpose AI (GPAI) models — such as large language models and foundation models — under Articles 51 to 56. These apply from 2 August 2025.

All GPAI Models

Every provider of a GPAI model must meet baseline obligations:

  • Technical documentation detailing the model's training process, capabilities, and limitations
  • Transparency information for downstream providers who integrate the GPAI model into their own AI systems
  • Copyright compliance, including adherence to the EU Copyright Directive's text and data mining provisions
  • A sufficiently detailed summary of training data, following a template provided by the AI Office

GPAI Models with Systemic Risk

Models whose cumulative training compute exceeds 10^25 FLOPs, or that are designated by the Commission, are presumed to pose systemic risk; a rough way to estimate training compute is sketched after the list below. These models carry additional obligations:

  • Model evaluation and adversarial testing, including red-teaming, to identify and mitigate systemic risks
  • Tracking, documenting, and reporting serious incidents to the AI Office and relevant national authorities
  • Ensuring adequate cybersecurity protections for the model and its infrastructure
  • Reporting the model's energy consumption and computational resources used during training
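As referenced above, whether a model approaches the 10^25 FLOPs presumption can be estimated with the common 6 × parameters × training-tokens approximation for dense transformer training. This is an engineering heuristic, not the Act's measurement methodology:

```python
def approx_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 * parameters * tokens."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold in the Act

flops = approx_training_flops(params=70e9, tokens=15e12)  # hypothetical 70B model
print(f"{flops:.2e} FLOPs; systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs; systemic risk presumed: False
```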

The AI Office within the European Commission will oversee GPAI compliance and may issue binding guidelines, request documentation, and conduct evaluations. Providers may demonstrate compliance through approved codes of practice developed with industry stakeholders.

Penalties for Non-Compliance

The EU AI Act imposes significant financial penalties for non-compliance, structured in three tiers based on the severity of the violation:

Prohibited AI Violations

Deploying or providing AI systems that fall under the prohibited practices of Article 5 can result in fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. This is the highest penalty tier in the regulation.

High-Risk and Other Violations

Non-compliance with the requirements for high-risk AI systems (including failure to meet risk management, data governance, documentation, or conformity assessment obligations) can result in fines of up to €15 million or 3% of global annual turnover.

Providing Incorrect Information

Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities can be fined up to €7.5 million or 1.5% of global annual turnover.

Reduced Caps for SMEs and Startups

The regulation recognises the burden on smaller organisations. SMEs and startups benefit from reduced maximum fines — the lower of the two amounts (absolute cap or turnover percentage) applies. The AI Act also encourages Member States to provide guidance and support channels for small businesses.
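The "whichever is higher" rule (and its SME inverse) is easy to get wrong in planning spreadsheets. A small worked example; the tier figures restate the amounts above:

```python
def max_fine_eur(tier_cap: float, turnover_pct: float,
                 worldwide_turnover: float, is_sme: bool = False) -> float:
    """Maximum fine for a tier: the higher of the two amounts,
    or the lower of the two for SMEs and startups."""
    pct_amount = worldwide_turnover * turnover_pct
    return min(tier_cap, pct_amount) if is_sme else max(tier_cap, pct_amount)

# Prohibited-practice tier, company with EUR 1 bn worldwide turnover:
print(max_fine_eur(35_000_000, 0.07, 1_000_000_000))               # 70000000.0
print(max_fine_eur(35_000_000, 0.07, 1_000_000_000, is_sme=True))  # 35000000.0
```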

Enforcement

Enforcement is carried out by national market surveillance authorities in each EU Member State. The AI Office oversees GPAI model compliance at the EU level. Penalties must be effective, proportionate, and dissuasive. Authorities will consider factors such as the nature, gravity, and duration of the infringement, the number of affected persons, and any mitigating actions taken.

Frequently Asked Questions

Below are answers to the most common questions we receive about EU AI Act compliance. For detailed guidance on your specific situation, use our free assessment tool.

Does the EU AI Act apply to AI systems already in use?
Yes, subject to transitional provisions. Systems placed on the market or put into service after the applicable deadline must comply from day one; for high-risk systems that date is 2 August 2026. Under Article 111, high-risk systems already on the market before 2 August 2026 must be brought into compliance once they undergo significant design changes, and systems intended for use by public authorities must comply by 2 August 2030 in any case. Providers and deployers should therefore assess every existing system rather than assume a blanket legacy exemption.
How does the EU AI Act treat open-source AI?
AI systems released under free and open-source licences are generally exempt from most obligations, unless they are classified as high-risk or prohibited. GPAI models released under open-source licences must still comply with copyright obligations and publish a training data summary, and open-source GPAI models with systemic risk must meet all GPAI obligations.
How does the EU AI Act interact with the GDPR?
The EU AI Act complements the GDPR rather than replacing it. If your AI system processes personal data, you must comply with both regulations simultaneously. The AI Act adds AI-specific requirements such as risk management, data governance quality criteria, and bias testing that go beyond GDPR's scope. Where there is overlap — such as transparency — the stricter requirement applies.
What is the role of national competent authorities?
Each EU Member State must designate one or more national competent authorities responsible for supervising the application and implementation of the AI Act. These authorities handle market surveillance, process complaints, conduct investigations, and impose penalties. The AI Office at EU level oversees GPAI model compliance and coordinates between national authorities.
Do I need a dedicated AI compliance officer?
The AI Act does not explicitly require a designated AI compliance officer (unlike the GDPR's Data Protection Officer). However, for organisations operating high-risk AI systems, establishing a dedicated compliance function or assigning AI Act responsibilities to existing governance roles is strongly recommended to ensure ongoing compliance.
What is the conformity assessment process?
The conformity assessment is a procedure to verify that a high-risk AI system meets all applicable requirements before it enters the market. For most high-risk systems listed in Annex III, providers can conduct an internal conformity assessment. However, certain categories — including remote biometric identification — require assessment by an independent third-party notified body. Upon successful completion, the provider affixes the CE marking.
Can I sell an AI system in the EU if my company is based outside Europe?
Yes, but you must comply with the AI Act. The regulation has extraterritorial scope: it applies to providers and deployers outside the EU if the AI system is placed on the EU market or its output is used within the EU. Providers established outside the EU must also appoint an authorised representative established in the Union before making high-risk AI systems or GPAI models available.
What are the documentation requirements for high-risk AI systems?
High-risk AI systems require comprehensive technical documentation as specified in Annex IV. This includes a general description of the system, detailed information about the development process, training and testing data, risk management measures, accuracy and robustness metrics, human oversight measures, and information about the monitoring and updating process. Documentation must be kept up to date throughout the system's lifecycle.
Are there any grace periods or transition provisions?
The AI Act provides a phased implementation timeline that functions as a transition period. Prohibited practices apply from February 2025, GPAI rules from August 2025, and most high-risk requirements from August 2026. AI systems that are components of large-scale EU IT systems in areas of freedom, security, and justice have an extended deadline until 2030.
How much does EU AI Act compliance cost?
Compliance costs vary significantly depending on the risk classification and complexity of your AI systems. For minimal and limited-risk systems, costs are modest — primarily transparency measures. For high-risk systems, costs include risk management, technical documentation, conformity assessment, and ongoing monitoring. The European Commission estimates compliance costs at €6,000–€7,000 per high-risk system for SMEs using existing quality management tools. Platforms like Haffa.ai help reduce these costs through automation.

Next Steps

Now that you understand the EU AI Act's framework, it's time to take action. Here's how to get started:

  1. Run our free AI system assessment — In under 10 minutes, classify your AI systems and understand your obligations. No account required. Start the assessment →
  2. Read the sub-guides — Dive deeper into risk categories, the implementation timeline, high-risk system requirements, and the compliance checklist.
  3. Explore the platform — Haffa.ai automates risk classification, generates Annex IV documentation, manages conformity assessments, and monitors compliance continuously. See all features →
  4. Talk to a compliance expert — Have questions specific to your organisation? Get in touch for a personalised consultation.

The compliance deadlines are approaching. Organisations that start early will have the advantage of time to build robust processes, train teams, and address gaps without the pressure of last-minute enforcement. Begin today.

Download the EU AI Act Compliance Checklist

A printable, step-by-step checklist covering every requirement for high-risk AI systems.