
How to Classify Your AI System Under the EU AI Act

January 28, 2026 · Haffa.ai Editorial · 11 min read

Understanding the EU AI Act's Risk-Based Approach

The EU AI Act (Regulation 2024/1689) takes a risk-based approach to AI regulation. Rather than applying the same rules to every AI system, it categorizes systems into four risk tiers, with obligations proportional to the level of risk. Getting your classification right is the single most important step in your compliance journey — it determines everything that follows.

This guide walks you through the classification process step by step, covering the legal framework, practical decision points, and common pitfalls. By the end, you should be able to confidently classify any AI system in your organization.

The Four Risk Categories

The EU AI Act defines four risk levels:

1. Prohibited (Unacceptable Risk)

AI practices that are outright banned because they pose an unacceptable risk to fundamental rights. These are defined in Article 5 and include social scoring, manipulative AI, and certain uses of biometric identification. There are no compliance pathways for prohibited systems — they simply cannot be deployed in the EU.

2. High-Risk

AI systems that pose significant risks to health, safety, or fundamental rights. These must meet extensive requirements before being placed on the market or put into service. The requirements include risk management, data governance, technical documentation, transparency, human oversight, and more. High-risk systems are identified through two pathways under Article 6, by reference to the regulated products in Annex I and the use cases listed in Annex III.

3. Limited Risk

AI systems with specific transparency obligations. Users must be informed they are interacting with an AI system. This covers chatbots, emotion recognition systems, and AI systems that generate or manipulate content (deepfakes). Defined primarily in Article 50.

4. Minimal Risk

AI systems that pose negligible risk. No mandatory requirements apply, though voluntary codes of conduct are encouraged under Article 95. This covers the majority of AI applications, such as spam filters, AI-powered video games, and inventory management systems.

Step 1: Check for Prohibited Practices (Article 5)

Your first step should always be to check whether your AI system falls under the prohibited practices defined in Article 5. Ask yourself:

  • Does the system use subliminal, manipulative, or deceptive techniques to materially distort a person's behavior?
  • Does it exploit any vulnerabilities of a specific group due to age, disability, or social or economic situation?
  • Does it perform social scoring — evaluating or classifying people based on their social behavior or personal characteristics, leading to detrimental treatment?
  • Does it perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes?
  • Does it create or expand facial recognition databases through untargeted scraping?
  • Does it infer emotions in the workplace or educational institutions?
  • Does it categorize people based on biometric data to infer sensitive attributes like race, political opinions, or sexual orientation?

If the answer to any of these is yes, your system is prohibited. Some narrow exceptions exist (for example, law enforcement biometric identification under strict conditions), but these require careful legal analysis.
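
Before working through these questions manually across many systems, it can help to capture them as a structured checklist. The sketch below is a minimal, illustrative example of such a screen; the dataclass and field names are our own shorthand, not terms from the Act, and a "yes" answer should be escalated to legal review rather than treated as a final determination.

```python
from dataclasses import dataclass, fields

# Illustrative Article 5 screening checklist (field names are our own shorthand).
# Answering True to any question means the system is potentially prohibited
# and needs legal review before any EU deployment.
@dataclass
class Article5Screen:
    uses_subliminal_or_manipulative_techniques: bool
    exploits_vulnerabilities_of_protected_groups: bool
    performs_social_scoring: bool
    realtime_remote_biometric_id_for_law_enforcement: bool
    untargeted_facial_image_scraping: bool
    emotion_inference_in_workplace_or_education: bool
    biometric_categorisation_of_sensitive_attributes: bool

    def is_potentially_prohibited(self) -> bool:
        return any(getattr(self, f.name) for f in fields(self))


screen = Article5Screen(
    uses_subliminal_or_manipulative_techniques=False,
    exploits_vulnerabilities_of_protected_groups=False,
    performs_social_scoring=False,
    realtime_remote_biometric_id_for_law_enforcement=False,
    untargeted_facial_image_scraping=False,
    emotion_inference_in_workplace_or_education=True,  # e.g. mood analytics in HR
    biometric_categorisation_of_sensitive_attributes=False,
)
print(screen.is_potentially_prohibited())  # True -> escalate to legal review
```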

Step 2: Check for High-Risk Classification (Articles 6 and 7, Annex III)

If your system is not prohibited, the next question is whether it is high-risk. There are two pathways to high-risk classification:

Pathway A: Safety Component of Regulated Products (Article 6(1))

An AI system is high-risk if it is:

  1. Intended to be used as a safety component of a product, or is itself a product, covered by EU harmonized legislation listed in Annex I (such as medical devices, machinery, toys, elevators, radio equipment), AND
  2. Required to undergo a third-party conformity assessment under that legislation.

For example, an AI system embedded in a medical device that requires CE marking through a notified body would be high-risk under Pathway A.

Pathway B: Annex III Use Cases (Article 6(2))

An AI system is high-risk if it falls under one of the use case areas listed in Annex III. These eight areas are:

  1. Biometrics — Remote biometric identification (other than real-time use by law enforcement in publicly accessible spaces, which is prohibited), biometric categorization, and emotion recognition systems.
  2. Critical infrastructure — AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity.
  3. Education and vocational training — AI that determines access to or admission to educational institutions, evaluates learning outcomes, assesses the appropriate level of education, or monitors exam behavior.
  4. Employment, workers' management and access to self-employment — AI for recruitment, screening, filtering job applications, making promotion and termination decisions, allocating tasks based on personal traits, and monitoring employee performance.
  5. Access to essential services — AI that evaluates eligibility for public assistance benefits, creditworthiness for loans, risk assessment in life and health insurance, and emergency services dispatch.
  6. Law enforcement — AI used for risk assessment of individuals, polygraphs, evaluation of evidence reliability, profiling in criminal investigations, and crime analytics.
  7. Migration, asylum and border control — AI for risk assessment in migration and border control, document authentication verification, and examination of asylum applications.
  8. Administration of justice and democratic processes — AI used to assist judicial authorities in researching and interpreting facts and law, and applying it to specific cases.

However, there is an important exception in Article 6(3): an AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This applies when the AI system performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task to an assessment.

If you rely on this exception, you must document your assessment before placing the system on the market, register the system in the EU database under Article 49(2), and provide the documentation to national competent authorities on request. The authority can disagree with your assessment.
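
The Annex III areas and the Article 6(3) conditions lend themselves to the same checklist treatment. Below is a minimal, illustrative sketch of Pathway B: the enum labels and condition flags are our own shorthand, and claiming an exception in code does not replace the documented significant-risk assessment described above.

```python
from enum import Enum, auto

# Annex III use-case areas (shorthand labels; the legal text governs).
class AnnexIIIArea(Enum):
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER = auto()
    JUSTICE_DEMOCRACY = auto()

# The four Article 6(3) conditions that can support a non-high-risk finding
# when the system does not pose a significant risk (our own flag names).
ARTICLE_6_3_CONDITIONS = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_influencing_assessment",
    "preparatory_task_only",
}

def is_high_risk_pathway_b(area, exception_claims):
    """Return True if the system is presumed high-risk under Annex III."""
    if area is None:
        return False  # not an Annex III use case at all
    # An exception claim must be documented and the system registered;
    # without a valid claim, the Annex III presumption of high risk stands.
    return not (set(exception_claims) & ARTICLE_6_3_CONDITIONS)

# Example: a CV-screening tool with no applicable exception.
print(is_high_risk_pathway_b(AnnexIIIArea.EMPLOYMENT, set()))  # True
```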

Step 3: Check for Limited-Risk Transparency Obligations (Article 50)

If your system is not prohibited or high-risk, check whether it triggers transparency obligations under Article 50. These apply to:

  • AI systems that interact with people — Users must be informed they are interacting with an AI system (e.g., chatbots must identify themselves as AI).
  • Emotion recognition systems — People subjected to emotion recognition must be informed.
  • Biometric categorization systems — People subjected to biometric categorization must be informed.
  • AI-generated content — Content that is artificially generated or manipulated (including deepfakes) must be marked in a machine-readable format and disclosed as artificially generated or manipulated.

Note that these transparency obligations can apply in addition to high-risk requirements. A high-risk AI system that also interacts with users must satisfy both sets of obligations.

Step 4: Classify as Minimal Risk

If your AI system is not prohibited, not high-risk under either pathway, and does not trigger limited-risk transparency obligations, it is minimal risk. No mandatory requirements apply, but you are encouraged to voluntarily adopt codes of conduct (Article 95) and ensure AI literacy within your organization (Article 4).
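
Pulling the four steps together, the whole procedure is a small decision tree. The sketch below is an illustrative outline, not a legal determination; the boolean parameters stand in for the assessments described in Steps 1 to 3, and it returns only the primary tier (recall that a high-risk system can also carry Article 50 transparency duties).

```python
def classify_ai_system(prohibited_practice,
                       annex_i_safety_component,
                       annex_iii_use_case,
                       article_6_3_exception_applies,
                       triggers_article_50_transparency):
    """Walk the four classification steps in order (illustrative sketch)."""
    if prohibited_practice:                                        # Step 1: Article 5
        return "prohibited"
    if annex_i_safety_component:                                   # Step 2, Pathway A
        return "high-risk"
    if annex_iii_use_case and not article_6_3_exception_applies:   # Step 2, Pathway B
        return "high-risk"
    if triggers_article_50_transparency:                           # Step 3: Article 50
        return "limited-risk"
    return "minimal-risk"                                          # Step 4


# Example: an AI recruitment screener (Annex III employment, no exception).
print(classify_ai_system(False, False, True, False, True))  # high-risk
```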

Common Classification Mistakes

Based on our experience helping hundreds of organizations classify their systems, here are the most frequent errors:

Mistake 1: Ignoring Embedded AI

Many organizations focus on standalone AI applications but forget about AI components embedded in larger products or services. A recommendation engine in an HR platform, for example, could be high-risk even if the platform vendor does not market it as an "AI product."

Mistake 2: Misapplying the Article 6(3) Exception

The exception for AI systems in Annex III that do not pose significant risk is narrower than many organizations assume. The four conditions (narrow procedural task, improvement of completed activity, detection without replacement, preparatory task) are specific. Document your reasoning carefully and be prepared for regulatory scrutiny.

Mistake 3: Confusing Provider and Deployer Roles

Your classification obligations differ based on whether you are a provider or deployer. If you significantly modify a high-risk AI system, you may become its new provider — inheriting all provider obligations, including the conformity assessment.

Mistake 4: Overlooking GPAI Model Integration

If your AI system uses a general-purpose AI model (like a large language model), the GPAI model itself has separate obligations under Chapter V. However, if you build a high-risk application on top of a GPAI model, the high-risk obligations still apply to your system.

Mistake 5: Static Classification

Classification is not a one-time exercise. Changes to your AI system's purpose, functionality, or deployment context can change its risk level. Implement ongoing monitoring and re-classification processes.

Practical Tips for Classification

Tip 1: Start with an AI Inventory

You cannot classify what you have not identified. Before starting classification, create a comprehensive inventory of all AI systems in your organization — including third-party tools and embedded AI components.
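
As a starting point, each inventory entry can be a simple record that captures who owns the system, where it is embedded, and who it affects. The sketch below is illustrative; the field names are our own suggestions rather than a prescribed schema, and the vendor and product names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row in an organisation-wide AI inventory (illustrative fields)."""
    name: str
    owner_team: str
    vendor: str               # vendor name, or "in-house"
    embedded_in: str          # parent product if the AI is an embedded component
    intended_purpose: str
    affected_people: str      # e.g. "job applicants", "EU retail customers"
    risk_tier: str = "unclassified"
    last_reviewed: str = ""   # ISO date of the last classification review

inventory = [
    AIInventoryEntry(
        name="CV ranking module",
        owner_team="People Ops",
        vendor="Acme HR Suite",          # hypothetical third-party vendor
        embedded_in="Recruiting platform",
        intended_purpose="Rank incoming job applications",
        affected_people="Job applicants in the EU",
    ),
]
```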

Tip 2: Involve the Right Stakeholders

Classification requires input from legal, technical, and business teams. The legal team understands the regulatory framework. The technical team knows what the system actually does. The business team knows how it is deployed and who it affects.

Tip 3: Document Everything

Regardless of the classification outcome, document your reasoning. Regulators will expect to see evidence of a thorough, systematic classification process — especially if you classify a system as non-high-risk.

Tip 4: Use Tooling

Manual classification with spreadsheets does not scale and is error-prone. Use structured assessment tools that guide you through the decision tree systematically. Our free risk classification wizard walks you through each step with contextual help and instant results.

Tip 5: Plan for Re-Classification

Build re-classification triggers into your AI governance process. Any change to the system's intended purpose, the population it affects, or its deployment context should trigger a review.
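
One lightweight way to operationalize this is to keep the triggers as data and check change logs against them. The change categories below are illustrative examples, not an exhaustive or official list.

```python
# Changes that should trigger a classification review (illustrative set).
RECLASSIFICATION_TRIGGERS = {
    "intended_purpose_changed",
    "affected_population_changed",
    "deployment_context_changed",
    "substantial_model_or_feature_change",
}

def needs_reclassification(changes):
    """Return True if any logged change matches a review trigger."""
    return bool(set(changes) & RECLASSIFICATION_TRIGGERS)

# Example: an internal IT support chatbot is opened up to external customers.
print(needs_reclassification({"deployment_context_changed"}))  # True
```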

What Comes After Classification?

Classification determines your compliance roadmap:

  • Prohibited — Cease deployment or redesign fundamentally.
  • High-risk — Implement the full set of obligations: risk management, data governance, Annex IV documentation, logging, transparency, human oversight, accuracy and robustness, quality management, conformity assessment, EU database registration, and post-market monitoring.
  • Limited risk — Implement transparency measures per Article 50.
  • Minimal risk — Consider voluntary codes of conduct and ensure AI literacy.

Our platform provides the tools for each scenario. Start your free classification now and see exactly which obligations apply to your AI systems. For a complete overview of the regulation, see our complete EU AI Act guide.

