High-Risk AI Systems Under the EU AI Act

High-risk AI systems face the most demanding compliance obligations under the EU AI Act. This guide covers the eight Annex III categories, the full set of requirements under Articles 9–17, and the conformity assessment process.

Annex III — High-Risk Categories

AI systems are classified as high-risk if they fall into one of the following eight domains defined in Annex III of the regulation.

1. Biometric Identification & Categorisation

AI systems intended for remote biometric identification (excluding verification systems whose sole purpose is to confirm that a person is who they claim to be), biometric categorisation based on sensitive attributes such as race, political opinions, or sexual orientation, and emotion recognition systems.

Examples

  • Post-event facial recognition for law enforcement
  • Biometric categorisation systems inferring sensitive attributes
  • Emotion recognition systems analysing facial expressions or voice

2. Critical Infrastructure

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

Examples

  • AI controlling electricity grid load balancing
  • Road traffic management and signal control systems
  • AI monitoring water supply safety parameters
  • Predictive maintenance for gas pipeline infrastructure

3. Education & Vocational Training

AI systems that determine access to educational institutions, assess students, assign learners to groups based on ability, or monitor examination behaviour.

Examples

  • University admissions screening algorithms
  • Automated essay grading and student assessment
  • AI-driven proctoring and exam surveillance
  • Adaptive learning platforms that determine educational pathways

4. Employment & Worker Management

AI systems used in recruitment, selection, hiring decisions, promotion and termination, task allocation based on behaviour or traits, and monitoring or evaluating employee performance.

Examples

  • CV screening and candidate ranking tools
  • AI-powered video interview analysis
  • Automated performance review and promotion systems
  • Worker productivity monitoring and task allocation

5. Access to Essential Services

AI systems that evaluate creditworthiness, set insurance premiums based on risk profiling, assess eligibility for public benefits, or prioritise emergency services dispatch.

Examples

  • Credit scoring algorithms for loan decisions
  • Insurance risk assessment and premium calculation
  • AI determining eligibility for social security benefits
  • Emergency service call triage and dispatch prioritisation

6. Law Enforcement

AI systems used to assess the risk of individuals becoming victims or offenders, as polygraph or lie detection tools, to evaluate the reliability of evidence, or for crime analytics predicting occurrence or reoccurrence.

Examples

  • Risk assessment tools for recidivism prediction
  • AI-powered evidence analysis and evaluation
  • Crime hotspot prediction and analytics
  • Victim risk profiling for protection measures

7. Migration & Border Control

AI systems used for processing visa and residence permit applications, border surveillance and screening, and assessing security, irregular migration, or health risks.

Examples

  • Automated visa application screening
  • AI-based asylum claim assessment tools
  • Border surveillance and anomaly detection
  • Health risk screening at points of entry

8. Administration of Justice & Democracy

AI systems used to assist judicial authorities in researching and interpreting facts and the law, and in applying the law to concrete facts. Also includes AI used to influence the outcome of elections or referendums.

Examples

  • Legal research and case law analysis tools
  • AI systems that recommend sentencing or judicial outcomes
  • AI tools for interpreting and applying regulations
  • Systems that influence voting behaviour or electoral processes

Compliance Requirements

Every high-risk AI system must meet the following requirements before being placed on the EU market. These obligations apply to providers (developers) and, in many cases, to deployers (users) as well.

Art. 9

Risk Management System

Establish a continuous, iterative risk management process throughout the AI system's entire lifecycle. Identify and analyse known and reasonably foreseeable risks. Estimate and evaluate risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse. Adopt suitable risk management measures and test for residual risks.
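In practice, many teams implement this as a living risk register that scores each hazard and tracks residual risk after mitigation. The sketch below is illustrative only: the 1–5 scoring scale, the field names, and the acceptance threshold are assumptions for the example, not values prescribed by the Act.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Risk:
    """One entry in a lifecycle risk register (illustrative 1-5 scales)."""
    description: str
    likelihood: int                      # 1 (rare) .. 5 (almost certain)
    severity: int                        # 1 (negligible) .. 5 (critical)
    mitigations: list = field(default_factory=list)
    residual_likelihood: Optional[int] = None
    residual_severity: Optional[int] = None

    def inherent_score(self) -> int:
        return self.likelihood * self.severity

    def residual_score(self) -> int:
        # Until a mitigation is recorded, residual risk equals inherent risk.
        l = self.residual_likelihood or self.likelihood
        s = self.residual_severity or self.severity
        return l * s

ACCEPTABLE = 6  # assumed acceptance threshold; set by your own risk policy

risk = Risk("Biased ranking of candidates from under-represented groups",
            likelihood=4, severity=4)
risk.mitigations.append("Re-sample training data; add fairness tests in CI")
risk.residual_likelihood, risk.residual_severity = 2, 3

print(risk.inherent_score())                 # 16
print(risk.residual_score())                 # 6
print(risk.residual_score() <= ACCEPTABLE)   # True
```

Re-scoring after each mitigation, release, and post-market finding is what makes the process "continuous and iterative" rather than a one-off assessment.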

Art. 10

Data Governance

Training, validation, and testing datasets must meet quality criteria including relevance, representativeness, accuracy, and completeness. Implement data governance practices that address bias detection and correction, especially when processing special categories of personal data. Document data collection processes, preparation, labelling, and any data gaps or shortcomings.
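Two of these quality criteria, completeness and representativeness, are straightforward to automate as dataset checks. The helpers below are a minimal sketch assuming a list-of-dicts dataset; the field and attribute names are invented for the example.

```python
from collections import Counter

def completeness(rows, required_fields):
    """Fraction of rows where every required field is present and non-empty."""
    ok = sum(all(r.get(f) not in (None, "") for f in required_fields)
             for r in rows)
    return ok / len(rows)

def representation(rows, attribute):
    """Share of each group for an attribute, for bias inspection."""
    counts = Counter(r[attribute] for r in rows if attribute in r)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

rows = [
    {"age": 34, "outcome": 1, "gender": "f"},
    {"age": 51, "outcome": 0, "gender": "m"},
    {"age": None, "outcome": 1, "gender": "f"},  # missing value: a data gap
    {"age": 29, "outcome": 0, "gender": "f"},
]
print(completeness(rows, ["age", "outcome"]))  # 0.75
print(representation(rows, "gender"))          # {'f': 0.75, 'm': 0.25}
```

Checks like these belong in the data pipeline itself, so that the documented data gaps and group shares stay current as datasets evolve.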

Art. 11

Technical Documentation

Prepare comprehensive technical documentation before the system enters the market, following the structure defined in Annex IV. This includes a general system description, design specifications, development methodology, data governance details, performance metrics, risk management documentation, and a description of changes throughout the lifecycle.

Art. 12

Record-Keeping & Logging

Build automatic logging capabilities into the system to ensure traceability of its functioning. Logs must record events relevant to identifying risks, substantial modifications, and facilitate post-market monitoring. Retain logs for an appropriate period, making them available to national authorities upon request.
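A common way to meet this is an append-only, machine-readable audit log with one record per relevant event. The sketch below shows the idea; the record fields, event types, and file name are assumptions, not a format mandated by the Act.

```python
import json
import time
import uuid

def log_event(event_type, payload, log_file="ai_audit.log"):
    """Append one machine-readable audit record as a JSON line."""
    record = {
        "id": str(uuid.uuid4()),   # unique event identifier
        "ts": time.time(),         # wall-clock timestamp of the event
        "type": event_type,        # e.g. "inference", "override", "update"
        "payload": payload,        # event-specific details
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_event("inference", {"model": "ranker-v3", "input_hash": "ab12",
                              "output": "shortlisted", "confidence": 0.91})
```

Logging input references, outputs, and operator interventions in this shape makes it possible to reconstruct how the system behaved when an authority or an incident investigation asks.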

Art. 13

Transparency & Information to Deployers

Provide deployers with clear, comprehensive instructions for use. This includes the system's intended purpose, capabilities and limitations, accuracy levels, known risks, human oversight measures, expected lifetime, and maintenance requirements. Information must be accessible and understandable.

Art. 14

Human Oversight

Design the system to be effectively overseen by natural persons during use. Human oversight measures must enable the person overseeing the system to understand its capacities and limitations, monitor operation, detect anomalies, interpret outputs correctly, decide not to use the system, and intervene or stop the system when necessary.
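One recurring pattern for "decide not to use" and "intervene or stop" is a human-in-the-loop gate that refuses to act autonomously on low-confidence outputs. This is a sketch under assumptions: the confidence threshold, decision labels, and reviewer interface are illustrative, not prescribed by the Act.

```python
CONFIDENCE_FLOOR = 0.80  # assumed policy threshold for autonomous decisions

def decide(prediction, confidence, reviewer=None):
    """Return the final decision, deferring to a human when required."""
    if confidence >= CONFIDENCE_FLOOR:
        return {"decision": prediction, "decided_by": "system"}
    if reviewer is None:
        # No reviewer available: fail safe rather than act autonomously.
        return {"decision": "withheld", "decided_by": "system"}
    # The reviewer sees the model's output and may accept or override it.
    return {"decision": reviewer(prediction, confidence), "decided_by": "human"}

print(decide("approve", 0.95))                                  # system decides
print(decide("approve", 0.55, reviewer=lambda p, c: "reject"))  # human overrides
print(decide("approve", 0.55))                                  # withheld
```

The same gate gives overseers the stop mechanism Article 14 asks for: removing the reviewer or lowering the threshold to zero halts autonomous operation entirely.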

Art. 15

Accuracy, Robustness & Cybersecurity

Achieve appropriate levels of accuracy for the intended purpose. Build resilience against errors, faults, and inconsistencies in the environment. Implement cybersecurity measures to protect against attempts by unauthorised third parties to exploit vulnerabilities and manipulate the system's behaviour or outputs.
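Declared accuracy levels are only meaningful if they are checked in operation. A simple approach is a rolling monitor that flags when recent accuracy falls below the level declared in the instructions for use; the window size and declared value below are example assumptions.

```python
from collections import deque

class AccuracyMonitor:
    """Flag when rolling accuracy drops below the declared level."""

    def __init__(self, declared_accuracy, window=100):
        self.declared = declared_accuracy
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool):
        self.outcomes.append(1 if correct else 0)

    def current_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def below_declared(self):
        acc = self.current_accuracy()
        return acc is not None and acc < self.declared

monitor = AccuracyMonitor(declared_accuracy=0.90, window=5)
for correct in [True, True, False, False, True]:
    monitor.record(correct)
print(monitor.current_accuracy())  # 0.6
print(monitor.below_declared())    # True
```

Wiring an alert like this into post-market monitoring turns the declared accuracy level from a documentation claim into an enforceable operating condition.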

Art. 17

Quality Management System

Implement a documented quality management system covering governance structures, design and development processes, risk management, data management, record-keeping, reporting obligations, resource management, accountability framework, and post-market monitoring procedures.

Art. 43

Conformity Assessment

Complete the appropriate conformity assessment procedure before placing the system on the market. For most Annex III systems, an internal assessment based on internal control (Annex VI) suffices. For biometric systems under Annex III point 1, assessment by an independent notified body (Annex VII) is required unless the provider has applied harmonised standards in full, in which case internal control may be chosen. Upon successful completion, affix the CE marking.

Need help meeting these requirements?

Haffa.ai automates risk management documentation, generates Annex IV technical documentation, and guides you through the conformity assessment process.

Simplify high-risk AI compliance

From risk management to CE marking — Haffa.ai provides the tools, templates, and guidance you need.