FRIA under the EU AI Act: Complete Article 27 Guide [2026]

March 15, 2026 · Haffa.ai Editorial · 12 min read

The EU AI Act's August 2026 deadline is five months away, and most compliance teams still haven't heard of the FRIA. That's a problem.

Conducting a Fundamental Rights Impact Assessment (FRIA) is one of the mandatory obligations for deployers of high-risk AI systems. Unlike the technical documentation that providers handle, this one falls on the organization actually using the AI. You, in other words.

If you're a compliance officer, DPO, or legal counsel responsible for AI systems in your organization, you're probably wondering what exactly a FRIA requires, whether your existing DPIA covers it, and how to get one done before August. This guide covers all of it: who must conduct a FRIA under Article 27, what the assessment includes, how it overlaps (and doesn't) with the DPIA you already know, and a practical five-phase process for completing yours.

What is a FRIA and why does Article 27 matter?

A Fundamental Rights Impact Assessment is a systematic process for evaluating how a high-risk AI system might affect people's fundamental rights. Think of it as the human-side counterpart to a conformity assessment. The provider worries about technical accuracy and robustness. You, the deployer, have to figure out what happens to actual people when this system makes or influences decisions about them.

Article 27 of the EU AI Act establishes this obligation. It requires certain deployers of high-risk AI systems to complete a FRIA before putting the system into use, and to notify the relevant market surveillance authority of the results.

The rights in scope go well beyond data protection: human dignity, non-discrimination, freedom of expression, the right to an effective remedy, workers' rights, children's rights. A DPIA is scoped to risks arising from data processing; the FRIA covers the full spread of Charter rights.

Here's the part that trips people up: the EU AI Office is supposed to publish an official FRIA template questionnaire under Article 27(5), but as of March 2026, it hasn't appeared. Organizations need to build their own assessment framework, or use a tool that generates one.

Before we go further: if you want to see what a FRIA looks like in practice, run a free AI system assessment on Haffa.ai and we'll generate a preliminary rights impact analysis for your system. It takes about five minutes.

Who must conduct a FRIA?

Not every AI deployer is subject to Article 27. The obligation is scoped to two specific categories.

Category 1: Public bodies and public service providers

The first group includes bodies governed by public law (government agencies, municipal authorities, public healthcare providers) plus any private company providing public services. If you're a private insurer processing claims for a public insurance scheme, or a private company running a public employment service, you're in this group.

These deployers must conduct a FRIA when deploying any high-risk AI system listed in Annex III, with one exception: AI systems used as safety components in critical infrastructure management (road traffic systems, water/gas/heating/electricity supply, and critical digital infrastructure). Those are exempt.

Category 2: Credit scoring and insurance underwriting

The second group is narrower. It covers deployers using high-risk AI to evaluate the creditworthiness of individuals or establish credit scores, and deployers using high-risk AI for risk assessment and pricing in life and health insurance.

There's a carve-out here too: AI systems used specifically for detecting financial fraud don't trigger the FRIA obligation.
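
If it helps to see the scoping logic in one place, here's a minimal sketch in Python. Everything in it — the function name, the category labels, the boolean flags — is our own shorthand for the rules above, not anything defined in the Act, and it assumes the system has already been classified as high-risk under Annex III. A first-pass filter, not legal advice.

```python
def fria_required(
    is_public_body: bool,
    provides_public_services: bool,
    annex_iii_category: str,
    use_case: str = "",
) -> bool:
    """Rough first-pass check of the Article 27 scoping rules above.

    Hypothetical helper with made-up labels -- a real determination
    needs legal review of your specific deployment.
    """
    # Carve-out: safety components in critical infrastructure
    # management (Annex III, point 2) are exempt from the FRIA duty
    if annex_iii_category == "critical_infrastructure":
        return False

    # Category 1: public bodies and private providers of public
    # services, for any other Annex III high-risk system
    if is_public_body or provides_public_services:
        return True

    # Category 2: credit scoring and life/health insurance pricing,
    # except systems used for detecting financial fraud
    if annex_iii_category in ("credit_scoring", "life_health_insurance"):
        return use_case != "fraud_detection"

    return False


# Example: a private fintech's credit scoring model
print(fria_required(False, False, "credit_scoring"))  # True
```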

A common misconception

Katarina, a DPO at a mid-size Polish fintech, told me her team initially assumed the FRIA applied only to government agencies. They'd classified their credit scoring AI as high-risk and completed the provider's conformity requirements, but hadn't considered the deployer-side FRIA obligation. When their legal counsel flagged Article 27 in January, they realized they had seven months to build an assessment process from scratch. They're not alone — most private-sector deployers in credit and insurance haven't started either.

If you're unsure whether your AI system qualifies as high-risk, start with how to classify your AI system under the EU AI Act.

The three sections of a FRIA

Article 27 structures the assessment around three sections. Each one serves a different purpose, and they build on each other.

Section 1: Description

Start with the facts. You document:

  • The AI system's intended purposes and how your organization specifically uses it
  • The processes the AI feeds into (what decisions does it influence or automate?)
  • The timeframe and frequency of use (continuous? batch processing? seasonal?)
  • The categories of people affected (applicants, employees, citizens, patients, customers)
  • The specific groups that may be disproportionately impacted (vulnerable populations, minorities, children)

Be specific here. "We use AI for hiring" isn't enough. "We use [System X] to screen CVs for software engineering roles, filtering approximately 2,000 applications per quarter, with automated rejections for candidates scoring below threshold" — that's what the assessment needs.
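
One way to force that specificity is to capture Section 1 as structured data rather than free prose. Here's a minimal sketch; the `SystemDescription` class and its field names are our own invention (Article 27 prescribes the content, not a schema), and the example values echo the CV screening scenario above.

```python
from dataclasses import dataclass, field


@dataclass
class SystemDescription:
    """Section 1 of a FRIA: the deployment facts. Illustrative fields."""
    system_name: str
    intended_purpose: str            # what the provider designed it for
    deployer_use: str                # how *your* organization actually uses it
    decisions_influenced: list[str]  # what it automates or feeds into
    frequency: str                   # e.g. "continuous", "batch, quarterly"
    affected_groups: list[str]
    disproportionately_impacted: list[str] = field(default_factory=list)


cv_screener = SystemDescription(
    system_name="System X",
    intended_purpose="CV screening for recruitment",
    deployer_use="Screens CVs for software engineering roles, "
                 "approximately 2,000 applications per quarter",
    decisions_influenced=["automated rejection below score threshold"],
    frequency="batch, continuous intake",
    affected_groups=["job applicants"],
    disproportionately_impacted=["candidates with non-traditional career paths"],
)
```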

Section 2: Assessment

This is where the real work happens. For each group of affected individuals, you evaluate the specific risks of harm to their fundamental rights. The rights to consider include:

  • Right to non-discrimination (Article 21, EU Charter)
  • Right to privacy and data protection (Articles 7 and 8)
  • Human dignity (Article 1)
  • Freedom of expression and information (Article 11)
  • Right to an effective remedy (Article 47)
  • Workers' rights (Articles 27-32)
  • Rights of the child (Article 24)
  • Right to good administration (Article 41)

For each identified risk, assess the likelihood and severity. A credit scoring model that rejects loan applications has different risk characteristics than a chatbot that routes customer service queries.

Section 3: Mitigation

The final section documents what you're doing about the risks you identified:

  • Human oversight measures: who reviews the AI's outputs, how often, and with what authority to override?
  • Internal governance: what escalation paths exist when the system produces a questionable result?
  • Complaint mechanisms: how can an affected individual challenge a decision?
  • Remediation processes: what happens when something goes wrong?
  • Monitoring arrangements: how do you detect emerging risks after deployment?

This is where most assessments fall short. Writing "a human reviews all decisions" isn't mitigation if that human has no training, no time, and no real authority to override the system. More on this in the common mistakes section below.

FRIA vs. DPIA: what you can reuse

If your organization processes personal data through the AI system (which is almost always the case), you've likely completed a DPIA under GDPR Article 35. Article 27(4) of the AI Act explicitly allows you to use your existing DPIA as a starting point. Don't waste that.

A thorough DPIA covers roughly 30-40% of what a FRIA requires. Here's the overlap:

Element                                | DPIA covers it? | FRIA adds...
System description and purpose         | Yes             | Deployment-specific use cases and frequency
Data processing risks                  | Yes             | —
Privacy and data protection rights     | Yes             | —
Non-discrimination analysis            | Partially       | Detailed assessment across all protected characteristics
Human dignity                          | No              | Full assessment required
Freedom of expression                  | No              | Full assessment required
Workers' rights                        | No              | Full assessment required if employment context
Children's rights                      | No              | Full assessment required if minors affected
Right to effective remedy              | Partially       | AI-specific complaint and challenge mechanisms
Human oversight measures               | No              | Detailed oversight documentation
Complaint mechanisms for AI decisions  | No              | Full documentation required

The practical approach: don't start from zero. Pull your DPIA into your FRIA framework, mark the sections that carry over, and put your effort into the gaps. Non-discrimination analysis, human oversight documentation, and complaint mechanisms are the three biggest ones.
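
A trivial way to run that triage is to encode the coverage table and filter it. A sketch (the coverage values come from the table above; the dictionary structure is just our own bookkeeping):

```python
# DPIA coverage of each FRIA element, per the table above
dpia_coverage = {
    "System description and purpose": "yes",
    "Data processing risks": "yes",
    "Privacy and data protection rights": "yes",
    "Non-discrimination analysis": "partial",
    "Human dignity": "no",
    "Freedom of expression": "no",
    "Workers' rights": "no",
    "Children's rights": "no",
    "Right to effective remedy": "partial",
    "Human oversight measures": "no",
    "Complaint mechanisms for AI decisions": "no",
}

# Reuse what carries over; put the effort into the gaps
reusable = [e for e, c in dpia_coverage.items() if c == "yes"]
gaps = [e for e, c in dpia_coverage.items() if c != "yes"]

print(f"Carries over from the DPIA: {reusable}")
print(f"Needs new FRIA work ({len(gaps)} items): {gaps}")
```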

For a deeper comparison between GDPR and AI Act requirements, see EU AI Act vs GDPR: how they work together.

Already completed a DPIA for your AI system? Haffa.ai can map your existing DPIA to FRIA requirements automatically, showing exactly which gaps remain. Free for up to 2 AI systems.

How to conduct a FRIA in five phases

The EU AI Office hasn't published its official template yet. But the ECNL and the Danish Institute for Human Rights have published a framework that tracks closely with Article 27's requirements. What follows is a five-phase approach adapted from that guidance.

Phase 1: Scope and context mapping (1-2 days)

Assemble your assessment team. You need someone from legal/compliance, someone who understands the AI system technically, and someone from the business unit that uses it. A three-person team works for most assessments.

Document the deployment context: what the system does, how it's used, who it affects, and what decisions it feeds into. Interview the business users, the people who interact with the system daily. They'll know things the procurement documents don't mention.

Phase 2: Rights mapping (2-3 days)

Go through the Charter of Fundamental Rights article by article and identify which rights are potentially affected by your specific deployment. Not every right will be relevant. A credit scoring model won't implicate freedom of assembly, but it will implicate non-discrimination, privacy, human dignity, and the right to an effective remedy.

For each relevant right, describe the specific mechanism of impact. How, concretely, could this system affect this right for this group of people?

Phase 3: Risk evaluation (2-3 days)

For each right-impact pair, assess the likelihood and severity. Use a simple matrix:

  • Likelihood: rare, unlikely, possible, likely, almost certain
  • Severity: negligible, minor, moderate, major, critical

Multiply them (or use a 5x5 heat map) to prioritize. Focus your mitigation effort on the high-likelihood/high-severity combinations.
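
Here's what that scoring could look like in code. The scale labels come from the lists above; the numeric mapping and the priority cut-offs are assumptions you'd calibrate to your own risk methodology.

```python
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3,
            "major": 4, "critical": 5}


def risk_score(likelihood: str, severity: str) -> int:
    """Score one right-impact pair on the 5x5 matrix (1-25)."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]


def priority(score: int) -> str:
    # Illustrative cut-offs; set your own thresholds
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


# Example: discrimination risk in a credit scoring model
s = risk_score("likely", "major")  # 4 * 4 = 16
print(s, priority(s))              # 16 high
```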

This phase is where the assessment gets specific. "Potential discrimination risk" is not an evaluation. Compare that with: "The model was trained on historical lending data from 2015-2022, which reflects documented disparities in approval rates across ethnic groups in the Polish market. Without mitigation, these patterns will likely persist in automated decisions." The second version is an evaluation. The first is a placeholder.

Phase 4: Mitigation design (3-5 days)

For each high-priority risk, design specific countermeasures:

  • Technical: bias testing, fairness constraints, threshold adjustments
  • Procedural: human review protocols, override authorities, escalation paths
  • Organizational: training programs, governance committees, audit schedules
  • External: complaint channels, transparency notices, appeal processes

Document each measure with enough detail that an auditor could verify whether it's actually being implemented.
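
As a sketch of what auditor-verifiable detail can look like, here's one possible record structure per measure. The fields (owner, evidence, review cadence) are our own suggestion, not an Article 27 requirement.

```python
from dataclasses import dataclass


@dataclass
class MitigationMeasure:
    """One Section 3 countermeasure, documented so an auditor can verify it."""
    risk: str          # the right-impact pair it addresses
    measure_type: str  # "technical", "procedural", "organizational", "external"
    description: str
    owner: str         # who is accountable for the measure
    evidence: str      # what an auditor would actually check
    review_cadence: str


override_protocol = MitigationMeasure(
    risk="Non-discrimination: automated rejection of loan applicants",
    measure_type="procedural",
    description="Credit officers review all rejections within "
                "2 score points of the threshold",
    owner="Head of Credit Risk",
    evidence="Override log in the case management system; quarterly sample audit",
    review_cadence="quarterly",
)
```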

For a broader view of all deployer obligations, check the EU AI Act compliance checklist.

Phase 5: Documentation and notification (1-2 days)

Compile everything into a structured FRIA document. Article 27(3) requires you to notify the market surveillance authority of the results. Keep the document audit-ready, because regulators can and will request it.

Then set a review schedule. Article 27 requires the FRIA before first use, and it obliges you to update the assessment when any of its elements change: the system, the deployment context, or the risks themselves.

The timeline reality

Nine to fifteen working days sounds manageable. It is, for a single system. But most organizations have multiple AI systems that qualify. Marek, a compliance lead at a Warsaw-based insurance company, estimated his team needs to complete FRIAs for eleven high-risk AI systems before August 2026. Doing them one by one would eat most of the remaining calendar. His team is batching systems by risk category and running phases 1-3 in parallel for similar systems, cutting the total timeline roughly in half.

Common mistakes to avoid

I've reviewed dozens of early-stage FRIA attempts across European organizations, and the same problems keep showing up.

The most common is treating it as a checkbox exercise. Filling in "low risk" across every category won't survive regulatory scrutiny. If your credit scoring AI genuinely has no discrimination risk, explain why. Don't just mark the box.

Another frequent gap: ignoring indirect effects. A CV screening tool doesn't just affect candidates who get rejected. It also affects candidates who never apply because the job posting was targeted using the same AI. You have to think through the full chain of impact.

Teams also confuse the provider's conformity assessment with the FRIA. The provider handles technical documentation, accuracy testing, and CE marking. Your FRIA is about what happens when your organization deploys that system in your context, with your data, affecting your users. Different obligations, different parties.

And then there's the human oversight question. Nearly every FRIA I've seen claims "a human reviews all decisions." Very few organizations have checked whether those humans have the training, time, authority, and willingness to actually override the system when it's wrong.

What happens next

The August 2, 2026 deadline applies to high-risk AI systems covered by Annex III. If you're deploying one of these systems in either category described above, your FRIA needs to be done by that date. Don't count on a grandfather clause: the transitional relief in Article 111(2) covers only systems placed on the market before that date, lapses if the system's design changes significantly, and public-sector deployments must comply by August 2030 in any case.

Market surveillance authorities can request your FRIA documentation at any time. The EU AI Act doesn't specify a standalone penalty for a missing FRIA the way GDPR does for a missing DPIA, but non-compliance with deployer obligations can trigger fines of up to 15 million euros or 3% of worldwide annual turnover, whichever is higher. That's enough to concentrate the mind.

Scope your AI systems. Figure out which ones trigger the obligation. Get your assessment team in a room. Five months is shorter than it sounds.

Start your FRIA with Haffa.ai — run a free assessment for up to 2 AI systems. No credit card, no commitment. The platform maps your system to Article 27 requirements automatically and generates a structured FRIA document you can submit to regulators.

