
EU AI Act 2026: What You Need to Know Before August

January 15, 2026 · Haffa.ai Editorial · 9 min read

August 2026 is six months away

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, but the rules that will affect most organizations, namely the obligations for high-risk AI systems, apply from 2 August 2026. That is less than six months away. Most organizations are not ready.

This article covers which provisions are already in force, what changes in August 2026, who is affected, and what you should be doing now. If you deploy, develop, or distribute AI systems in the European Union, this regulation deserves the same attention you once gave GDPR.

Background

The European Commission proposed the AI Act in April 2021. After negotiations between the European Parliament and the Council of the EU, political agreement was reached in December 2023. The regulation was formally adopted in June 2024, published in the Official Journal on 12 July 2024, and entered into force twenty days later.

Unlike a directive, the AI Act is a regulation. It applies directly in all 27 EU Member States without national implementing legislation. Member States must designate national competent authorities and market surveillance authorities by 2 August 2025, which creates the enforcement infrastructure.

Timeline: four phases

The AI Act follows a phased rollout. Different provisions become applicable at different times:

Phase 1: February 2025 — Prohibited practices

As of 2 February 2025, the provisions on prohibited AI practices (Article 5) are already in force. The following AI applications are banned in the EU:

  • AI systems that use subliminal, manipulative, or deceptive techniques to distort behavior

  • AI systems that exploit vulnerabilities of specific groups (age, disability, social or economic situation)

  • Social scoring systems that evaluate or classify people based on social behavior or personal characteristics, leading to detrimental or disproportionate treatment

  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)

  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases

  • Emotion recognition in the workplace and educational institutions (with narrow exceptions)

  • Biometric categorization systems that infer sensitive attributes (race, political opinions, religious beliefs, sexual orientation)

If your organization operates any AI system that falls into these categories, you are already in breach. Our free risk assessment tool can help you check.

Phase 2: August 2025 — Governance and general-purpose AI

By 2 August 2025, the following provisions apply:

  • Member States must designate national competent authorities

  • The AI Office within the European Commission becomes fully operational

  • Obligations for providers of general-purpose AI models (GPAI) take effect (Chapter V)

  • AI literacy requirements (Article 4) become applicable

  • Penalties framework comes into force

Worth flagging: Article 4 requires all providers and deployers of AI systems to ensure their staff have sufficient AI literacy. This applies regardless of your system's risk level, which makes it one of the broadest obligations in the entire regulation.

Phase 3: August 2026 — High-risk AI systems

On 2 August 2026, the full obligations for high-risk AI systems become applicable. For most organizations, this is the date that actually changes day-to-day operations. The requirements include:

  • Risk management systems (Article 9)

  • Data and data governance requirements (Article 10)

  • Technical documentation per Annex IV (Article 11)

  • Record-keeping and automatic logging (Article 12)

  • Transparency and provision of information to deployers (Article 13)

  • Human oversight requirements (Article 14)

  • Accuracy, robustness, and cybersecurity standards (Article 15)

  • Quality management systems (Article 17)

  • Conformity assessments (Article 43)

  • EU database registration (Article 49)

  • Fundamental Rights Impact Assessments for deployers (Article 27)

  • Post-market monitoring obligations (Article 72)

  • Transparency obligations for limited-risk systems (Article 50)

The scope is wide. Any organization deploying AI in areas listed in Annex III (healthcare, employment, education, law enforcement, critical infrastructure, financial services) must have all of these measures in place by this date.

Phase 4: August 2027 — Existing systems and safety components

By 2 August 2027, the obligations extend to high-risk AI systems that are safety components of products already covered by EU harmonized legislation (such as medical devices, machinery, and automotive systems). This gives manufacturers of these products an additional year to bring existing systems into compliance.

Who is affected?

The AI Act assigns obligations through four roles.

Providers are organizations that develop an AI system (or have one developed) and place it on the market under their own name. They carry the heaviest obligations. Deployers are organizations that use an AI system under their authority. If you purchase or license AI software for your operations, you are a deployer. Importers bring AI systems from third countries into the EU market, and distributors make AI systems available on the EU market without being providers or importers.

One thing to note: the AI Act has extraterritorial reach. If your AI system's output is used in the EU, you may be subject to the regulation even if your organization is based outside Europe, much like GDPR.

Penalties

The fines are large. In each tier, the cap is the higher of the fixed amount and the turnover percentage:

  • Prohibited practices: up to €35 million or 7% of annual global turnover

  • High-risk obligations: up to €15 million or 3% of annual global turnover

  • Supplying incorrect or misleading information to authorities: up to €7.5 million or 1% of annual global turnover

Proportionate caps apply for SMEs and startups, but even the reduced amounts are not trivial.
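The "whichever is higher" rule is easy to misread, so here is a minimal sketch of how the tier caps combine. The helper function and the example turnover figures are illustrative, not taken from the regulation's text:

```python
def max_fine_eur(fixed_cap: float, pct_of_turnover: float,
                 annual_global_turnover: float) -> float:
    """Return the applicable upper bound for a fine tier: the higher of
    the fixed cap and the percentage of annual global turnover."""
    return max(fixed_cap, pct_of_turnover * annual_global_turnover)

# Prohibited-practices tier for a firm with €1bn global turnover:
# max(€35m, 7% of €1bn) = €70m, so the turnover-based cap governs.
print(max_fine_eur(35_000_000, 0.07, 1_000_000_000))

# For a firm with €100m turnover, the fixed €35m cap is higher and governs.
print(max_fine_eur(35_000_000, 0.07, 100_000_000))
```

For large firms the percentage usually dominates; for smaller ones the fixed cap does, which is why the SME carve-outs matter.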

What to do now

Six months is not a lot of time. Here is a practical sequence:

1. Inventory your AI systems

You cannot comply with what you have not identified. Build a registry of every AI system your organization develops, deploys, or distributes. Include third-party AI tools and embedded AI components; these are easy to miss.
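As a starting point, a registry can be as simple as a structured record per system. The sketch below is one possible shape; the field names are our suggestion, not anything prescribed by the Act:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an AI-system inventory. Fields are illustrative."""
    name: str
    role: str                  # "provider", "deployer", "importer", or "distributor"
    vendor: Optional[str]      # third-party supplier, if any
    purpose: str               # intended purpose, in plain language
    annex_iii_area: Optional[str] = None  # e.g. "employment", or None
    risk_class: str = "unclassified"      # filled in during step 2

registry: list[AISystemRecord] = []
registry.append(AISystemRecord(
    name="CV screening tool",
    role="deployer",
    vendor="Acme HR",          # hypothetical third-party vendor
    purpose="Ranks job applicants for recruiters",
    annex_iii_area="employment",
))
```

Recording the role and any third-party vendor per system matters, because your obligations differ sharply depending on whether you are the provider or merely a deployer.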

2. Classify each system's risk level

Run each system through a risk classification based on Articles 5, 6, and Annex III. Our free risk classification tool can help you do this quickly.
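To make the triage order concrete, here is a toy first-pass classifier. The keyword list and area set are drastically simplified placeholders; real classification requires legal analysis of Articles 5 and 6 and Annex III, not string matching:

```python
from typing import Optional

# Illustrative only: Article 5 prohibitions cannot be reduced to keywords.
PROHIBITED_KEYWORDS = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"healthcare", "employment", "education",
                   "law enforcement", "critical infrastructure",
                   "financial services"}

def classify(purpose: str, annex_iii_area: Optional[str]) -> str:
    """Toy triage following the Act's order of checks:
    Article 5 prohibitions first, then Annex III high-risk areas."""
    if any(k in purpose.lower() for k in PROHIBITED_KEYWORDS):
        return "prohibited"
    if annex_iii_area in ANNEX_III_AREAS:
        return "high-risk"
    # Not high-risk, but Article 50 transparency duties may still apply.
    return "minimal-or-limited"

print(classify("Ranks job applicants for recruiters", "employment"))  # high-risk
```

The value of even a toy pass like this is the ordering: prohibitions are checked before the high-risk test, because a prohibited system cannot be made compliant through documentation.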

3. Prioritize high-risk systems

Concentrate compliance work on systems classified as high-risk. They carry the heaviest obligations and the largest fines.

4. Start documentation now

Annex IV technical documentation is extensive. It cannot be assembled in a few weeks. Start now. Our document generator can auto-populate much of it from your system registry data.

5. Address AI literacy

Article 4 AI literacy requirements have been applicable since August 2025. If you have not addressed this yet, set up training programs.

Make sure your legal, compliance, and data protection teams understand the AI Act's requirements and know what is expected of them.

How Haffa.ai can help

We built Haffa.ai for this. The platform covers guided risk classification, automated documentation generation, compliance dashboards, and access to certified expert consultants. It is designed to reduce the time between "we need to comply" and actually being compliant.

See our plans or start with a free risk assessment.

