Regulation Guide · 12 min read

EU AI Act Compliance Guide 2026: What Companies Must Do Now

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. With prohibitions already in force and high-risk obligations taking effect in August 2026, compliance teams can no longer treat this as a future concern. This guide breaks down the risk categories, deadlines, and concrete steps your organization needs to take now.

What Is the EU AI Act?

The EU AI Act entered into force on 1 August 2024 after being published in the Official Journal of the European Union on 12 July 2024. It establishes a horizontal, risk-based regulatory framework that applies to AI systems placed on the market, put into service, or used within the European Union—regardless of where the provider is established.

The regulation follows a phased enforcement timeline. Provisions banning unacceptable-risk AI practices became applicable on 2 February 2025. Obligations for general-purpose AI (GPAI) models and the governance framework provisions apply from 2 August 2025. The most significant tranche—rules for high-risk AI systems listed in Annex III—takes effect on 2 August 2026.

Unlike sectoral regulations such as the Medical Device Regulation or DORA, the AI Act applies across all industries. If your organization develops, deploys, imports, or distributes an AI system that operates within the EU, you are likely within scope.

The Four-Tier Risk Classification System

The AI Act categorizes AI systems into four risk levels. Your compliance obligations depend entirely on which tier your system falls into.

Unacceptable Risk (Prohibited)

These AI practices have been banned since 2 February 2025. Violations carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. Prohibited systems include:

  • Social scoring by public authorities — systems that evaluate or classify individuals based on social behaviour or personality characteristics, leading to detrimental treatment unrelated to the context in which the data was collected.
  • Real-time remote biometric identification in public spaces — for law enforcement purposes, except in narrowly defined circumstances (serious crime, imminent threats) with prior judicial authorization.
  • Emotion recognition in workplaces and schools — AI systems inferring emotions of employees or students, except for medical or safety reasons.
  • Subliminal manipulation — techniques that deploy subliminal components beyond a person’s consciousness to materially distort behaviour in a way that causes significant harm.
  • Untargeted scraping for facial recognition databases — building or expanding facial recognition datasets through untargeted scraping of images from the internet or CCTV footage.

High Risk (Annex III)

High-risk AI systems face the most extensive obligations, taking full effect on 2 August 2026. These include AI systems used in:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure (energy, transport, water, digital)
  • Education and vocational training (admissions, assessment, monitoring)
  • Employment, worker management, and access to self-employment (recruitment, promotion, task allocation, performance monitoring)
  • Access to essential services (credit scoring, insurance risk, emergency dispatch)
  • Law enforcement (risk assessment, polygraph, evidence evaluation)
  • Migration, asylum, and border control (risk assessment, document verification)
  • Administration of justice and democratic processes

Providers of high-risk systems must implement a quality management system, maintain technical documentation, conduct conformity assessments, register in the EU database, and ensure ongoing post-market monitoring. These are not optional best practices—they are legal requirements with penalties for non-compliance of up to €15 million or 3% of global turnover.

Limited Risk (Transparency Obligations)

AI systems that interact with humans, generate synthetic content (deepfakes, AI-generated text and images), or perform emotion recognition or biometric categorisation must meet transparency requirements. Users must be informed that they are interacting with an AI system. AI-generated or manipulated content (images, audio, video) must be labelled as such in a machine-readable format. Chatbots must disclose their artificial nature unless this is obvious from the context.
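
The Act requires machine-readable marking but does not prescribe a single technical format; provenance standards such as C2PA are one emerging option. As a minimal illustration only, the sketch below embeds a disclosure tag in PNG metadata with Pillow. The key names are invented for the example, not mandated by the regulation.

```python
# Sketch: embedding a machine-readable "AI-generated" disclosure in PNG
# metadata with Pillow. The AI Act requires machine-readable marking but
# does not mandate this mechanism; the key names here are invented.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (512, 512))       # stand-in for a generated image
meta = PngInfo()
meta.add_text("ai-generated", "true")    # hypothetical disclosure keys
meta.add_text("generator", "example-model")
img.save("generated_labeled.png", pnginfo=meta)
```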

Minimal Risk

The majority of AI systems—spam filters, AI-enabled video games, recommendation algorithms for non-critical uses—fall into this category. They are not subject to mandatory obligations under the AI Act, though the European Commission encourages voluntary codes of conduct.

Key Timelines and Enforcement Dates

The phased rollout is deliberate, giving organizations time to prepare. However, several deadlines have already passed or are imminent:

  • 1 Aug 2024 — AI Act enters into force.
  • 2 Feb 2025 — Prohibitions on unacceptable-risk AI practices apply. AI literacy obligations for providers and deployers begin.
  • 2 Aug 2025 — Rules for general-purpose AI (GPAI) models apply. Governance structure (AI Office, AI Board, advisory forum) becomes fully operational. Member states must designate their national competent authorities. Codes of practice for GPAI finalized.
  • 2 Aug 2026 — High-risk AI system obligations under Annex III apply. Penalties for non-compliance with high-risk provisions become enforceable.
  • 2 Aug 2027 — Obligations for high-risk AI systems that are safety components of products covered by EU harmonisation legislation (Annex I, e.g. medical devices, machinery, toys, vehicles) apply.

Providers vs. Deployers: Understanding Your Role

The AI Act distinguishes between providers (organizations that develop or commission an AI system and place it on the market or put it into service under their own name) and deployers (organizations that use an AI system under their authority, excluding personal non-professional use).

Provider Obligations (High-Risk Systems)

  • Establish and maintain a quality management system covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
  • Produce and keep current technical documentation demonstrating conformity with the regulation before the system enters the market.
  • Conduct a conformity assessment (self-assessment under internal control for most Annex III systems; biometric systems may require third-party assessment by a notified body where harmonised standards are not applied).
  • Register the AI system in the EU database maintained by the European Commission before placing it on the market.
  • Implement post-market monitoring and report serious incidents to the relevant market surveillance authority.
  • Affix the CE marking once conformity is demonstrated.

Deployer Obligations (High-Risk Systems)

  • Use the system in accordance with the instructions of use provided by the provider.
  • Ensure that input data is relevant and representative for the intended purpose.
  • Assign human oversight to individuals who have the competence, training, authority, and resources to fulfil that role effectively.
  • Monitor the system’s operation for risks and report incidents or malfunctions to the provider and, where applicable, to the market surveillance authority.
  • Conduct a fundamental rights impact assessment before deploying high-risk AI systems — this is mandatory for public bodies, for private entities providing public services, and for deployers of credit-scoring and life and health insurance risk-assessment systems.
  • Keep logs generated by the AI system for at least six months (or longer where required by sector-specific law).
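
The six-month retention floor in the last item translates naturally into an automated retention check. A minimal sketch in Python, with illustrative names and a 183-day approximation of "six months":

```python
# Sketch: enforcing the six-month minimum for deployer-kept logs. Logs
# may only be purged once they are older than the retention floor
# (longer where sector-specific rules require it). Names illustrative.
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=183)  # approximation of "at least six months"

def purgeable(log_timestamp: datetime, sector_floor: timedelta | None = None) -> bool:
    floor = max(RETENTION_FLOOR, sector_floor or timedelta(0))
    return datetime.now(timezone.utc) - log_timestamp > floor

print(purgeable(datetime(2025, 1, 1, tzinfo=timezone.utc)))
```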

An important nuance: if a deployer substantially modifies a high-risk system or puts it on the market under its own name, it becomes a provider and assumes all provider obligations.

General-Purpose AI Models

The AI Act introduced specific rules for GPAI models (e.g., large language models) applicable since 2 August 2025. All GPAI providers must maintain technical documentation, provide information and documentation to downstream providers integrating the model, establish a policy to respect copyright (including compliance with the text and data mining opt-out under the Copyright Directive), and publish a sufficiently detailed summary of training data content.

GPAI models classified as posing systemic risk (currently presumed where cumulative training compute exceeds 10^25 FLOPs, though this threshold may be updated) face additional obligations: performing model evaluations including adversarial testing, assessing and mitigating systemic risks, reporting serious incidents to the AI Office, and ensuring adequate cybersecurity protections. The Commission can also designate a model as posing systemic risk based on its capabilities, reach, or other criteria, regardless of the FLOP threshold.
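
The compute-threshold presumption is simple to operationalize. Below is a minimal sketch of the check, assuming you know the cumulative training compute; the threshold value comes from the Act, while the function and parameter names are our own.

```python
# Sketch: the presumption-of-systemic-risk check. The 10^25 FLOP
# training-compute threshold comes from the Act; function and parameter
# names are illustrative.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float,
                           designated_by_commission: bool = False) -> bool:
    """Systemic risk applies above the compute threshold, or on designation."""
    return designated_by_commission or training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e25))  # True: above the 10^25 FLOP threshold
```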

Seven Steps Companies Should Take Now

With the August 2026 deadline for high-risk obligations approaching, here is a pragmatic compliance roadmap:

1. Conduct an AI System Inventory

Map every AI system in your organization—purchased, built in-house, or embedded in third-party products. For each system, document its purpose, the data it processes, who it affects, and where it is deployed. Many organizations are surprised by the breadth of AI use once they begin inventorying; even routine tools like automated CV screening or credit scoring algorithms qualify.
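
There is no prescribed format for the inventory itself. As a rough sketch, each record might capture the fields mentioned above; the schema below is illustrative, not mandated by the Act.

```python
# Sketch: a minimal AI-system inventory record. Fields mirror the
# questions in the text (purpose, data, affected persons, deployment);
# the schema and values are illustrative, not mandated by the Act.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str                     # intended purpose, in plain language
    origin: str                      # "in-house", "purchased", or "embedded"
    data_categories: list[str]       # e.g. ["CVs", "employment history"]
    affected_persons: list[str]      # e.g. ["job applicants"]
    deployment_locations: list[str]  # EU member states where it is used
    owner: str                       # accountable team or individual

inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        purpose="rank incoming job applications",
        origin="purchased",
        data_categories=["CVs", "employment history"],
        affected_persons=["job applicants"],
        deployment_locations=["DE", "FR"],
        owner="HR Operations",
    ),
]
```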

2. Classify Each System by Risk Tier

Compare each inventoried system against the prohibited practices list and the Annex III high-risk categories. Pay close attention to the intended purpose: a general chatbot may be minimal risk, but the same underlying model used for employment decisions becomes high-risk. Document your classification rationale thoroughly—regulators will expect to see it.
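
A first-pass triage can be automated to route systems to legal review, as long as the final classification and its rationale are documented by humans. The sketch below is deliberately simplified; the category keywords are illustrative stand-ins for the actual legal definitions.

```python
# Sketch: first-pass risk-tier triage keyed on intended purpose, not on
# the underlying technology. The keyword lists are crude stand-ins for
# the legal definitions; final classification needs documented review.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition"}
ANNEX_III_USES = {"recruitment", "credit scoring", "exam assessment"}

def triage_tier(use_case: str, interacts_with_humans: bool) -> str:
    use = use_case.lower()
    if any(term in use for term in PROHIBITED_USES):
        return "unacceptable"
    if any(term in use for term in ANNEX_III_USES):
        return "high"
    if interacts_with_humans:
        return "limited"   # transparency obligations apply
    return "minimal"

# The same underlying model lands in different tiers by purpose:
print(triage_tier("customer support chatbot", True))        # limited
print(triage_tier("recruitment candidate ranking", False))  # high
```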

3. Perform a Gap Analysis

For every high-risk system, map current practices against the Article 9-15 requirements: risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency and provision of information (Art. 13), human oversight (Art. 14), and accuracy, robustness, and cybersecurity (Art. 15). Identify what you already have, what needs improvement, and what is entirely missing.
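
Many teams track this as a simple matrix of the seven requirement areas against current status. A minimal sketch, with illustrative status values:

```python
# Sketch: a gap-analysis matrix for one high-risk system, mapping the
# Articles 9-15 requirement areas to a current status. Article numbers
# are from the Act; the status values are illustrative.
REQUIREMENTS = {
    "Art. 9":  "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record-keeping",
    "Art. 13": "Transparency and provision of information",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness and cybersecurity",
}

gap_analysis = {
    "Art. 9":  "partial",  # ad-hoc risk reviews, no lifecycle process
    "Art. 10": "missing",  # no documented data provenance
    "Art. 11": "in place",
}

for article, title in REQUIREMENTS.items():
    print(f"{article} {title}: {gap_analysis.get(article, 'not assessed')}")
```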

4. Build Your Risk Management System

Article 9 requires an iterative risk management process throughout the entire lifecycle of the AI system. This means identifying foreseeable risks, implementing mitigation measures, testing for residual risk, and documenting the entire process. Risk management must be continuous, not a one-time compliance exercise.
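
One lightweight way to keep the process iterative is to attach review dates to each risk-register entry and flag stale ones. The sketch below is illustrative; the field names, labels, and review interval are our own assumptions, not requirements from the Act.

```python
# Sketch: a risk-register entry with a staleness check, to make the
# point that Article 9 risk management is continuous. Field names,
# labels, and the review interval are illustrative assumptions.
from datetime import date, timedelta

risk_register = [
    {"risk": "biased ranking of applicants",
     "mitigation": "rebalanced training data; added fairness tests",
     "residual": "medium",
     "last_reviewed": date(2026, 1, 15)},
]

def stale(entry: dict, max_age: timedelta = timedelta(days=180)) -> bool:
    """Flag entries whose last review exceeds the review interval."""
    return date.today() - entry["last_reviewed"] > max_age

for entry in risk_register:
    if stale(entry):
        print(f"re-review needed: {entry['risk']}")
```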

5. Establish Data Governance Practices

High-risk AI systems must be developed using training, validation, and testing datasets that meet specific quality criteria: relevance, representativeness, freedom from errors to the extent possible, completeness, and statistical properties appropriate to the intended purpose. Document your data provenance, preprocessing steps, and any bias detection and correction measures.
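
Some of these criteria can be checked mechanically. Assuming tabular training data, the sketch below computes two simple indicators: missing-value shares for completeness and group distribution for representativeness. Column names and any thresholds you apply are illustrative.

```python
# Sketch: mechanical checks for two Article 10 quality criteria on a
# tabular training set: completeness (missing-value shares) and
# representativeness (distribution over a grouping column).
import pandas as pd

def data_quality_report(df: pd.DataFrame, group_col: str) -> dict:
    return {
        "missing_share": df.isna().mean().to_dict(),
        "group_distribution": df[group_col].value_counts(normalize=True).to_dict(),
    }

df = pd.DataFrame({"age": [25, 31, None], "region": ["DE", "DE", "FR"]})
print(data_quality_report(df, group_col="region"))
```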

6. Invest in AI Literacy

Since 2 February 2025, all providers and deployers must ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This is not limited to high-risk systems—it applies across all risk categories. Design training programmes that are proportionate to the context, the technical knowledge of the individuals, and the specific AI systems they interact with.

7. Monitor Regulatory Developments Continuously

The AI Act is not static. The European Commission is empowered to adopt delegated acts and implementing acts that will further specify requirements. Harmonised standards from CEN/CENELEC are still under development and will provide the detailed technical specifications for conformity. The AI Office is publishing guidelines and codes of practice, and member states are standing up regulatory sandboxes. And although the AI Act is a regulation that applies directly, national implementation (designating authorities, setting penalty rules) may introduce additional requirements. Staying current with these developments is essential to maintaining compliance.

Penalties for Non-Compliance

The enforcement regime is designed to be proportionate but significant. Penalties are structured in three tiers; in each case the cap is the fixed amount or the share of worldwide annual turnover, whichever is higher:

  • €35 million or 7% of global turnover for violations related to prohibited AI practices.
  • €15 million or 3% of global turnover for non-compliance with high-risk AI system obligations or GPAI model requirements.
  • €7.5 million or 1% of global turnover for providing incorrect, incomplete, or misleading information to authorities.
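
The fine mechanics are straightforward arithmetic: for undertakings, the cap is whichever of the two amounts is higher, while for SMEs it is whichever is lower. A minimal sketch:

```python
# Sketch: the three penalty tiers with the "whichever is higher" rule
# for undertakings and the lower cap for SMEs. Structure illustrative.
TIERS = {  # tier -> (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_or_gpai":   (15_000_000, 0.03),
    "misleading_info":     (7_500_000,  0.01),
}

def max_fine(tier: str, turnover_eur: float, sme: bool = False) -> float:
    fixed, share = TIERS[tier]
    turnover_based = share * turnover_eur
    return min(fixed, turnover_based) if sme else max(fixed, turnover_based)

print(max_fine("prohibited_practice", 2_000_000_000))  # 140,000,000.0
```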

For SMEs and startups, the regulation caps each fine at whichever of the two amounts is lower, keeping penalties proportionate to their size and economic viability. National authorities will handle enforcement for most provisions, while the Commission's AI Office enforces GPAI model rules directly.

Keeping Up with AI Act Developments

One of the practical challenges compliance teams face is the volume and fragmentation of AI Act-related updates. Between the European Commission’s delegated acts, CEN/CENELEC standardisation work, AI Office guidance documents, European Parliament resolutions, and 27 national implementation processes, relevant developments are published across dozens of sources daily.

Polzia monitors over 200 regulatory sources across 21 European markets, including EUR-Lex, national gazette publications, and regulatory authority websites. When the AI Office publishes new guidance or a member state announces its national competent authority for AI Act enforcement, Polzia surfaces it automatically, classified by topic, jurisdiction, and severity. For teams managing AI Act compliance alongside DORA, CSRD, NIS2, and other frameworks, having a single feed that covers all relevant developments can replace hours of manual scanning each week.

Conclusion

The EU AI Act represents a fundamental shift in how AI systems are governed. Organizations that start compliance work now—inventorying systems, classifying risk, building internal governance—will be in a far stronger position when high-risk enforcement begins in August 2026 than those who wait for harmonised standards to be finalized.

The regulation rewards a proactive approach. Conformity assessments, technical documentation, and risk management systems take months to build properly. The organizations that treat the AI Act as an opportunity to build trustworthy AI practices—rather than a checkbox exercise—will find themselves with a competitive advantage as customers, partners, and regulators increasingly demand demonstrable AI governance.

Start monitoring EU AI Act developments for free

Track enforcement deadlines, delegated acts, national implementation updates, and CEN/CENELEC standards—all in one regulatory intelligence feed.

Get Started — It’s Free