EU AI ACT COMPLIANCE HUB

The EU AI Act Takes Full Effect in [live countdown: days : hours : minutes]

The world's first binding AI regulation, with enforcement deadlines already live and high-risk obligations taking effect August 2, 2026. Where does your organization stand?

4 Risk Tiers
€35M Max Fine
27 Member States Enforcing
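The countdown above is easy to reproduce for your own dashboards. A minimal sketch; the deadline dates come from the Act's phase-in schedule, and the fixed reference date in the example is purely for illustration:

```python
from datetime import date

# Key enforcement deadlines under Regulation (EU) 2024/1689
GPAI_DEADLINE = date(2025, 8, 2)        # GPAI provider obligations (already live)
HIGH_RISK_DEADLINE = date(2026, 8, 2)   # Annex III high-risk obligations

def days_until(deadline, today=None):
    """Days remaining until a deadline; negative once it has passed."""
    today = today or date.today()
    return (deadline - today).days

# Fixed reference date so the output is deterministic
print(days_until(HIGH_RISK_DEADLINE, today=date(2026, 2, 2)))  # 181
```

Wire `days_until` to the real clock (drop the `today` argument) and the number tracks the August 2, 2026 deadline automatically.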

WHAT IS THE EU AI ACT & WHY IT MATTERS

The EU AI Act (Regulation (EU) 2024/1689) is the first binding AI law anywhere in the world. Published in the Official Journal on July 12, 2024, it creates a risk-based regulatory framework that classifies AI systems into four tiers and assigns obligations based on how much harm a system can cause.

That’s not a future problem.

Prohibited AI practices have been enforceable since February 2, 2025. GPAI model providers have been subject to transparency and documentation obligations since August 2, 2025. The next major deadline, August 2, 2026, activates the full weight of high-risk obligations for systems classified under Annex III. That includes AI used in employment decisions.

Worth pausing on that. Annex III, Section 4 specifically flags AI systems used in recruitment, candidate screening, performance evaluation, and decisions about promotion or termination. If your organization uses AI anywhere in the employee lifecycle, you’re likely looking at high-risk classification. Not possibly. Likely.

The scope assessment above helps you figure out whether the Act applies to your organization at all. The risk tiers below answer a different question: how does it apply? Four categories, four levels of regulatory burden, four different sets of consequences for getting it wrong. The classification your system receives determines everything that follows, from documentation requirements to penalty exposure.

Start with the tier that matches your situation.

SCOPE & APPLICABILITY

Does the EU AI Act
Apply to You?

The world's first binding AI regulation reaches well beyond EU borders. Answer three questions to find out where you stand.

Step 1 of 3

What's your organization's role with AI?

Step 2 of 3

Does your AI reach the EU?

Step 3 of 3

Does your AI touch any of these high-risk areas?

The EU AI Act classifies specific AI use cases as high-risk under Annex III. These carry the heaviest obligations. Select any that apply.

EU AI Act Article Tracker Reference (Example, Demo Only)

Disclaimer: This tracker is an illustrative example designed to demonstrate how the spreadsheet can be used. Columns such as Time to Complete, Completion %, ROI Timeline, Cost Implication, Order, and Implementation are pre-filled with sample data to show the tracker’s functionality. If you choose to use this tracker, you’ll need to verify and update each column to reflect your organization’s specific use case, requirements, and compliance obligations.

EU AI ACT Risk Categories

RISK CLASSIFICATION FRAMEWORK

Four Tiers. One Framework.

The EU AI Act classifies every AI system into one of four risk tiers. Each carries different obligations and penalties, with enforcement dates already rolling. Select a tier to see what applies.

Most of the compliance work lands in one place. High-risk.

If your AI system falls under Annex III or qualifies as a safety component in regulated products under Annex I, the Act assigns a specific set of obligations based on your role in the value chain. Provider, deployer, importer, distributor. Each carries different responsibilities, different documentation requirements, and different liability exposure. (If any of those terms are unfamiliar, the EU AI Act Glossary breaks them down.)

The question isn’t whether obligations exist. It’s which ones apply to you.

That depends on two things: what your system does and what role your organization plays. A CTO building an AI-powered hiring tool faces different requirements than a procurement lead sourcing one from a third party. The obligations overlap, but they don’t match. Understanding where your role fits in the governance structure is what separates a compliance plan from a compliance gap.
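That role-to-obligation split can be sketched as a simple lookup. This is a toy mapping for illustration only, with article references as summarized in this guide; the lists are not exhaustive and this is not legal advice:

```python
# Illustrative sketch: which obligation clusters attach to which value-chain
# role for a high-risk system. Not exhaustive; not legal advice.
OBLIGATIONS_BY_ROLE = {
    "provider": [
        "risk management system (Art. 9)",
        "data governance (Art. 10)",
        "technical documentation (Art. 11)",
        "conformity assessment (Art. 43)",
        "EU database registration (Art. 49)",
    ],
    "deployer": [
        "use per instructions, human oversight (Art. 26)",
        "fundamental rights impact assessment, where required (Art. 27)",
    ],
    "importer": ["verify provider conformity before placing on market (Art. 23)"],
    "distributor": ["verify CE marking and documentation (Art. 24)"],
}

def obligations_for(role):
    """Return the obligation cluster for a value-chain role."""
    try:
        return OBLIGATIONS_BY_ROLE[role.lower()]
    except KeyError:
        raise ValueError(f"Unknown role: {role!r}") from None

print(obligations_for("deployer")[0])
```

Notice the asymmetry: the provider list is the long one. The CTO building the hiring tool owns Articles 9 through 49; the procurement lead deploying it owns Articles 26 and 27.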

The explorer below maps it out. Select your role or pick an obligation type to see exactly what the Act requires, which articles apply, and when enforcement begins.

OBLIGATIONS EXPLORER

What Does the Act Require from You?

Obligations depend on two things: what your AI system does and what role your organization plays. Pick your role to see what applies, or start from a specific obligation to see who owns it.

COMPLIANCE TIMELINE

Where You Are.
What's Next.

The EU AI Act rolls out in phases from 2024 through 2030. Some obligations are already enforceable. Others depend on guidance that hasn't arrived yet. Here's the full picture, and where the pressure points sit right now.

Main enforcement in [countdown] days

Regulation (EU) 2024/1689 published in the Official Journal and entered into force. At this stage, none of the Act's substantive requirements applied. They phase in over the following two years.

What you should have done:

Begun an AI system inventory across your organization. Identified which teams develop, deploy, or procure AI. Started familiarizing leadership with the risk-based framework.

Art. 113, Regulation (EU) 2024/1689

Eight categories of AI practices became prohibited outright. That includes behavioral manipulation causing harm, exploitation of vulnerabilities, social scoring by public authorities, and predictive policing based solely on profiling. It also covers untargeted facial recognition scraping, emotion recognition in workplaces and schools, biometric categorization inferring protected characteristics, and real-time remote biometric ID in public spaces. Law enforcement gets narrow exceptions on that last one. AI literacy obligations also took effect. Providers and deployers must ensure staff have sufficient understanding of the AI systems they work with.

What you should have done:

Audited all AI systems against the eight prohibited categories. Discontinued or redesigned any system that crosses the line. Rolled out AI literacy training for staff who interact with AI systems. Documented your assessment.

Art. 5 (prohibitions), Art. 4 (AI literacy), Art. 113(a)

The AI Office finalized the General-Purpose AI Code of Practice. This voluntary framework helps GPAI model providers demonstrate compliance with their obligations until harmonized European standards are published. Providers who follow the Code get a presumption of conformity.

What you should have done:

If you provide a GPAI model, reviewed the Code of Practice and assessed whether to adopt it or prepare alternative compliance documentation.

Art. 56, Art. 113

Full GPAI obligations took effect. All GPAI model providers must now maintain technical documentation, provide a public summary of training content, comply with EU copyright rules, and notify the Commission if their model meets the systemic risk threshold (10²⁵ FLOPs). Providers of systemic-risk models face additional requirements: adversarial testing, systemic risk assessment, incident reporting to the AI Office, and adequate cybersecurity protections. Governance provisions also activated: the AI Office, the European AI Board, and national competent authorities became operational.

What you should have done:

GPAI model providers: completed technical documentation, published training data summaries, implemented copyright compliance measures. Systemic-risk providers: established adversarial testing programs and incident reporting protocols. All organizations: identified your national competent authority and AI regulatory sandbox opportunities.

Arts. 51–55 (GPAI), Art. 113(b), Art. 101 (fines)
You are here

The Commission was required to publish practical guidelines on how Article 6 classification works, including a full list of examples showing which AI use cases qualify as high-risk and which don't. This deadline was missed. The Commission has indicated it is integrating feedback and expects to publish a draft for further consultation, with final adoption potentially in March or April 2026.

Why this matters

Without these guidelines, organizations classifying their AI systems as high-risk (or not) are working without official Commission guidance. The post-market monitoring plan template, also due by this date, has not been published either. This creates a compliance gap: obligations are approaching in August 2026, but the tools to determine whether those obligations apply to your system are delayed.

What you should be doing now:

Don't wait for the guidelines to start your classification work. Use the Article 6 criteria and Annex III use-case list directly. Document your reasoning. If the guidelines change your classification when published, you'll have a defensible paper trail showing you acted in good faith.
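The "document your reasoning" advice lends itself to a structured record. A minimal sketch of a classification entry for an audit trail; the Annex III area keywords below are a rough paraphrase of the categories listed on this page, not the legal text:

```python
# Sketch of a defensible classification record while the Article 6(5)
# guidelines remain unpublished. Area keywords paraphrase Annex III
# categories as summarized on this page; verify against the Regulation.
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def classify(system_name, use_areas):
    """Return a classification record suitable for an audit trail."""
    hits = sorted({a.lower() for a in use_areas} & ANNEX_III_AREAS)
    return {
        "system": system_name,
        "annex_iii_matches": hits,
        "provisional_tier": "high-risk" if hits else "review: minimal/limited",
        "basis": "Art. 6(2) + Annex III self-assessment; guidelines pending",
    }

record = classify("CV screening assistant", ["employment"])
print(record["provisional_tier"])  # high-risk
```

Keeping records like this, dated and versioned, is what produces the good-faith paper trail if the eventual guidelines change your classification.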

Art. 6(5), Art. 72(3), Art. 113

This is the headline date. The remainder of the AI Act starts to apply. High-risk AI systems listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, immigration, justice) must meet the full set of provider and deployer obligations. That means risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness, quality management, conformity assessment, and registration. Transparency obligations under Article 50 also take effect. Chatbots, deepfakes, and emotion recognition systems must disclose their AI nature. Member States must have at least one operational AI regulatory sandbox.

What you need in place by this date:

A complete risk management system (Art. 9). Data governance documentation (Art. 10). Technical documentation and record-keeping infrastructure (Arts. 11–12). Transparency mechanisms for user-facing systems (Art. 13). Human oversight protocols (Art. 14). Accuracy, robustness, and cybersecurity testing (Art. 15). A quality management system (Art. 17). Conformity assessment completed and EU declaration of conformity filed (Arts. 43, 47). Registration in the EU database (Art. 49). Post-market monitoring plan (Art. 72).

Proposed Amendment: Digital Omnibus on AI

On November 19, 2025, the European Commission proposed the Digital Omnibus on AI (COM/2025/837). If adopted, it would defer high-risk obligations until compliance support tools (harmonized standards, common specifications, and Commission guidelines) are confirmed available. Long-stop deadlines:

Annex III (standalone high-risk): no later than December 2, 2027
Annex I (product-embedded): no later than August 2, 2028
Not affected by Omnibus: Prohibited practices, GPAI obligations, AI literacy, transparency requirements
Current status: Proposed. Under ordinary legislative procedure. Being examined by European Parliament and Council. Adoption possible by mid-2026.
Recommendation: Plan for August 2, 2026. The Omnibus is a proposal, not law. Even if adopted, the core compliance architecture stays the same. Only the enforcement date shifts.
Source: European Commission, COM/2025/837, 2025/0360 (COD), published Nov 19, 2025
Art. 113, Arts. 6–17, 43, 47, 49–50, 72

High-risk AI systems used as safety components in regulated products (medical devices, vehicles, aviation, machinery, all covered under Annex I harmonization legislation) must comply. These get an extra year beyond Annex III systems. Separately, GPAI models already on the market before August 2, 2025 must be fully compliant by this date.

What to plan for:

If your AI is embedded in a product covered by Annex I legislation (such as a medical device or automotive safety system), your full compliance package must be complete by this date. That includes third-party conformity assessment. GPAI providers who were operational before August 2025 should be well into their compliance programs by now.

Art. 6(1), Art. 111, Art. 113

The Commission evaluates the functioning of the AI Office and assesses the impact and effectiveness of voluntary codes of conduct. This is also the review window for potential amendments to the governance and supervision framework. If the Digital Omnibus is adopted, this date also serves as the long-stop deadline for product-embedded (Annex I) high-risk system compliance.

What to plan for:

Expect the regulatory landscape to evolve based on the Commission's review findings. Organizations should treat this as a checkpoint for governance maturity. Your compliance framework should be operating, documented, and generating evidence by this point.

Art. 112, Art. 113

The final compliance wave. High-risk AI systems operated by public authorities that were placed on the market or put into service before August 2, 2026 must be brought into compliance. Large-scale IT systems listed in Annex X (components of EU freedom, security, and justice systems) have a separate deadline of December 31, 2030.

What to plan for:

Public sector organizations and operators of large-scale IT systems (border management, asylum processing, criminal records databases) should use the intervening years to plan and execute their compliance programs. The extended timeline reflects the scale and complexity of these deployments, not a lower standard of compliance.

Art. 111(2), Annex X, Art. 113

GPAI & the Digital Omnibus

General-purpose AI models don’t fit neatly into the four risk tiers. The Act treats them as a separate category entirely, with their own obligations under Articles 52 through 56.

Here’s what catches people off guard: these rules aren’t coming. They took effect on August 2, 2025. If you provide or integrate a GPAI model, you’re already subject to documentation, copyright compliance, and transparency requirements. Models that cross the systemic risk threshold (10²⁵ FLOPs or Commission designation) carry additional requirements on top of that.
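The 10²⁵ FLOPs threshold comes straight from the Act; estimating whether a training run crosses it does not. The sketch below uses the common ~6ND compute heuristic (6 × parameters × training tokens), which is an assumption on our part, not part of the Regulation, and the model sizes are hypothetical:

```python
SYSTEMIC_RISK_FLOPS = 10**25  # presumption threshold in the Act

def estimated_training_flops(params, tokens):
    """Rough training-compute estimate via the ~6*N*D heuristic.
    The heuristic is an industry rule of thumb, not part of the Act."""
    return 6 * params * tokens

def presumed_systemic_risk(params, tokens):
    """Does the estimate meet or exceed the systemic risk threshold?"""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# Hypothetical model scales, for illustration only
print(presumed_systemic_risk(params=7e9, tokens=2e12))   # False (~8.4e22)
print(presumed_systemic_risk(params=1e12, tokens=2e13))  # True  (~1.2e26)
```

Note the Commission can also designate a model as systemic-risk regardless of compute, so a "False" here is not a safe harbor.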

One more thing worth tracking. The European Commission proposed the Digital Omnibus on AI on November 19, 2025. It’s a proposed amendment (not enacted law) that would tie certain high-risk enforcement deadlines to the availability of harmonized standards, potentially extending some timelines to December 2027 or August 2028. It doesn’t affect GPAI obligations or prohibited practices. But if you’re planning around the August 2026 high-risk deadline, the Omnibus is something your legal team should be watching.

The widget below breaks down exactly what GPAI providers owe under the current, enforceable rules.

GENERAL-PURPOSE AI OBLIGATIONS

GPAI Rules Are Already Live

Since August 2, 2025, every provider of a general-purpose AI model has obligations under the EU AI Act. Models that cross the systemic risk threshold face additional requirements. Here's what applies now.

GPAI Obligations: In Effect
Systemic Risk Rules: In Effect
Code of Practice: Published

Code of Practice

Published: May 2025
Legal basis: Art. 56
Status: Voluntary
Benefit: Presumption of conformity until harmonized standards are published
Alternative: Providers can demonstrate compliance through other adequate means

Penalty for GPAI violations: EUR 15M or 3% of worldwide annual turnover

Whichever is higher. For SMEs and startups, the lower amount applies (Art. 99(6)). Full penalty breakdown in the Fines section below.

Art. 101
Digital Omnibus note: The proposed Digital Omnibus (COM/2025/837) does not affect GPAI obligations or systemic risk requirements. It would extend transparency marking deadlines for pre-existing systems from August 2026 to February 2027. See the Compliance Timeline above for full Omnibus status and legislative tracking.

EU AI ACT Fines & Enforcement

FINES & ENFORCEMENT

What Non-Compliance Costs

The EU AI Act uses a three-tier penalty structure. Fines scale with the severity of the violation and the size of the organization. The highest penalties target prohibited AI practices.

Tier 1 (Art. 99(3)): Prohibited practices (Art. 5). EUR 35M or 7% of worldwide annual turnover.
Tier 2 (Art. 99(4)): Other requirements and GPAI violations. EUR 15M or 3% of worldwide annual turnover.
Tier 3 (Art. 99(5)): Incorrect or misleading information. EUR 7.5M or 1% of worldwide annual turnover.

Whichever amount is higher applies.

Which tier applies to you?

Select your situation to see applicable penalties.

SME & Startup Protection

For SMEs and startups, the lower of the two amounts (percentage or fixed) applies. This cap exists to prevent disproportionate impact on smaller organizations.

Art. 99(6)

EU Institutions

EU institutions, bodies, and agencies face a separate fine structure: EUR 1.5M for prohibited practices violations, EUR 750K for all other violations.

Art. 100

EU AI ACT NEWS

LATEST UPDATES

EU AI Act News

Recent enforcement actions, regulatory guidance, and compliance developments.

Updated February 2026
Feb 2, 2026 Enforcement

Commission Misses Article 6 High-Risk Classification Deadline

The European Commission did not publish its Article 6(5) guidelines on high-risk AI classification or the Article 72(3) post-market monitoring template by the February 2 statutory deadline. A draft is expected by end of February, with final adoption targeted for March or April 2026.

Feb 12, 2026 Guidance

Parliament Publishes Omnibus Analysis: Up to 16-Month Delay Possible

The European Parliament Research Service published its legislative briefing on the Digital Omnibus, confirming that high-risk obligations could be deferred to December 2, 2027 for Annex III systems. The proposal must be adopted before August 2, 2026 for any delay to take effect. Trilogue negotiations expected in spring 2026.

Feb 4, 2026 Enforcement

Ireland Publishes AI Enforcement Bill with 15 Sectoral Regulators

Ireland's AI Bill establishes a distributed enforcement model with 15 sectoral regulators and creates the AI Office of Ireland. Powers include source code access for high-risk systems and fines up to 7% of worldwide turnover. The AI Office must be operational by August 1, 2026.

Feb 19, 2026 International

OECD Releases Cross-Regime AI Compliance Mapping Tool

The OECD published Due Diligence Guidance for Responsible AI with explicit mapping across the EU AI Act, NIST AI RMF, and ISO 42001. Backed by all OECD members plus 17 partner governments and the EU, it's the first government-endorsed tool for multi-jurisdictional AI compliance.

Jan 2026 Standards

Harmonized Standards Portfolio Now Targeted for Q4 2026

CEN-CENELEC's full standards portfolio won't be ready until Q4 2026, well past the August enforcement deadline. The first standard (prEN 18286 on quality management) just closed public enquiry. Fast-track measures may accelerate delivery, but JTC 21 members warn they could undermine consensus.

CEN-CENELEC via CMS Law, Compliance Week
Feb 12, 2026 GPAI

GPAI Code Signatory Taskforce Launches; Meta Remains the Lone Holdout

The GPAI Code of Practice Signatory Taskforce held its first meeting with Google, OpenAI, Microsoft, Anthropic, and Mistral participating. Meta is the only major AI company refusing the voluntary Code, facing increased scrutiny from the AI Office. A second Code on AI content transparency is expected in draft around March 2026.

European Commission via Bird & Bird

EU AI ACT Resources & Tools

Derrick Jackson

Founder: CISSP, CRISC, CCSP

Hello everyone. Please consider helping us grow our community by sharing and/or supporting us on other platforms. It shows us that what we're doing is valued, and it helps us plan and allocate resources to keep improving, since we know others are interested. We'd also welcome a conversation about which knowledge topics or tools we can cover to help you or your organization. Cheers!