AI Governance EU AI ACT Guide
The EU AI Act Takes Full Effect in 2026
The world's first binding AI regulation, with enforcement deadlines already live and high-risk obligations taking effect August 2, 2026. Where does your organization stand?
WHAT IS THE EU AI ACT & WHY IT MATTERS
The EU AI Act (Regulation (EU) 2024/1689) is the first binding AI law anywhere in the world. Published in the Official Journal on July 12, 2024, it creates a risk-based regulatory framework that classifies AI systems into four tiers and assigns obligations based on how much harm a system can cause.
That’s not a future problem.
Prohibited AI practices have been enforceable since February 2, 2025. GPAI model providers have been subject to transparency and documentation obligations since August 2, 2025. The next major deadline, August 2, 2026, activates the full weight of high-risk obligations for systems classified under Annex III. That includes AI used in employment decisions.
Worth pausing on that. Annex III, Section 4 specifically flags AI systems used in recruitment, candidate screening, performance evaluation, and decisions about promotion or termination. If your organization uses AI anywhere in the employee lifecycle, you’re likely looking at high-risk classification. Not possibly. Likely.
The scope assessment above helps you figure out whether the Act applies to your organization at all. The risk tiers below answer a different question: how does it apply? Four categories, four levels of regulatory burden, four different sets of consequences for getting it wrong. The classification your system receives determines everything that follows, from documentation requirements to penalty exposure.
Start with the tier that matches your situation.
Does the EU AI Act Apply to You?
The world's first binding AI regulation, and its reach extends well beyond EU borders. Answer three questions to find out where you stand.
What's your organization's role with AI?
Does your AI reach the EU?
Does your AI touch any of these high-risk areas?
The EU AI Act classifies specific AI use cases as high-risk under Annex III. These carry the heaviest obligations. Select any that apply.
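The three questions above can be compressed into a first-pass decision helper. Here is a minimal Python sketch, assuming simplified role labels and area names; none of these identifiers are official Act terminology, and a real scope assessment needs legal review.

```python
# Illustrative sketch of the three-question scope assessment above.
# Role labels and area names are simplified for the example; they are
# not official EU AI Act terminology.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def scope_assessment(role: str, reaches_eu: bool, use_areas: set) -> str:
    """Rough first-pass answer to 'does the Act apply, and how heavily?'"""
    if role == "none" or not reaches_eu:
        return "likely out of scope"
    if use_areas & ANNEX_III_AREAS:
        return "likely in scope: high-risk obligations apply"
    return "in scope: check transparency and minimal-risk duties"

print(scope_assessment("provider", True, {"employment"}))
# -> likely in scope: high-risk obligations apply
```

The point of the sketch: EU reach and an Annex III use case together are what push an organization from "check the basics" into the high-risk tier.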
EU AI ACT Article Tracker Reference - Example - Demo Only
Disclaimer: This tracker is an illustrative example designed to demonstrate how the spreadsheet can be used. Columns such as Time to Complete, Completion %, ROI Timeline, Cost Implication, Order, and Implementation are pre-filled with sample data to show the tracker’s functionality. If you choose to use this tracker, you’ll need to verify and update each column to reflect your organization’s specific use case, requirements, and compliance obligations.
EU AI ACT Risk Categories
Four Tiers. One Framework.
The EU AI Act classifies every AI system into one of four risk tiers. Each carries different obligations and penalties, with enforcement dates already rolling. Select a tier to see what applies.
Prohibited AI Practices
Article 5
These AI practices are banned outright. No conformity assessment, no exemptions (except narrow law-enforcement exceptions for real-time biometrics). Organizations must have ceased or never deployed these systems.
Banned Systems
High-Risk AI Systems
Articles 6–27
This is where the compliance work lives. Two classification pathways determine whether a system qualifies, and obligations land on both providers and deployers.
Pathway 1: Annex I (Safety Components)
AI used as a safety component in products already covered by EU harmonization legislation. Think machinery, medical devices, vehicles, aviation, toys, lifts, and pressure equipment.
Deadline: August 2, 2027
Pathway 2: Annex III (Standalone Use Cases)
AI deployed in sensitive domains, regardless of the product it's embedded in. Eight categories of high-risk use cases: biometrics, critical infrastructure, education, employment, essential services, law enforcement, immigration, and justice.
Deadline: August 2, 2026
10 Provider Obligations
Article 16 assigns providers of high-risk systems their checklist of duties, running from risk management through registration.
Deployer Obligations
Organizations using high-risk AI systems carry their own obligations under Article 26. That includes fundamental rights impact assessments (Art. 27), human oversight, and monitoring the data going into the system.
Limited Risk: Transparency Obligations
Article 50
These AI systems aren't high-risk, but users and affected persons must be told they're interacting with AI or consuming AI-generated content. No conformity assessment required. Just clear disclosure.
Transparency Requirements
Minimal Risk: No Mandatory Requirements
Voluntary
Most AI systems land here. No registration required, no conformity assessment, no mandatory documentation. Organizations can adopt voluntary codes of conduct, but nothing compels them to.
Common Examples
Even minimal-risk AI benefits from trustworthy practices. Article 95 encourages voluntary codes of conduct around transparency and fairness, plus energy efficiency and environmental sustainability.
Most of the compliance work lands in one place. High-risk.
If your AI system falls under Annex III or qualifies as a safety component in regulated products under Annex I, the Act assigns a specific set of obligations based on your role in the value chain. Provider, deployer, importer, distributor. Each carries different responsibilities, different documentation requirements, and different liability exposure. (If any of those terms are unfamiliar, the EU AI Act Glossary breaks them down.)
The question isn’t whether obligations exist. It’s which ones apply to you.
That depends on two things: what your system does and what role your organization plays. A CTO building an AI-powered hiring tool faces different requirements than a procurement lead sourcing one from a third party. The obligations overlap, but they don’t match. Understanding where your role fits in the governance structure is what separates a compliance plan from a compliance gap.
The explorer below maps it out. Select your role or pick an obligation type to see exactly what the Act requires, which articles apply, and when enforcement begins.
What Does the Act Require from You?
Obligations depend on two things: what your AI system does and what role your organization plays. Pick your role to see what applies, or start from a specific obligation to see who owns it.
CTO / CAIO
The CTO or Chief AI Officer owns system-level governance. That means the risk management framework, the quality management system, and making sure AI literacy programs actually exist across the organization. If your company provides GPAI models, this role also covers those obligations.
Product / Solution Owner
This role makes the classification call. Does the product fall under a prohibited use case? Does it qualify as high-risk? Those decisions determine everything downstream. Product owners also own the transparency requirements that travel with the system to deployers and end users.
ML / Data-Science Lead
The technical implementation of most high-risk obligations falls here. Data governance, model documentation, oversight mechanisms, and performance standards all require hands-on technical work from ML and data science teams.
Legal & Compliance
Legal and compliance teams carry the regulatory interpretation load. Classification assessments, fundamental rights impact assessments, conformity procedures, and penalty exposure analysis all need legal sign-off. If something goes wrong, this is where the enforcement response originates.
Risk & Governance Manager
Risk managers own the ongoing monitoring loop. The Act doesn't treat compliance as a one-time checklist. It requires continuous risk assessment, active post-market surveillance, and mandatory incident reporting when things go wrong.
Procurement Lead
Sourcing AI systems from third parties doesn't remove your obligations. The Act assigns specific duties to importers and distributors, and Article 25 can reclassify a distributor as a provider under certain conditions. Procurement teams need to know where those lines are.
MLOps / DevSecOps
The infrastructure layer. MLOps and DevSecOps teams build and maintain the logging pipelines, the monitoring systems, and the security controls that the Act's technical requirements demand. These aren't optional add-ons. They're compliance infrastructure.
Data Protection Officer
The AI Act and GDPR overlap most visibly here. Data governance requirements under the Act intersect with existing data protection obligations. The fundamental rights impact assessment adds a new layer on top of the DPIA process DPOs already manage.
Deployer / AI Operations
Not every organization builds AI. Many buy it, integrate it, and operate it. The Act calls these organizations deployers, and it holds them directly accountable. Deployers must follow the provider's instructions, assign qualified human oversight, monitor system performance, retain logs, and report serious incidents. If you're using a third-party AI system under your authority, these obligations are yours.
AI Auditor
The Act creates a natural internal audit function. Someone needs to verify the risk management system works, the conformity assessment was conducted properly, and the post-market monitoring plan produces useful data. External notified bodies handle third-party assessments for biometric systems, but internal audit owns the ongoing verification loop.
Risk Management System
Article 9
A continuous, iterative process that runs throughout the AI system's entire lifecycle. Not a one-time risk assessment. The Act requires identification of known and foreseeable risks, estimation of those risks, adoption of mitigation measures, and testing to validate those measures work.
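The Article 9 loop (identify, estimate, mitigate, test, repeat) can be sketched as a simple risk register. The severity/likelihood scale, the review threshold, and the field names below are illustrative assumptions, not values from the Act.

```python
# Minimal sketch of the Article 9 cycle: identify -> estimate ->
# mitigate -> test, repeated across the lifecycle. The 1-5 scales and
# the threshold of 9 are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int                      # 1 (low) .. 5 (critical)
    likelihood: int                    # 1 (rare) .. 5 (frequent)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

def review_cycle(risks: list, threshold: int = 9) -> list:
    """One iteration: flag risks whose residual score still needs work."""
    return [r for r in risks if r.score >= threshold]

register = [
    Risk("Biased screening of candidates", severity=4, likelihood=3,
         mitigations=["bias testing on Art. 10 datasets"]),
    Risk("Log pipeline outage", severity=2, likelihood=2),
]
print([r.description for r in review_cycle(register)])
# -> ['Biased screening of candidates']
```

The design point carried over from the Act: the register is re-run each cycle, so a risk whose mitigation lowers its residual score drops out of the flagged list on the next pass.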
Data Governance
Article 10
Training, validation, and testing datasets must meet quality criteria relevant to the system's intended purpose. That includes examining datasets for biases, gaps, and inadequacies. Special category data under GDPR can be processed for bias detection, but only under strict safeguards outlined in Art. 10(5).
Technical Documentation
Article 11
Detailed documentation must be prepared before the system is placed on the market and kept up to date. Covers general system description, design specifications, development process, training methodology, testing and validation results, and monitoring capabilities. Annex IV spells out the minimum content requirements.
Record-Keeping
Article 12
High-risk AI systems must include automatic logging capabilities. Logs should enable traceability of the system's operation and allow for post-deployment monitoring. Logs must be retained for a minimum of 6 months (Arts. 19, 26(6)), and documentation must be kept for 10 years (Art. 18).
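As a rough illustration of what Art. 12-style logging infrastructure might record, here is a sketch that stamps each event with a retention horizon. The JSON schema, field names, and system identifier are assumptions for the example, not a prescribed format.

```python
# Sketch of an automatic event log with the retention periods attached
# (6+ months for operational logs, 10 years for documentation). The
# record schema and the example system name are illustrative.

import json
from datetime import datetime, timedelta, timezone

LOG_RETENTION = timedelta(days=183)        # >= 6 months (Arts. 19, 26(6))
DOC_RETENTION = timedelta(days=365 * 10)   # 10 years (Art. 18)

def log_event(system_id: str, event: str, outcome: str) -> str:
    """Emit one traceability record as a JSON line."""
    now = datetime.now(timezone.utc)
    record = {
        "system_id": system_id,
        "event": event,
        "outcome": outcome,
        "timestamp": now.isoformat(),
        "retain_until": (now + LOG_RETENTION).isoformat(),
    }
    return json.dumps(record)

line = log_event("hiring-ranker-v2", "candidate_scored", "shortlisted")
print(json.loads(line)["event"])  # -> candidate_scored
```

Emitting one self-describing line per decision is what makes the traceability requirement auditable later; the retention stamp lets a cleanup job enforce the minimum holding period mechanically.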
Transparency
Article 13
Providers must supply deployers with clear instructions for use. That includes the system's intended purpose, level of accuracy, known limitations, and any circumstances that could lead to risks. The information needs to be accessible and understandable to the people who'll actually operate the system.
Human Oversight
Article 14
Systems must be designed so natural persons can effectively oversee them. That means tools for understanding outputs, the ability to intervene in real time, and a mechanism to halt or override the system. Oversight measures should prevent or minimize risks to health, safety, and fundamental rights.
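One way to picture those three capabilities (understand, intervene, halt) is a review gate in front of the system's output. The confidence threshold and the function shape below are illustrative assumptions, not anything the Act prescribes.

```python
# Sketch of an Art. 14-style oversight gate: the system proposes, a
# human can approve, override, or halt. The 0.8 confidence threshold
# and the API shape are illustrative assumptions.

from typing import Optional

def oversee(model_decision: str, confidence: float,
            reviewer_action: Optional[str] = None) -> str:
    """Route low-confidence or contested outputs through a human reviewer."""
    if reviewer_action == "halt":
        return "halted"                      # stop-the-system mechanism
    if reviewer_action == "override":
        return "overridden by reviewer"
    if confidence < 0.8 and reviewer_action is None:
        return "pending human review"        # intervene before effect
    return model_decision

print(oversee("reject", 0.65))          # -> pending human review
print(oversee("reject", 0.95, "halt"))  # -> halted
```

The essential property is that the halt and override paths win unconditionally: no confidence score lets the model's output bypass a reviewer who has intervened.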
Accuracy & Robustness
Article 15
Systems must achieve appropriate levels of accuracy for their intended purpose, declared in the instructions for use. They need to be resilient against errors and inconsistencies, and include cybersecurity protections against unauthorized modification, data poisoning, adversarial examples, and model flaws.
Quality Management System
Article 17
Providers must put a QMS in place that covers compliance strategy, design and development procedures, testing and validation techniques, data management practices, resource allocation, and post-market monitoring processes. The QMS must be proportionate to the size of the organization. It needs to be documented in writing and maintained throughout the system's lifecycle.
Conformity Assessment
Article 43
Before placing a high-risk system on the market, providers must complete a conformity assessment. Most systems use internal assessment (Annex VI), but biometric identification systems require third-party assessment through a notified body (Annex VII). The assessment confirms the system meets all applicable requirements.
Post-Market Monitoring
Article 72
Providers must establish a post-market monitoring system proportionate to the nature of the AI technology and the risks involved. It must actively collect and analyze data on system performance throughout its operational life. The monitoring plan is part of the technical documentation under Annex IV. Serious incidents must be reported within 15 days (Art. 73).
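The Art. 73 reporting clock lends itself to a small worked example. This sketch tracks only the default 15-day window; the Act sets shorter windows for certain incident types, which the sketch deliberately ignores.

```python
# Sketch of the Art. 73 reporting clock: serious incidents must be
# reported within 15 days of awareness. Shorter windows apply to some
# incident categories; this example models only the 15-day default.

from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)

def report_deadline(aware_on: date) -> date:
    """Latest date the serious-incident report may be filed."""
    return aware_on + REPORTING_WINDOW

def is_overdue(aware_on: date, today: date) -> bool:
    return today > report_deadline(aware_on)

aware = date(2026, 9, 1)
print(report_deadline(aware))                 # -> 2026-09-16
print(is_overdue(aware, date(2026, 9, 10)))   # -> False
```

The clock starts at awareness, not at occurrence, which is why incident-detection tooling matters: a monitoring gap delays awareness but not the obligation.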
Where You Are. What's Next.
The EU AI Act rolls out in phases from 2024 through 2030. Some obligations are already enforceable. Others depend on guidance that hasn't arrived yet. Here's the full picture, and where the pressure points sit right now.
Regulation (EU) 2024/1689 published in the Official Journal and entered into force. At this stage, none of the Act's substantive requirements applied. They phase in over the following two years.
Begun an AI system inventory across your organization. Identified which teams develop, deploy, or procure AI. Started familiarizing leadership with the risk-based framework.
Eight categories of AI practices were banned outright. That includes behavioral manipulation causing harm, exploitation of vulnerabilities, social scoring by public authorities, and predictive policing based solely on profiling. It also covers untargeted facial recognition scraping, emotion recognition in workplaces and schools, biometric categorization inferring protected characteristics, and real-time remote biometric ID in public spaces. Law enforcement gets narrow exceptions on that last one. AI literacy obligations also took effect. Providers and deployers must ensure staff have sufficient understanding of the AI systems they work with.
Audited all AI systems against the eight prohibited categories. Discontinued or redesigned any system that crosses the line. Rolled out AI literacy training for staff who interact with AI systems. Documented your assessment.
The AI Office finalized the General-Purpose AI Code of Practice. This voluntary framework helps GPAI model providers demonstrate compliance with their obligations until harmonized European standards are published. Providers who follow the Code get a presumption of conformity.
If you provide a GPAI model, reviewed the Code of Practice and assessed whether to adopt it or prepare alternative compliance documentation.
Full GPAI obligations took effect. All GPAI model providers must now maintain technical documentation, provide a public summary of training content, comply with EU copyright rules, and notify the Commission if their model meets the systemic risk threshold (10²⁵ FLOPs). Providers of systemic-risk models face additional requirements: adversarial testing, systemic risk assessment, incident reporting to the AI Office, and adequate cybersecurity protections. Governance provisions also activated: the AI Office, the European AI Board, and national competent authorities became operational.
GPAI model providers: completed technical documentation, published training data summaries, implemented copyright compliance measures. Systemic-risk providers: established adversarial testing programs and incident reporting protocols. All organizations: identified your national competent authority and AI regulatory sandbox opportunities.
The Commission was required to publish practical guidelines on how Article 6 classification works, including a full list of examples showing which AI use cases qualify as high-risk and which don't. This deadline was missed. The Commission has indicated it is integrating feedback and expects to publish a draft for further consultation, with final adoption potentially in March or April 2026.
Without these guidelines, organizations classifying their AI systems as high-risk (or not) are working without official Commission guidance. The post-market monitoring plan template, also due by this date, has not been published either. This creates a compliance gap: obligations are approaching in August 2026, but the tools to determine whether those obligations apply to your system are delayed.
Don't wait for the guidelines to start your classification work. Use the Article 6 criteria and Annex III use-case list directly. Document your reasoning. If the guidelines change your classification when published, you'll have a defensible paper trail showing you acted in good faith.
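That paper trail can be as simple as a structured record per classification call. Here is a sketch; the record shape, system name, and field names are entirely hypothetical, chosen only to show what "document your reasoning" might look like in practice.

```python
# Sketch of a classification audit-trail record. The schema is an
# illustrative assumption, not a prescribed or official format.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ClassificationRecord:
    system_name: str
    decision: str              # e.g. "high-risk (Annex III, point 4)"
    reasoning: str
    criteria_consulted: tuple  # which provisions were checked
    decided_on: date

rec = ClassificationRecord(
    system_name="cv-screening-tool",          # hypothetical system
    decision="high-risk (Annex III, point 4)",
    reasoning="Ranks job applicants; falls under employment use cases.",
    criteria_consulted=("Article 6", "Annex III"),
    decided_on=date(2026, 1, 15),
)
print(asdict(rec)["decision"])  # -> high-risk (Annex III, point 4)
```

One record per system, dated and tied to the provisions consulted, is what turns a judgment call into the defensible good-faith evidence described above.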
This is the headline date. The remainder of the AI Act starts to apply. High-risk AI systems listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, immigration, justice) must meet the full set of provider and deployer obligations. That means risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness, quality management, conformity assessment, and registration. Transparency obligations under Article 50 also take effect. Chatbots, deepfakes, and emotion recognition systems must disclose their AI nature. Member States must have at least one operational AI regulatory sandbox.
A complete risk management system (Art. 9). Data governance documentation (Art. 10). Technical documentation and record-keeping infrastructure (Arts. 11–12). Transparency mechanisms for user-facing systems (Art. 13). Human oversight protocols (Art. 14). Accuracy, robustness, and cybersecurity testing (Art. 15). A quality management system (Art. 17). Conformity assessment completed and EU declaration of conformity filed (Arts. 43, 47). Registration in the EU database (Art. 49). Post-market monitoring plan (Art. 72).
On November 19, 2025, the European Commission proposed the Digital Omnibus on AI (COM/2025/837). If adopted, it would defer high-risk obligations until compliance support tools (harmonized standards, common specifications, and Commission guidelines) are confirmed available. Long-stop deadlines: December 2, 2027 for Annex III systems and August 2, 2028 for Annex I systems.
High-risk AI systems used as safety components in regulated products (medical devices, vehicles, aviation, machinery, all covered under Annex I harmonization legislation) must comply. These get an extra year beyond Annex III systems. Separately, GPAI models already on the market before August 2, 2025 must be fully compliant by this date.
If your AI is embedded in a product covered by Annex I legislation (such as a medical device or automotive safety system), your full compliance package must be complete by this date. That includes third-party conformity assessment. GPAI providers who were operational before August 2025 should be well into their compliance programs by now.
The Commission evaluates the functioning of the AI Office and assesses the impact and effectiveness of voluntary codes of conduct. This is also the review window for potential amendments to the governance and supervision framework. If the Digital Omnibus is adopted, this date also serves as the long-stop deadline for product-embedded (Annex I) high-risk system compliance.
Expect the regulatory landscape to evolve based on the Commission's review findings. Organizations should treat this as a checkpoint for governance maturity. Your compliance framework should be operating, documented, and generating evidence by this point.
The final compliance wave. High-risk AI systems operated by public authorities that were placed on the market or put into service before August 2, 2026 must be brought into compliance. Large-scale IT systems listed in Annex X (components of EU freedom, security, and justice systems) have a separate deadline of December 31, 2030.
Public sector organizations and operators of large-scale IT systems (border management, asylum processing, criminal records databases) should use the intervening years to plan and execute their compliance programs. The extended timeline reflects the scale and complexity of these deployments, not a lower standard of compliance.
GPAI & the Digital Omnibus
General-purpose AI models don’t fit neatly into the four risk tiers. The Act treats them as a separate category entirely, with their own obligations under Articles 51 through 56.
Here’s what catches people off guard: these rules aren’t coming. They took effect on August 2, 2025. If you provide or integrate a GPAI model, you’re already subject to documentation, copyright compliance, and transparency requirements. Models that cross the systemic risk threshold (10²⁵ FLOPs or Commission designation) carry additional requirements on top of that.
One more thing worth tracking. The European Commission proposed the Digital Omnibus on AI on November 19, 2025. It’s a proposed amendment (not enacted law) that would tie certain high-risk enforcement deadlines to the availability of harmonized standards, potentially extending some timelines to December 2027 or August 2028. It doesn’t affect GPAI obligations or prohibited practices. But if you’re planning around the August 2026 high-risk deadline, the Omnibus is something your legal team should be watching.
The widget below breaks down exactly what GPAI providers owe under the current, enforceable rules.
GPAI Rules Are Already Live
Since August 2, 2025, every provider of a general-purpose AI model has obligations under the EU AI Act. Models that cross the systemic risk threshold face additional requirements. Here's what applies now.
All GPAI Models
Art. 53
Four core obligations apply to every GPAI model provider. No exceptions, no size-based exemptions. These took effect August 2, 2025.
Technical Documentation
Maintain and make available to the AI Office and national authorities on request. Covers model architecture, training process, and evaluation results.
Training Data Summary
Publish a sufficiently detailed summary of training content, following the template provided by the AI Office.
Copyright Compliance
Comply with EU copyright rules. Identify and respect opt-outs expressed by rights holders under the text and data mining provisions.
Systemic Risk Notification
Notify the European Commission without undue delay if your model meets the 10²⁵ FLOPs threshold or is designated by the Commission as posing systemic risk.
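The presumption behind this notification duty is a straightforward threshold test. A minimal sketch; the function name and API are illustrative, and real compute accounting (what counts toward cumulative training FLOPs) is considerably more involved.

```python
# Sketch of the Art. 51(2) presumption: cumulative training compute at
# or above 10^25 FLOPs triggers the systemic-risk notification duty.
# The Commission can also designate models below the line.

SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float,
                           commission_designated: bool = False) -> bool:
    """True if the model is presumed (or designated) systemic-risk."""
    return training_flops >= SYSTEMIC_RISK_FLOPS or commission_designated

print(presumed_systemic_risk(3.2e25))  # -> True
print(presumed_systemic_risk(8e23))    # -> False
print(presumed_systemic_risk(8e23, commission_designated=True))  # -> True
```

Note the asymmetry: crossing the compute line is sufficient but not necessary, because designation can pull a smaller model into the systemic-risk regime on other criteria.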
Systemic Risk Models
Arts. 52–55
Models at or above the 10²⁵ FLOPs training compute threshold are presumed to pose systemic risk (Art. 51(2)). The Commission can also designate models below this threshold based on other criteria.
Systemic risk models must meet all four base obligations above, plus these additional requirements:
Model Evaluations
Conduct standardized evaluations, including adversarial testing, with protocols proportionate to the model's capabilities.
Systemic Risk Assessment
Identify and mitigate systemic risks, including through adherence to the Code of Practice or other adequate means.
Incident Reporting
Track and report serious incidents to the AI Office without undue delay.
Cybersecurity
Maintain adequate cybersecurity protections for the model and its physical infrastructure.
Code of Practice
Published May 2025
Penalties for GPAI providers run up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher (Art. 101). For SMEs and startups, the lower amount applies (Art. 99(6)). Full penalty breakdown in the Fines section below.
EU AI ACT Fines & Enforcement
What Non-Compliance Costs
The EU AI Act uses a three-tier penalty structure. Fines scale with the severity of the violation and the size of the organization. The highest penalties target prohibited AI practices.
Whichever amount is higher applies.
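The higher-of rule is easy to compute. A worked sketch using the Article 99 ceilings (EUR 35M / 7% for prohibited practices, EUR 15M / 3% for most other violations, EUR 7.5M / 1% for supplying misleading information to authorities). These are statutory maxima only; actual fines are set case by case by national authorities.

```python
# Worked sketch of the 'whichever is higher' rule, using the Art. 99
# tier ceilings. These are statutory maxima, not predicted fines.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover: float, sme: bool = False) -> float:
    """Ceiling for a violation tier, given the preceding year's turnover."""
    fixed, pct = TIERS[tier]
    amounts = (fixed, pct * worldwide_turnover)
    # Art. 99(6): the LOWER of the two caps applies to SMEs and startups.
    return min(amounts) if sme else max(amounts)

print(max_fine("prohibited_practices", 1_000_000_000))            # -> 70000000.0
print(max_fine("prohibited_practices", 1_000_000_000, sme=True))  # -> 35000000
```

The example makes the scaling visible: for a EUR 1B-turnover company, the percentage cap (7% = EUR 70M) dominates the fixed cap, while the SME rule flips the comparison to the smaller of the two.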
Which tier applies to you?
Select your situation to see applicable penalties.
SME & Startup Protection
For SMEs and startups, the lower of the two amounts (percentage or fixed) applies (Art. 99(6)). This cap exists to prevent disproportionate impact on smaller organizations.
EU Institutions
EU institutions, bodies, and agencies face a separate fine structure under Article 100: EUR 1.5M for prohibited practices violations, EUR 750K for all other violations.
How Authorities Calculate Fines
Fines aren't automatic maximums. National authorities consider several factors when determining the actual penalty amount:
Fines are calculated on worldwide annual turnover of the preceding financial year (Art. 99(7)). If turnover data isn't available, authorities can estimate it.
EU AI ACT NEWS
EU AI Act News
Recent enforcement actions, regulatory guidance, and compliance developments.
Commission Misses Article 6 High-Risk Classification Deadline
The European Commission did not publish its Article 6(5) guidelines on high-risk AI classification or the Article 72(3) post-market monitoring template by the February 2 statutory deadline. A draft is expected by end of February, with final adoption targeted for March or April 2026.
Parliament Publishes Omnibus Analysis: Up to 16-Month Delay Possible
The European Parliament Research Service published its legislative briefing on the Digital Omnibus, confirming that high-risk obligations could be deferred to December 2, 2027 for Annex III systems. The proposal must be adopted before August 2, 2026 for any delay to take effect. Trilogue negotiations expected in spring 2026.
Ireland Publishes AI Enforcement Bill with 15 Sectoral Regulators
Ireland's AI Bill establishes a distributed enforcement model with 15 sectoral regulators and creates the AI Office of Ireland. Powers include source code access for high-risk systems and fines up to 7% of worldwide turnover. The AI Office must be operational by August 1, 2026.
OECD Releases Cross-Regime AI Compliance Mapping Tool
The OECD published Due Diligence Guidance for Responsible AI with explicit mapping across the EU AI Act, NIST AI RMF, and ISO 42001. Backed by all OECD members plus 17 partner governments and the EU, it's the first government-endorsed tool for multi-jurisdictional AI compliance.
Harmonized Standards Portfolio Now Targeted for Q4 2026
CEN-CENELEC's full standards portfolio won't be ready until Q4 2026, well past the August enforcement deadline. The first standard (prEN 18286 on quality management) just closed public enquiry. Fast-track measures may accelerate delivery, but JTC 21 members warn they could undermine consensus.
GPAI Code Signatory Taskforce Launches; Meta Remains the Lone Holdout
The GPAI Code of Practice Signatory Taskforce held its first meeting with Google, OpenAI, Microsoft, Anthropic, and Mistral participating. Meta is the only major AI company refusing the voluntary Code, facing increased scrutiny from the AI Office. A second Code on AI content transparency is expected in draft around March 2026.
EU AI ACT Resources & Tools
Go Deeper
Reference tools, templates, and the full article text. Everything you need to move from understanding to implementation.
EU AI Act Glossary
Definitions for every key term in the regulation, from "AI system" to "substantial modification."
View glossary
Risk Assessment Checklist
10-section compliance template covering classification, risk management, data governance, FRIA, and post-market monitoring.
Download template
Related Resources
QMS Requirements
Article 17
The EU AI Act requires providers of high-risk AI systems to establish a Quality Management System. The QMS must ensure responsible development, deployment, and management that upholds safety, transparency, accountability, and fundamental rights protection.
Purpose and Framework
The QMS must ensure compliance with the EU AI Act and establish sound quality management practices to mitigate risks and ensure trustworthiness. Documentation must be systematic and orderly, presented as written policies, procedures, and instructions.
Integration Options
Providers of AI systems already covered by Union harmonization legislation may integrate QMS elements into their existing quality systems, provided equivalent protection levels are achieved.
Public authorities may implement QMS requirements within national or regional quality management systems.
Microenterprises are permitted to comply with certain QMS elements in a simplified manner.
Core QMS Requirements (Article 17)
Strategy for Regulatory Compliance
Compliance with conformity assessment procedures. Processes for managing modifications to high-risk AI systems. Documentation of compliance verification methods.
Management and Organization
Management Responsibilities: Clearly defined allocation of QMS management roles.
Staff Competence: Measures ensuring personnel have necessary competence and training.
Management Review: Periodic review procedures to ensure QMS suitability, adequacy, and effectiveness.
Technical Standards and Specifications
Documentation of applied technical standards. When harmonized standards are not fully applicable or do not cover all requirements, documentation of alternative compliance methods is required.
Data Management
All data operations performed before market placement or service deployment must be documented. This covers nine areas: acquisition, collection, analysis, labeling, storage, filtration, mining, aggregation, and retention.
Special emphasis on ensuring training, validation, and testing datasets are relevant, representative, error-free, and complete to the best extent possible for their intended purposes.
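A documentation checklist over those nine data operations might look like the following sketch. The status-tracking approach is an assumption for illustration, not a prescribed format.

```python
# Sketch of an Art. 17 data-management checklist covering the nine
# documented operations listed above. Status values are illustrative.

DATA_OPERATIONS = (
    "acquisition", "collection", "analysis", "labeling", "storage",
    "filtration", "mining", "aggregation", "retention",
)

def undocumented(docs: dict) -> list:
    """Return the operations still missing written documentation."""
    return [op for op in DATA_OPERATIONS if not docs.get(op)]

docs = {op: True for op in DATA_OPERATIONS}
docs["labeling"] = False       # e.g. labeling guidelines not yet written
print(undocumented(docs))      # -> ['labeling']
```

Running a gap check like this per dataset, before market placement, is one practical way to show the "systematic and orderly" documentation the QMS requires.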
Documentation Requirements
Full QMS documentation. Technical documentation for each high-risk AI system. Procedures ensuring the QMS remains adequate and effective. Provider name, address, and a list of AI systems covered by the QMS.
Post-Market Monitoring
Documented monitoring plan for deployed systems. Collection and review of usage experience. Identification of needs for immediate corrective or preventive actions. Continuous improvement mechanisms.
Authority Communication
Established procedures for interacting with competent authorities. Reporting mechanisms for serious incidents. Documentation accessibility for regulatory reviews.
Conformity Assessment and QMS
During conformity assessment procedures, providers must verify QMS compliance with Article 17 requirements. Design, development, and testing systems undergo examination and ongoing surveillance. Applications for notified body assessment must include full QMS documentation.
Notified Body Requirements
Notified bodies conducting QMS assessments must satisfy organizational requirements, quality management standards, resource adequacy, process requirements, and cybersecurity measures.
Alignment with International Standards
The EU AI Act's QMS requirements align with broader AI governance frameworks such as ISO/IEC 42001. Both emphasize risk management processes, AI impact assessments, full lifecycle management, and a culture of continuous improvement.
Implementation Timeline
Providers should begin QMS implementation well before the August 2026 (Annex III) and August 2027 (Annex I) deadlines to ensure compliance readiness and allow time for refinement based on operational experience.
EU AI Act Articles
Regulation (EU) 2024/1689
Quick-reference guide to all 113 articles across 13 chapters. Click any article number to read the full text on the official reference site.
Derrick Jackson
Founder: CISSP, CRISC, CCSP
Hello everyone! Please consider helping us grow our community by sharing and/or supporting us on other platforms. This allows us to show that what we are doing is valued, and it lets us plan and allocate resources to improve our work, since we then know others are interested and supportive. We would also welcome a conversation about which knowledge topics or tools we can cover to help you or your organization. Cheers!