EU AI Act and Agents
High-Risk Classification and Compliance Requirements
Regulation (EU) 2024/1689 -- the EU Artificial Intelligence Act -- is the world's first comprehensive legal framework for artificial intelligence. Published on 12 July 2024, it entered into force on 1 August 2024. Enforcement is coordinated by the EU AI Office (established February 2024) and national market surveillance authorities. It is not a directive that member states transpose at their discretion. It is a regulation: directly applicable, legally binding, and enforceable across all 27 EU member states from the date each provision takes effect.
The Act applies to providers placing AI systems on the EU market and deployers using them within the EU. It also reaches providers and deployers located outside the EU when their AI system's output is used inside the EU. That extraterritorial reach means any organization deploying AI agents that touch EU citizens, EU data, or EU operations must comply -- regardless of where the agent is hosted or the company is headquartered.
The Act uses a risk-based approach. Not every AI system faces the same obligations. The framework classifies systems into four risk tiers -- prohibited, high-risk, limited risk, and minimal risk -- with compliance obligations scaled to the potential for harm. For agentic AI systems, the high-risk tier is where most of the regulatory weight falls, and most enterprise agent deployments will need to engage with it directly.
Before you can classify an AI agent under the Act, you need to understand what the Act considers an AI system. Article 3(1) provides the legal definition:
"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
-- Article 3(1), EU AI Act

Agentic AI systems fall squarely within this definition. They operate with "varying levels of autonomy" -- that is the defining characteristic of an agent. They exhibit "adaptiveness after deployment" through memory, learning, and context accumulation. And they generate outputs that "influence physical or virtual environments" through tool use and autonomous actions: sending emails, executing code, modifying databases, invoking APIs.
The definition is deliberately broad. It does not reference specific technologies, model architectures, or training methods. This technology-neutral approach means the Act will cover agent architectures that do not yet exist. For the same reason, it captures every major agent pattern in production today: single agents, multi-agent orchestration, hierarchical delegation chains, and tool-augmented reasoning loops.
Two other definitions from Article 3 matter for agent compliance. A provider (Article 3(3)) is "a natural or legal person... that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark." A deployer (Article 3(4)) is "a natural or legal person... that uses an AI system under its authority." When you build and deploy agents for enterprise use, you need to know which hat you are wearing. The obligations differ.
The Act structures its entire compliance regime around four risk tiers (explore the full text via the AI Act Explorer). Each tier carries different obligations and penalties. Understanding where your agent falls is the first step in any compliance program.
A critical point for agent builders: all agents that interact with humans must comply with Article 50 transparency requirements regardless of their overall risk classification. Article 50(1) requires that "AI systems intended to interact with natural persons are designed and developed in such a way that the natural person is informed that they are interacting with an AI system," with an exception only where this is obvious from the context of use. Every chatbot agent, email agent, and phone agent falls under this requirement. Agents that generate synthetic text, images, or audio must also ensure those outputs are machine-readable and detectable as AI-generated (Article 50(2)).
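A minimal sketch of what Article 50 compliance can look like at the code level: disclosing the AI's identity at the start of an interaction and attaching machine-readable provenance metadata to generated content. The class, function, and field names here are illustrative assumptions, not part of any real framework or legal requirement.

```python
# Hypothetical sketch: prepend an Article 50(1)-style disclosure to an agent's
# first message in a session, and tag generated text with provenance metadata
# so downstream systems can detect it as AI-generated (cf. Article 50(2)).
# All names and fields are illustrative assumptions.
from dataclasses import dataclass, field

AI_DISCLOSURE = "You are interacting with an AI system."

@dataclass
class AgentMessage:
    text: str
    metadata: dict = field(default_factory=dict)

def make_response(text: str, first_in_session: bool) -> AgentMessage:
    # Disclose AI identity at the start of the interaction.
    body = f"{AI_DISCLOSURE}\n\n{text}" if first_in_session else text
    # Machine-readable provenance marker for synthetic content.
    return AgentMessage(text=body, metadata={"generated_by": "ai_system", "synthetic": True})

msg = make_response("Your refund request has been filed.", first_in_session=True)
```

In practice the disclosure would be surfaced by the UI layer and the metadata embedded via a content-provenance standard; the point is that both obligations are design-time properties of the response path, not afterthoughts.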
Agents with access to personal data and autonomous decision-making could also cross into prohibited territory if they perform social scoring, exploit vulnerabilities of specific groups, or predict criminal offending based solely on profiling or personality traits. The prohibited practices list in Article 5 is not hypothetical for agents -- it defines hard boundaries that agent designers must respect.
Article 6 establishes two pathways through which an AI system becomes classified as high-risk.
Pathway 1 -- Article 6(1): Safety components in regulated products. AI systems are high-risk when they are intended to be used as a safety component of a product covered by the Union harmonisation legislation listed in Annex I (or are themselves such a product) and are required to undergo third-party conformity assessment. Annex I lists 20 pieces of legislation covering machinery, toys, medical devices, vehicles, aviation, and rail equipment. These obligations take full effect on 2 August 2027.
Pathway 2 -- Article 6(2): Standalone high-risk systems in Annex III areas. AI systems that fall into any of the eight high-risk areas listed in Annex III are classified as high-risk. This is the pathway that captures most enterprise agent deployments. These obligations apply from 2 August 2026.
Annex III defines eight areas. Each one maps directly to scenarios where organizations are already deploying or planning to deploy AI agents.
Remote biometric identification systems, biometric categorisation based on sensitive or protected attributes, and emotion recognition systems. Agents that process biometric data for identification or categorisation purposes fall directly into this area.
AI systems intended as safety components in the management and operation of critical digital infrastructure, road traffic, or supply of water, gas, heating, or electricity.
Agent scenario: An agent managing critical infrastructure operations -- monitoring power grid loads, routing traffic signals, or managing water treatment systems -- is classified as high-risk.
AI systems for determining access or admission to educational institutions, evaluating learning outcomes (including when used to steer learning), assessing appropriate education level for an individual, and monitoring or detecting prohibited behavior during tests.
Agent scenario: An AI agent that evaluates student submissions or determines admissions eligibility is high-risk under Area 3(a) and 3(b).
AI systems for recruitment and selection -- including targeted job advertisements, filtering applications, and evaluating candidates (Area 4(a)). Also covers decisions on work-related terms, promotion, termination, task allocation based on behavior or traits, and monitoring or evaluating performance (Area 4(b)).
Agent scenario: An agent that autonomously screens job applications is high-risk under Area 4(a). An agent that monitors employee performance and recommends terminations is high-risk under Area 4(b).
AI systems for evaluating eligibility for public assistance benefits or services, including healthcare (Area 5(a)); evaluating creditworthiness or establishing credit scores, except fraud detection (Area 5(b)); risk assessment and pricing for life and health insurance (Area 5(c)); and evaluating or classifying emergency calls or dispatching emergency services, including emergency healthcare triage (Area 5(d)).
Agent scenario: An agent that evaluates credit applications is high-risk under Area 5(b). An agent that triages patient cases in healthcare is high-risk under Area 5(a) or 5(d). An agent managing emergency dispatch is high-risk under Area 5(d).
AI systems for assessing risk of becoming a crime victim, polygraph-like tools, evaluating evidence reliability, assessing offending or re-offending risk, and profiling in detection, investigation, or prosecution of criminal offences.
AI systems for polygraph-like tools, assessing risks at borders (security, irregular migration, health), examining asylum, visa, or residence permit applications, and detecting, recognising, or identifying persons in the migration context.
AI systems assisting judicial authorities in researching and interpreting facts and law and in applying law to facts, including alternative dispute resolution (Area 8(a)). Also covers AI systems intended to influence election or referendum outcomes or voting behavior (Area 8(b)).
Agent scenario: An agent that assists judges in legal research and case analysis is high-risk under Area 8(a).
The governance crosswalk (CW-001) makes the agent exposure explicit: "Agentic AI systems that autonomously execute multi-step tasks, access external tools, or make decisions affecting individuals are highly likely to fall under high-risk classification. Organizations must map each agent's capability envelope -- including tool access, decision authority, and action scope -- against Annex III categories."
Article 6(3) provides a narrow exception: an Annex III system is not high-risk if it "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons" because it performs a narrow procedural task, improves a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task. This exception is unlikely to apply to most agentic systems, because agents by definition perform autonomous multi-step tasks, not narrow procedural tasks.
Once an agent is classified as high-risk, Articles 9 through 15 impose a set of mandatory obligations. These are not optional best practices. They are legal requirements with enforcement teeth. Here is what each obligation requires and why agents create specific compliance challenges.
Article 15 -- Accuracy, Robustness and Cybersecurity adds a final mandatory layer: high-risk systems must "achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently throughout their lifecycle." Article 15(4) specifically requires resilience "against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities." For agents, this means the entire compositional attack surface created by tool integration must be addressed. As the governance crosswalk (CW-022) notes: "Each MCP server, API endpoint, or external tool the agent can invoke is both a dependency and an attack vector." The Agentic AI Threat Landscape article covers these attack surfaces in detail.
The Act assigns distinct obligations depending on whether you are a provider or a deployer. In many enterprise agent deployments, organizations act as both -- they build agents (provider) and use them internally (deployer). The obligations stack.
Provider obligations include:
- Ensure compliance with Chapter III, Section 2 before market placement
- Establish quality management system (Article 17) with documented policies for design, development, testing, validation, data management, and risk management
- Draw up technical documentation (Article 11, Annex IV)
- Keep automatically generated logs (Article 19)
- Carry out conformity assessment (Article 43)
- Draw up EU declaration of conformity (Article 47)
- Affix CE marking (Article 48)
- Register in the EU database before market placement (Article 49)
- Establish post-market monitoring system (Article 72)
- Report serious incidents within 15 days (Article 73)
Deployer obligations include:
- Take appropriate technical and organisational measures to use the system in accordance with instructions of use (Article 26(1))
- Assign human oversight to competent, trained, and authorised persons
- Ensure input data is relevant and sufficiently representative for the intended purpose
- Monitor operation and inform provider or distributor of serious incidents or malfunctions (Article 26(5))
- Conduct fundamental rights impact assessment for public bodies and certain private entities (Article 27)
- Keep logs automatically generated by the system for at least six months
Article 25 creates an important wrinkle for agent architectures: a distributor, importer, deployer, or other third party is treated as the provider of a high-risk system if it puts its name or trademark on the system, substantially modifies it, or changes its intended purpose. Agents that use foundation models from one provider, tools from another, and orchestration logic from a third create complex responsibility chains. If you integrate a third-party model into your agent system and put it into service under your name, you inherit provider obligations for the combined system.
The fundamental rights impact assessment (Article 27) deserves special attention for agent deployers in the public sector. As the governance crosswalk (CW-015) notes: "Fundamental rights impact assessments for agentic systems must address unique agent risks: autonomous decision-making that bypasses human judgment, potential for discriminatory tool-chain outcomes, privacy implications of agent memory and context accumulation, and the power asymmetry between an autonomous agent and the individuals it affects." This assessment methodology connects directly to the risk management practices described in our Agent Governance Stack article.
The EU AI Act was drafted primarily with traditional AI systems in mind -- classifiers, recommendation engines, scoring models. Agentic AI systems introduce compliance challenges that the Act does not explicitly address. These gaps are not theoretical. They affect every organization attempting to deploy agents in regulated environments.
These challenges do not mean compliance is impossible. They mean that agent compliance requires more sophisticated approaches than traditional AI compliance. The Agent Governance Stack provides a layered framework -- mapping NIST AI RMF, ISO 42001, and EU AI Act requirements into actionable controls for agent deployments. The Behavioral Bill of Materials (BBOM) addresses the documentation challenge by creating a living specification of what an agent can do, must do, and must never do.
The Act's penalty structure follows the GDPR model of scaling fines to revenue, ensuring that financial consequences are material for organizations of any size. Article 99 establishes three tiers of administrative fines.
In all cases, the higher amount applies -- the fixed amount or the percentage of total worldwide annual turnover, whichever is greater. For SMEs and startups, the lower amount applies.
Serious incident reporting (Article 73) adds an operational obligation: providers of high-risk systems must report any serious incident to market surveillance authorities "immediately after the provider has established a causal link between the AI system and the incident or malfunction or the reasonable likelihood of such a link, and, in any event, not later than 15 days." A serious incident is one that directly or indirectly leads to death, serious health damage, serious and irreversible disruption of critical infrastructure management, breach of fundamental rights protections, or serious damage to property or the environment.
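Operationally, the 15-day clock means incident response tooling needs a hard deadline attached to the moment a causal link (or its reasonable likelihood) is established. A trivial sketch with a hypothetical helper:

```python
# Illustrative deadline calculator: the report is due immediately once a
# causal link (or reasonable likelihood of one) is established, and in any
# event no later than 15 days after that point. Helper name is hypothetical.
from datetime import date, timedelta

def report_deadline(causal_link_established: date) -> date:
    # Latest permissible reporting date; "immediately" remains the default.
    return causal_link_established + timedelta(days=15)

assert report_deadline(date(2026, 9, 1)) == date(2026, 9, 16)
```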
Post-market monitoring (Article 72) requires providers to actively and systematically collect, document, and analyse relevant data to evaluate continuous compliance. For agents, this is especially demanding. As the governance crosswalk (CW-031) notes: "Emergent risk tracking for agents must account for behaviors that arise from novel tool combinations, context accumulation over time, and multi-agent interactions that were not present during testing." Post-market monitoring for agents is not a periodic audit. It is continuous observability.
Most agent systems are built on top of general-purpose AI (GPAI) models -- GPT-4, Claude, Gemini, and others. Chapter V of the Act (Articles 51-56) creates a separate obligations regime for GPAI model providers that flows upstream and directly affects agent builders.
Article 51 classifies GPAI models, with models posing "systemic risk" (those exceeding 10^25 FLOPs of training compute, or designated by the Commission) subject to additional obligations under Article 55. All GPAI model providers must comply with baseline obligations under Article 53: drawing up technical documentation, providing information to downstream providers integrating the model, establishing a copyright compliance policy, and publishing a sufficiently detailed summary of training data content.
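The compute threshold works as a presumption, with Commission designation as an alternative trigger. A sketch of the decision, with illustrative function and constant names:

```python
# Sketch of the Article 51 compute-based presumption: a GPAI model is
# presumed to pose systemic risk when cumulative training compute exceeds
# 10^25 FLOPs, or when the Commission designates it directly. Names are
# illustrative assumptions.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float, commission_designated: bool = False) -> bool:
    return commission_designated or training_flops > SYSTEMIC_RISK_FLOPS

assert presumed_systemic_risk(3e25)                              # over threshold
assert not presumed_systemic_risk(9e24)                          # under threshold
assert presumed_systemic_risk(9e24, commission_designated=True)  # designated anyway
```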
For agent providers, this means your compliance partly depends on your upstream model provider's compliance. If you build an agent on a GPAI model, you inherit a dependency on the model provider's technical documentation, safety evaluations, and transparency disclosures. You cannot fully satisfy your own Article 11 documentation requirements or Article 15 robustness requirements without adequate information from the model provider. This creates a supply chain compliance model that organizations building agent systems must factor into their vendor selection and risk management processes. The cloud agent platforms from AWS, Google, and Azure each handle GPAI model integration differently, which affects your upstream compliance exposure.
The GPAI provisions took effect on 2 August 2025. Codes of practice for GPAI model providers, called for by Article 56, are being developed to provide more specific guidance on how these obligations should be implemented in practice.
The main application date for high-risk AI system obligations is 2 August 2026. That is not a distant deadline. For organizations building or deploying AI agents, the compliance preparation work is substantial. Here is the practical sequence.
Step 1: Map your agents against Annex III. For every agent in production or development, determine whether its use case falls into any of the eight high-risk areas. Remember that the same agent platform may be minimal-risk in one deployment context and high-risk in another. The classification follows the use case, not the technology.
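Because classification follows the use case rather than the technology, it helps to make the mapping explicit and repeatable. A sketch, assuming a hypothetical in-house mapping of deployment contexts to Annex III areas (the area labels below paraphrase Annex III; this is illustration, not legal advice):

```python
# Hypothetical Step 1 helper: classify an agent deployment by checking its
# use-case areas against Annex III. The same agent platform can come out
# minimal-risk in one context and high-risk in another. Mapping is illustrative.
ANNEX_III_AREAS = {
    "biometrics": 1, "critical_infrastructure": 2, "education": 3,
    "employment": 4, "essential_services": 5, "law_enforcement": 6,
    "migration": 7, "justice_democracy": 8,
}

def classify(use_case_areas: set[str]) -> str:
    hits = sorted(ANNEX_III_AREAS[a] for a in use_case_areas if a in ANNEX_III_AREAS)
    return f"high-risk (Annex III areas {hits})" if hits else "not high-risk via Annex III"

# The same drafting agent, deployed in two different contexts:
assert classify(set()) == "not high-risk via Annex III"        # internal memo drafting
assert "areas [4]" in classify({"employment"})                 # CV screening
```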
Step 2: Inventory your tool connections. Every MCP server, API endpoint, and external tool your agent can access affects its capability envelope and potentially its risk classification. Build and maintain an inventory. Re-evaluate classification when tool configurations change.
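One way to make "re-evaluate when tool configurations change" actionable is to fingerprint the inventory and compare fingerprints across deployments. A sketch with assumed field names:

```python
# Sketch of a Step 2 tool-connection inventory: each entry records what a
# tool lets the agent do; any change to the set should trigger a risk
# re-classification. Field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolConnection:
    name: str            # e.g. an MCP server or API endpoint
    endpoint: str
    capabilities: tuple  # actions the tool exposes: read, write, execute...

def inventory_fingerprint(tools: list[ToolConnection]) -> int:
    # Order-independent fingerprint of the capability envelope; compare it
    # across releases to detect drift that warrants re-evaluation.
    return hash(tuple(sorted((t.name, t.endpoint, t.capabilities) for t in tools)))

before = [ToolConnection("crm", "https://crm.internal/api", ("read",))]
after = before + [ToolConnection("hr", "https://hr.internal/api", ("read", "write"))]
assert inventory_fingerprint(before) != inventory_fingerprint(after)  # re-evaluate
```

A production version would use a stable content hash rather than Python's `hash` and persist the inventory alongside release artifacts.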
Step 3: Implement Article 12 logging now. Complete execution trace logging -- every LLM call, tool invocation, memory operation, and decision point -- is a prerequisite for almost every other compliance obligation. It is also the foundation for the post-market monitoring required by Article 72. Start logging before you need to prove compliance.
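The shape of such a trace can be very simple: an append-only sequence of timestamped events, one per LLM call, tool invocation, memory operation, or decision point. A minimal sketch, with an assumed event schema:

```python
# Minimal Step 3 execution-trace logger, assuming a simple agent loop.
# Every LLM call, tool invocation, memory operation, and decision point
# becomes an append-only, timestamped record. Schema is illustrative.
import time
import uuid

def log_event(trace: list, event_type: str, **details) -> None:
    trace.append({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "type": event_type,  # "llm_call" | "tool_call" | "memory_op" | "decision"
        "details": details,
    })

trace: list = []
log_event(trace, "llm_call", model="some-model", prompt_tokens=412)
log_event(trace, "tool_call", tool="crm.lookup", args={"customer_id": "c-123"})
log_event(trace, "decision", action="escalate_to_human", reason="low confidence")
```

In production this would write to durable, tamper-evident storage rather than an in-memory list, but the discipline is the same: no agent action without a corresponding trace record.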
Step 4: Design your human oversight architecture. Article 14 compliance requires more than a kill switch. Design a tiered oversight system: pre-deployment scope approval, runtime guardrails with automated constraint enforcement, real-time monitoring dashboards, asynchronous log review, and emergency stop mechanisms that can halt in-progress operations safely.
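The runtime-guardrail and approval tiers of such a design can be expressed as a single policy gate that every agent action passes through. A sketch under assumed policy values (the action categories are invented for illustration):

```python
# Sketch of a tiered Step 4 oversight gate: automated constraint enforcement
# for hard-blocked actions, mandatory human approval for high-impact ones,
# and an emergency stop that halts everything. Policy values are assumptions.
BLOCKED_ACTIONS = {"delete_database", "wire_transfer_over_limit"}
NEEDS_HUMAN_APPROVAL = {"send_external_email", "modify_customer_record"}

def gate(action: str, emergency_stop: bool = False) -> str:
    if emergency_stop:
        return "halt"              # emergency stop overrides everything
    if action in BLOCKED_ACTIONS:
        return "deny"              # runtime guardrail: hard constraint
    if action in NEEDS_HUMAN_APPROVAL:
        return "queue_for_human"   # human-in-the-loop checkpoint
    return "allow"                 # low-impact action proceeds (and is logged)

assert gate("summarize_ticket") == "allow"
assert gate("send_external_email") == "queue_for_human"
assert gate("delete_database") == "deny"
assert gate("summarize_ticket", emergency_stop=True) == "halt"
```

The dashboards and asynchronous log review sit outside this gate; its job is only to guarantee that no action reaches a tool without passing the policy check.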
Step 5: Build your documentation practice. Article 11 and Annex IV documentation requirements map directly to the Behavioral Bill of Materials (BBOM) pattern. Start documenting your agents' intended purpose, capability boundaries, tool access, decision authority, and known limitations now. Documentation debt is compliance debt.
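A BBOM record can start as a small structured document that is versioned with the agent itself. The skeleton below is an assumption about what such a record might contain, not a published BBOM schema:

```python
# Illustrative Step 5 skeleton of a Behavioral Bill of Materials record:
# intended purpose, tool access, decision authority, and known limitations
# captured as structured, versionable data. Field names are assumptions.
from dataclasses import dataclass, asdict

@dataclass
class BBOM:
    agent_name: str
    intended_purpose: str
    tool_access: list           # what the agent CAN do
    decision_authority: list    # decisions it may take autonomously
    prohibited_actions: list    # what it must NEVER do
    known_limitations: list

bbom = BBOM(
    agent_name="claims-triage-agent",
    intended_purpose="Route insurance claims to the correct human queue",
    tool_access=["claims_db.read", "queue.assign"],
    decision_authority=["queue selection for routine claims"],
    prohibited_actions=["approve or deny any claim"],
    known_limitations=["untested on non-English claims"],
)
record = asdict(bbom)  # serialize for audit trails and change tracking
```

Keeping this record in version control next to the agent code means every capability change leaves a reviewable diff, which is exactly the evidence Annex IV documentation asks for.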
Step 6: Establish your governance stack. The EU AI Act does not operate in isolation. Organizations deploying agents in regulated environments need a layered governance approach that maps the Act's requirements alongside NIST AI RMF controls and ISO 42001 management system practices. Our Agent Governance Stack article provides the complete crosswalk. The EU AI Act Hub covers the broader regulatory landscape beyond agent-specific requirements, and the AI Governance Hub addresses the organizational and policy infrastructure needed to sustain compliance over time.
The EU AI Act creates hard compliance deadlines for AI agent deployments. Start with the Agent Governance Stack to build your compliance framework, then use the BBOM to document your agents. For the broader regulatory landscape beyond agents, explore our EU AI Act Hub. Stay current with agent security developments at the Security News Center and the latest industry trends at the AI News Hub. Professionals building careers in AI compliance and governance will find the AI Governance Careers hub a useful resource for role definitions, required skills, and salary benchmarks. For a hands-on assessment of your agent architecture, try the Blueprint Quest.