
EU AI Act and Agents

High-Risk Classification and Compliance Requirements

01 // Regulation: The EU AI Act (In Force)

Regulation (EU) 2024/1689 -- the EU Artificial Intelligence Act -- is the world's first comprehensive legal framework for artificial intelligence. Published on 12 July 2024, it entered into force on 1 August 2024. Enforcement is coordinated by the EU AI Office (established February 2024) and national market surveillance authorities. It is not a directive that member states transpose at their discretion. It is a regulation: directly applicable, legally binding, and enforceable across all 27 EU member states from the date each provision takes effect.

The Act applies to providers placing AI systems on the EU market and deployers using them within the EU. It also reaches providers and deployers located outside the EU when their AI system's output is used inside the EU. That extraterritorial reach means any organization deploying AI agents that touch EU citizens, EU data, or EU operations must comply -- regardless of where the agent is hosted or the company is headquartered.

The Act uses a risk-based approach. Not every AI system faces the same obligations. The framework classifies systems into four risk tiers -- prohibited, high-risk, limited risk, and minimal risk -- with compliance obligations scaled to the potential for harm. For agentic AI systems, the high-risk tier is where most of the regulatory weight falls, and most enterprise agent deployments will need to engage with it directly.

  • 1 Aug 2024 (in force) -- Entry into force, 20 days after publication on 12 July 2024.
  • 2 Feb 2025 (in force) -- Chapters I and II apply: definitions and prohibited AI practices take effect.
  • 2 Aug 2025 (in force) -- Governance and GPAI: Chapter III Section 4, Chapter V (general-purpose AI), Chapter VII, Chapter XII, and Article 78 apply.
  • 2 Aug 2026 (upcoming) -- Main application date: most provisions apply, including high-risk AI system requirements for standalone Annex III systems.
  • 2 Aug 2027 (upcoming) -- Article 6(1) obligations: high-risk systems that are safety components of products under Annex I harmonisation legislation.
02 // Definitions: How the Act Defines AI Systems (Article 3)

Before you can classify an AI agent under the Act, you need to understand what the Act considers an AI system. Article 3(1) provides the legal definition:

"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Article 3(1), EU AI Act

Agentic AI systems fall squarely within this definition. They operate with "varying levels of autonomy" -- that is the defining characteristic of an agent. They exhibit "adaptiveness after deployment" through memory, learning, and context accumulation. And they generate outputs that "influence physical or virtual environments" through tool use and autonomous actions: sending emails, executing code, modifying databases, invoking APIs.

The definition is deliberately broad. It does not reference specific technologies, model architectures, or training methods. This technology-neutral approach means the Act will cover agent architectures that do not yet exist. For the same reason, it captures every major agent pattern in production today: single agents, multi-agent orchestration, hierarchical delegation chains, and tool-augmented reasoning loops.

Two other definitions from Article 3 matter for agent compliance. A provider (Article 3(3)) is "a natural or legal person... that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark." A deployer (Article 3(4)) is "a natural or legal person... that uses an AI system under its authority." When you build and deploy agents for enterprise use, you need to know which hat you are wearing. The obligations differ.

03 // Classification: Four Risk Tiers

The Act structures its entire compliance regime around four risk tiers (explore the full text via the AI Act Explorer). Each tier carries different obligations and penalties. Understanding where your agent falls is the first step in any compliance program.

  • Unacceptable (Article 5) -- Prohibited practices: social scoring, subliminal manipulation, exploitation of vulnerabilities, untargeted facial image scraping, emotion recognition in workplaces. Maximum fine: 35M EUR or 7% of turnover.
  • High-Risk (Articles 6-7, Annex III) -- Mandatory requirements: risk management, data governance, documentation, logging, transparency, human oversight, accuracy and security. Maximum fine: 15M EUR or 3% of turnover.
  • Limited Risk (Article 50) -- Transparency obligations: AI systems interacting with people must disclose they are AI, and synthetic content must be labeled. Maximum fine: 7.5M EUR or 1% of turnover.
  • Minimal Risk -- No mandatory obligations; voluntary codes of conduct encouraged. No penalties.

A critical point for agent builders: all agents that interact with humans must comply with Article 50 transparency requirements regardless of their overall risk classification. Article 50(1) requires that "AI systems intended to interact with natural persons are designed and developed in such a way that the natural person is informed that they are interacting with an AI system." Every chatbot agent, email agent, and phone agent falls under this requirement. Agents that generate synthetic text, images, or audio must also ensure those outputs are machine-readable and detectable as AI-generated (Article 50(2)).
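To make the Article 50 duties concrete, here is a minimal sketch of how an agent runtime might enforce them at the message boundary. The AgentMessage type, the disclosure wording, and the metadata keys are our own illustrative assumptions; the Act prescribes the outcomes (disclosure and machine-readable marking), not any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    body: str
    metadata: dict = field(default_factory=dict)

def apply_article_50(msg: AgentMessage) -> AgentMessage:
    # Article 50(1): the natural person must know they are interacting with AI.
    if not msg.metadata.get("ai_disclosure"):
        msg.body = "[This message was generated by an AI agent] " + msg.body
        msg.metadata["ai_disclosure"] = True
    # Article 50(2): synthetic output must be marked as AI-generated in a
    # machine-readable way (hypothetical metadata keys shown here).
    msg.metadata["synthetic"] = True
    msg.metadata["generator"] = "ai-agent"
    return msg
```

A real deployment would apply this wrapper on every human-facing channel -- chat, email, voice -- rather than relying on individual agent prompts to remember the disclosure.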

Agents with access to personal data and autonomous decision-making could also cross into prohibited territory if they perform social scoring, exploit vulnerabilities of specific groups, or make criminal risk assessments without adequate human oversight. The prohibited practices list in Article 5 is not hypothetical for agents -- it defines hard boundaries that agent designers must respect.

04 // High-Risk: The Article 6 Dual Pathway

Article 6 establishes two pathways through which an AI system becomes classified as high-risk.

Pathway 1 -- Article 6(1): Safety components in regulated products. AI systems intended to be used as a safety component of a product, or that are themselves a product, covered by the Union harmonisation legislation listed in Annex I, and that are required to undergo third-party conformity assessment. Annex I lists 20 pieces of legislation covering machinery, toys, medical devices, vehicles, aviation, and rail equipment. These obligations take full effect on 2 August 2027.

Pathway 2 -- Article 6(2): Standalone high-risk systems in Annex III areas. AI systems that fall into any of the eight high-risk areas listed in Annex III are classified as high-risk. This is the pathway that captures most enterprise agent deployments. These obligations apply from 2 August 2026.

Annex III defines eight areas. Each one maps directly to scenarios where organizations are already deploying or planning to deploy AI agents.

Area 1: Biometrics

Remote biometric identification systems, biometric categorisation based on sensitive or protected attributes, and emotion recognition systems. Agents that process biometric data for identification or categorisation purposes fall directly into this area.

Area 2: Critical Infrastructure

AI systems intended as safety components in the management and operation of critical digital infrastructure, road traffic, or supply of water, gas, heating, or electricity.

Agent scenario: An agent managing critical infrastructure operations -- monitoring power grid loads, routing traffic signals, or managing water treatment systems -- is classified as high-risk. This is a high-agent-exposure area.
Area 3: Education and Vocational Training

AI systems for determining access or admission to educational institutions, evaluating learning outcomes (including when used to steer learning), assessing appropriate education level for an individual, and monitoring or detecting prohibited behavior during tests.

Agent scenario: An AI agent that evaluates student submissions or determines admissions eligibility is high-risk under Area 3(a) and 3(b).

Area 4: Employment and Workers' Management

AI systems for recruitment and selection -- including targeted job advertisements, filtering applications, and evaluating candidates (Area 4(a)). Also covers decisions on work-related terms, promotion, termination, task allocation based on behavior or traits, and monitoring or evaluating performance (Area 4(b)).

Agent scenario: An agent that autonomously screens job applications is high-risk under Area 4(a). An agent that monitors employee performance and recommends terminations is high-risk under Area 4(b). This is a high-agent-exposure area.
Area 5: Essential Private and Public Services

AI systems for evaluating eligibility for public assistance benefits or services, including healthcare (Area 5(a)); evaluating creditworthiness or establishing credit scores, except fraud detection (Area 5(b)); risk assessment and pricing for life and health insurance (Area 5(c)); and evaluating or classifying emergency calls or dispatching emergency services, including emergency healthcare triage (Area 5(d)).

Agent scenario: An agent that evaluates credit applications is high-risk under Area 5(b). An agent that triages patient cases in healthcare is high-risk under Area 5(a) or 5(d). An agent managing emergency dispatch is high-risk under Area 5(d). This is a high-agent-exposure area.
Area 6: Law Enforcement

AI systems for assessing risk of becoming a crime victim, polygraph-like tools, evaluating evidence reliability, assessing offending or re-offending risk, and profiling in detection, investigation, or prosecution of criminal offences.

Area 7: Migration, Asylum, and Border Control

AI systems for polygraph-like tools, assessing risks at borders (security, irregular migration, health), examining asylum, visa, or residence permit applications, and detecting, recognising, or identifying persons in the migration context.

Area 8: Administration of Justice and Democratic Processes

AI systems assisting judicial authorities in researching and interpreting facts and law and in applying law to facts, including alternative dispute resolution (Area 8(a)). Also covers AI systems intended to influence election or referendum outcomes or voting behavior (Area 8(b)).

Agent scenario: An agent that assists judges in legal research and case analysis is high-risk under Area 8(a).

The governance crosswalk (CW-001) makes the agent exposure explicit: "Agentic AI systems that autonomously execute multi-step tasks, access external tools, or make decisions affecting individuals are highly likely to fall under high-risk classification. Organizations must map each agent's capability envelope -- including tool access, decision authority, and action scope -- against Annex III categories."

Article 6(3) provides a narrow exception: an Annex III system is not high-risk if it "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons" because it performs a narrow procedural task, improves a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task. This exception is unlikely to apply to most agentic systems, because agents by definition perform autonomous multi-step tasks, not narrow procedural tasks.
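The Article 6(2) test plus the Article 6(3) exception reduce to a decision procedure that can at least be scaffolded in code. A hedged sketch follows; the area identifiers and exception flags are illustrative simplifications, and actual classification requires legal review of the concrete use case.

```python
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice_democracy",
}

def classify_annex_iii(use_case_areas: set, narrow_procedural_task: bool = False,
                       preparatory_task_only: bool = False) -> str:
    """Illustrative Article 6(2)/6(3) triage -- not legal advice."""
    hits = use_case_areas & ANNEX_III_AREAS
    if not hits:
        return "not high-risk via Annex III (check Article 6(1) separately)"
    # Article 6(3): narrow exception, rarely available to autonomous agents.
    if narrow_procedural_task or preparatory_task_only:
        return "possible Article 6(3) exception -- document the assessment"
    return "high-risk under Annex III: " + ", ".join(sorted(hits))
```

The useful property of even a toy procedure like this is that it forces the classification inputs -- use-case areas, task scope -- to be recorded explicitly per agent deployment.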

05 // Obligations: What High-Risk Systems Must Do (Articles 9-15)

Once an agent is classified as high-risk, Articles 9 through 15 impose a set of mandatory obligations. These are not optional best practices. They are legal requirements with enforcement teeth. Here is what each obligation requires and why agents create specific compliance challenges.

Article 9 -- Risk Management System. A continuous, iterative risk management process run throughout the entire lifecycle. It must identify known and reasonably foreseeable risks, estimate risks arising from intended use and reasonably foreseeable misuse, and test the system against predefined metrics and probabilistic thresholds. Agent challenge: because risk management must be "continuous," agents that learn from interactions and accumulate memory change continuously and can drift out of compliance.

Article 10 -- Data and Data Governance. Training, validation, and testing data sets must meet quality criteria and account for "the specific geographical, contextual, behavioral, or functional setting within which the AI system is intended to be used." Agent challenge: data governance extends beyond training data to runtime data -- agent memory stores, retrieved context, tool outputs, and intermediate reasoning artifacts (CW-020).

Article 11 -- Technical Documentation. Documentation must be drawn up before market placement, kept up to date, and conform to Annex IV, covering intended purpose, architecture, design specifications, training data, performance metrics, risk management, and lifecycle changes. Agent challenge: a Behavioral Bill of Materials (BBOM) maps directly to the Annex IV documentation requirements.

Article 12 -- Record-Keeping (Logging). The system must automatically record events ("logs") over its lifetime; logging must ensure traceability of functioning throughout the lifecycle and conform to recognised standards. Agent challenge: logs must capture the complete execution trace -- every LLM call, tool invocation, memory read/write, external API call, and decision point (CW-030). A logging sketch follows this list.

Article 13 -- Transparency. Systems must be "sufficiently transparent to enable deployers to interpret the system's output and use it appropriately," including documentation of system characteristics, capabilities, limitations, human oversight measures, and expected lifetime. Agent challenge: agentic transparency is elevated -- every agent-initiated communication must identify the agent as non-human, and agentic provenance must be disclosed for all generated content (CW-027).

Article 14 -- Human Oversight. Systems must be designed for effective oversight by natural persons, including the ability to understand capabilities, monitor operation, interpret outputs, override or reverse outputs, and intervene via a stop mechanism. Agent challenge: per-action approval is impractical for high-throughput agents, and a kill switch must halt in-progress tool invocations, cancel queued actions, revoke temporary credentials, and notify downstream systems.
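The Article 12 obligation is concrete enough to sketch. Below is a minimal, illustrative example of execution-trace logging for an agent runtime; the event names and fields are assumptions rather than a prescribed schema, and a production system would add tamper-evidence plus the retention controls deployers need under Article 26.

```python
import json
import time
import uuid

class ExecutionTrace:
    """Append-only event log for one agent run (illustrative Article 12 sketch)."""

    def __init__(self, agent_id: str, sink):
        self.agent_id = agent_id
        self.run_id = str(uuid.uuid4())  # ties every event to one execution
        self.sink = sink                 # any append-only writer, e.g. an open file

    def log(self, event_type: str, **payload):
        # One JSON line per event: llm_call, tool_invocation, memory_rw, decision.
        record = {
            "ts": time.time(),
            "agent_id": self.agent_id,
            "run_id": self.run_id,
            "event": event_type,
            **payload,
        }
        self.sink.write(json.dumps(record) + "\n")

# Usage sketch (hypothetical agent and tool names):
# trace = ExecutionTrace("cv-screening-agent", open("trace.jsonl", "a"))
# trace.log("tool_invocation", tool="parse_cv", outcome="ok")
```

Writing one structured line per event, keyed by a run identifier, is what makes later obligations -- post-market monitoring, incident reconstruction, deployer log retention -- tractable.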

Article 15 -- Accuracy, Robustness and Cybersecurity adds a final mandatory layer: high-risk systems must "achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently throughout their lifecycle." Article 15(4) specifically requires resilience "against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities." For agents, this means the entire compositional attack surface created by tool integration must be addressed. As the governance crosswalk (CW-022) notes: "Each MCP server, API endpoint, or external tool the agent can invoke is both a dependency and an attack vector." The Agentic AI Threat Landscape article covers these attack surfaces in detail.

06 // Roles: Provider vs. Deployer Obligations (Articles 16-27)

The Act assigns distinct obligations depending on whether you are a provider or a deployer. In many enterprise agent deployments, organizations act as both -- they build agents (provider) and use them internally (deployer). The obligations stack.

Provider (Article 16)
"Develops an AI system... and places it on the market or puts it into service under its own name or trademark."
  • Ensure compliance with Chapter III, Section 2 before market placement
  • Establish quality management system (Article 17) with documented policies for design, development, testing, validation, data management, and risk management
  • Draw up technical documentation (Article 11, Annex IV)
  • Keep automatically generated logs (Article 19)
  • Carry out conformity assessment (Article 43)
  • Draw up EU declaration of conformity (Article 47)
  • Affix CE marking (Article 48)
  • Register in the EU database before market placement (Article 49)
  • Establish post-market monitoring system (Article 72)
  • Report serious incidents within 15 days (Article 73)
Deployer (Article 26)
"Uses an AI system under its authority."
  • Take appropriate technical and organisational measures to use the system in accordance with instructions of use (Article 26(1))
  • Assign human oversight to competent, trained, and authorised persons
  • Ensure input data is relevant and sufficiently representative for the intended purpose
  • Monitor operation and inform provider or distributor of serious incidents or malfunctions (Article 26(5))
  • Conduct fundamental rights impact assessment for public bodies and certain private entities (Article 27)
  • Keep logs automatically generated by the system for at least six months

Article 25 creates an important wrinkle for agent architectures. Under Article 25(1), a distributor, importer, deployer, or other third party is treated as the provider of a high-risk AI system if it puts its name or trademark on the system, makes a substantial modification to it, or modifies its intended purpose. Agents that use foundation models from one provider, tools from another, and orchestration logic from a third therefore create complex responsibility chains. If you integrate a third-party model into your agent system and put it into service under your own name, you inherit provider obligations for the combined system.

The fundamental rights impact assessment (Article 27) deserves special attention for agent deployers in the public sector. As the governance crosswalk (CW-015) notes: "Fundamental rights impact assessments for agentic systems must address unique agent risks: autonomous decision-making that bypasses human judgment, potential for discriminatory tool-chain outcomes, privacy implications of agent memory and context accumulation, and the power asymmetry between an autonomous agent and the individuals it affects." This assessment methodology connects directly to the risk management practices described in our Agent Governance Stack article.

07 // Challenges: Why Agents Break the Compliance Model

The EU AI Act was drafted primarily with traditional AI systems in mind -- classifiers, recommendation engines, scoring models. Agentic AI systems introduce compliance challenges that the Act does not explicitly address. These gaps are not theoretical. They affect every organization attempting to deploy agents in regulated environments.

01 -- Classification Complexity. Agents are general-purpose tools whose risk classification depends on use case, not architecture. The same agent platform might be minimal-risk for document summarization but high-risk when used for employee performance evaluation. Multi-agent systems create classification ambiguity: is the orchestrator or the individual agent the "AI system"?

02 -- Dynamic Capability. Agent capabilities change based on which tools are connected at runtime; adding a new MCP server can change an agent's risk classification overnight. Article 6 classification should be re-evaluated whenever tool configurations change, but the Act provides no guidance on this dynamic reassessment. A sketch of one possible reassessment trigger follows this list.

03 -- Human Oversight Architecture. Article 14 requires "effective" human oversight but does not define minimum oversight standards for autonomous systems. Agents executing hundreds of actions per minute cannot have per-action human approval. Tiered oversight is necessary -- pre-deployment scope approval, runtime guardrails, monitoring dashboards, asynchronous log review, emergency stop -- but is not explicitly addressed in the Act.

04 -- Third-Party Responsibility. Agents using foundation models from one provider, tools from another, and orchestration from a third create responsibility chains the Act does not fully address. MCP tool providers may or may not fall under Article 25, depending on how integration along the value chain is interpreted. Agent orchestration patterns (multi-agent, hierarchical delegation) create novel regulatory surface area.

05 -- Continuous Compliance. Article 9(1) requires risk management as "a continuous iterative process planned and run throughout the entire lifecycle." Agents that learn from interactions and accumulate memory change continuously, model updates from upstream providers can alter agent behavior without the agent provider's knowledge, and emergent behaviors from novel tool combinations may not have been present during testing.
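Challenge 02 at least admits an engineering control, sketched below: fingerprint the agent's tool configuration and treat any change as a trigger for classification review. The configuration shape is hypothetical; the Act itself gives no guidance on this mechanism.

```python
import hashlib
import json

def capability_fingerprint(tool_config: dict) -> str:
    # Canonical JSON so that key order does not change the fingerprint.
    canonical = json.dumps(tool_config, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def needs_reclassification(last_reviewed_fp: str, tool_config: dict) -> bool:
    # Any change to the capability envelope should reopen the Article 6 analysis.
    return capability_fingerprint(tool_config) != last_reviewed_fp
```

Gating deployments on this check turns "re-evaluate when tool configurations change" from a policy statement into an enforceable pipeline step.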

These challenges do not mean compliance is impossible. They mean that agent compliance requires more sophisticated approaches than traditional AI compliance. The Agent Governance Stack provides a layered framework -- mapping NIST AI RMF, ISO 42001, and EU AI Act requirements into actionable controls for agent deployments. The Behavioral Bill of Materials (BBOM) addresses the documentation challenge by creating a living specification of what an agent can do, must do, and must never do.

08 // Enforcement: Penalties and Serious Incident Reporting (Article 99)

The Act's penalty structure follows the GDPR model of scaling fines to revenue, ensuring that financial consequences are material for organizations of any size. Article 99 establishes three tiers of administrative fines.

Administrative fines (Article 99):

  • 35M EUR or 7% of worldwide annual turnover -- prohibited practices (Article 5) violations.
  • 15M EUR or 3% -- non-compliance with high-risk system obligations (Articles 6-15).
  • 7.5M EUR or 1% -- supplying incorrect, incomplete, or misleading information to authorities.

In all cases, the higher amount applies -- the fixed amount or the percentage of total worldwide annual turnover, whichever is greater. For SMEs and startups, the lower amount applies.

Serious incident reporting (Article 73) adds an operational obligation: providers of high-risk systems must report any serious incident to market surveillance authorities "immediately after the provider has established a causal link between the AI system and the incident or malfunction or the reasonable likelihood of such a link, and, in any event, not later than 15 days" after becoming aware of the incident. A serious incident is one that directly or indirectly leads to death, serious health damage, serious and irreversible disruption of critical infrastructure management, breach of fundamental rights protections, or serious damage to property or the environment.

Post-market monitoring (Article 72) requires providers to actively and systematically collect, document, and analyse relevant data to evaluate continuous compliance. For agents, this is especially demanding. As the governance crosswalk (CW-031) notes: "Emergent risk tracking for agents must account for behaviors that arise from novel tool combinations, context accumulation over time, and multi-agent interactions that were not present during testing." Post-market monitoring for agents is not a periodic audit. It is continuous observability.

09 // GPAI: General-Purpose AI and the Upstream Chain (Chapter V)

Most agent systems are built on top of general-purpose AI (GPAI) models -- GPT-4, Claude, Gemini, and others. Chapter V of the Act (Articles 51-56) creates a separate obligations regime for GPAI model providers that flows upstream and directly affects agent builders.

Article 51 classifies GPAI models, with models posing "systemic risk" (those exceeding 10^25 FLOPs of training compute, or designated by the Commission) subject to additional obligations under Article 55. All GPAI model providers must comply with baseline obligations under Article 53: drawing up technical documentation, providing information to downstream providers integrating the model, establishing a copyright compliance policy, and publishing a sufficiently detailed summary of training data content.
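The Article 51 threshold is mechanical enough to express directly. This sketch encodes only the presumption stated above; the function name and return strings are our own.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51: systemic risk presumed above this training compute

def gpai_obligations(training_flops: float, commission_designated: bool = False) -> str:
    if commission_designated or training_flops > SYSTEMIC_RISK_FLOPS:
        return "GPAI with systemic risk: Article 53 baseline + Article 55 obligations"
    return "GPAI baseline obligations: Article 53"
```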

For agent providers, this means your compliance partly depends on your upstream model provider's compliance. If you build an agent on a GPAI model, you inherit a dependency on the model provider's technical documentation, safety evaluations, and transparency disclosures. You cannot fully satisfy your own Article 11 documentation requirements or Article 15 robustness requirements without adequate information from the model provider. This creates a supply chain compliance model that organizations building agent systems must factor into their vendor selection and risk management processes. The cloud agent platforms from AWS, Google, and Azure each handle GPAI model integration differently, which affects your upstream compliance exposure.

The GPAI provisions took effect on 2 August 2025. Codes of practice for GPAI model providers, called for by Article 56, are being developed to provide more specific guidance on how these obligations should be implemented in practice.

10 // Action: Compliance Readiness for Agent Builders

The main application date for high-risk AI system obligations is 2 August 2026. That is not a distant deadline. For organizations building or deploying AI agents, the compliance preparation work is substantial. Here is the practical sequence.

Step 1: Map your agents against Annex III. For every agent in production or development, determine whether its use case falls into any of the eight high-risk areas. Remember that the same agent platform may be minimal-risk in one deployment context and high-risk in another. The classification follows the use case, not the technology.

Step 2: Inventory your tool connections. Every MCP server, API endpoint, and external tool your agent can access affects its capability envelope and potentially its risk classification. Build and maintain an inventory. Re-evaluate classification when tool configurations change.
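A tool inventory can be as simple as a versioned record per connection. The fields below are our assumptions about what is worth tracking, keyed to the capability-envelope language used above; the entries are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolRecord:
    name: str                 # MCP server, API endpoint, or external tool
    provider: str             # who operates it
    scopes: tuple             # what it can read or change
    annex_iii_relevant: bool  # does it touch an Annex III area?

inventory = [
    ToolRecord("crm_search", "internal", ("read:customers",), False),
    ToolRecord("cv_screening", "vendor-x", ("read:applications", "write:rankings"), True),
]
```

Hashing this inventory into the capability fingerprint described in Section 07 closes the loop between inventory maintenance and classification review.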

Step 3: Implement Article 12 logging now. Complete execution trace logging -- every LLM call, tool invocation, memory operation, and decision point -- is a prerequisite for almost every other compliance obligation. It is also the foundation for the post-market monitoring required by Article 72. Start logging before you need to prove compliance.

Step 4: Design your human oversight architecture. Article 14 compliance requires more than a kill switch. Design a tiered oversight system: pre-deployment scope approval, runtime guardrails with automated constraint enforcement, real-time monitoring dashboards, asynchronous log review, and emergency stop mechanisms that can halt in-progress operations safely.
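The emergency-stop requirement in Step 4 is more than process termination. A hedged sketch of the shutdown sequence follows; every collaborator and method name here is hypothetical, standing in for whatever your runtime, queue, and credential store actually expose.

```python
class EmergencyStop:
    """Illustrative Article 14 stop mechanism for an agent runtime."""

    def __init__(self, runtime, action_queue, credential_store, downstream):
        self.runtime = runtime
        self.action_queue = action_queue
        self.credential_store = credential_store
        self.downstream = downstream

    def trigger(self, reason: str) -> None:
        self.runtime.halt_in_progress_tools()      # stop in-flight tool invocations
        self.action_queue.cancel_pending()         # drop actions not yet executed
        self.credential_store.revoke_temporary()   # cut the agent's tool access
        self.downstream.notify(f"agent halted: {reason}")  # warn dependent systems
```

The ordering matters: halting in-flight work before revoking credentials avoids leaving half-completed actions that downstream systems cannot reconcile.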

Step 5: Build your documentation practice. Article 11 and Annex IV documentation requirements map directly to the Behavioral Bill of Materials (BBOM) pattern. Start documenting your agents' intended purpose, capability boundaries, tool access, decision authority, and known limitations now. Documentation debt is compliance debt.
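There is no published BBOM schema in the Act itself, so the record below is purely illustrative: a BBOM-shaped structure whose keys track the Annex IV topics named above. Treat every field and value as an assumption to adapt.

```python
bbom = {
    "agent": "invoice-triage-agent",  # hypothetical agent
    "intended_purpose": "route supplier invoices for human approval",
    "can_do": ["read invoices", "draft approval recommendations"],
    "must_do": ["log every tool call", "disclose AI nature in outbound email"],
    "must_never_do": ["execute payments", "modify vendor master data"],
    "tool_access": ["erp_read_api", "email_draft"],
    "decision_authority": "recommend only; humans approve",
    "known_limitations": ["OCR accuracy degrades on handwritten invoices"],
}
```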

Step 6: Establish your governance stack. The EU AI Act does not operate in isolation. Organizations deploying agents in regulated environments need a layered governance approach that maps the Act's requirements alongside NIST AI RMF controls and ISO 42001 management system practices. Our Agent Governance Stack article provides the complete crosswalk. The EU AI Act Hub covers the broader regulatory landscape beyond agent-specific requirements, and the AI Governance Hub addresses the organizational and policy infrastructure needed to sustain compliance over time.

The EU AI Act creates hard compliance deadlines for AI agent deployments. Start with the Agent Governance Stack to build your compliance framework, then use the BBOM to document your agents. For the broader regulatory landscape beyond agents, explore our EU AI Act Hub. Stay current with agent security developments at the Security News Center and the latest industry trends at the AI News Hub. Professionals building careers in AI compliance and governance will find the AI Governance Careers hub a useful resource for role definitions, required skills, and salary benchmarks. For a hands-on assessment of your agent architecture, try the Blueprint Quest.
