Start with what the EU AI Act actually requires, because the compliance gap for agentic AI only becomes visible once you look at the specific text.
Chapter III of the Act, governing high-risk AI systems listed under Annex III, mandates conformity assessments, continuous technical documentation, and human oversight mechanisms. Article 13 requires that high-risk AI systems be designed and developed to be transparent, so that their operation can be understood by deployers and relevant oversight personnel. Article 14 requires that high-risk AI systems allow effective human oversight, including the ability to monitor outputs, intervene in operation, and halt the system when necessary. Article 9 requires ongoing risk management throughout the system’s lifecycle.
These are demanding requirements for any high-risk AI system. Agentic AI systems, which pursue goals autonomously across multiple steps by using tools, calling external APIs, making intermediate decisions, and adapting behavior based on prior outputs, present specific compliance challenges that bounded, deterministic AI systems don’t share to the same degree.
The Certification Problem
Conformity assessment under the EU AI Act is not a one-time audit. It’s an ongoing process of documentation and risk management that must be maintainable across the system’s deployment lifecycle. For a conventional high-risk AI system (a credit scoring model, a medical imaging classifier, a hiring screening tool), behavioral documentation is demanding but tractable. The system takes a defined input, produces a defined output through a bounded process, and that process can be characterized, tested, and documented.
Agentic systems complicate this at every stage. An AI agent pursuing a multi-step workflow may take actions that weren’t explicitly programmed for the specific input it received; that’s part of what makes it useful. It may call external systems, generate intermediate outputs that feed into subsequent decisions, and arrive at a final action through a chain of reasoning that wasn’t fully anticipated at design time. The documentation challenge isn’t just technical; it’s architectural. You can’t produce a static document describing all possible action chains for a system whose action chains are dynamically determined.
Analysts and legal commentators have raised these traceability and interpretability concerns directly in the context of EU AI Act compliance, according to AI News reporting from April 9, 2026 and related April 11 coverage. The practical question (how does a compliance team produce Article 13-compliant transparency documentation for a system whose outputs are context-dependent and partially non-deterministic?) doesn’t have a published answer in official EU guidance yet.
What Articles 13 and 14 Actually Require for Agents
Working through the requirements with agentic architectures in mind makes the challenge concrete.
Article 13 (Transparency) requires that high-risk AI systems be accompanied by instructions for use adequate for deployers to understand the system’s capabilities, limitations, and conditions under which performance is adequate. For an agentic system, “capabilities and limitations” includes the conditions under which the agent may take unexpected actions, a set that is, in part, definitionally impossible to enumerate. Documentation that satisfies Article 13 for an agentic system needs to describe not just intended behavior but the boundaries of the system’s autonomy, the conditions under which human review is triggered, and the logging infrastructure that captures what actions were taken and why.
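To make the logging point concrete, here is a minimal sketch of what per-step action-chain logging might look like. The Act prescribes no log schema; every field name, the JSON-lines format, and the example tool name are illustrative assumptions.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """One step in an agent's action chain, captured for transparency records.

    Field names are illustrative; the EU AI Act does not prescribe a schema.
    """
    run_id: str                  # identifies the end-to-end agent workflow
    step: int                    # position in the action chain
    action: str                  # e.g. "tool_call", "api_request", "final_output"
    tool_name: str               # which tool or external system was invoked
    rationale: str               # the agent's stated reason for taking the action
    requires_human_review: bool  # whether this step crossed an escalation threshold
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_action(record: AgentActionRecord) -> str:
    """Serialize a record as one JSON line for an append-only audit log."""
    return json.dumps(asdict(record))

# Example: record a single tool call in a multi-step workflow.
record = AgentActionRecord(
    run_id=str(uuid.uuid4()),
    step=1,
    action="tool_call",
    tool_name="customer_db.lookup",  # hypothetical tool identifier
    rationale="Retrieve account status before drafting response",
    requires_human_review=False,
)
line = log_action(record)
```

The design choice worth noting is that the rationale and the review flag are captured per step, not reconstructed afterward; that is what turns a debug log into something a conformity assessor can audit.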
Article 14 (Human Oversight) requires that high-risk AI systems be designed to allow natural persons to effectively oversee operation during deployment. For agentic systems, this means the human-in-the-loop design isn’t optional compliance theater; it’s a structural requirement. That design needs to answer specific questions: At what points in a multi-step workflow is human review required? What triggers an interruption of autonomous operation? What is the escalation path when the system’s action would cross a defined threshold? These aren’t questions you can answer after deployment. They require deliberate architectural choices made during design.
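One way those questions can be answered in code rather than after deployment is an explicit oversight policy that the agent loop consults before every step. This is a sketch under stated assumptions: the action categories, the monetary threshold, and the escalate-by-default rule for unknown actions are all hypothetical choices, not requirements from the Act.

```python
from dataclasses import dataclass

@dataclass
class OversightPolicy:
    """Illustrative Article 14-style oversight rules for an agent.

    All values are assumptions for the sketch, not taken from the Act
    or from any official guidance.
    """
    # Actions the agent may take without human review.
    autonomous_actions: frozenset = frozenset({"read", "summarize", "draft"})
    # Actions that always pause the workflow for a human decision.
    escalated_actions: frozenset = frozenset({"send_email", "update_record", "payment"})
    # Monetary threshold (EUR) above which any action escalates.
    value_threshold: float = 1000.0

def requires_intervention(policy: OversightPolicy, action: str, value: float = 0.0) -> bool:
    """Return True if a human must review this step before the agent proceeds."""
    if action in policy.escalated_actions:
        return True
    if value > policy.value_threshold:
        return True
    # Unknown actions escalate by default: the safe failure mode.
    return action not in policy.autonomous_actions

policy = OversightPolicy()
assert requires_intervention(policy, "payment") is True   # always escalates
assert requires_intervention(policy, "draft") is False    # within autonomy bounds
assert requires_intervention(policy, "draft", value=5000.0) is True  # over threshold
```

The point of the sketch is that the intervention points are testable artifacts: the same assertions that verify the policy in CI can be cited in conformity documentation.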
The Literacy Training Obligation: Already in Force
One EU AI Act obligation doesn’t require waiting for August’s deadline, or for a possible extension of it, because it’s already in effect. The Act’s requirement that providers and deployers ensure appropriate AI literacy for all staff working with or affected by AI systems entered into force on February 2, 2025.
Boards Impact Forum’s April 9, 2026 analysis is direct: this isn’t an aspiration, it’s a legal obligation, and boards, who often sit furthest from operational AI systems but are responsible for governance, should have addressed it first. For organizations deploying agentic AI, the literacy obligation has heightened urgency. Staff overseeing agentic systems need to understand not just what AI does in general but the specific oversight responsibilities they carry for systems capable of autonomous action. An organization that provides generic “AI awareness” training to staff responsible for overseeing high-risk agentic deployments has not satisfied the spirit of the literacy requirement, and arguably hasn’t satisfied its letter either.
The Deadline Uncertainty Layer
The current statutory deadline for Annex III compliance is August 2, 2026. Two previously published Tech Jacks Solutions briefs, covering the proposal to delay this deadline to 2027 or 2028 and the three-track compliance planning scenarios it creates, provide essential context for how to plan around this uncertainty. Compliance teams should read both before committing to a specific compliance timeline.
The short version: planning for August 2026 and then having the deadline extend is a manageable outcome. The compliance work you did is still required eventually, and being early is a defensible position. Planning for an extension that doesn’t materialize, and arriving at August 2026 without completed conformity assessments, documentation, or human oversight mechanisms, is a material regulatory risk. The asymmetry favors treating August 2026 as the operative deadline while tracking the delay discussion closely.
Legal analysts have noted, with appropriate qualification, that organizations may also face overlapping penalty exposure under both the EU AI Act and GDPR where AI systems process personal data. The intersection is particularly sharp for agentic systems, which often need to query, retrieve, and act on personal data as part of their workflows. Whether both regulatory regimes can simultaneously penalize the same failure is a question that hasn’t been resolved in published enforcement guidance, but the theoretical exposure is real and warrants legal review for organizations in scope of both frameworks.
Practical Compliance Actions for Agentic AI Deployments
The compliance path for organizations deploying agentic systems in high-risk contexts runs through five concrete actions, each of which is both required and presently underspecified in published EU guidance.
Document the autonomy boundaries now. Define, in writing, the scope of actions the agent is permitted to take without human review, the conditions that trigger escalation, and the logging infrastructure that captures each step in the action chain. This documentation is the foundation of Article 13 compliance and the audit trail for any future conformity assessment. It also forces design conversations that many teams are currently avoiding.
Design human oversight as an architectural requirement, not a feature. Article 14 compliance for agentic systems means human-in-the-loop design choices that are embedded at the architectural level, not a “review queue” added after the fact. Define the intervention points, test them, and document them.
Complete AI literacy training for all staff in scope. This obligation is already in effect. If your organization hasn’t implemented AI literacy programs for staff responsible for overseeing AI systems, including agentic ones, address this immediately. The program needs to cover the specific oversight responsibilities of each role, not just general AI awareness.
Initiate conformity assessment preparation for your highest-risk agentic deployments. The full assessment process takes time. Waiting for deadline clarity before starting means you’ll be executing under time pressure regardless of when the deadline lands. The conformity assessment for a non-deterministic agentic system is more complex than for a conventional high-risk AI tool; start early.
Map your GDPR obligations against your agentic AI data flows. If your agents process personal data in meaningful ways, and most do, the intersection with GDPR creates compliance requirements that need separate legal review. Don’t assume EU AI Act compliance covers GDPR exposure for the same systems.
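The first two actions above, documenting autonomy boundaries and embedding oversight architecturally, converge on a single artifact: a machine-readable boundary declaration that can be validated before deployment. The following is a hypothetical sketch; every key name, trigger, and value is an assumption made for illustration, since the Act prescribes no such format.

```python
# A hypothetical, machine-readable autonomy-boundary declaration.
# Keys, trigger names, and values are illustrative only.
AUTONOMY_BOUNDARIES = {
    "system": "claims-triage-agent",  # hypothetical deployment name
    "permitted_without_review": [
        "classify_claim",
        "request_missing_documents",
    ],
    "escalation_triggers": [
        "claim_value_over_10000_eur",
        "fraud_indicator_present",
        "novel_claim_category",
    ],
    "logging": {
        "store": "append_only",
        "fields": ["run_id", "step", "action", "rationale", "reviewer"],
        "retention_days": 365,
    },
    "review_roles": ["claims_supervisor"],  # who handles escalations
}

REQUIRED_KEYS = {
    "system", "permitted_without_review", "escalation_triggers",
    "logging", "review_roles",
}

def validate_boundaries(doc: dict) -> list:
    """Return a list of problems; an empty list means the declaration is complete."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - doc.keys())]
    if not doc.get("escalation_triggers"):
        problems.append("no escalation triggers defined")
    if not doc.get("review_roles"):
        problems.append("no human review roles assigned")
    return problems

assert validate_boundaries(AUTONOMY_BOUNDARIES) == []
```

A check like this doesn’t make a deployment compliant, but it forces the design conversations the first action describes, and it leaves a dated, versionable record that those conversations happened.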
The EU AI Act’s conformity framework will eventually develop published guidance specific to agentic architectures. It hasn’t yet. In the gap between current requirements and future guidance, the organizations that fare best will be those that took the Act’s underlying intent seriously (systems that affect people in consequential ways should be transparent, auditable, and subject to meaningful human control) and designed their agentic deployments accordingly.