August 2 arrives whether or not the Digital Omnibus trilogue resolves. It arrives whether or not the legal community reaches consensus on agentic drift. And it arrives whether or not your conformity documentation is complete.
One hundred five days is enough to close specific, identified gaps. It’s not enough to build a conformity documentation system from scratch, remediate agentic pipeline architectures, or negotiate with national competent authorities about assessment scope. The question for compliance teams is not whether to act. It’s what to act on first.
Section 1: Annex III Obligations in Plain Language
The EU AI Act’s Annex III designates categories of AI systems as high-risk. The list covers: biometric identification systems, AI managing critical infrastructure, educational and vocational training tools affecting access, employment and worker management systems, access to essential private and public services (including credit scoring and benefits eligibility), law enforcement applications, migration and asylum management, and administration of justice systems.
If your AI system falls into one of those categories and is deployed in the EU, you face Annex III obligations as of August 2, 2026. Those obligations require:
Conformity assessment. You must demonstrate, before deployment, that the system meets the Act’s safety and performance requirements. For most systems, this is a self-assessment against harmonized standards. For some biometric systems, third-party assessment is required.
Technical documentation. You must maintain a technical file covering: the system’s intended purpose and design logic; the data used in training and validation; the human oversight measures; the expected system lifetime and performance thresholds; and the results of testing. The documentation must be kept current.
Data governance standards. Training, validation, and testing data must meet quality criteria including relevance to intended purpose, representation of affected populations, and freedom from known errors that could affect safety.
Human oversight mechanisms. The system must be designed to allow human operators to monitor, understand, and override its outputs. The oversight mechanism must be documented and accessible.
Transparency to deployers. You must provide deployers (the organizations using your system) with information sufficient for them to understand the system’s capabilities and limitations, use it appropriately, and implement the required human oversight.
Registration. High-risk systems must be registered in the EU database maintained by the European Commission before being placed on the market.
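The human oversight requirement above can be sketched as a gate in the decision path: outputs that the system is less sure about, or that carry higher stakes, go to a human operator instead of executing automatically. A minimal illustration, assuming a hypothetical pipeline in which each output carries a confidence score and an impact label; the field names and threshold are illustrative, not terms from the Act:

```python
# Illustrative human-in-the-loop gate. The "impact" field and the 0.85
# confidence threshold are assumptions for this sketch, not prescribed
# values; set them per system and risk profile.
def require_human_review(decision, confidence, threshold=0.85):
    """Route low-confidence or high-impact outputs to a human operator
    instead of acting on them automatically, and record why."""
    if confidence < threshold or decision.get("impact") == "high":
        return {
            "status": "pending_review",
            "reason": "low confidence or high impact",
            "decision": decision,
        }
    return {"status": "auto_approved", "decision": decision}
```

The point of the sketch is that the override path is part of the system's design, not an afterthought: the gate's existence, its threshold, and its routing logic are exactly the things the technical file has to document.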
These requirements are substantial. For companies that have been building AI systems without EU market compliance as a primary design constraint, the conformity assessment process will surface gaps that require engineering remediation, not just documentation work.
Section 2: The Agentic Drift Problem, What the Legal Argument Actually Claims
Legal researchers published an April 2026 paper arguing that agentic AI systems exhibiting “untraceable behavioral drift” may be incompatible with the EU AI Act’s transparency obligations for high-risk systems. This is a legal interpretation, not a regulatory determination. The Nannini et al. paper’s publication venue is pending confirmation, which affects the weight it carries as legal authority. The argument itself, however, is analytically coherent.
Here’s the logic chain: Annex III requires conformity documentation that accurately describes the system’s behavior. An agentic system (one that takes sequences of actions, calls external tools, updates its operating context, or interacts with other agents) may behave differently in deployment than it did during the conformity assessment. If that behavioral drift is sufficiently significant and cannot be traced and documented, the conformity assessment no longer accurately describes the system. A system whose actual behavior diverges materially from its conformity documentation may not meet the Act’s requirements for being placed on the EU market.
The argument targets a genuine architectural feature of agentic systems, not a hypothetical vulnerability. Systems that use tool-calling, retrieval-augmented generation with live data sources, memory mechanisms that persist across sessions, or multi-agent orchestration where one agent’s outputs affect another’s behavior are all candidates for behavioral drift that is difficult to predict and document in advance.
What this doesn’t mean: The paper’s argument is not that all agentic AI is banned from the EU market. It’s that agentic AI with untraceable behavioral drift may fail the conformity documentation requirement. The compliance response is not to abandon agentic architectures; it’s to build agentic systems with behavioral monitoring, logging, and drift detection that make drift traceable and documentable.
What compliance teams should do with this argument: Model it. If your agentic pipeline could exhibit behavioral drift that you cannot document, that’s a gap. Identify whether your human oversight mechanisms would catch material drift before it causes harm. Build logging that creates an audit trail of system behavior in deployment. If the argument gains traction with the European AI Office, operators who have already addressed the underlying technical concern will be in a stronger position than those who dismissed the paper before the enforcement posture clarified.
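One way to make drift traceable is to log the agent's tool-use distribution in deployment and compare it against the distribution recorded at conformity assessment time. A minimal sketch, assuming hypothetical tool names, a baseline captured during assessment, and an illustrative alert threshold:

```python
from collections import Counter

# Hypothetical baseline: tool-call proportions observed during the
# conformity assessment. Tool names are illustrative.
BASELINE = {"search": 0.50, "retrieve_doc": 0.30, "send_email": 0.20}

def tool_call_distribution(calls):
    """Normalize a list of logged tool names into proportions."""
    counts = Counter(calls)
    total = sum(counts.values())
    return {tool: counts[tool] / total for tool in counts}

def drift_score(observed, baseline):
    """Total variation distance between deployment and baseline
    distributions: 0.0 means identical, 1.0 means disjoint."""
    tools = set(observed) | set(baseline)
    return 0.5 * sum(abs(observed.get(t, 0.0) - baseline.get(t, 0.0)) for t in tools)

# Deployment window: the agent now calls a tool that never appeared
# during assessment -- a documentable drift signal.
deployment_calls = ["search"] * 40 + ["retrieve_doc"] * 20 + ["execute_code"] * 40
observed = tool_call_distribution(deployment_calls)
score = drift_score(observed, BASELINE)
if score > 0.2:  # threshold is an assumption; tune per system risk profile
    print(f"drift alert: score={score:.2f}")
```

The statistical choice (total variation distance over tool-call frequencies) is one simple option among many; what matters for the conformity argument is that the comparison is logged, dated, and reproducible, so the drift is documented rather than untraceable.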
Section 3: The Digital Omnibus Wild Card, What a Delay Would and Wouldn’t Change
EU trilogue negotiations over a Digital Omnibus legislative package include provisions that could delay certain Annex III obligations to December 2027. The word “certain” matters. Even if the delay goes through, it’s unlikely to apply uniformly to all Annex III categories. Biometric identification systems, which have attracted the most political attention, may face different treatment than employment screening systems. The specific scope of any delay won’t be known until the final trilogue text is adopted.
More importantly: the delay has not been confirmed. The August 2 date is operative. Compliance teams that are treating the Digital Omnibus delay as a planning premise, as a reason to slow current compliance work, are accepting legal exposure for an outcome that may not materialize. The safer planning premise is this: build toward August 2, and treat any delay as a bonus rather than a baseline.
If the delay does go through, it won’t eliminate Annex III requirements. It will push the enforcement date for specific obligation categories. The conformity documentation infrastructure you build before August 2 doesn’t become worthless; it puts you ahead of schedule. There’s no compliance downside to being ready for a deadline that gets extended.
Section 4: A 105-Day Compliance Checklist for Operators Who Aren’t Ready
This isn’t a comprehensive compliance program; it’s a triage framework for prioritizing the next 105 days.
Weeks 1-2: Scope determination. Audit your AI system portfolio against Annex III categories. Which systems are deployed or intended for deployment in the EU? Which of those fall into a listed high-risk category? This scoping exercise should produce a prioritized list of systems requiring Annex III compliance.
Weeks 3-4: Gap assessment. For each in-scope system, map current documentation against Annex III requirements: technical documentation, data governance records, human oversight specifications, and registration status. The gaps you find now are your compliance priority list.
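The gap assessment can start as something as simple as a per-system checklist keyed to the obligations in Section 1. An illustrative sketch; the requirement keys and the status format are assumptions for this example, not prescribed by the Act:

```python
# Checklist items mirroring the Annex III obligations discussed in
# Section 1. Key names are illustrative shorthand.
ANNEX_III_CHECKLIST = [
    "conformity_assessment",
    "technical_documentation",
    "data_governance",
    "human_oversight",
    "deployer_transparency",
    "registration",
]

def open_gaps(system_status):
    """Return the checklist items a system has not yet satisfied."""
    return [item for item in ANNEX_III_CHECKLIST if not system_status.get(item, False)]

# One in-scope system, partway through remediation.
status = {"technical_documentation": True, "data_governance": True}
print(open_gaps(status))
```

Run across the full portfolio, the output of this exercise is the prioritized remediation list that drives the next phase.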
Weeks 5-8: Documentation and remediation. Close documentation gaps. Where remediation requires engineering changes, particularly for human oversight mechanisms and behavioral monitoring in agentic systems, begin that work now. Engineering changes that take 6-8 weeks to implement need to start in the next 30 days to be ready by August 2.
Weeks 9-12: Conformity assessment. Complete the self-assessment (or engage third-party assessors for systems requiring external review). Document the assessment process and outcomes.
Weeks 13-15 (final stretch): Registration and review. Register in-scope systems in the EU database. Prepare deployer-facing documentation. Conduct a final gap review against the checklist.
Ongoing: Agentic system monitoring. Implement behavioral logging and drift monitoring for any agentic pipelines in scope. The Nannini et al. argument, regardless of its regulatory outcome, identifies a real technical requirement: you need to be able to demonstrate that your system’s deployment behavior is consistent with your conformity documentation.
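A deployment audit trail can be implemented as an append-only, hash-chained log of agent actions, so that at assessment time you can show the record is complete and unedited. A minimal sketch with illustrative field names; a production system would add signing, rotation, and retention controls:

```python
import datetime
import hashlib
import json

def log_agent_action(logfile, session_id, action, detail, prev_hash=""):
    """Append one agent action to a hash-chained JSONL audit trail.
    Each record's hash covers the previous record's hash, so deletions
    or edits anywhere in the deployment log break the chain and are
    detectable at audit time."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "session": session_id,
        "action": action,   # e.g. "tool_call", "memory_write", "agent_message"
        "detail": detail,
        "prev": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]
```

A caller threads the returned hash into the next call for the same log, producing a chain that an auditor can re-verify end to end. This is the concrete artifact behind the Nannini et al. point: the log is what lets you demonstrate, rather than assert, that deployment behavior matched the conformity documentation.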
Section 5: What to Watch
The trilogue. Watch for formal text, not informal signals or press briefings about progress. The Digital Omnibus delay is only real when it’s in adopted legislative text.
The European AI Office. The Office’s first enforcement guidance on Annex III interpretation will clarify how it reads conformity requirements for agentic systems. Any guidance that references behavioral monitoring or drift detection signals that the Nannini et al. argument has influenced regulatory thinking.
GPAI model classifications. General Purpose AI Model obligations under the EU AI Act create a separate compliance track that intersects with Annex III for systems built on GPAI foundations. Classification decisions by the European AI Office will affect which systems trigger which requirements.
National competent authority readiness. Member states are establishing their own national competent authorities for AI Act enforcement. The readiness of those authorities, and their interpretive posture, will shape how Annex III is enforced in practice in each market.
TJS Synthesis
The Annex III deadline is real. The agentic drift argument is analytically credible, regardless of whether it becomes an enforcement priority in year one. The Digital Omnibus delay is unconfirmed and shouldn’t be planned around. Those three facts together produce a clear recommendation: use the next 105 days to build conformity documentation infrastructure and behavioral monitoring capability for agentic systems. The documentation work is required regardless of the drift argument’s fate. The behavioral monitoring is required regardless of the delay’s outcome. The companies that treat August 2 as an opportunity to establish governance credibility in the EU market will have a structural advantage over those that treat it as a compliance deadline to survive. In a market where EU AI Act certification will increasingly function as a trust signal (to enterprise buyers, to regulators in other markets, and to end users), being ready is not just a legal requirement. It’s a competitive position.