Compliance deadlines have a way of feeling distant until they don’t. August 2, 2026 is 112 days away. For organizations that have been monitoring EU AI Act developments but deferring implementation, that distance is now short enough to matter, and the compliance work required isn’t the kind that compresses into 30 days.
The EU AI Act’s phased implementation structure was designed deliberately. Prohibited practices (Article 5) and AI literacy obligations (Article 4) took effect February 2, 2025. The heaviest operational requirements, the seven Articles governing high-risk AI system compliance, come into force on August 2, 2026, giving organizations 24 months from the Act’s entry into force to build compliant systems. That window closes in 112 days.
Where August 2 Sits in the Full Timeline
Understanding what activates August 2 requires understanding what’s already active.
February 2, 2025 (already in force):

- Article 5 prohibited AI practices (biometric categorization based on sensitive characteristics, social scoring by public authorities, certain real-time remote biometric identification systems, and others)
- Article 4 AI literacy requirements (providers and deployers must ensure sufficient AI literacy among relevant personnel)

August 2, 2026 (112 days):

- Articles 9-15 high-risk AI system obligations (full set, see below)
- Article 57 regulatory sandbox establishment (Member State obligation)

Later milestones (post-August 2026):

- GPAI model obligations and codes of practice timelines extend beyond August 2026
Organizations with GPAI exposure have additional timelines to track. For high-risk system operators, August 2 is the primary countdown.
Seven Obligations: What Each One Actually Requires
The text of Articles 9-15 is precise. The compliance challenge is translating that precision into operational reality. Here’s what each article requires in practice.
Article 9, Risk Management System
A documented, iterative risk management system covering the full lifecycle of the high-risk AI system. This isn’t a one-time risk assessment. The Article 9 system must be reviewed and updated throughout the system’s operational life, identifying and analyzing known and foreseeable risks, evaluating risks that emerge from actual use, and implementing mitigation measures. The key word in the text is “systematic” – ad hoc risk documentation doesn’t satisfy it.
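To make the lifecycle requirement concrete, here is a minimal sketch of what a systematic, iterative risk register could look like in code. The names (`RiskEntry`, `RiskRegister`, the 90-day review window) are illustrative assumptions, not anything prescribed by the Act; the point is structural: every risk carries a mitigation and a review history that keeps accumulating after deployment, so stale entries are detectable.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk, tracked across the system's lifecycle."""
    description: str
    source: str        # e.g. "design review", "post-market monitoring"
    mitigation: str
    reviews: list[tuple[date, str]] = field(default_factory=list)

    def review(self, on: date, outcome: str) -> None:
        # Article 9 requires the risk system to be iterative: each entry
        # accumulates a review history rather than being closed once.
        self.reviews.append((on, outcome))

@dataclass
class RiskRegister:
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def overdue(self, today: date, max_age_days: int = 90) -> list[RiskEntry]:
        """Entries never reviewed, or not reviewed within max_age_days."""
        stale = []
        for e in self.entries:
            last = max((d for d, _ in e.reviews), default=None)
            if last is None or (today - last).days > max_age_days:
                stale.append(e)
        return stale
```

An `overdue()`-style query is one way to operationalize the word "systematic": the register itself can demonstrate that review happened on schedule, rather than relying on ad hoc documentation.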
Article 10, Data and Data Governance
Training, validation, and testing datasets must meet defined quality criteria. Relevant practices include examination for possible biases, identification of relevant data gaps, and appropriate data governance measures. For organizations using third-party AI systems or foundation models, this requirement intersects with supplier due diligence: the deployer carries obligations even when the provider built the training pipeline.
Article 11, Technical Documentation
Technical documentation must be prepared before the system is placed on the market or put into service, and kept up to date. Annex IV of the EU AI Act specifies what that documentation must contain: a general description of the system, a description of the system’s elements and development process, detailed information about monitoring and control, validation and testing procedures, and the measures taken to comply with other obligations. This isn’t a technical spec sheet; it’s a conformity evidence package.
Article 12, Record-Keeping
High-risk AI systems must be designed and developed with logging capabilities that enable automatic recording of events throughout the system’s lifetime. For systems used in decisions affecting individuals, the record-keeping requirements support the traceability that regulators and affected parties can demand. The practical implication: logging must be built into the system architecture, not retrofitted.
Article 13, Transparency and Provision of Information
Deployers must be able to understand the system’s capabilities and limitations sufficiently to implement appropriate human oversight. Providers must supply instructions for use that are clear, complete, and calibrated to the deployment context. This obligation creates a direct interface between provider documentation obligations and deployer oversight obligations.
Article 14, Human Oversight
High-risk AI systems must be designed and developed to allow effective human oversight. The system must be interpretable and controllable by qualified individuals who can intervene, override, or halt the system. Article 14 specifies that oversight measures must be proportionate to the risks and suited to the specific use context, a standard that requires documented justification, not just the presence of a human in the loop.
This obligation is particularly consequential for organizations tracking agentic AI system deployment. Published coverage has examined why agentic AI systems are structurally harder to certify under the EU AI Act; the human oversight obligation is one of the primary reasons. An agentic system that operates across variable contexts with dynamic tool access creates a harder interpretability and control challenge than a static classification system.
Article 15, Accuracy, Robustness, and Cybersecurity
High-risk AI systems must achieve appropriate levels of accuracy for their intended purpose and be resilient against attempts to alter their performance by third parties. Cybersecurity measures must be proportionate to the risk. Consistency of performance across the operational lifetime is an explicit requirement: a system that degrades under operational conditions without triggering a documented response would fail this standard.
Penalty Structure: Two Tiers, Not One
The penalty figures for EU AI Act non-compliance are widely misquoted, including in the Wire’s initial research note for this item. The correct structure matters for compliance prioritization.
| Violation Category | Maximum Penalty |
|---|---|
| Article 5 prohibited practices | €35M or 7% global annual turnover |
| Articles 8-15 high-risk obligations | €15M or 3% global annual turnover |
| Supplying incorrect information | €7.5M or 1.5% global annual turnover |
The €35M/7% figure applies only to Article 5 violations, the prohibited practices that have been enforceable since February 2025. Organizations that have already conducted Article 5 compliance reviews are not out of the woods: if those reviews missed prohibited practices currently in deployment, the higher penalty tier is already in play.
For August 2 compliance, the relevant tier is €15M/3%. That figure is still material for mid-market organizations: 3% of global annual turnover is not a rounding error in any compliance budget.
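The table above can be expressed as a small calculation. For each tier, the maximum fine is the fixed amount or the turnover percentage, whichever is higher (the Act applies whichever is lower for SMEs). This is an illustrative sketch of the arithmetic, not legal advice.

```python
def max_penalty(tier_fixed_eur: float, tier_pct: float,
                turnover_eur: float, sme: bool = False) -> float:
    """Maximum administrative fine for one violation tier.

    Standard undertakings: the higher of the fixed amount and the
    turnover share. SMEs: the lower of the two.
    """
    pct_amount = tier_pct * turnover_eur
    if sme:
        return min(tier_fixed_eur, pct_amount)
    return max(tier_fixed_eur, pct_amount)

# High-risk tier (Articles 9-15): €15M or 3% of turnover.
# At €2B turnover, 3% is €60M, well above the €15M floor.
```

This is why the 3% figure dominates the compliance calculus for larger organizations: the fixed €15M amount is only the binding ceiling below roughly €500M in turnover.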
The Sandbox Option
Article 57 requires EU Member States to establish at least one AI regulatory sandbox at the national level by August 2, 2026, the same date as the high-risk system obligations. That’s not a coincidence: the sandbox mechanism is designed to provide a structured testing environment for organizations navigating classification ambiguity or conformity assessment complexity.
For organizations deploying agentic AI systems whose Annex III scope is genuinely uncertain, the sandbox path offers a way to demonstrate compliance intent under regulatory supervision before full conformity assessment is complete. It isn’t a compliance deferral; it’s a structured path for difficult cases that can’t be resolved by documentation alone.
What the Next 112 Days Actually Require
The compliance work for August 2 doesn’t compress well. Articles 9-15 require built systems, not written policies. Here’s a practical sequencing framework.
Days 1-30: Complete Annex III scope mapping for all AI systems in scope. Identify which systems trigger high-risk classification. Prioritize systems closest to deployment or already deployed. Begin technical documentation drafting under Annex IV.
Days 31-60: Build or validate the Article 9 risk management system for priority systems. Conduct data governance review under Article 10. Assess logging architecture for Article 12 compliance. Identify human oversight gaps under Article 14.
Days 61-90: Complete technical documentation. Conduct conformity assessment for priority systems. Address identified gaps in accuracy, robustness, and cybersecurity under Article 15. Prepare instructions for use under Article 13.
Days 91-112: Audit readiness review. Validate that documentation, logging, and operational controls are demonstrable, not just documented. For systems with unresolved scope or conformity questions, evaluate the sandbox path in the relevant Member State.
TJS Synthesis
The EU AI Act’s August 2 deadline is not a soft launch. The seven obligations under Articles 9-15 describe operational systems that must function, not compliance statements that must be filed. Organizations that have treated 2026 as a monitoring year rather than an implementation year are now inside the implementation window – and the window is 112 days.
The most consequential compliance risk isn’t the penalty calculation. It’s the discovery, after August 2, that a deployed system couldn’t demonstrate Article 11 technical documentation or Article 14 human oversight to a regulatory standard. That discovery happens at the worst possible moment: when a regulator is already looking. The 112 days that remain are enough to build compliant systems. They’re not enough to recover from not having started.