August 2, 2026 is 112 days away. For organizations deploying high-risk AI systems under EU jurisdiction, or providing those systems to EU-market deployers, that date is not a policy milestone. It’s an enforcement trigger.
The EU AI Act’s phased implementation schedule brings seven distinct obligation categories into full force on that date, under Articles 9 through 15 of the Act. These aren’t disclosure requirements or best-practice guidelines. They’re mandatory compliance frameworks with audit trails, technical documentation standards, and operational controls that must be built, tested, and demonstrable.
The seven obligations activating August 2
- Article 9 requires a functioning risk management system: documented, iterative, and integrated into the development and deployment lifecycle.
- Article 10 mandates data governance practices covering training, validation, and testing datasets.
- Article 11 requires technical documentation sufficient to demonstrate conformity.
- Article 12 mandates automated logging and record-keeping for traceability.
- Article 13 requires transparency measures so deployers can interpret the system’s outputs.
- Article 14 mandates human oversight measures that allow meaningful intervention.
- Article 15 requires accuracy, robustness, and cybersecurity standards appropriate to the system’s risk profile.
None of these can be demonstrated with a policy document. Each requires operational implementation.
Who is in scope
Annex III of the EU AI Act defines the high-risk system categories subject to these obligations. They include AI systems used in critical infrastructure, education and vocational training, employment and workforce management, essential private and public services (including credit scoring and insurance), law enforcement, migration and border control, and the administration of justice. Providers of these systems carry the primary compliance burden; deployers carry a defined subset of obligations under Article 26.
The penalty structure
The Wire’s initial framing misstated how penalties apply. The correct structure: violations of the high-risk system obligations under Articles 8 through 15 carry penalties of up to €15 million or 3% of global annual turnover, whichever is higher. The higher tier, up to €35 million or 7% of global annual turnover, applies to violations of the prohibited practices under Article 5, which have been enforceable since February 2, 2025. Supplying incorrect information to authorities carries a separate tier of up to €7.5 million or 1.5% of global annual turnover.
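The "whichever is higher" structure matters for exposure modeling: each tier is the maximum of a fixed cap and a percentage of global annual turnover, so the percentage dominates for large firms. A minimal sketch of that arithmetic (the turnover figure is hypothetical; the tier amounts are those stated above):

```python
def penalty_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of a fine tier: the fixed cap or the given
    percentage of global annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Tiers as described above (amounts in euros).
TIERS = {
    "prohibited practices (Art. 5)":   (35_000_000, 0.07),
    "high-risk obligations":           (15_000_000, 0.03),
    "incorrect info to authorities":   (7_500_000,  0.015),
}

turnover = 2_000_000_000  # hypothetical: €2 bn global annual turnover
for name, (cap, pct) in TIERS.items():
    ceiling = penalty_ceiling(turnover, cap, pct)
    print(f"{name}: up to €{ceiling:,.0f}")
```

For a firm at that hypothetical turnover, every tier's percentage exceeds its fixed cap (e.g. 3% of €2 bn is €60 million, four times the €15 million floor for high-risk violations); for a firm with €100 million in turnover, the fixed caps govern instead.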
Context: what’s already in force
The August 2 deadline doesn’t arrive in isolation. Article 5 prohibited practices and Article 4 AI literacy obligations, which require providers and deployers to ensure sufficient AI literacy among relevant staff, have been enforceable since February 2, 2025. Organizations that haven’t addressed those obligations are already exposed.
What to watch
The August 2 deadline also requires EU Member States to have established at least one national AI regulatory sandbox, per Article 57. For organizations operating in the grey zone of high-risk classification, particularly those deploying agentic AI systems whose scope makes Annex III mapping genuinely ambiguous, the sandbox mechanism may offer a structured path for demonstrating compliance intent while conformity assessment processes mature.
TJS synthesis
The compliance work required by August 2 is operational, not clerical. Organizations that have been monitoring EU AI Act developments but deferring implementation are now inside the window where “planning to comply” and “able to demonstrate compliance” diverge meaningfully. The 112-day countdown is a countdown to an audit-readiness standard, not a filing deadline.