August 2, 2026 is the compliance deadline for EU AI Act obligations covering high-risk AI systems: those falling under Annex III of the regulation, including systems used in employment screening, credit assessment, biometric identification, and critical infrastructure. The deadline applies both to providers building these systems and to deployers operating them in the EU market.
“High-risk” has a specific legal meaning under the EU AI Act. It’s not a judgment call. Annex III defines the categories precisely, and organizations need to assess whether their systems fall within scope before assuming they don’t.
For compliance teams evaluating a governance framework, ISO/IEC 42001 has emerged as a relevant reference point. According to isms.online, an ISO 42001 compliance platform, the standard addresses quality management requirements that overlap with EU AI Act Article 17 obligations. That framing comes from a vendor with a commercial interest in ISO 42001 adoption, so compliance teams should weigh it against independent analysis. A draft European standard, prEN 18286, is also reported by isms.online to address Article 17 requirements for high-risk AI providers, though this claim hasn’t been independently verified against official EU standardisation sources.
One additional variable: EU legislative discussions, including a reported “Digital Omnibus” package, may affect implementation timelines. Compliance teams should monitor official EU sources directly for any amendments rather than treating the August 2 date as settled until confirmed by the final legislative text.
Five months is tighter than it sounds. Conformity assessments, documentation systems, and human oversight mechanisms aren’t built quickly. Organizations that haven’t started a gap analysis should start one now.