The EU AI Act has a mechanism designed for exactly this moment. The question, until now, was whether regulators would use it.
According to Courthouse News, EU Commission spokesman Thomas Regnier confirmed the Commission is seeking information on Claude Mythos under the AI Act’s General Purpose AI provisions. The inquiry focuses on the model’s potential systemic risks rather than its current deployment, because Mythos hasn’t been deployed. That’s the notable shift. The EU isn’t reacting to harm caused. It’s asking questions before the model reaches users.
This is a follow-up to a regulatory story that’s been building across multiple cycles. Prior coverage here has documented how the US, UK, and EU are diverging on Mythos access and risk assessment. The EU Commission’s decision to formally enter the picture under a specific legal mechanism, the GPAI systemic risk provisions, moves the story from geopolitical observation to active regulatory process.
What the AI Act’s systemic risk provisions actually require: The AI Act establishes that GPAI models posing systemic risks (broadly, those with significant general-purpose capabilities that could have wide-scale adverse impacts) face obligations beyond those applied to standard GPAI models. Those obligations include adversarial testing, cybersecurity measures, and reporting of serious incidents to the Commission. The Commission’s inquiry into Mythos appears to be an information-gathering step preceding either a formal systemic risk classification or a determination that such classification isn’t warranted. Either outcome sets precedent.
Concurrent with this regulatory development, Anthropic released Claude Opus 4.7 on April 16. Anthropic states that Opus 4.7 was trained to reduce offensive cybersecurity capabilities compared to the unreleased Mythos model, a framing that positions the released version as a moderated alternative. According to Anthropic, Opus 4.7 achieves 21% fewer errors in document reasoning tasks compared to version 4.6, though that figure reflects Anthropic’s internal evaluation and should be treated as a vendor-reported benchmark rather than an independently verified result. Epoch AI’s model database, updated April 16, tracks model capabilities and benchmarks across the field; independent verification of Opus 4.7’s specific performance figures was not available in the source material reviewed for this brief.
The regulatory significance of the capability framing is this: Anthropic is describing its capability decisions in terms that respond directly to safety scrutiny. Whether that’s genuine safety engineering, regulatory positioning, or both is a question regulators, including the EU Commission, are now formally pursuing.
For compliance professionals and legal teams at AI companies: The EU Commission’s willingness to initiate inquiry under GPAI systemic risk provisions before a model’s release expands the practical compliance window significantly. Companies developing frontier models with potential systemic risk characteristics should now treat the pre-release period as a regulatory engagement window, not just a development phase. Documentation of capability assessments, red-team results, and mitigation decisions made during development may become relevant to Commission inquiries before deployment begins.
What to watch: Whether the EU Commission formally classifies Mythos as a systemic-risk GPAI model, which would trigger the full obligation set, or uses this inquiry to establish dialogue without formal classification. The outcome will be instructive about how the Commission intends to use its pre-deployment authority. Also worth tracking: whether other frontier labs with unreleased high-capability models receive similar inquiries, which would signal a systematic pre-deployment review posture rather than a Mythos-specific one.
The EU AI Act’s systemic risk mechanism is no longer theoretical. It’s in use.