Kevin Hassett put a name to it.
The National Economic Council director told Fox Business that AI models should be “released to the wild after they’ve been proven safe, just like an FDA drug.” It is the direct, attributed framing that compliance and policy teams have been waiting for since reports of a mandatory pre-release review executive order surfaced in early May. The quote, confirmed via MSN and Fox Business reporting, is the clearest on-record articulation of where the administration wants to land.
The framework isn’t confirmed policy yet. No executive order has been signed, no formal rulemaking document has been published, and the scope (which models, which developers, what “proven safe” means operationally) remains undefined. But the analogy matters. The FDA drug-approval model has a specific regulatory architecture: mandatory pre-market review, evidence-based safety thresholds, and agency sign-off before commercial release. Invoking it by name signals an intent to build something structurally similar, not a voluntary pledge system.
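To make that structural contrast concrete, here is a minimal Python sketch of the difference between a voluntary-commitment regime and an FDA-style mandatory gate. Everything in it, the field names, the statuses, the `PermissionError`, is an illustrative assumption, not a description of any drafted rule.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ReviewStatus(Enum):
    NOT_SUBMITTED = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()

@dataclass
class ModelRelease:
    """Hypothetical pre-release record; all fields are illustrative only."""
    model_id: str
    safety_evidence: dict = field(default_factory=dict)  # eval results, red-team reports
    status: ReviewStatus = ReviewStatus.NOT_SUBMITTED

def release_voluntary(release: ModelRelease) -> bool:
    """Voluntary pledge: evidence may be filed, but nothing blocks shipping."""
    return True  # release proceeds whether or not safety_evidence was filled in

def release_mandatory(release: ModelRelease) -> bool:
    """FDA-style gate: release is structurally blocked absent agency sign-off."""
    if release.status is not ReviewStatus.APPROVED:
        raise PermissionError(f"{release.model_id}: no pre-market approval on record")
    return True
```

The structural point is the default. Under a pledge system, release proceeds unless the developer stops it; under pre-market review, release is blocked unless the agency clears it. That inversion of defaults is what “just like an FDA drug” implies.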
Federal Pre-Release AI Review: Known Positions
This follows the trajectory established over the past two weeks. The White House was reportedly drafting a mandatory pre-release AI review executive order as of May 8. A day later, reporting on the end of the voluntary AI safety era and the CAISI structural shift indicated the administration was moving from commitments to requirements. Hassett’s Fox Business statement is the named-official confirmation of that direction.
Vice President Vance has expressed alarm over the capabilities of advanced AI models, according to multiple reports. The specific context of those concerns (which models, which briefings) isn’t confirmed in available sources, and the causal link between any particular capability assessment and this policy shift should be treated as reported, not established fact. Axios reporting characterizes the administration’s posture shift as driven by concerns about advanced model capabilities and their national security implications.
The catch is that “proven safe” has no operational definition yet. The FDA comparison is useful as a directional signal and deeply problematic as a literal template. Drug safety reviews operate on decades of methodology, defined endpoints, and a dedicated agency with statutory authority. None of that infrastructure exists for AI models. The administration is naming the destination without having mapped the road.
Unanswered Questions
- Which models and developers fall within the scope of a mandatory pre-release review?
- Which regulatory body would administer reviews, and does it have statutory enforcement authority?
- What does “proven safe” mean operationally: what evidence standard, and what methodology? (A sketch of what such a definition would have to specify follows this list.)
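For illustration only, here is a Python sketch of the decisions an operational “proven safe” standard would have to make before any review could run. Every key and placeholder below is invented to show the shape of the gap, not to propose answers.

```python
# Purely hypothetical: the open parameters of a "proven safe" standard.
# None of these values exist in any published order or rule; the None and
# False placeholders mark decisions the administration has not yet made.
SAFETY_STANDARD = {
    "scope": {
        "covered_models": None,       # frontier models only, or all foundation models?
        "covered_parties": None,      # developers only, or enterprise deployers too?
    },
    "evidence": {
        "required_evaluations": [],   # which benchmark or red-team suites count?
        "pass_thresholds": None,      # what result clears the bar?
        "methodology": None,          # who runs the evals, under what protocol?
    },
    "process": {
        "reviewing_body": None,       # NIST, or a new or newly empowered agency?
        "statutory_authority": False, # can the reviewer actually block a release?
        "appeal_path": None,          # what happens after a rejection?
    },
}
```

Each placeholder corresponds to one of the open questions above; filling them in is the road-mapping the administration has not yet done.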
Watch for three things. First, whether a formal executive order follows in the coming weeks and whether its scope covers only frontier labs or extends to enterprise model deployers. Second, how the CAISI participants, the labs that signed voluntary safety commitments, respond to a mandatory framework, given that some reportedly have friction with the current federal access architecture. Third, whether Congress moves to give a regulatory body statutory authority to conduct these reviews, or whether this remains an executive-branch construct.
The real question is who administers the review. If the answer is NIST, that’s a standards body with limited enforcement authority. If the answer is a new agency or an empowered existing one, the compliance implications for frontier AI developers change dramatically. Hassett named the destination. The architecture of getting there is still being built.