Regulation Daily Brief

AI Regulation News Today: Hassett Names FDA Drug-Testing as the Model for Federal Pre-Release AI Review

2 min read · Fox Business / MSN
National Economic Council Director Kevin Hassett stated publicly that AI models should be "released to the wild after they've been proven safe, just like an FDA drug," giving compliance teams their clearest on-record signal yet of the administration's intended framework for pre-release AI oversight. The statement, made in a Fox Business interview, follows months of reported White House activity on mandatory pre-release review and connects to prior coverage of advanced capability assessments that reportedly shifted administration posture.

Key Takeaways

  • NEC Director Kevin Hassett publicly invoked the FDA drug-testing model as the administration's intended framework for pre-release AI review, the first named-official, on-record framing of this policy direction
  • No executive order has been signed and no formal rulemaking document has been confirmed; the FDA-model framing reflects stated direction, not enacted policy
  • VP Vance has reportedly expressed alarm over advanced AI model capabilities, though the specific venue and causal links to this policy shift aren't independently confirmed in available sources
  • The critical unresolved question is which regulatory body would administer pre-release reviews and what statutory authority it would hold; that answer determines the compliance burden for AI developers

Released to the wild after they've been proven safe, just like an FDA drug.

Kevin Hassett, NEC Director

Kevin Hassett put a name to it.

The National Economic Council Director told Fox Business that AI models should be “released to the wild after they’ve been proven safe, just like an FDA drug”, a direct, attributed framing that compliance and policy teams have been waiting for since reports of a mandatory pre-release review executive order surfaced in early May. That quote, confirmed via MSN and Fox Business reporting, is the clearest on-record articulation of where the administration wants to land.

The framework isn’t confirmed policy yet. No executive order has been signed, no formal rulemaking document has been published, and the scope (which models, which developers, what “proven safe” means operationally) remains undefined. But the analogy matters. The FDA drug-testing model has a specific regulatory architecture: mandatory pre-market review, evidence-based safety thresholds, agency sign-off before commercial release. Invoking it by name signals an intent to build something structurally similar, not a voluntary pledge system.

Federal Pre-Release AI Review: Known Positions

  • NEC Director Kevin Hassett (for): Publicly invoked FDA drug-testing model as framework; stated AI must be proven safe before release
  • VP JD Vance (for): Reportedly expressed alarm over advanced AI model capabilities; specific context not independently confirmed
  • CAISI Participant Labs (neutral): Signed voluntary safety commitments; posture toward mandatory framework not yet publicly stated

This follows the trajectory established over the past two weeks. The White House was reportedly drafting a mandatory pre-release AI review executive order as of May 8. A day later, reporting on the voluntary AI safety era ending and the CAISI structural shift indicated the administration was moving from commitments to requirements. Hassett’s Fox Business statement is the named-official confirmation of that direction.

Vice President Vance has reportedly expressed alarm over the capabilities of advanced AI models, according to multiple reports. The specific context of those concerns (which models, which briefings) isn’t confirmed in available sources, and the causal link between any particular capability assessment and this policy shift should be treated as reported, not established fact. Axios reporting characterizes the administration’s posture shift as driven by concerns about advanced model capabilities and national security implications.

The catch is that “proven safe” has no operational definition yet. The FDA comparison is useful as a directional signal and deeply problematic as a literal template. Drug safety reviews operate on decades of methodology, defined endpoints, and a dedicated agency with statutory authority. None of that infrastructure exists for AI models. The administration is naming the destination without having mapped the road.

Unanswered Questions

  • Which models and developers fall within the scope of a mandatory pre-release review?
  • Which regulatory body would administer reviews, and does it have statutory enforcement authority?
  • What does “proven safe” mean operationally? What evidence standard, and what methodology?

Watch for three things. First, whether a formal executive order follows in the coming weeks and whether its scope covers only frontier labs or extends to enterprise model deployers. Second, how the CAISI participants, the labs that signed voluntary safety commitments, respond to a mandatory framework, given that some reportedly have friction with the current federal access architecture. Third, whether Congress moves to give a regulatory body statutory authority to conduct these reviews, or whether this remains an executive-branch construct.

The real question is who administers the review. If the answer is NIST, that’s a standards body with limited enforcement authority. If the answer is a new agency or an empowered existing one, the compliance implications for frontier AI developers change dramatically. Hassett named the destination. The architecture of getting there is still being built.
