Regulation Deep Dive

Who Must Do What Under a Mandatory Federal AI Review: Mapping the Architecture, the Stakeholders, and the Compliance...

The White House now has a named model for federal pre-release AI oversight (the FDA drug-testing framework) and a named official on record endorsing it. What it doesn't have yet is a regulatory body with statutory authority to run it, a defined scope of which models and developers it covers, or an operational definition of what "proven safe" actually requires. This deep dive maps the stakeholder landscape, traces the policy trajectory that produced this moment, and identifies the specific open questions that compliance teams at AI developers need to start answering now.

Key Takeaways

  • NEC Director Hassett's FDA drug-testing analogy is the first on-record articulation by a named official of the administration's intended pre-release AI review framework; it is a directional signal, not enacted policy
  • The FDA model implies a regulatory architecture (statutory authority, defined safety standards, a dedicated reviewing body) that doesn't yet exist for AI; building it requires either an EO with legal hooks into existing agency authority or new congressional legislation
  • CAISI participants face the most immediate exposure: their existing voluntary commitments are the most likely foundation for a mandatory framework, and the gap between what they've committed to and what a formal review requires is the compliance gap to map now
  • The unanswered question that determines compliance burden for the entire AI developer ecosystem: which regulatory body gets statutory authority to run pre-release reviews, and what evidence package satisfies "proven safe"

“Released to the wild after they’ve been proven safe, just like an FDA drug.”

Kevin Hassett, NEC Director

Mandatory Pre-Release AI Review: Stakeholder Positions

  • NEC Director Kevin Hassett (for): on record invoking the FDA drug-testing model as the intended framework; framing confirmed via Fox Business
  • VP JD Vance (for): reportedly expressed alarm over advanced AI model capabilities; specific briefing context not independently confirmed
  • CAISI Participant Labs (neutral): signed voluntary safety commitments; posture toward a mandatory framework not yet publicly stated
  • Congress (neutral): no meaningful federal AI safety legislation enacted; the statutory authority question remains open

The comparison arrived in a Fox Business interview, not a Federal Register notice. That distinction matters for how compliance teams should weight it.

NEC Director Kevin Hassett told Fox Business that AI models should be “released to the wild after they’ve been proven safe, just like an FDA drug.” The quote is confirmed via MSN and Fox Business reporting. It is the first time a named, senior White House official has publicly articulated the FDA model as the intended framework for pre-release AI oversight: not a think-tank recommendation, not a congressional proposal, but an on-record statement from the director of the National Economic Council.

That’s a meaningful shift from the reporting of the prior two weeks, which described drafting activity but no public framing. Now there’s a named official and a named model. The compliance implication isn’t certainty, it’s direction. Policy teams should treat this as a strong signal, not a final rule.

How the Administration Got Here

The current posture didn’t emerge from a single event. It accumulated.

In late April and early May, reporting established that the White House was moving from voluntary safety commitments toward mandatory pre-release review architecture. The CAISI framework reached all frontier labs, and with it came reporting that the voluntary era was ending. By May 8, Axios and other outlets were reporting the White House was actively drafting a mandatory pre-release review executive order.

What appears to have accelerated this trajectory, according to prior hub coverage, was government exposure to advanced capability assessments of frontier models. The administration’s briefings on what current models can do, particularly in domains relevant to national security and critical infrastructure, reportedly produced a posture shift from “let innovation run and course-correct later” toward “gate and verify before release.” VP Vance has reportedly expressed alarm over advanced AI model capabilities, according to multiple reports; the specific briefings and their causal role in this policy shift aren’t independently confirmed in available sources and should be treated as reported context, not established fact.

The FDA comparison didn’t come from nowhere. It reflects a specific administrative instinct: that high-capability AI models pose risks that justify pre-market review by a regulatory authority, analogous to the FDA’s role in certifying pharmaceutical safety before products reach consumers. The analogy has intuitive force. It also has serious structural problems when examined closely.

What the FDA Analogy Actually Implies, and Where It Breaks

FDA drug approval rests on several pillars that don’t currently exist for AI. It has a dedicated agency with statutory authority granted by Congress (the Food, Drug, and Cosmetic Act). It has defined endpoints for what counts as safety evidence (randomized controlled trials, phase structures, adverse event reporting). It has a permanent bureaucracy with scientific staff capable of evaluating submissions. And it operates on timelines measured in years, not weeks.

US Federal AI Oversight: Voluntary to Mandatory Trajectory

  • Pre-May 2026: CAISI voluntary safety commitments; labs self-attest; no mandatory pre-release review requirement
  • Signaled direction (not yet enacted): FDA-model mandatory pre-release review; agency sign-off before commercial release; scope and administering body still undefined

Unanswered Questions

  • Which regulatory body receives statutory authority to administer pre-release reviews?
  • What evidence package satisfies 'proven safe': benchmarks, red-teaming, third-party audits, or something else?
  • Does scope extend beyond frontier labs to fine-tuned models, open-source releases, and enterprise deployments?
  • Does an EO provide sufficient legal authority to bind private developers, or is congressional legislation required?

None of that infrastructure exists for AI model review. The FDA comparison names the destination. It says nothing about how to build the road.

If the administration pursues an FDA-model framework, three structural questions determine what it actually means for AI developers:

*Who administers the review?* NIST is the most natural candidate given its existing role in the AI RMF and its technical credibility with industry. But NIST is a standards body: it has influence and convening authority, not enforcement power. Empowering NIST to conduct mandatory pre-release reviews would require statutory authorization from Congress. An executive order can direct federal agencies, but it can’t create a new regulatory regime that binds private companies without a legal hook. If the vehicle is an EO rather than legislation, the mandatory review framework may apply to federal contractors and CAISI participants first, with broader applicability dependent on congressional action.

*What counts as “proven safe”?* The FDA’s safety standard for drugs is outcome-based: clinical trial data showing the product’s risk-benefit profile in defined populations. There’s no equivalent established methodology for AI model safety. Benchmark performance doesn’t capture deployment risk. Red-teaming protocols aren’t standardized. Capability evaluations for national security implications require classified access and classified methodologies. The administration will need to define what evidence package satisfies the review, and until it does, “proven safe” is a phrase, not a standard.

*Which models and developers are in scope?* Frontier labs developing the most capable foundation models are the obvious first tier. But the scope question gets complicated quickly. Does it cover fine-tuned versions of foundation models? Open-source releases? Enterprise models deployed internally? The EU AI Act addressed scope through a risk classification system; the US framework has no equivalent published architecture yet.
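None of these three questions has an official answer yet, but compliance teams can still track them as parameters. Below is a minimal, purely hypothetical Python sketch, not any proposed rule: the agency name, scope tiers, and evidence labels are invented placeholders for stress-testing how each open variable would change a developer's exposure.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ScopeTier(Enum):
    """Hypothetical scope tiers; no official US classification exists yet."""
    FRONTIER_FOUNDATION = auto()
    FINE_TUNED = auto()
    OPEN_SOURCE_RELEASE = auto()
    INTERNAL_ENTERPRISE = auto()

@dataclass
class ReviewScenario:
    """One guess at the framework's open parameters, revised as facts land."""
    administering_body: str         # open question 1: NIST? CISA? a new office?
    covered_tiers: set[ScopeTier]   # open question 3: which developers/models?
    required_evidence: set[str]     # open question 2: what is 'proven safe'?

    def in_scope(self, tier: ScopeTier) -> bool:
        return tier in self.covered_tiers

    def evidence_gaps(self, docs_on_hand: set[str]) -> set[str]:
        # What a lab would still need to produce under this scenario.
        return self.required_evidence - docs_on_hand

# Illustrative EO-first scenario: frontier foundation models only.
eo_scenario = ReviewScenario(
    administering_body="NIST (hypothetical)",
    covered_tiers={ScopeTier.FRONTIER_FOUNDATION},
    required_evidence={"benchmarks", "red_team_report", "third_party_audit"},
)

print(eo_scenario.in_scope(ScopeTier.FINE_TUNED))
# False: fine-tuned models fall outside this particular scenario.
print(sorted(eo_scenario.evidence_gaps({"benchmarks", "red_team_report"})))
# ['third_party_audit']
```

Re-running the same portfolio against several scenarios (EO-first and narrow, legislation-backed and broad) shows which obligations are robust across outcomes and which hinge entirely on the routing decision.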

The Stakeholder Landscape

The stakeholder positions currently visible around a mandatory pre-release review framework reflect early positioning, not fixed stances.

The CAISI participants, the labs that signed voluntary safety commitments, have existing relationships with the federal government on AI safety. Their posture toward a mandatory framework will depend heavily on what “mandatory” requires operationally. Labs that have already built internal safety evaluation processes may find a formalized review less burdensome than the uncertainty of an undefined standard. Labs that have experienced friction with federal access architecture (reported in prior hub coverage) face a more complicated calculus.

Pre-Release Review Preparation: Actions for AI Developers Now

  • Map model release pipeline against a hypothetical pre-release review trigger threshold
  • Audit existing safety evaluation documentation for gaps vs. a formal evidence submission standard
  • Brief government affairs and legal teams on FDA-model framing before EO or legislation lands

Analysis

The NIST AI RMF gives the administration a ready-made technical foundation to point to, but NIST is a standards body, not an enforcement agency. Watch for whether any EO language attempts to vest NIST with new authority, refers to a new Office of AI Safety, or routes review responsibility through an existing agency like CISA or the Commerce Department. That routing decision is where the compliance architecture actually lives.

Congressional posture is the biggest open variable. An EO-based mandatory review has limits. A statutory framework would require legislation, and Congress hasn’t moved meaningful AI safety legislation at the federal level. The Hassett framing may be designed in part to create public pressure for congressional action: naming the model before the law exists can shape the legislative conversation.

Industry groups representing broader technology companies have not yet publicly responded to the FDA framing specifically. Their response will matter: the CAISI framework covers frontier labs, but a broader mandatory review regime could affect the much larger ecosystem of companies building on foundation model APIs.

What Compliance Teams Should Do Now

Don’t expect formal requirements in the next 30 days. An executive order could follow quickly or could be months away. Legislation is a longer timeline. But the signal from a named NEC Director in a Fox Business interview is strong enough to justify preparation, not just monitoring.

Three actions are worth starting now. First, map your model release pipeline against a hypothetical pre-release review requirement: which releases would trigger review, what’s the lead time, and where are the documentation gaps? Second, review your existing safety evaluation documentation: what you already produce for internal review or CAISI commitments is the foundation of whatever evidence package a federal review might require. Third, flag this for your government affairs and legal teams now, before the EO or legislation lands. The time to brief stakeholders is before the rule is final, not after.
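A spreadsheet works for the first action, but a short script makes the mapping repeatable as details firm up. The sketch below is hypothetical throughout: the compute-based trigger threshold, required document list, and release names are invented placeholders, since no actual trigger or evidence standard has been announced.

```python
from dataclasses import dataclass

# Hypothetical trigger: training compute above this threshold requires review.
# The figure is an illustrative placeholder, not any announced standard.
REVIEW_TRIGGER_FLOP = 1e26

# Invented evidence checklist; swap in whatever the eventual rule specifies.
REQUIRED_DOCS = {"capability_eval", "red_team_report", "incident_response_plan"}

@dataclass
class PlannedRelease:
    name: str
    training_flop: float
    docs_on_hand: set[str]
    lead_time_weeks: int

    def triggers_review(self) -> bool:
        return self.training_flop >= REVIEW_TRIGGER_FLOP

    def doc_gaps(self) -> set[str]:
        # Documentation still missing versus the assumed evidence standard.
        return REQUIRED_DOCS - self.docs_on_hand

# Invented pipeline entries for illustration only.
pipeline = [
    PlannedRelease("model-a-next", 3e26, {"capability_eval"}, lead_time_weeks=12),
    PlannedRelease("model-a-mini", 5e24, {"capability_eval", "red_team_report"}, 6),
]

for release in pipeline:
    if release.triggers_review():
        print(f"{release.name}: review likely; missing {sorted(release.doc_gaps())}; "
              f"{release.lead_time_weeks} weeks of lead time")
    else:
        print(f"{release.name}: below hypothetical threshold; monitor only")
```

The value is less in the numbers than in the shape of the audit: once the real threshold and evidence list arrive, updating two constants re-scores the whole pipeline.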

The real question isn’t whether mandatory pre-release review is coming. Hassett’s public invocation of the FDA model, combined with the documented drafting activity of the prior two weeks, makes the directional signal unusually strong for pre-rulemaking. The question is what form it takes and who has statutory authority to run it. When those answers arrive, in an executive order, a NIST guidance document, or a congressional bill, compliance teams that have already mapped their pipeline will have a significant head start on those that haven’t.
