The Trump administration sent Congress a blueprint for federal AI governance on April 24, and its clearest message is jurisdictional: states should not regulate AI development, and the federal government intends to say so explicitly.
What the framework says
The White House document, titled “Ensuring a National Policy Framework for Artificial Intelligence,” recommends that Congress prohibit states from imposing requirements on AI developers and from assigning liability to AI developers for third-party misuse of their systems. The document states directly: “AI companies must be free to innovate without cumbersome regulation.” That language isn’t ambiguous; it is the administration’s stated position, in primary-source form.
The framework carves out narrow exceptions: states retain authority over child safety, anti-fraud, and consumer protection, areas where the administration isn’t prepared to strip state police powers entirely. Those carve-outs, and their scope, are confirmed in Freshfields’ legal analysis of the document.
On agency structure, the framework favors sector-specific oversight through existing regulators (the FTC, FCC, and SEC) rather than establishing a new centralized AI regulator, according to reporting on the document. That framing is directionally consistent with the administration’s broader deregulatory posture, but it should be treated as a policy preference, not a statutory prohibition on future agency creation.
Why this matters
The stakes of this framework aren’t hypothetical. California, Connecticut, and Colorado have each moved forward with AI-specific legislation that would impose requirements the White House framework wants Congress to preempt. California’s SB 53 and AB 853 are named in the document directly. California signed legislation the same week the White House pushed preemption; the jurisdictional tension is active, not theoretical.
For compliance teams at AI companies operating across multiple states, this framework creates a decision point: do you build compliance programs against state-law requirements that may be preempted, or do you wait for federal clarity that may never arrive on a useful timeline? The framework proposes; Congress disposes. Nothing in this document has legal force yet.
Context
Federal preemption of state regulatory authority has a long history in US technology law. Section 230 preempted state liability for platforms. COPPA preempted state children’s privacy laws. In financial services, the National Bank Act preemption debate ran through the courts for decades. AI preemption would follow a similar arc: contested, litigated, and ultimately settled by courts interpreting whatever Congress actually passes. The White House framework is the opening position, not the final word. See our stakeholder map of the competing actors writing US AI policy.
What to watch
The critical variable is Congressional appetite. The framework needs legislation to have legal effect – preemption doesn’t happen by executive framework alone. Watch for companion bills in the House and Senate, and watch how state attorneys general respond. If California challenges a preemption bill in court, the litigation timeline extends well beyond any near-term compliance horizon.
TJS synthesis
The White House is making three simultaneous bets: that Congress will act, that states will defer, and that existing sector regulators can handle AI-specific risks without new authority. All three are contestable. The framework is real; its legal effect is not, yet. Compliance teams should map their current state-law obligations and model two scenarios, preemption passes and preemption fails, before Q3 2026.