Twenty-five state AI laws enacted in a single year. A White House framework calling that fragmentation a problem. An executive order from December 2025 directing federal agencies to address it. A California governor issuing his own AI executive order eleven days after the federal framework dropped.
This is the current state of US AI regulation. Not a single unified system, but two competing systems trying to establish which one governs.
Understanding what compliance professionals actually need to do right now requires understanding how those two systems work, where they conflict, and why neither has settled the question.
Track One: The Legislative Framework
The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026. The document is not a law, not a rule, and not a binding directive of any kind. It is a set of nonbinding legislative recommendations to Congress, a White House statement of what a unified federal AI regulatory system should look like, submitted in the hope that Congress will act on it.
According to Holland & Knight’s legal analysis of the Framework, it prioritizes “targeted federal preemption” and explicitly cautions against “vague standards, open-ended liability and fragmented state regulation.” The same analysis confirms the Framework addresses child safety, community protections, free speech, innovation, and workforce readiness as core policy areas. Dozens of specific legislative recommendations are reportedly included, according to Global Policy Watch, though that count has not been independently verified against the primary text for this package.
The Framework’s policy structure (the precise number and organization of its thematic areas) has been characterized differently by different legal analysts, and the full PDF text is available at the White House website for direct verification. What is confirmed by multiple sources is the central thesis: the administration wants a single federal AI law that displaces the growing patchwork of state requirements.
That nonbinding status is not a minor footnote. It is the governing fact about this document. Congress receives a lot of White House policy frameworks. Not all of them become legislation. Not most of them.
Track Two: The Executive Order
Before the Framework existed, there was a different instrument. The White House issued an executive order in December 2025 titled “Ensuring a National Policy Framework for Artificial Intelligence.” That EO and the March 2026 Framework are distinct documents operating through different legal mechanisms, a distinction that matters for understanding what’s actually happening here.
An executive order directs federal agencies. The December 2025 EO, according to legal analysis of the order, directs federal agencies to address state AI laws that conflict with national AI policy. Characterizing the order’s specific operational language precisely requires review of the primary text, which is available at the White House website. What legal analysts have characterized consistently is its intent: limiting state ability to impose AI regulations that would fragment a national approach.
This is where the two-track strategy becomes visible. Track one (the Framework) asks Congress to pass a preempting federal law. Track two (the EO) uses executive authority to direct agencies to push back against conflicting state laws in the meantime. Both tracks aim at the same destination. They use fundamentally different legal tools to get there.
The State Response: Not Waiting
States have been moving faster than the federal legislative process. According to Plural Policy, a legislative tracking platform, roughly 25 AI laws had been enacted across US states in 2026 as of early April, up sharply from approximately six in mid-March. Another 27 bills had reportedly passed both chambers and were awaiting final action.
Those figures are from a single legislative tracking platform and should be treated as directional rather than definitive, but the underlying pattern is not in dispute: multiple states enacted AI requirements in the same weeks the White House was publishing a framework calling for federal supremacy.
California’s response to the federal preemption push is particularly direct. Governor Newsom issued Executive Order N-5-26 on March 30, 2026, eleven days after the White House Framework dropped, establishing AI trust and safety standards for California state government procurement, according to legal alerts from Wiley Law and Akin Gump. A California executive order imposing its own AI procurement requirements is, in practice, a statement that California intends to govern AI within its jurisdiction regardless of what federal policy recommends.
That’s not unique to California. It’s the pattern across the states enacting AI laws this year: legislative action that proceeds as if federal preemption is neither guaranteed nor imminent.
The Compliance Problem
Here is the practical situation for compliance professionals, AI product teams, and legal counsel advising US-based AI companies:
The federal preemption the White House Framework recommends does not exist yet. It requires an act of Congress. Congress has not passed a federal AI law. The Framework’s recommendations are not self-executing.
The December 2025 EO directs federal agencies to address conflicting state AI laws, but that mechanism operates through agency action, a slower and more contested process. It does not automatically suspend or invalidate state AI laws.
State AI laws are currently enforceable. The 25 laws enacted in 2026 are real compliance obligations in the jurisdictions where they apply. A company doing business in multiple states is already managing a multi-jurisdiction AI compliance environment, and that environment added roughly 19 new laws in the three weeks between mid-March and early April, if the tracking figures above hold.
The practical implication runs in the opposite direction from what the Framework’s preemption language might suggest. Waiting for federal clarity before building compliance infrastructure is not a sound strategy. The federal legislative timeline is uncertain; state compliance deadlines are not. Companies that have deferred state-law compliance work on the assumption that federal preemption is coming are taking a documented legal risk.
What to Watch
Three developments would materially change this picture. First: any congressional committee advancing a federal AI bill with explicit preemption language would signal that the Framework’s legislative recommendations are gaining traction. Second: any federal agency action, under the December 2025 EO, challenging a specific state AI law in court would test whether the executive track has teeth. Third: any state or federal court ruling directly on the preemption question would begin to set the legal precedent that neither the Framework nor the EO has yet established.
Until one of those developments occurs, compliance professionals are operating in a system where federal ambition is real but federal authority to preempt state AI laws has not been legally established.
TJS Synthesis
The two-track federal strategy is coherent as a political posture. It is not yet coherent as a legal framework. A nonbinding legislative recommendation and an executive directive to federal agencies represent two different bets on how federal preemption eventually gets established, but neither has crossed the line into settled law. Meanwhile, state AI legislation is a present-tense compliance obligation.
The most defensible posture for AI companies right now is not to wait for federal clarity. It is to build compliance infrastructure against existing state law requirements while monitoring the federal developments that could eventually rationalize the landscape. The Framework tells you what Washington wants. Twenty-five enacted state laws tell you what compliance teams need to manage today.