The question of who governs AI in the United States doesn’t have a clean answer. It has three answers, each from a different actor, each backed by a different kind of power.
That’s not a temporary condition. It’s the operating reality for any compliance team managing AI systems across state lines in 2026.
The Stakes: Why Governance Fragmentation Is a Compliance Problem
When a company deploys an AI system across the United States, it currently faces a patchwork of state-level AI and data privacy laws, some enacted, some pending, some proposed. Each imposes different requirements on different types of systems. Legal teams working in this environment have been managing overlapping obligations with no clear hierarchy.
The White House’s National Policy Framework and California’s Executive Order N-5-26 don’t resolve that fragmentation. They deepen it. One argues for a single federal standard. The other demonstrates exactly why states aren’t waiting for one.
The Federal Position: Preemption as Policy
On March 20, 2026, the White House released the National Policy Framework for Artificial Intelligence, a document addressed to Congress, not to agencies or businesses. Its core governance recommendation: Congress should “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations.”
Three things about this position deserve precision.
First, the Framework is not law. It creates no binding obligations. As legal analysts at Ballard Spahr have noted, it’s a set of legislative recommendations, a policy direction, not an enforceable mandate. For preemption to take effect, Congress must pass legislation that explicitly displaces conflicting state law.
Second, the word “undue” is doing heavy work in that sentence. Preemption doctrine in federal law doesn’t automatically sweep away all state regulation in a given domain. It sweeps away state laws that conflict with federal law, or that Congress explicitly decides to displace. The definition of what constitutes an “undue burden” will be negotiated in the legislative drafting process, and that process has not yet started.
Third, the Framework is reported to recommend against creating a new dedicated federal AI regulatory body. If accurate, federal AI governance would flow through existing agencies (the FTC, NIST, sector-specific regulators) rather than through a centralized authority. Who enforces federal AI standards matters as much as what those standards say.
The Framework includes substantive provisions on child protection: age-verification requirements for AI services likely to be accessed by minors, and parental control tools. These provisions represent an area where federal action enjoys broader bipartisan support than the preemption question and may move faster through Congress.
The California Counter-Move: Procurement as Regulation
California didn’t wait. On March 30, 2026, Governor Newsom signed Executive Order N-5-26, directing state agencies to develop new AI vendor certification standards and restructure California’s procurement processes for AI technologies.
The EO’s mechanism is deliberate. California isn’t passing legislation that subjects private AI companies directly to new legal requirements. It’s conditioning access to the California state market on meeting certification standards that state agencies will develop. Any vendor seeking state contracts must qualify. That’s procurement power used as a regulatory instrument.
The reach is national in effect. California doesn’t purchase from only California-based vendors. The certification standards that emerge from this EO will apply to any company, headquartered anywhere, that wants to sell AI technology to California state agencies. California’s state budget exceeds the GDP of most countries. The market access at stake is significant.
California’s state agencies are reportedly directed to submit implementation recommendations within 120 days of the EO’s signing, placing an approximate deadline in late July 2026. The standards themselves won’t be fully defined until agencies complete that process. But the timeline is moving.
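If you want that deadline on a tracking calendar rather than in prose, the arithmetic is trivial to script. A minimal sketch in Python, assuming the reported March 30 signing date and 120-day window both hold:

```python
from datetime import date, timedelta

# Reported EO N-5-26 signing date and 120-day recommendation window.
# Both values are assumptions taken from the reporting above.
signed = date(2026, 3, 30)
deadline = signed + timedelta(days=120)

print(deadline)  # 2026-07-28, i.e. late July 2026
print(f"{(deadline - date.today()).days} days remaining")
```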
The Industry Actor: OpenAI’s Policy Blueprint
There’s a third force shaping this governance space. On April 6, 2026, OpenAI published “Industrial Policy for the Intelligence Age”, a comprehensive policy proposal that positions the company as an active participant in designing US AI governance, not merely a subject of it.
OpenAI’s document addresses some of the same questions as the White House Framework: how to structure AI access, how to handle AI’s economic impact, how to monitor dangerous AI applications. It also goes further in areas the Framework touches only briefly, proposing incentives for a four-day workweek and support mechanisms for workers displaced by AI, framing these as government obligations the company believes should accompany widespread AI adoption.
The significance here isn’t whether Congress will adopt OpenAI’s specific proposals. It’s that the company most associated with the current AI capability moment is now openly participating in the governance design process, advancing positions that would shape the regulatory environment in which it operates. That’s a legitimate form of policy participation. It’s also a form of influence worth tracking separately from government actors.
Where OpenAI’s proposal intersects with the White House Framework, particularly on monitoring dangerous AI applications, there’s signal about where industry-government alignment is possible. Where they diverge, there’s signal about where lobbying pressure will concentrate.
The Conflict Points: Where Federal and State Positions Directly Contradict
The preemption question is the central collision. If Congress passes legislation preempting state AI laws that impose undue burdens, some portion of California’s state-level AI regulatory activity, including, potentially, the certification standards emerging from EO N-5-26, could be displaced.
But preemption isn’t total. Courts apply a presumption against preemption in domains where states have historically exercised their police power, and federal statutes frequently carve out areas of shared concern. Child protection provisions in the White House Framework align closely with California’s existing approach to children’s data. Those provisions may survive any preemption legislation, given bipartisan political support.
The deeper conflict is structural. The White House Framework envisions a compliance landscape where businesses deal with one federal standard. California’s EO envisions a landscape where state procurement power continues to function as a parallel regulatory lever. Both visions can coexist legally, but they produce compliance costs that stack, not substitute.
What Compliance Teams Should Actually Do
There’s no resolved framework to build to yet. That’s the honest assessment.
What there is: a defined timeline (California’s approximately 120-day agency recommendation window, ending around late July 2026), a legislative question (whether Congress introduces federal AI legislation with preemption language in the next two quarters), and a growing industry advisory record (OpenAI’s document is one of several policy positions now on the table from major AI developers).
A defensible compliance posture right now has three components.
Monitor California’s certification standards process actively. The recommendation period run by the Department of General Services (DGS) and the California Department of Technology (CDT) sets the shape of what vendor certification will actually require. If you sell, or plan to sell, AI technology to California state agencies, those standards define your next compliance threshold. Track the process; don’t wait for the final publication.
Build the federal preemption question into your legislative monitoring calendar. Watch for AI bills introduced in Congress with preemption provisions. When they appear, assess whether they would displace California’s certification approach or carve it out. The answer determines whether you’re managing one compliance regime or two.
Document your current AI risk management practices against the NIST AI Risk Management Framework. The White House Framework references NIST’s work as a baseline, and California’s certification standards are likely to reference it as well. NIST AI RMF alignment is the closest thing to a hedge that exists right now: it positions organizations to demonstrate responsible AI governance to whichever actor ultimately holds authority.
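What does documenting against the NIST AI RMF look like in practice? One lightweight starting point is a living inventory that tags each documented practice to one of the RMF’s four core functions (GOVERN, MAP, MEASURE, MANAGE) and flags any function with no evidence behind it. A minimal sketch in Python; the practice names are hypothetical placeholders, not prescribed controls:

```python
from collections import defaultdict

# The four core functions of the NIST AI Risk Management Framework (AI RMF 1.0).
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

# Hypothetical inventory: (documented practice, RMF function it evidences).
# Replace with your organization's actual documented practices.
practices = [
    ("AI use policy signed by leadership",       "GOVERN"),
    ("System inventory with intended-use notes", "MAP"),
    ("Pre-deployment bias evaluation",           "MEASURE"),
    # No documented incident-response runbook yet -> MANAGE shows as a gap.
]

coverage = defaultdict(list)
for practice, function in practices:
    coverage[function].append(practice)

for function in RMF_FUNCTIONS:
    items = coverage.get(function) or ["NO DOCUMENTED PRACTICE (gap)"]
    print(f"{function}:")
    for item in items:
        print(f"  - {item}")
```

The output is a gap list, which is exactly the artifact a compliance team can hand to whichever regulator, federal or state, eventually asks for evidence of responsible AI governance.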
The US AI governance contest is not resolved. The compliance teams that build durable postures today are the ones that stop waiting for resolution and start mapping exposure to each actor’s current power.