The prior brief on this hub asked what law firms were telling compliance teams about federal preemption while the policy outcome remained unresolved. That question now has a partial answer. The White House released its National AI Policy Framework on March 20, 2026. It isn’t a law. It isn’t a regulation. It is a nonbinding set of legislative recommendations, and that distinction matters less than most compliance teams seem to assume, because nonbinding administration positions have a way of becoming binding legislation when Congress picks them up.
Three federal AI governance models are now visible. They are not hypothetical. They are competing legislative and policy positions, each with named advocates, each with different implications for the compliance infrastructure that companies operating under state AI law have spent the past two years building.
Model 1: The White House Framework, Consolidation with Guardrails
The Framework organizes the administration’s priorities around seven themes: child safety, community protections, free speech, innovation, intellectual property, workforce readiness, and targeted federal preemption. Per Holland & Knight’s analysis, the Framework explicitly cautions against vague standards, open-ended liability, and fragmented state regulation. Legal analysts describe the proposed preemption threshold as targeting regulations that impose what the administration characterizes as undue burdens on innovation, though that specific framing reflects law firm characterization of the Framework’s language rather than confirmed text from the primary document.
The Framework’s operational intent is consolidation. It wants one federal AI law, not fifty state regimes. The policy logic is coherent: patchwork regulation increases compliance costs for multistate operators, creates inconsistent protections for consumers depending on where they live, and complicates the ability of U.S. companies to present a unified posture in international markets where the EU AI Act and Japan’s AI Promotion Act are defining what “responsible AI” governance looks like.
What the Framework doesn’t do is define enforcement. It sets priorities. Congress writes the rules.
Model 2: The TRUMP AMERICA AI Act, Legislative Implementation
According to Holland & Knight’s analysis, the Framework aligns with Sen. Marsha Blackburn’s updated TRUMP AMERICA AI Act. This bill is named in law firm analysis as the legislative vehicle most closely tracking the Framework’s preemption position. The Act’s specific provisions require independent research to characterize accurately; this deep-dive notes the bill’s existence and its alignment with the Framework as confirmed through T2 legal analysis. Detailed legislative text and section-by-section analysis will be incorporated when the Wire delivers that research. A `[COVERAGE-GAP]` flag has been raised with the Wire team for next cycle.
What law firm analysis confirms: this is the administration’s preferred congressional vehicle. Its advancement is the primary scenario under which broad preemption of state AI laws becomes a near-term compliance risk.
Model 3: The GUARDRAILS Act, The Counter-Position
Legal analysts note that competing proposals, including the GUARDRAILS Act, represent opposing congressional positions on federal preemption. Where the Framework and its companion legislation push toward consolidation, the GUARDRAILS Act represents the position that state-level AI protections should be preserved, or strengthened, rather than preempted. As with the TRUMP AMERICA AI Act, the GUARDRAILS Act’s specific provisions await Wire research for detailed characterization. The bill’s existence and its role as the primary legislative opposition to broad preemption are confirmed through law firm analysis.
The GUARDRAILS Act matters for compliance teams even if it doesn’t pass. If Congress is actively debating it, the final federal AI law, whatever form it takes, will likely reflect compromises between these positions. A compliance program built around the assumption that preemption will be total and immediate is probably wrong. A program built on the assumption that nothing will change is also probably wrong.
What State-Law Compliance Teams Lose Under Broad Preemption
Colorado’s AI Act requires impact assessments for consequential AI decision-making and disclosure to consumers. Illinois’s AIPCA covers employment decisions. Texas’s law imposes obligations on high-risk AI system developers. Each of these frameworks took years to pass, was shaped by state-specific political environments, and was designed to address harms that federal policymakers have historically moved more slowly on.
If broad federal preemption passes and the resulting federal law sets a lower floor than existing state requirements, companies that built compliance programs around Colorado or Illinois standards face a decision: maintain the higher standard voluntarily, or reduce compliance expenditure to the federal minimum. Neither answer is automatic. Both require deliberate policy choices that leadership, legal, and compliance teams need to make together, before the law changes, not after.
There’s also a simpler operational problem. State-law compliance programs are usually embedded in vendor contracts, employment policies, and product development workflows. Unwinding them isn’t a policy decision; it’s an operational project. Companies that wait for the law to change before assessing the unwind cost will be playing catch-up.
What Compliance Teams Should Do Now
The Framework is nonbinding. State laws are fully operative. Nothing requires immediate action in the narrow legal sense. But those three facts are reasons to prepare now, while the timeline is still yours to set, not reasons to set a calendar reminder for when the law might change.
First, map your current state-law obligations against what the Framework’s priorities would require at the federal level. Where your current program exceeds what the Framework envisions, that excess represents either durable best practice or contingent compliance cost, and you need to know which before a legislative timeline forces the question.
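That mapping exercise can be made concrete as a simple gap-map. The sketch below is purely illustrative: the obligation names, source laws, and in-Framework flags are hypothetical placeholders, not legal analysis, and the classification logic only encodes the distinction drawn above between durable best practice and contingent compliance cost.

```python
# Illustrative gap-map: current state-law obligations vs. Framework priorities.
# All entries and flags below are hypothetical examples for structure only.
from dataclasses import dataclass

@dataclass
class Obligation:
    name: str
    source: str          # which state law imposes it
    in_framework: bool   # would a Framework-aligned federal law also require it?

def classify(ob: Obligation) -> str:
    """Obligations a federal law would also require are likely durable;
    obligations it would omit are contingent on the preemption outcome."""
    return "durable best practice" if ob.in_framework else "contingent compliance cost"

obligations = [
    Obligation("impact assessments", "Colorado AI Act", True),
    Obligation("consumer disclosure", "Colorado AI Act", True),
    Obligation("employment-decision rules", "Illinois AIPCA", False),
]

gap_map = {ob.name: classify(ob) for ob in obligations}
for name, status in gap_map.items():
    print(f"{name}: {status}")
```

The point of the structure is the third column: every obligation in the program gets an explicit durable-or-contingent label before a legislative timeline forces the question.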
Second, design your compliance infrastructure for modularity. Programs built as integrated wholes are hard to modify. Programs built as discrete components, one module for bias auditing, one for impact assessments, one for disclosure obligations, can be adjusted by component when the legal landscape shifts.
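The component-based design can be sketched in a few lines. This is a minimal illustration, assuming hypothetical module names and check logic; the only point it demonstrates is that retiring one requirement, say under a preemption scenario, touches one component and leaves the rest of the program intact.

```python
# Minimal sketch of a modular compliance program. Module names and checks
# are hypothetical; the structure is the point, not the rules themselves.
from typing import Callable, Dict

class ComplianceProgram:
    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, check: Callable[[dict], dict]) -> None:
        self.modules[name] = check

    def retire(self, name: str) -> None:
        # Preemption scenario: drop one component without touching the rest.
        self.modules.pop(name, None)

    def run(self, system_record: dict) -> dict:
        return {name: check(system_record) for name, check in self.modules.items()}

program = ComplianceProgram()
program.register("bias_audit", lambda rec: {"passed": rec.get("audited", False)})
program.register("impact_assessment", lambda rec: {"filed": rec.get("assessment_filed", False)})
program.register("disclosure", lambda rec: {"disclosed": rec.get("notice_given", False)})

# A federal law preempts the state disclosure rule: one module changes, two survive.
program.retire("disclosure")
results = program.run({"audited": True, "assessment_filed": True})
```

An integrated program would entangle all three checks in one workflow; the registry design makes each one separately adjustable when the legal landscape shifts.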
Third, watch two things: the Council of the EU’s response to the EU AI Act deadline restructuring (covered separately in today’s package), and floor scheduling for the TRUMP AMERICA AI Act. These are the two near-term legislative events with direct compliance timeline implications. Neither is imminent. Both are closer than they were six months ago.
TJS synthesis. The Framework signals a direction, not a deadline. That’s the key calibration. Organizations that treat this as a fire drill, scrambling to rebuild programs before a law that hasn’t passed, will burn resources on a contingency. Organizations that treat it as background noise, continuing to build state-law compliance programs as if the landscape is stable, are making a different kind of mistake. The right posture is informed optionality: know your current obligations, understand what changes under each of the three models, build your program to be adaptable, and put a named person in your compliance function on legislative monitoring. The question isn’t whether federal AI law is coming. It’s what it will look like when it arrives.