Two clocks are running simultaneously, and they don’t coordinate with each other.
On March 20, 2026, the White House released “A National Policy Framework for Artificial Intelligence”, a set of legislative recommendations proposing that Congress establish a unified federal AI framework that would preempt state laws deemed to impose undue burdens on AI. Five days later, on March 25, the Georgia House of Representatives passed SB 540, a chatbot disclosure and child safety bill that had already cleared the Georgia Senate on March 6. Georgia’s bill is at the governor’s desk. The federal preemption framework is at Congress’s door, awaiting legislation that hasn’t been drafted.
This is the compliance environment that multi-state operators are navigating in late March 2026, and the two developments aren’t resolving toward each other. They’re accelerating in parallel.
What the federal framework actually proposes, and what it leaves open
Parker Poe’s analysis, working directly from the document, confirms: “The document is not itself law, and it does not create binding obligations.” That’s the starting point for any analysis. The White House framework is a policy recommendation to Congress, not an executive order, not a regulation, not an enacted statute.
DLA Piper’s analysis confirms the administration’s stated intent: the White House looks forward to working with Congress to advance corresponding legislation in the coming months. The framework covers multiple areas of AI governance; sources report six to seven distinct policy sections, depending on how they’re counted. Those sections span child safety, data center infrastructure, consumer protection from AI scams, national security considerations for frontier AI, copyright, free speech, testing environments, and workforce development.
The preemption structure is the part that changes compliance calculus. The framework recommends that Congress establish federal AI policy that, per the administration’s stated intent, would preempt state laws imposing undue burdens on AI, while preserving state authority in other domains. Holland & Knight and DLA Piper both confirm this framing. Neither source, and no available primary source, specifies where the line falls between preempted and preserved domains. That boundary doesn’t exist yet in legal terms. It will only exist when Congress translates the framework’s intent into statutory language.
Parker Poe noted the administration is “framing AI policy as both a competitiveness issue and a governance issue.” That dual framing is a meaningful signal. When the federal government ties deregulation to competitiveness, the preemption push tends to be sustained, not symbolic.
What states are actually doing right now
Georgia’s official legislative records confirm that SB 540 passed the state Senate on March 6 and the House on March 25. It establishes chatbot disclosure requirements and child safety protections for conversational AI services. Georgia isn’t waiting for federal clarity. Nor is it alone.
IAPP’s analysis of the 2026 state legislative session independently confirms that chatbot safety and AI transparency legislation is advancing in both Republican- and Democratic-led states. The legislative momentum is bipartisan and multi-state. It predates the White House framework and shows no sign of pausing for it.
California has two significant bills in motion. California SB 574, which addresses attorney AI protections, confidentiality, accuracy, and bias provisions, had previously passed the California Senate. Its current procedural status as of late March 2026 is disputed between a state-level source and a March 27 legislative tracking report; the discrepancy couldn’t be resolved from available sources. California SB 813, which would establish a state AI Standards and Safety Commission, was reportedly amended and re-referred to committee as of March 27, according to a single tracking source.
Illinois has multiple AI bills advancing, including the Artificial Intelligence Provenance Data Act (SB3263, confirmed via LegiScan as active in the 104th General Assembly) and reportedly a Transparency in Frontier AI Act, per a March 27 legislative update. The second bill name hasn’t been independently confirmed. Kansas HB 2594 reportedly passed both chambers by mid-March 2026, per one legislative tracking source; that claim hasn’t been independently corroborated.
The partially confirmed details matter less than the overall picture. Even with individual bill statuses uncertain, the scale of state legislative activity is confirmed across multiple independent sources. Organizations with multi-state AI deployments are watching new compliance obligations materialize at the state level across both chambers and both parties.
The compliance planning tension: three postures
> *This section describes analytical options for planning purposes. It doesn’t constitute legal advice. Consult qualified legal counsel for guidance specific to your organization.*
Compliance programs navigating the federal-vs.-state dynamic have roughly three postures available. None is definitively correct given current uncertainty. Each has tradeoffs.
Posture 1: Build fully for current state law, treat federal preemption as speculative.
The case for this: the White House framework creates no legal obligations today. State laws that pass and are signed create enforceable obligations. Georgia SB 540 may be law within days. California SB 574, if enacted, would affect every law firm using AI tools. Building for confirmed state obligations is defensible.
The tradeoff: if Congress acts on the framework and preempts a significant portion of state AI law, organizations may have invested in compliance infrastructure that becomes partially redundant. The more modular and documented that infrastructure is, the lower the switching cost.
Posture 2: Hedge. Build for state law, but design for portability.
The case for this: compliance programs built on documented, modular frameworks can adapt faster when federal standards emerge. The specific state obligations may change, but the operational muscles (data mapping, vendor assessment, incident response, documentation practices) transfer. The hedge isn’t about waiting. It’s about building in a way that doesn’t lock you into any single regulatory configuration.
The tradeoff: modular design requires more upfront planning and documentation discipline than a targeted, state-specific buildout. The payoff is optionality.
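To make the “modular and portable” idea concrete, here is a minimal sketch of what such a design could look like in code. Everything here is hypothetical (the control names, the registry class, the statute labels are illustrative, not drawn from any real compliance product): each control records the statutes that require it, so if one source law is later preempted, you can identify exactly which controls are orphaned without rebuilding the program.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """A single compliance control, tagged with the laws that require it."""
    name: str
    sources: set[str] = field(default_factory=set)  # e.g. {"GA SB 540"}

class ControlRegistry:
    """Hypothetical registry mapping controls to their legal sources."""

    def __init__(self) -> None:
        self.controls: list[Control] = []

    def add(self, name: str, sources: set[str]) -> None:
        self.controls.append(Control(name, set(sources)))

    def drop_source(self, statute: str) -> list[str]:
        """Simulate preemption of one statute; return the names of
        controls left with no remaining legal source."""
        orphaned = []
        for control in self.controls:
            if statute in control.sources:
                control.sources.discard(statute)
                if not control.sources:
                    orphaned.append(control.name)
        return orphaned

# Illustrative entries: a disclosure control required only by one state law,
# and an inventory control required by both a state bill and internal policy.
reg = ControlRegistry()
reg.add("chatbot_disclosure_banner", {"GA SB 540"})
reg.add("ai_system_inventory", {"IL SB3263", "internal policy"})

# If GA SB 540 were preempted, only the banner control would be orphaned;
# the inventory survives because internal policy still requires it.
print(reg.drop_source("GA SB 540"))  # ['chatbot_disclosure_banner']
```

The design choice that matters is the many-to-many tagging: a control backed by multiple sources survives the loss of any one of them, which is precisely the optionality Posture 2 pays for up front.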
Posture 3: Monitor and defer non-urgent state compliance investments until federal direction clarifies.
The case for this: for organizations where state AI bills don’t yet create near-term enforcement risk, waiting for federal clarity may be reasonable. If Congress acts in the administration’s stated timeframe of “coming months,” the regulatory picture could clarify before some state bills even pass.
The tradeoff: this posture requires accurate assessment of actual near-term enforcement risk in each relevant state. Georgia SB 540 is at the governor’s desk; organizations with consumer-facing AI services in Georgia can’t defer that one. California SB 574 directly affects legal professionals. Deferral that ignores near-term state obligations isn’t a strategy; it’s an oversight.
What to watch
The most immediate trigger is Georgia’s governor. A signature on SB 540 would make Georgia the first state in this legislative cycle to enact a specific chatbot disclosure and child safety law for conversational AI. That’s an enforcement reality, not a planning scenario.
Congressional response to the White House framework is the larger variable. Watch for bill introductions that mirror the framework’s structure, particularly any that specify which state laws would be preempted and in which domains. The administration’s “coming months” language suggests the legislative push is near-term, not aspirational. A House or Senate AI bill introduction would be the signal that the preemption dynamic is entering a concrete legislative phase.
California’s committee process for SB 813 and the resolution of SB 574’s procedural uncertainty both matter for the legal tech and multi-state compliance audience. Illinois’s SB3263 is a provenance data requirement with cross-industry reach, worth tracking independently of the federal preemption question.
TJS synthesis
The White House framework and accelerating state legislation aren’t in tension because one side is moving faster than the other. They’re in tension because they’re moving on different tracks toward potentially incompatible destinations, and no one knows yet whether or when those tracks will merge. Organizations building AI compliance programs right now are doing so without knowing which portions of their current state-law obligations will survive a federal framework and which won’t. The most defensible position isn’t certainty about which posture to adopt. It’s building the documentation and inventory discipline that makes rapid recalibration possible when the federal picture clarifies. The organizations that will navigate this transition most cleanly are the ones that can answer, on short notice, which of their compliance obligations derive from which state laws, and map those to any federal preemption framework the moment congressional language becomes available.
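The “answer on short notice” capability described above can start as something very simple: a queryable obligation inventory keyed by the state law each obligation derives from. A minimal sketch follows, with entirely hypothetical entries (the obligation descriptions are illustrative paraphrases, not statutory text), showing the two queries that matter: grouping obligations by source law, and estimating exposure under a hypothetical preemption scenario.

```python
from collections import defaultdict

# Hypothetical obligation inventory: (obligation, source statute, state).
OBLIGATIONS = [
    ("disclose chatbot identity to users", "SB 540", "GA"),
    ("minor-safety safeguards for conversational AI", "SB 540", "GA"),
    ("attorney confidentiality review of AI tools", "SB 574", "CA"),
    ("provenance data for AI-generated media", "SB3263", "IL"),
]

def obligations_by_source(records):
    """Group obligations under the (state, statute) pair they derive from."""
    index = defaultdict(list)
    for obligation, statute, state in records:
        index[(state, statute)].append(obligation)
    return dict(index)

def exposure_if_preempted(records, preempted_states):
    """List obligations that would fall away under a hypothetical federal
    preemption covering the given states. A planning tool, not a prediction."""
    return [ob for ob, _, state in records if state in preempted_states]

index = obligations_by_source(OBLIGATIONS)
print(len(index[("GA", "SB 540")]))              # 2
print(exposure_if_preempted(OBLIGATIONS, {"CA"}))
# ['attorney confidentiality review of AI tools']
```

The point isn’t the tooling; it’s that the inventory exists in a structured form at all. An organization that maintains this mapping can re-run the preemption query against whatever congressional language eventually emerges, the day it emerges.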