Seven days separated two documents that are now in direct tension.
On March 20, 2026, the Trump administration released its National Policy Framework for Artificial Intelligence, a seven-pillar set of recommendations to Congress that includes a call to preempt state AI laws deemed to impose “undue burdens.” On March 27, 2026, New York Governor Kathy Hochul signed a chapter amendment finalizing the Responsible AI Safety and Education Act, a state frontier AI compliance regime with enforceable obligations and a January 1, 2027 deadline.
The federal document has no enforcement authority. The state document does. But if Congress acts on the administration's recommendations, the state document could be preempted before its deadline arrives. That is the compliance planning problem: not hypothetical, not distant, and not simple.
What the Federal Framework Actually Proposes
The National AI Policy Framework is a policy document, not a statute. It mandates nothing today. Its legal weight is zero until Congress acts.
What it does carry is political weight. The administration has named seven legislative priorities, and one of them, preemption of state AI laws, lands directly on active state legislation across multiple states. The framework calls on Congress to preempt state AI laws that “impose undue burdens,” while preserving state authority in areas including traditional police powers, child protection, and fraud prevention. Morrison Foerster’s analysis notes the carve-outs explicitly; they matter because they indicate the administration isn’t seeking to eliminate all state AI authority, only the portions it characterizes as burdensome to AI development and deployment.
The “undue burdens” standard is undefined. That phrase will become the central battlefield if preemption legislation advances in Congress. What qualifies? Transparency reporting requirements? Incident disclosure timelines? Developer registration obligations? The framework doesn’t answer these questions. Congress would have to. And the lobbying contest over the answer would be significant: state legislators, civil liberties organizations, and consumer protection advocates would all contest any definition that sweeps too broadly.
The framework emerged from a December 2025 Executive Order that directed the administration to develop AI legislative recommendations and established an AI Litigation Task Force. The March 20 document is the delivered output. It is the clearest articulation yet of where the administration wants Congress to go.
What the RAISE Act Requires, and When
New York’s RAISE Act is not proposed legislation. It is finalized law.
The chapter amendment signed March 27, 2026 replaced the prior compute-based definition of “large frontier developer” with a revenue-based threshold, a material change that makes the law’s coverage easier to assess. Compute thresholds are hard to verify externally; revenue thresholds are not.
Covered developers face three categories of obligation by January 1, 2027. First, publish a “frontier AI Framework” on their website. Second, produce transparency reports. Third, establish a critical safety incident disclosure mechanism. According to Wiley Rein’s analysis of the RAISE Act, the law mandates a 72-hour window for reporting critical safety incidents, a figure that has not yet been confirmed by a second independent source and should be treated as a single-source characterization pending DFS rulemaking clarification.
The RAISE Act also grants rulemaking authority to a new office within the New York Department of Financial Services. That rulemaking process is now underway, or will be shortly. The specific content requirements for a compliant frontier AI Framework, the format and scope of transparency reports, and the precise incident disclosure mechanism will all be shaped by DFS rules issued before January 2027.
California’s Transparent Frontier AI Act contains parallel requirements. Developers with exposure to both states should anticipate that compliance programs built to satisfy one law will largely satisfy the other, but the details matter, and the timelines and rulemaking calendars are distinct.
The Compliance Planning Problem
Here is the structural difficulty. The RAISE Act’s effective date is January 1, 2027. A compliance program built to meet that deadline requires time: time for legal analysis, internal documentation, transparency report frameworks, incident response procedures, and potentially vendor assessments. That work has a realistic lead time of six to nine months for organizations that aren’t already underway.
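A rough back-of-the-envelope sketch makes the timing pressure concrete. The snippet below backs out the latest plausible kickoff date from the statutory effective date and an assumed lead time; the six-to-nine-month range is this article's estimate, not a statutory figure, and the 30-day month is a simplifying assumption.

```python
from datetime import date, timedelta

# Statutory effective date of the RAISE Act's obligations.
EFFECTIVE_DATE = date(2027, 1, 1)

def latest_start(lead_time_months: int, days_per_month: int = 30) -> date:
    """Approximate latest program kickoff date for a given lead time.

    Uses a 30-day month as a simplifying assumption; real planning
    should use calendar-aware milestones.
    """
    return EFFECTIVE_DATE - timedelta(days=lead_time_months * days_per_month)

print(latest_start(6))  # aggressive end of the estimated range
print(latest_start(9))  # conservative end of the estimated range
```

Run against the six-month assumption, the latest start lands in mid-2026, which is why the uncertainty about federal preemption does not buy much waiting time.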
The federal preemption push, if it results in legislation, could arrive before, at, or after January 2027. Congressional timelines are notoriously unpredictable. The administration’s framework is not a bill. No companion legislation has been identified in this reporting period. The path from White House policy document to enacted federal statute involves committee hearings, markup, floor votes, conference, and presidential signature, a process that rarely moves on a fixed schedule.
Three scenarios are worth mapping for compliance planning purposes.
In the first scenario, Congress does not act on preemption before January 2027. The RAISE Act takes effect as written. Developers without compliant programs are exposed to DFS enforcement. This is the baseline scenario, the most likely outcome given typical Congressional timelines.
In the second scenario, Congress passes preemption legislation before January 2027 that sweeps broadly enough to preempt the RAISE Act. Developers who built toward RAISE Act compliance have over-invested, but they have also built infrastructure (frontier AI Frameworks, transparency reporting capacity, incident disclosure protocols) that is likely to be required by whatever federal framework eventually replaces the state laws. The compliance work isn’t wasted; it’s repositioned.
In the third scenario, Congress passes preemption legislation with carve-outs (preserving state authority in child protection, consumer protection, or other defined areas) that leaves portions of the RAISE Act intact. This is the most complex planning scenario and requires legal analysis of what exactly survives.
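The three scenarios above can be captured in a small lookup structure, useful as a skeleton for an internal planning memo or risk register. This is a hypothetical planning aid: the scenario names and posture descriptions are illustrative summaries of the article's analysis, not legal conclusions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One preemption outcome and its illustrative compliance posture."""
    name: str
    raise_act_survives: str  # "fully", "not at all", or "in part"
    posture: str

SCENARIOS = [
    Scenario("no_federal_action", "fully",
             "Full RAISE Act compliance by 2027-01-01; "
             "DFS enforcement exposure otherwise."),
    Scenario("broad_preemption", "not at all",
             "Repurpose frameworks, transparency reporting, and incident "
             "disclosure work for the eventual federal regime."),
    Scenario("preemption_with_carve_outs", "in part",
             "Legal analysis of surviving provisions; "
             "maintain core compliance infrastructure."),
]

def posture_for(name: str) -> str:
    """Look up the planning posture for a named scenario."""
    return next(s.posture for s in SCENARIOS if s.name == name)
```

Note that all three postures share the same core infrastructure, which is the structural point the next section makes.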
What Compliance Teams Should Do Now
The uncertainty is real. The deadline is also real. These aren’t contradictory; they’re both true simultaneously, and the practical response accounts for both.
Build toward RAISE Act compliance as a floor. The three core obligations (frontier AI Framework publication, transparency reporting, and incident disclosure) represent a baseline that is likely to survive any preemption scenario in some form. Federal frameworks, if enacted, will almost certainly require similar documentation. State carve-outs will preserve them in specific areas. The compliance infrastructure is durable regardless of the preemption outcome.
Watch DFS rulemaking as the near-term signal. The rules issued by the new DFS office will define exactly what a compliant frontier AI Framework must contain. That rulemaking is where the actionable specifics emerge. Track DFS announcements and engage in the public comment process; the window for shaping rules is open now, not after January 2027.
Track Congressional action specifically on the “undue burdens” definition. If a preemption bill advances, the scope of that phrase is the determinative question. Broad definitions that capture transparency and incident disclosure requirements pose direct risk to RAISE Act compliance programs. Narrow definitions that preserve those requirements while targeting heavier regulatory impositions change the calculus significantly.
Monitor California’s TFAIA rulemaking in parallel. New York and California are moving in the same direction. Federal preemption advocates will need to make the case against both laws simultaneously. How the political fight develops around TFAIA may signal how the RAISE Act fares.
The administration has named a direction. New York has set a deadline. The outcome of their collision is not determined, but the compliance planning required to navigate it can begin now, with the information already confirmed.