Regulation Deep Dive

AI Regulation News: Courts, Congress, and the White House Are Pulling US AI Policy in Three Directions

In a two-week span, three separate US institutions produced materially different signals about how AI should be governed: a federal judge blocked an executive AI policy tool, the White House asked Congress to consolidate AI authority at the federal level, and a Republican senator introduced legislation that moves in the opposite direction on liability and intellectual property. For compliance teams, the challenge isn't any single development. It's that the three primary levers of US AI governance are now producing contradictory outputs at the same time.

On March 28, 2026, US District Judge Rita F. Lin issued an injunction blocking the federal government’s designation of Anthropic as a supply chain risk. The order halted agency-wide restrictions on the Claude AI model. Legal reporting from JURIST confirms that Judge Lin ruled in Anthropic’s favor, and that her ruling directly referenced the administration’s conduct as retaliation against Anthropic’s public statements about its contracting concerns.

That ruling arrived eight days after the White House published a national AI policy framework calling on Congress to preempt state AI laws and limit AI developer liability. Ten days before the ruling, Senator Marsha Blackburn had reportedly introduced legislation that takes a substantially different approach: expanding liability, imposing a duty of care, and addressing AI training copyright. Three institutions. Three directions. Ten days.

This piece maps the three fronts and draws out what compliance professionals need to track.

Front One: The Courts

The Anthropic ruling matters well beyond Anthropic.

What the administration did (designating a specific AI company’s product as a supply chain risk and ordering federal agencies to stop using it) was an attempt to use procurement and contracting authority as an AI policy enforcement tool. That tool is now judicially constrained. According to JURIST’s reporting on the ruling, the court found the government’s actions likely constituted First Amendment retaliation for Anthropic’s public statements about its contracting position. The specific language of the judicial opinion isn’t fully extractable from available reporting, but legal coverage indicates the retaliation framing was central to the court’s reasoning, not a secondary observation.

The structural implication goes beyond this case. If courts are prepared to scrutinize executive AI directives through a First Amendment lens, the procurement tool the administration used here becomes a riskier instrument. An executive branch that wants to shape which AI companies work with the federal government now faces judicial review on a theory that wasn’t previously tested at this level.

Anthropic filed suit on March 23, reportedly characterizing the administration’s actions as “unprecedented and unlawful.” The injunction followed five days later.

For organizations using Claude in federal contract contexts, the injunction provides immediate relief. But it’s a preliminary order. Whether it becomes permanent, or whether the government appeals, determines whether this ruling establishes lasting doctrine or simply delays the underlying conflict. Full analysis of the ruling’s contracting implications is available on the Regulation pillar.

Front Two: The White House

On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence Legislative Recommendations, a document directed at Congress, not the agencies. Governing’s reporting confirms the framework’s central push: broad preemption of state AI laws, with the federal government asserting primary authority over the AI regulatory landscape.

The framework also opposes “open-ended liability” for AI firms, a clear signal about how the administration wants Congress to structure any federal AI statute. The full document is available at whitehouse.gov; readers who need precise clause-level language on data center permitting, regulatory sandboxes, and children’s protection provisions should consult the primary text directly, as those specifics depend on the document itself rather than secondary reporting.

This framework was mandated by a December 11, 2025 executive order directing the development of a national AI policy, a detail confirmed by Governing’s coverage of the release.

The strategic logic of the framework is worth noting. The Anthropic ruling (Front One) is a signal that executive-order-based AI policy tools face judicial exposure. A legislative framework asks Congress to create statutory authority, authority that would be far harder to challenge on First Amendment grounds than an executive designation. The White House, in other words, appears to be shifting strategy: from acting through agencies to seeking durable authority through statute.

For compliance teams, the preemption proposal is the variable with the most direct operational consequence. If Congress acts on the framework, the patchwork of state AI laws (California’s SB 1047 successors, Illinois biometric rules, Texas AI provisions) may eventually be superseded by a single federal standard. That would simplify compliance architecture significantly. It would also remove the state-level regulatory pressure that has moved faster than Congress on several AI issues. The state law collision analysis on the Regulation pillar covers the current state-level landscape in detail.

Front Three: Congress

This is where the picture becomes most uncertain, and where the qualified language matters most.

Reports indicate that Senator Marsha Blackburn introduced the TRUMP AMERICA AI Act on March 18. The primary sources for this item, The Dispatch and Investing.com, were inaccessible during this verification cycle, and specific provisions should be confirmed against the primary bill text before being relied upon for compliance or legal analysis. The bill is reportedly available through Congress.gov and Senator Blackburn’s Senate office; readers who need the specific provisions should access the primary text directly.

What reporting suggests about the bill’s thrust is worth noting as context, even under those caveats. If the characterizations are accurate, the bill moves in the opposite direction from the White House framework on liability, expanding it rather than limiting it, and including a duty of care standard. It reportedly also addresses AI training copyright and fair use, which the White House framework’s preemption proposal does not resolve. These would be materially different legislative choices.

The significance isn’t the bill’s specific provisions. It’s the fact that a Republican senator and the Republican White House appear to be advancing incompatible legislative visions for AI at the same time. That intra-party divergence means no AI legislation moves through a simple majority alignment. Whatever eventually reaches the floor will require negotiation across positions that, if the reporting is accurate, start quite far apart on liability and IP.

For compliance professionals: the Blackburn bill’s specific provisions should not be incorporated into internal compliance frameworks or legal analysis until confirmed against primary text. What can be stated with confidence, based on the broader reporting environment, is that significant Republican disagreement on AI policy exists.

What Compliance Teams Should Track

Three variables determine whether this moment produces stable policy or continued fragmentation.

First: the injunction’s trajectory. A preliminary injunction is not settled law. Watch for whether the administration appeals, whether the injunction becomes permanent, and whether the First Amendment retaliation theory is tested at the appellate level. If an appeals court affirms the theory, executive AI policy tools face a materially higher legal bar. If the lower court’s ruling is reversed, those tools regain their viability.

Second: the framework’s legislative fate. The White House framework is a request to Congress, not a directive. Whether it advances depends on whether the House and Senate can close the gap between the administration’s liability-limiting approach and the Blackburn bill’s reported liability-expanding approach. Watch committee assignments and whether the framework is taken up as draft legislation or remains an executive statement of preference. Stalling is the base case. Movement is the signal worth tracking.

Third: state law activity. The preemption strategy’s success or failure determines whether state AI laws remain a live compliance variable. Until federal preemption is enacted, if it ever is, state laws continue to apply. Organizations building compliance programs now should not wait for federal resolution. The framework’s preemption analysis on the Regulation pillar provides the current state-of-play on what federal preemption would and wouldn’t cover.

The Structural Point

US AI governance has operated without a coherent statutory foundation. The administration has used executive orders and procurement authority. Courts have begun testing those tools on constitutional grounds. Congress has produced proposals that reflect genuinely different views about liability, copyright, and federal authority.

None of this resolves quickly. Compliance programs that require a stable, definitive federal standard to function are waiting for something that isn’t imminent. The more durable approach is a framework that maps to the three fronts: what courts are constraining, what the executive branch is pursuing legislatively, and where Congress is divided. That map changes as each front develops. It doesn’t collapse when any single development stalls.

The Anthropic ruling, the White House framework, and the reported Blackburn bill aren’t three separate stories. They’re the same story, told from three institutional positions that don’t yet agree on what AI governance should look like.
