Regulation Daily Brief

OpenAI and Anthropic Are on Opposite Sides of Illinois' AI Liability Bill: Here's What Each Company Wants

2 min read · Sources: Tech in Asia; Gizmodo (partial)
OpenAI is publicly backing Illinois SB 3444, which would shield AI developers from liability under defined conditions. Anthropic is opposing it and backing a rival bill, marking the clearest public split between two frontier labs on US state AI legislation to date.

Two of the most prominent AI companies in the world have taken publicly opposing positions on the same piece of state legislation, and the fault line runs straight through Illinois Senate Bill 3444, the state’s proposed Artificial Intelligence Safety Act.

According to reporting on the bill, OpenAI is actively supporting SB 3444, which would protect AI developers from liability if they did not "intentionally or recklessly" cause harm and have published safety and security protocols. The bill reportedly defines "critical harms" as events causing death or serious injury to 100 or more people, or $1 billion or more in property damage, thresholds that set a high bar before liability attaches.

Anthropic is not on board. Gizmodo reported that Anthropic's representatives have characterized SB 3444 as a "get-out-of-jail-free card" and that the company is instead backing SB 3261, an alternative bill described as requiring auditable safety plans and child protection measures.

The split matters beyond Illinois. OpenAI's position follows a familiar federal preemption argument: inconsistent state-level AI laws create a compliance "patchwork" that disadvantages US developers relative to international competitors. It's a position OpenAI has advanced in other state-level debates. Anthropic's counter-argument is structural: a liability shield without meaningful accountability mechanisms doesn't create safety incentives; it removes them. The two bills represent two distinct theories of how AI governance should work in practice: one that centers developer discretion, and one that centers verifiable process obligations.

Neither bill has passed. SB 3444 is pending a legislative vote as of April 15, and the governor’s position has not been publicly confirmed. The Illinois debate is early-stage, but the public record of opposing lab positions is not. Compliance teams and legal counsel at AI companies now have documented, named positions from both labs to factor into their own regulatory strategy assessments.

What to watch: the vote timeline on SB 3444, any public statement from the governor’s office, and whether other state legislatures, particularly California and New York, which have active AI regulation pipelines, reference the Illinois split when drafting their own liability frameworks. If the “patchwork” argument succeeds in Illinois, it strengthens OpenAI’s case at the federal level. If Anthropic’s audit-and-accountability model gains traction, it may set a template that other states follow independently of federal action.

The TJS read: the Illinois debate is a preview of a larger structural argument about who bears accountability when AI systems cause harm at scale. OpenAI and Anthropic have now staked out opposite ends of that argument in a public, traceable way. Whatever happens with SB 3444, that record follows both companies into every subsequent state and federal regulatory conversation.
