Regulation Daily Brief

Bipartisan Senate Bill Targets High-Risk AI Disclosure: What the Proposed Requirements Would Mean

A bipartisan group of US Senators is reported to have introduced legislation that would require risk assessments and training disclosures for high-risk AI applications, according to political reporting, though the bill's full text and specific requirements could not be independently confirmed before publication. The proposal continues the congressional effort to establish federal accountability standards for AI systems, an area where companies currently set their own evaluation benchmarks.

A bipartisan group of US Senators reportedly introduced new AI transparency legislation around March 23, 2026, targeting the disclosure gap that currently lets AI companies define their own evaluation and reporting standards for high-risk applications. The bill’s full text has not yet been independently confirmed, and the primary news source for this report was unavailable at publication time, but the legislative direction aligns with active bipartisan efforts documented in congressional records, including S.2938, the Artificial Intelligence Risk Evaluation Act, introduced in the 119th Congress.

The reported legislation would require risk assessments and disclosure of how AI models are trained and evaluated for high-risk applications. These are the two core transparency obligations that AI governance advocates have sought from federal legislation: before a high-risk AI system is deployed, demonstrate that it was assessed; and disclose enough about how it was built that outside parties can evaluate those claims. Both requirements address the same underlying problem: without mandated disclosure, there is no external check on whether a company's own safety assessment is meaningful.

This bill arrives alongside the White House’s National AI Legislative Framework, released March 20, which also calls for a unified federal approach to AI governance. Whether the Senate bill’s approach to disclosure and risk assessment aligns with the administration’s framework is not yet clear. The two documents may reflect different legislative priorities even if they share a federal-first orientation. Compliance teams tracking US federal AI legislative developments should monitor both tracks.

What makes the bipartisan framing significant is the implicit signal about legislative viability. AI regulation bills introduced by a single party face structural barriers in the current Senate environment. Bipartisan co-sponsorship does not guarantee passage, but it is the baseline condition for a bill to advance through committee. The pattern of bipartisan AI transparency activity visible in recent congressional sessions, including the Schiff/Curtis bill on AI-generated content labeling and earlier Senate transparency efforts, suggests this is not an isolated legislative moment but an ongoing convergence around disclosure as the politically achievable federal AI standard.

What is not confirmed: the bill’s sponsors, the specific definition of “high-risk AI” it applies, its enforcement mechanism, and its timeline for compliance. These are not minor gaps. The definition of “high-risk” will determine which companies and systems fall within scope. The enforcement mechanism will determine whether the disclosure requirements have real teeth. None of this can be reported from available sources, and nothing here should be treated as a compliance requirement.

What to watch

The bill's text when it becomes publicly available, its committee assignment, and whether the White House signals support or opposition. A Senate AI transparency bill that aligns with the administration's framework would have a materially different political trajectory than one that doesn't. That alignment, or the lack of it, is the key variable for assessing this bill's prospects.
