Regulation Daily Brief

Virginia Becomes First State to Legislate Toward Independent AI Safety Verification

Governor Abigail Spanberger signed SB 384 and HB 797 on April 13, 2026, directing Virginia's Joint Commission on Technology and Science to evaluate a framework of Independent Verification Organizations for AI safety. Virginia has not enacted an IVO system; it has authorized a study that could become the model for one.

Virginia moved quietly on April 13, 2026, and the move may matter far beyond the state’s borders.

Governor Abigail Spanberger signed SB 384 and HB 797 into law, directing the Joint Commission on Technology and Science (JCOTS) to evaluate the feasibility and impact of developing a framework for Independent Verification Organizations (IVOs) for AI systems. The bill’s language is precise: JCOTS is directed to evaluate the feasibility of developing a framework, not to implement one. Virginia has taken the first legislative step toward an IVO model. The IVO framework itself does not yet exist as enforceable law.

That distinction matters. The bill’s significance lies in what it sets in motion, not what it enacts today.

The IVO concept was developed by Fathom, an organization that has been advancing the framework across multiple states. According to Fathom, whose press release confirmed the signing, the IVO model would have state governments set outcome-based safety goals for AI systems and authorize a marketplace of independent verifiers to assess AI products against those standards. Fathom describes the approach as increasing safety, trust, and innovation, though that characterization comes from the framework's proponents, not from independent assessment.

The structural idea is distinct from the EU’s approach to AI conformity assessment, which operates through harmonized standards and notified bodies under prescriptive rules. The IVO model, as described by its advocates, is more market-oriented: competing verifiers, state-set goals, and AI developers choosing among qualified assessors. Whether that model produces rigorous safety verification or a softer compliance marketplace depends entirely on how state-set goals are defined and enforced, a question JCOTS will now examine.

Governor Spanberger’s office confirmed the signing as part of April 13 legislative actions. The governor has positioned Virginia as an active participant in AI governance at the state level, ahead of any federal framework that would preempt or supersede state action.

For compliance teams and AI developers, the practical question isn't what Virginia requires today; it's what the JCOTS study will recommend and whether other states follow. The IVO model has surfaced in Minnesota's legislature as well, but Virginia's study directive gives the framework its first statutory footing in any U.S. state.

What to watch: JCOTS will produce findings and recommendations, and the timeline for that study is the key variable. If the recommendations support advancing an IVO framework, Virginia may introduce legislation in a future session that creates actual compliance obligations. Organizations developing or deploying AI systems should monitor the JCOTS process and track whether additional states adopt similar study directives in response to Virginia's action. This also connects to the broader question covered in the existing "Federal vs. State AI Authority" brief: state-level experimentation is accelerating in the absence of a federal framework.

The TJS takeaway: Virginia's signing represents the institutionalization of an idea: that AI safety can be verified independently, against public goals, through a market of qualified assessors. Whether that idea becomes policy depends on the JCOTS study. The early move gives Virginia a first-mover position in a governance model that, if adopted broadly, would create an entirely new category of AI compliance obligation.
