Connecticut lawmakers voted 131-17 to pass SB 5, a bill that would do two things no other US state AI law currently does together: require that AI models used in consequential settings be verified by an independent third party before deployment, and compel employers to disclose when AI causes layoffs or workforce displacement.
The bill now sits on Governor Ned Lamont’s desk. Until he signs it, every provision remains conditional. Legal analysts at Freshfields describe SB 5 as establishing the first state-level program for independent third-party verification of AI models in the US.
Two mechanisms in one bill
The model verification program works through the Department of Consumer Protection, which may approve up to five organizations to serve as authorized verifiers. Those organizations check AI models against defined safety standards before the systems go into consequential use. That structure (a capped roster of approved verifiers, administered by DCP) is meaningfully different from vendor self-assessment or post-deployment audit: it puts a checkpoint before release, not after.
The second mechanism is the workforce disclosure mandate. The bill, per published analyses of its text, would require employers to disclose when AI is a direct cause of layoffs or workforce displacement. Connecticut Attorney General William Tong holds enforcement authority. That combination of advance model review and post-deployment workforce accountability is what separates SB 5 from the wave of state AI bills the hub has tracked across the past several months.
Why it matters
Connecticut’s verification structure introduces something that compliance teams haven’t had to plan for at the state level: a mandatory pre-deployment checkpoint administered by an external body, not by the deployer. Every other major US state AI law to date has focused on disclosure requirements, prohibited uses, or impact assessments, not on independent pre-market verification. If this model spreads to other states, it changes the compliance architecture for AI deployment from a documentation exercise to an external gate.
The workforce disclosure provision carries its own weight. AI-related layoff disclosure requirements are rare globally; legal commentary from JDSupra confirms the provision is explicit in the bill text, not implied. Compliance and HR teams at Connecticut-based employers using AI in workforce decisions will need to build disclosure protocols if Lamont signs.
Context and precedent
Connecticut’s April 25 legislation, a different bill covering chatbot mental health safeguards and 48-hour takedown mandates, showed the legislature was already moving on AI. SB 5 is a harder-edged follow-on: it touches procurement, safety assessment, and employment law simultaneously. The state has positioned AG Tong, a nationally recognized advocate for consumer technology accountability, as enforcement lead. That choice signals intent, not just process.
The broader state AI law landscape, which the hub has tracked across Connecticut, Montana, Oregon, New York, and California filings, is fragmenting into distinct regulatory philosophies. Connecticut’s third-party verification model most closely resembles the EU AI Act’s conformity assessment structure, rather than the disclosure-and-audit approach that dominates other US state bills.
What to watch
Governor Lamont has not indicated a timeline. The hub will track the signature date, effective date, and DCP’s initial rulemaking on the verifier approval process. The five-verifier cap is worth monitoring: a small, state-approved roster of authorized evaluators could become a bottleneck, or a significant commercial opportunity for safety assessment firms. Watch also for a federal preemption challenge. The White House’s National AI Framework explicitly calls for federal preemption of conflicting state AI laws; SB 5’s model verification requirement may be precisely the kind of structural state mandate that draws a federal response.
TJS synthesis
SB 5’s real significance isn’t the vote count; it’s the architecture. A pre-deployment, third-party verification gate administered by a state agency is a structural shift from every other US state AI compliance requirement currently on the books. If it survives the governor’s review and any federal preemption challenge, compliance teams at companies deploying AI in Connecticut will face a question they haven’t had to answer before: not “have we documented our AI’s risks?” but “has an approved external body verified our model against safety standards?” That’s a different kind of compliance posture, and it may not stay in Connecticut.