Connecticut’s Senate moved faster than Congress. On April 22, 2026, the Connecticut Senate passed a comprehensive AI regulation bill targeting the two AI harms that have generated the most political momentum at every level of government: nonconsensual intimate imagery and youth mental health. The bill is not yet law; it still awaits a House vote and the Governor’s signature. But its provisions signal where state-level AI compliance is heading.
The core compliance obligations, as reported by the Hartford Business Journal and reflected in the Connecticut Senate record, are concrete: platforms hosting nonconsensual AI-generated sexual images face a 48-hour removal window once content is flagged. Miss it, and the fine is $25,000 per day. Those are not ambiguous numbers.
The second provision is technically more demanding. AI chatbots designed to simulate human interaction would be required, according to the Connecticut Senate record, to include detection methods for expressions of self-harm or suicidal ideation. What that detection obligation requires in practice (specific model behaviors, logging requirements, escalation protocols) will depend on the final bill text and any subsequent regulatory guidance. Compliance teams at chatbot developers should treat this as a design-phase question, not an incident-response one.
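To make the design-phase framing concrete, here is a minimal sketch of what a pre-response screening hook might look like. Everything in it is an assumption, not a requirement from the bill: the pattern list is a placeholder for a purpose-built classifier, and the names (`screen_message`, `DetectionEvent`, `escalate`) are illustrative. The shape is the point: detection runs before the model responds, produces an auditable record, and hands off to an escalation path that exists before any incident does.

```python
import logging
import re
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

# Illustrative patterns only; a production system would use a trained
# classifier, not keyword matching. Nothing here is specified by the bill.
SELF_HARM_PATTERNS = [
    re.compile(r"\b(kill myself|end my life|suicide|self[- ]harm)\b", re.IGNORECASE),
]

@dataclass
class DetectionEvent:
    session_id: str
    message: str
    detected_at: datetime

def detect_self_harm(message: str) -> bool:
    """Illustrative detector: flags a message matching any pattern."""
    return any(p.search(message) for p in SELF_HARM_PATTERNS)

def screen_message(
    session_id: str,
    message: str,
    escalate: Callable[[DetectionEvent], None],
    log: logging.Logger,
) -> bool:
    """Screen an inbound message before the model responds.

    Returns True if the message was flagged and escalated.
    """
    if not detect_self_harm(message):
        return False
    event = DetectionEvent(session_id, message, datetime.now(timezone.utc))
    # Logging requirements are unknown until the final bill text; a
    # timestamped audit trail is a conservative starting assumption.
    log.warning("self-harm expression detected: session=%s at=%s",
                event.session_id, event.detected_at.isoformat())
    escalate(event)  # e.g., surface crisis resources, route to a human
    return True
```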
Attorney General William Tong issued a statement supporting the bill, describing it as complementing the federal Take It Down Act, which addresses nonconsensual intimate imagery at the federal level. That characterization is Tong’s, not a legal determination; how state and federal provisions interact in practice will require legal analysis once both are in effect.
The 48-hour removal clock is the provision that defines compliance urgency. Platforms that currently rely on standard content moderation review cycles, often measured in days, would need to build a dedicated fast-track process for this category of content. The fine structure makes delay expensive quickly: two days of non-compliance at $25,000/day reaches $50,000 before a weekend ends.
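The deadline and fine arithmetic as reported is simple enough to sketch directly. The accrual rule below (per-day fines with partial days rounded up, clock starting at the flag) is an assumption, and the function names are illustrative; the actual mechanics will depend on the final bill text.

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)   # per the bill as reported
DAILY_FINE_USD = 25_000                # per the bill as reported

def removal_deadline(flagged_at: datetime) -> datetime:
    """Deadline by which flagged content must be removed."""
    return flagged_at + REMOVAL_WINDOW

def accrued_fine(flagged_at: datetime, removed_at: datetime) -> int:
    """Fine accrued for removal after the 48-hour window.

    Assumes the fine accrues per full or partial day past the
    deadline; the real accrual rule awaits the final bill text.
    """
    overdue = removed_at - removal_deadline(flagged_at)
    if overdue <= timedelta(0):
        return 0
    days_late = -(-overdue // timedelta(days=1))  # ceiling division
    return days_late * DAILY_FINE_USD

# The weekend example from the text: flagged Friday morning,
# removed two days past the deadline.
flagged = datetime(2026, 5, 1, 9, 0, tzinfo=timezone.utc)
removed = datetime(2026, 5, 5, 9, 0, tzinfo=timezone.utc)
assert accrued_fine(flagged, removed) == 50_000
```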
State-level AI legislation is accelerating. Connecticut’s bill follows Washington state’s AI companion chatbot law, whose chatbot safety and deepfake disclosure requirements already affect platform operators, and joins California’s SB 7 in a wave of state-level AI compliance obligations taking effect in 2025-2026. This is not one state acting alone.
What to watch: The Connecticut House vote timeline and Governor Lamont’s expected response. If signed, the bill’s effective date and any implementation guidance from the Attorney General’s office will define the compliance clock. Watch also for whether the bill number is confirmed and the full text is published via the Connecticut General Assembly; that text is what compliance lawyers will need, not the summary.
The pattern here is important: states are not waiting for federal AI legislation. The compliance burden for platform operators is assembling piece by piece at the state level, and the pieces do not always fit together cleanly.