The compliance deadline is real, and it's nine months away. Washington and Oregon have both enacted laws regulating consumer-facing interactive AI companions, chatbots, virtual assistants, and related products, effective January 1, 2027. For any business operating these products in either state, compliance is not a future consideration. It's a current obligation with a fixed date.
The statutes, analyzed in detail by Morgan Lewis, require operators to adopt three categories of compliance measures: clear disclosures to users about the AI nature of the system they’re interacting with; crisis detection and intervention protocols; and heightened safeguards specifically for minors. Both states’ laws establish private rights of action, meaning users, not just regulators, can bring claims against operators who don’t comply. That’s a material enforcement dimension that distinguishes these laws from many state AI bills that rely solely on regulatory enforcement.
The laws' focus on AI companions is specific and intentional. This isn't broad AI governance legislation; it targets the category of products designed for ongoing, relationship-style interaction with users. Companies operating general-purpose chatbots, customer service tools, or enterprise productivity AI are less clearly in scope. Companies whose products are designed for social companionship, emotional support, or parasocial engagement with users are squarely within it.
Washington and Oregon are the most clearly actionable items in a broader state AI legislative surge. According to legislative tracking by the Transparency Coalition, Maine's LD 2082, which would prohibit the use of clinical AI in mental health therapy settings, was approved in early April. Alabama's SB 63, addressing AI use in healthcare plan coverage determinations, passed the state House on April 8. Missouri's HB 2372, which includes a therapy chatbot ban with a reported penalty provision, passed the Missouri House on April 2. Tennessee's SB 837, which would define "human being" in state law to exclude AI, advanced through both chambers in early April. All four claims are sourced from Transparency Coalition legislative tracking and should be verified against official state legislative databases before they inform any compliance decision.
California has a significant slate of AI bills advancing through its legislature, including measures related to AI safety standards, algorithmic accountability, and digital watermarking, all currently pending and not enacted. Kansas, Nebraska, Connecticut, and Kentucky each have relevant AI legislation in progress as well, per Transparency Coalition tracking.
The pattern matters for compliance planning. The argument for federal preemption is real: a unified national AI law could make state-by-state tracking unnecessary. But it's not a near-term planning assumption. Washington and Oregon's January 1, 2027 deadline doesn't pause while Congress deliberates. The therapy chatbot bans advancing in Missouri and Maine don't wait for federal action. Teams building compliance programs for consumer-facing AI products need to treat state law as the operative framework now and adjust if federal preemption changes that calculus later.
What to watch: Whether Maine LD 2082 and Alabama SB 63 receive final passage and signature. Whether Missouri HB 2372 clears the Senate. California's AI bills are worth tracking closely; any California enactment in this space typically drives compliance investment across the industry regardless of geography.
TJS synthesis: Washington and Oregon have drawn a line. January 1, 2027 is the first concrete state-level AI companion compliance deadline with T2-verified legal authority behind it. The broader legislative surge is real but partially unconfirmed; the Transparency Coalition's tracking needs cross-referencing against official state sources before it drives compliance decisions. Start with Washington and Oregon. They're confirmed, they're specific, and they have private rights of action.