The compliance clock is running. Washington’s HB 2225 is now law, and January 1, 2027, is the hard deadline. Eight months is not long when the requirements include building detection systems for suicidal ideation.
Governor Bob Ferguson signed House Bill 2225 on March 24, 2026. The law, confirmed via the Washington State Legislature’s official record and analyzed by Hunton Andrews Kurth’s privacy and cybersecurity practice, targets operators of AI companion chatbots specifically. It’s the first state law in the country written for this category of AI product.
Three requirement categories define what compliance looks like:
Operators must clearly disclose that the chatbot is artificial. No implied humanity, no ambiguous personas that obscure the product’s AI nature.
The law prohibits sexually explicit content and manipulative engagement targeting minors. These are content and design constraints, not just disclosure requirements: they shape what the product can do, not just what it must say.
Operators must implement protocols for detecting and responding to suicidal ideation. The law establishes the requirement but doesn't prescribe specific technical implementations; the compliance standard will require interpretation, and likely industry guidance, before January 2027.
Violations are treated as unfair or deceptive acts under Washington’s Consumer Protection Act. That enforcement mechanism matters: it means private plaintiffs, not just state regulators, can bring claims. The practical exposure is meaningful for any company operating a companion chatbot accessible to Washington residents.
The definition question is the first thing operators need to answer: does their product fall within the statute’s definition of “AI companion chatbot”? The law uses that phrase, but the precise statutory definition hasn’t been independently confirmed beyond legal analysis summaries. Operators should review the full text at the official Washington State Legislature record before making coverage determinations.
For EdTech companies and platforms serving minors, the minor-protection provisions carry particular weight. Any AI-powered companionship or tutoring product with social or relationship-simulating elements needs a clear read on whether it qualifies as a companion chatbot under the statute. The compliance risk isn't abstract: it's the CPA's unfair-or-deceptive-acts standard, which Washington courts interpret broadly.
What to watch: whether other states follow. New York's RAISE Act, which targets frontier AI models and carries its own January 1, 2027, effective date, means two states are already setting AI compliance deadlines for the same year. If California and Texas move in 2026, the patchwork of state AI law becomes a genuine compliance architecture challenge, one a single federal standard would resolve but that currently requires state-by-state analysis.
The TJS read: Washington’s law is narrowly targeted but operationally demanding. The mental health protocol requirement in particular requires time to implement responsibly. Eight months is workable, but only for organizations that start now.