The vote counts tell the story. Fifty-two to zero in the House. Twenty-six to one in the Senate. AI chatbot safety for minors crossed partisan lines in Salem with near-total consensus.
SB 1546, sponsored by Sen. Lisa Reynolds (D-West Portland), requires chatbot operators to disclose their AI status to all users, send periodic reminders that users are talking to an AI, and block age-inappropriate content for minors. Operators are also barred from offering engagement-extending rewards that target minors. A crisis protocol provision requires referral to the 988 Suicide and Crisis Lifeline and similar resources for any user expressing suicidal ideation, and operators must report annually to the Oregon Health Authority on incidents of self-harm and suicidal ideation.
Enforcement has teeth. The bill includes a private right of action with $1,000 in statutory damages per violation. The lone no vote came from Sen. Noah Robinson (R-Cave Junction), who said he supported the bill’s intent but worried about legislating fast-moving technology — a concern that did not prevent the overwhelming majority from voting yes.
During floor debate, Sen. Reynolds cited Adam Raine, a 16-year-old who died by suicide after ChatGPT interactions that failed to intervene. The account gave the bill human stakes beyond abstract policy debate.
The bill now awaits Gov. Kotek's signature; a KOIN 6 report confirmed she has a five-business-day window to act. If she signs, Oregon becomes the first state to codify AI chatbot safety obligations into law in 2026.
Related Research: A Brown University study identifying 15 ethical risk categories in AI chatbot therapy gained fresh relevance as Oregon’s legislature acted. Full coverage on the Technology pillar.