Two bills. Different targets. Same week.
Oregon just became the first state in 2026 to pass an AI chatbot safety law, a measure focused on AI outputs and user safety. Meanwhile, New York's S 6955 is advancing on the input side: it would require developers of generative AI systems to publish summaries of their training datasets on their websites, disclosing what data was used to build their models.
The bill, sponsored by Sen. Andrew Gounardes in the Senate and Assemblymember Alex Bores in the Assembly, advanced to a third reading in the state Senate on March 4, according to the New York Senate's legislative tracker. A third reading is a procedural step approaching a floor vote; it does not indicate passage is imminent. A companion bill, A 6578, has also been introduced in the Assembly, though its current status has not been independently confirmed.
The pairing of Oregon's output-focused law and New York's input-focused bill in the same week signals a broader shift: the legislative quiet that followed the Trump AI executive order in January is ending, and states are advancing on ground the federal order did not explicitly preempt. Oregon went after what chatbots say and do. New York is going after what they were trained on.
Whether S 6955 reaches a floor vote before New York's legislative session closes is unclear. What is clear: AI accountability legislation is no longer waiting for federal clarity before moving at the state level.