The bill’s full name is the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act. Senators Josh Hawley and Richard Blumenthal introduced it in October 2025 with 17 bipartisan cosponsors, and it cleared committee without a single dissent.
What the Bill Prohibits
First, it prohibits AI chatbots from encouraging suicide, self-injury, homicide, violence, or sexual content involving minors. That’s a conduct prohibition, not a content moderation guideline. Second, it requires all AI chatbot services to disclose clearly that users are interacting with a non-human system that lacks professional credentials. Third, it mandates “reasonable age verification measures” for AI companion services, defined as platforms that simulate sustained relationships with users. Fourth, it bars users under 18 from accessing AI companion services entirely.
Enforcement Architecture
The enforcement structure is worth reading carefully. Both the U.S. Attorney General and State Attorneys General can bring civil actions. Civil penalties run up to $250,000 per violation. For conduct involving suicide encouragement, harm to minors, or sexual content generation, criminal liability applies with fines up to $250,000 per offense.
GUARD Act: Four Requirements for AI Chatbot Operators
- No AI chatbot may encourage suicide, self-injury, homicide, violence, or sexual content involving minors
- All AI chatbots must disclose non-human status and lack of professional credentials
- AI companion services must implement reasonable age verification measures
- Users under 18 prohibited from accessing AI companion services
That dual enforcement path, federal plus state, is the part that should get compliance teams’ attention. It means a single AI companion product that fails age verification could face enforcement actions from multiple jurisdictions simultaneously.
What Happens Next
The bill awaits full Senate consideration. A companion bill has already been introduced in the House. Privacy and First Amendment advocates have raised concerns about the age verification requirements and the breadth of the content prohibitions, so the bill may undergo changes before passage.
What to Watch
The practical question for AI companies is whether their chatbot products already meet the GUARD Act’s requirements. Age verification for AI companions isn’t an optional feature under this bill; it’s a statutory requirement backed by civil penalties and, for some violations, criminal liability. Companies offering consumer-facing AI chat products should run an internal audit against the bill’s four requirements now, before the Senate floor vote, not after.
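One way to structure such an internal audit is to model the bill’s four requirements as an explicit checklist. The sketch below is purely illustrative: the field names and gap labels are hypothetical, not drawn from the bill’s text, and real compliance review would of course involve counsel rather than a script.

```python
from dataclasses import dataclass

@dataclass
class ChatbotAudit:
    """Hypothetical self-audit against the GUARD Act's four requirements."""
    blocks_prohibited_content: bool   # conduct prohibition (suicide, violence, etc.)
    discloses_non_human_status: bool  # clear non-human / no-credentials disclosure
    is_companion_service: bool        # simulates a sustained relationship with users
    has_age_verification: bool        # "reasonable age verification measures"
    excludes_minors: bool             # blocks under-18 users from companion features

    def gaps(self) -> list[str]:
        """Return the requirements this product appears to miss."""
        issues = []
        if not self.blocks_prohibited_content:
            issues.append("conduct prohibition")
        if not self.discloses_non_human_status:
            issues.append("disclosure")
        # The age rules only bind companion services under the bill's definition.
        if self.is_companion_service:
            if not self.has_age_verification:
                issues.append("age verification")
            if not self.excludes_minors:
                issues.append("minor access ban")
        return issues

# A companion product that discloses and filters content but skips age checks:
audit = ChatbotAudit(
    blocks_prohibited_content=True,
    discloses_non_human_status=True,
    is_companion_service=True,
    has_age_verification=False,
    excludes_minors=True,
)
print(audit.gaps())  # -> ['age verification']
```

Note that a general-purpose chatbot that doesn’t qualify as a companion service would skip the two age-related checks but still owe the conduct and disclosure requirements.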
The unanimous committee vote is the signal that matters. Federal AI legislation is moving on multiple fronts simultaneously. Bipartisan AI child safety legislation doesn’t stall in committee. It moves. The question is how fast.