Regulation Daily Brief

The GUARD Act Just Cleared Committee: What the First Federal AI Chatbot Safety Bill Requires

2 min read · Source: Global Policy Watch / Congress.gov (confirmed)
The Senate Judiciary Committee unanimously advanced S. 3062, the GUARD Act, on April 30, 2026, making it the first federal AI chatbot safety bill to clear committee. The bipartisan legislation targets AI systems that pose risks to minors, mandating age verification for AI companion services and imposing criminal liability on platforms that generate harmful content involving children.

Key Takeaways

  • The GUARD Act (S. 3062) cleared the Senate Judiciary Committee unanimously on April 30, 2026, the first federal AI chatbot safety bill to advance past committee
  • The bill prohibits AI chatbots from encouraging suicide, self-injury, violence, or sexual content involving minors, with criminal liability of up to $250,000 per offense
  • AI companion services must implement age verification and are prohibited from serving users under 18
  • Dual enforcement: both the U.S. Attorney General and state attorneys general can bring civil actions, creating multi-jurisdictional exposure for non-compliant AI chatbot operators

Compliance Deadline

  • Deadline: April 30, 2026
  • Entity: U.S. Senate
  • Jurisdiction: US Federal
  • Penalty: Up to $250,000 per violation; criminal liability for harm/sexual content offenses

The bill’s full name is the Guidelines for User Age-verification and Responsible Dialogue Act. Senators Josh Hawley and Richard Blumenthal introduced it in October 2025 with 17 bipartisan cosponsors, and it cleared committee without a single dissent.

What the Bill Prohibits

First, it prohibits AI chatbots from encouraging suicide, self-injury, homicide, violence, or sexual content involving minors. That’s a conduct prohibition, not a content moderation guideline. Second, it requires that all AI chatbot services disclose clearly that users are interacting with a non-human system that lacks professional credentials. Third, it mandates “reasonable age verification measures” for AI companion services, defined as platforms that simulate sustained relationships with users. Fourth, it bans minors under 18 from accessing AI companion services entirely.

Enforcement Architecture

The enforcement structure is worth reading carefully. Both the U.S. Attorney General and State Attorneys General can bring civil actions. Civil penalties run up to $250,000 per violation. For conduct involving suicide encouragement, harm to minors, or sexual content generation, criminal liability applies with fines up to $250,000 per offense.

GUARD Act: Four Requirements for AI Chatbot Operators

  • No AI chatbot may encourage suicide, self-injury, homicide, violence, or sexual content involving minors
  • All AI chatbots must disclose non-human status and lack of professional credentials
  • AI companion services must implement reasonable age verification measures
  • Users under 18 are prohibited from accessing AI companion services

That dual enforcement path, federal plus state, is the part that should get compliance teams’ attention. It means a single AI companion product that fails age verification could face enforcement actions from multiple jurisdictions simultaneously.

What Happens Next

The bill awaits full Senate consideration. A companion bill has already been introduced in the House. Privacy and First Amendment advocates have raised concerns about the age verification requirements and the breadth of the content prohibitions, so the bill may undergo changes before passage.

What to Watch

  • Full Senate floor vote on S. 3062: Q2–Q3 2026
  • House companion bill progress and potential reconciliation: H2 2026
  • Industry lobbying response from AI companion platforms: ongoing

The practical question for AI companies is whether their chatbot products already meet the GUARD Act’s requirements. Age verification for AI companions isn’t an optional feature under this bill. It’s a statutory requirement with criminal enforcement backing. Companies offering consumer-facing AI chat products should run an internal audit against the bill’s four requirements now, before the Senate floor vote, not after.
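Such an internal audit can be sketched as a simple checklist over the bill's four requirements. The product flags and `audit` helper below are illustrative assumptions for structuring a self-assessment, not terms or tests defined in the bill itself; whether a given product actually satisfies a requirement is a legal question.

```python
# Hypothetical self-audit sketch against the GUARD Act's four requirements.
# The attributes and thresholds are illustrative assumptions, not statutory
# definitions; real compliance determinations require legal review.
from dataclasses import dataclass

@dataclass
class ChatbotProduct:
    blocks_prohibited_content: bool   # no encouragement of suicide, self-injury,
                                      # violence, or sexual content involving minors
    discloses_non_human_status: bool  # clear non-human / no-credentials notice
    is_companion_service: bool        # simulates a sustained relationship
    has_age_verification: bool        # "reasonable age verification measures"
    blocks_under_18: bool             # companion services barred to minors

def audit(p: ChatbotProduct) -> list[str]:
    """Return the requirements this product appears to miss."""
    gaps = []
    if not p.blocks_prohibited_content:
        gaps.append("prohibited-content safeguards")
    if not p.discloses_non_human_status:
        gaps.append("non-human status disclosure")
    # The last two requirements apply only to companion services.
    if p.is_companion_service and not p.has_age_verification:
        gaps.append("age verification")
    if p.is_companion_service and not p.blocks_under_18:
        gaps.append("under-18 access ban")
    return gaps

# Example: a companion product with disclosure but no age gate.
product = ChatbotProduct(True, True, True, False, False)
print(audit(product))  # ['age verification', 'under-18 access ban']
```

The point of the sketch is the scoping: the first two requirements apply to all chatbots, while the age-verification and under-18 provisions attach only to products that qualify as companion services.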

The unanimous committee vote is the signal that matters. Federal AI legislation is moving on multiple fronts simultaneously. Bipartisan AI child safety legislation doesn’t stall in committee. It moves. The question is how fast.
