Regulation Deep Dive

FTC Marketing, Florida Criminal Liability: Two Enforcement Actions in Two Weeks Define AI's Legal Exposure

5 min read · Source: Florida Attorney General / My Florida Legal · Verification: Partial
Within two weeks, two different enforcement agencies have used two different legal theories to bring formal government action against AI companies: the FTC on marketing deception under Section 5, and now the Florida AG on a criminal aiding and abetting theory following a shooting at Florida State University. Neither action has produced a final outcome. Together, they establish something more significant than any single case: a pattern of expanding government willingness to apply existing legal frameworks to AI conduct in ways that were theoretical a year ago.

The Florida Investigation: What Is Known

On April 21, Florida Attorney General James Uthmeier opened a criminal investigation into OpenAI. The trigger was a shooting at Florida State University. The investigation is an official government action: it's confirmed by multiple published reports and represents a formal exercise of state enforcement authority. The investigation remains open. No charges have been filed.

According to reports, OpenAI has been subpoenaed for all internal policies regarding user safety protocols and cooperation with law enforcement. That subpoena claim comes from a single source, My Florida Legal, and couldn’t be independently verified against primary legal documents. It should be treated as reported until primary documentation is available.

What is independently confirmed: the investigation exists. Florida’s attorney general has authority to investigate potential violations of Florida law. Criminal investigations don’t require public evidence of a crime to be initiated; they’re the mechanism for determining whether evidence exists.

The Legal Theory: How “Aiding and Abetting” Could Apply to an AI System

The legal theory under examination, as reported by legal analysts, is whether Florida statutes permitting criminal liability for those who aid or counsel the commission of a crime can be applied to an AI system. That specific application would, those analysts report, be unprecedented in US case law: no AI system has faced criminal liability under an aiding and abetting framework before.

The theory, as reported, doesn’t argue that the AI system decided to facilitate harm. It examines whether the system’s outputs, in whatever form they took in interactions preceding the shooting, constituted the kind of assistance that Florida’s statutes contemplate. That’s a question about what the statute covers, not about AI intent.

Why does this matter even if the theory fails? Because the question being asked changes the legal landscape regardless of the answer. If a state attorney general can initiate an investigation on this theory, defense attorneys and plaintiffs will use the same framework in civil litigation. Insurers will begin pricing the risk. Compliance teams will start building documentation that addresses it. The investigation’s initiation is itself a legal event with downstream consequences.

Courts may ultimately reject the theory: aiding and abetting statutes generally require a human actor making a decision to assist, and applying that to a software system creates obvious doctrinal problems. But “obvious doctrinal problems” have a way of producing years of litigation before resolution. As examined in our coverage of federal vs. state AI enforcement dynamics, state attorneys general have broad investigative authority and the ability to advance novel legal theories without federal coordination.

The Enforcement Pattern: Florida AG Plus FTC in the Same Window

The Florida investigation isn’t happening in isolation. Our earlier brief on FTC Section 5 enforcement actions against AI companies covered the Federal Trade Commission’s use of its deceptive practices authority to pursue AI marketing claims. The FTC actions and the Florida AG investigation are unrelated as legal matters: different theories, different agencies, different conduct at issue. But their temporal proximity makes the pattern visible.

Both enforcement actions share a structural feature: they’re applying legal frameworks that predate AI to AI-specific conduct. Section 5 of the FTC Act covers deceptive practices. It predates large language models by decades. Florida’s aiding and abetting statutes predate them by far longer. Neither framework was designed with AI in mind. Both are being used anyway.

This is how legal adaptation to new technology typically works. Legislatures don’t always act quickly. Regulators and prosecutors fill the gap with existing authority. The frameworks bend, and eventually either break, producing a court ruling that limits their application, or hold, producing precedent that defines the new legal landscape. We’re in the bending phase.

Stakeholder Positions

OpenAI’s exposure in the Florida investigation is multi-layered. The direct question is legal: whether its systems’ outputs in any interactions related to the FSU shooter fall within any cognizable theory of criminal liability. But the collateral exposure matters more immediately. The subpoena, if confirmed, requires production of internal policies on user safety protocols and law enforcement cooperation. That production becomes a record of what OpenAI knew, when it knew it, and what processes it had in place. That record has relevance beyond this investigation.

State AG authority as a regulatory force is the broader structural implication. As covered in our federal preemption stakeholder analysis, the question of whether federal AI regulation preempts state enforcement is contested and unresolved. The Florida investigation advances the state side of that argument. A state attorney general using criminal authority to investigate an AI company is categorically different from a state legislature passing a consumer protection regulation. Criminal enforcement authority sits outside the typical preemption analysis.

Other state attorneys general are watching. Florida’s action provides a template. Whether attorneys general in other states with relevant incidents follow this pattern is a question to monitor over the coming months.

Practical Implications: What AI Developers With Consumer-Facing Products Should Assess

The practical implication of the Florida investigation isn’t to redesign products or restructure legal entities. It’s to run a specific assessment on a specific question: does your organization have documented, coherent policies on three things?

User safety protocols: what the system is designed to detect and respond to regarding potentially harmful user intent. What triggers escalation, referral, or refusal? Is that documented and current?

Law enforcement cooperation: what data your organization retains about user interactions, for how long, in what form, and under what legal process it would be produced. A subpoena for “all policies on law enforcement cooperation” is a request for something that should already exist in writing.

Content safety architecture: what guardrails are in place, how they were designed, who approved them, and how they’re tested. The question the Florida investigation asks, implicitly, is whether the company made reasonable choices about system safety. Documentation is how you answer that question if asked; a minimal sketch of what such documentation could look like follows below.
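As a concrete illustration only: the sketch below shows, in Python, one hypothetical way to keep those three policy areas as structured, versioned records rather than scattered documents. Every class name, field, and sample value here is an assumption made for illustration; it isn't drawn from any vendor's actual policies or from the Florida subpoena.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class SafetyProtocol:
    """What the system is designed to detect and how it responds (hypothetical)."""
    trigger: str                 # e.g. "credible threat of violence"
    response: str                # "refuse", "escalate", or "refer"
    owner: str                   # team accountable for this rule
    last_reviewed: date


@dataclass
class RetentionPolicy:
    """What interaction data is kept and under what legal process it is produced (hypothetical)."""
    data_category: str           # e.g. "conversation logs"
    retention_days: int
    legal_process_required: str  # e.g. "subpoena", "warrant"


@dataclass
class GuardrailRecord:
    """A deployed guardrail, its approver, and its most recent test (hypothetical)."""
    name: str
    design_rationale: str
    approved_by: str
    last_test_date: date
    last_test_passed: bool


@dataclass
class SafetyDocumentation:
    """Container for the three policy areas discussed above."""
    protocols: list[SafetyProtocol] = field(default_factory=list)
    retention: list[RetentionPolicy] = field(default_factory=list)
    guardrails: list[GuardrailRecord] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the full record, e.g. for internal audit or legal production."""
        return json.dumps(asdict(self), default=str, indent=2)


if __name__ == "__main__":
    docs = SafetyDocumentation(
        protocols=[SafetyProtocol(
            trigger="credible threat of violence",
            response="refuse and escalate to trust & safety",
            owner="trust-and-safety",
            last_reviewed=date(2026, 4, 1),
        )],
        retention=[RetentionPolicy(
            data_category="conversation logs",
            retention_days=90,
            legal_process_required="subpoena",
        )],
        guardrails=[GuardrailRecord(
            name="violence-intent classifier",
            design_rationale="block requests that facilitate imminent physical harm",
            approved_by="safety review board",
            last_test_date=date(2026, 3, 15),
            last_test_passed=True,
        )],
    )
    print(docs.to_json())
```

The specific schema matters far less than the property it demonstrates: each area has a named owner, a review date, and a serialization path, so a request for “all policies on user safety protocols” maps to an artifact that already exists.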

None of this is a new compliance obligation. These are practices that responsible AI deployment already requires. The Florida investigation makes the cost of not having them visible in a way that vendor risk assessments don’t. Two enforcement actions in two weeks, under different theories, at different levels of government, over different conduct, converge on the same underlying question: what did the company know about how its system would be used, and what did it do about it? Document your answer.
