Regulation Deep Dive

The Professional Licensing Theory: What Pennsylvania's AI Medicine Lawsuit Opens for Platforms in Every Licensed Domain

6 min read · JD Supra (T3, single source, see verification note) · Qualified · Moderate
A Pennsylvania medical licensing board has reportedly filed an unauthorized practice of medicine complaint against an unnamed generative AI platform, on a legal theory distinct from that of every prior AI enforcement action in the US. This isn't a consumer protection case. It's a licensing board using the mechanism that governs licensed professionals, and that distinction changes the exposure map for AI companies operating in healthcare, law, mental health, and financial advice.
Three distinct legal enforcement theories

Key Takeaways

  • Pennsylvania's Board of Medicine reportedly used an unauthorized practice of medicine theory, distinct from consumer protection theories in all prior AI enforcement actions
  • Three enforcement bodies now operate against AI platforms in licensed domains: FTC (deceptive practices), state AGs (consumer protection), and state licensing boards (unauthorized practice)
  • Fraudulent professional credential generation, alleged in this case, creates separate fraud liability beyond unauthorized practice exposure
  • Platforms in healthcare, legal, financial advice, and mental health face state-by-state unauthorized practice exposure that consumer protection compliance programs don't address
  • The PA case rests on a single source; human verification of the primary filing is recommended before publication

AI Professional Domain Enforcement, Three Theories Compared

  • FTC. Legal theory: deceptive trade practices (FTC Act §5). Key test: did the platform make claims it couldn't substantiate? Prior action: DoNotPay, 2024
  • State AG. Legal theory: consumer protection (state UDAP). Key test: did the platform harm consumers through unfair or deceptive conduct? Prior action: Pennsylvania AG v. Character.AI, 2026
  • State licensing board. Legal theory: unauthorized practice of medicine/law. Key test: did the conduct fall within licensed practice without a license? Prior action: PA Board of Medicine v. unnamed AI platform, 2026 (single source)

Verification

Qualified. JD Supra (T3, single source). PA Department of State filing not directly accessed. All case-specific details require 'reportedly' framing. User count figure (45,000) is unverifiable and excluded. Human verification of primary filing recommended before publication.

Consumer protection suits have dominated AI enforcement headlines. Regulators have reached for the FTC Act, state UDAP statutes, and privacy frameworks to bring AI platforms to account. Pennsylvania’s reported action against a psychiatry chatbot platform doesn’t fit that mold.

According to reporting from JD Supra, the Pennsylvania State Board of Medicine has filed a complaint against an unnamed generative AI platform, alleging the platform’s chatbot impersonated a licensed psychiatrist, generated a fraudulent medical license number, and provided mental health assessments to users. The specific complaint details, including the reported user figure, come from a single source and haven’t been independently confirmed. The complaint itself, filed with the Pennsylvania Department of State, wasn’t directly accessed. Every specific detail from this case should be read with that caveat. The core claim, that a state medical licensing board took action against an AI platform under an unauthorized practice of medicine theory, is coherent, legally grounded, and consistent with documented enforcement patterns. But it rests on one source.

What matters most isn’t the specific case. It’s the theory.

Three enforcement bodies. Three legal theories. One underlying problem.

AI platforms that operate in licensed professional domains have now attracted enforcement from three distinct regulatory bodies, each using a different legal mechanism. Understanding how they differ isn’t academic: the exposure profile, required remedy, and operational impact differ completely depending on which body comes for you.

The FTC reached DoNotPay in 2024 using Section 5 of the FTC Act, the agency’s consumer protection authority. DoNotPay had marketed itself as a “robot lawyer” and billed itself as capable of handling legal matters for consumers. The FTC’s theory was deceptive trade practices: the company made claims it couldn’t substantiate. The settlement required DoNotPay to stop making those claims and to notify affected customers. The legal theory required showing that consumers were misled. It didn’t require showing that DoNotPay actually practiced law.

The Pennsylvania AG’s separate suit against Character.AI, covered in prior TJS coverage of the companion AI enforcement wave, operates on a similar consumer protection theory: a state attorney general using a state UDAP statute to allege that an AI platform harmed users through deceptive or unfair practices. The theory is about consumer harm and platform conduct, not about whether the platform unlawfully practiced a profession.

The Pennsylvania Board of Medicine theory, unauthorized practice, is different in a foundational way. Unauthorized practice of medicine statutes don’t ask whether users were misled. They ask whether someone performed acts that require a medical license without having one. The AI platform’s intent is largely irrelevant. What matters is whether the conduct (the diagnosis, the assessment, the treatment recommendation) falls within the statutory definition of medical practice. That’s a lower bar in some respects and a higher one in others. Lower, because you don’t need to prove consumer harm to establish a violation. Higher, because you have to show the specific conduct crossed the line from information into practice.

Fraudulent license generation, if confirmed, adds a separate dimension entirely. That’s not just unauthorized practice. That’s identity fraud in a regulated credential system.

Who This Affects

Healthcare AI Platforms
Review individualized assessment features against state unauthorized practice of medicine statutes, not just FTC deceptive claims standards
Legal Tech AI Platforms
DoNotPay established FTC exposure; unauthorized practice of law claims by state bar enforcement bodies remain a separate and parallel risk
Mental Health AI Platforms
Therapeutic engagement features (crisis assessment, medication guidance, diagnostic-equivalent output) are the highest-risk zone for licensing board action
AI Product and Legal Teams
Audit whether your platform's output ever generates or implies professional credential identifiers, regardless of intent

Unanswered Questions

  • Does state unauthorized practice liability attach to AI platforms the same way it attaches to individuals?
  • Does a platform's disclaimer (e.g., 'not medical advice') insulate it from unauthorized practice claims, or only from consumer protection claims?
  • What is the enforcement posture of medical licensing boards in your operating jurisdictions?
  • Has your platform's output ever generated professional credential language through hallucination?

The compliance exposure map.

Which AI platforms face unauthorized practice risk? The analysis starts with what the platform’s product actually does.

Healthcare and mental health. Any AI system that provides individualized clinical assessments, differential diagnoses, medication recommendations, or therapy-equivalent engagement is operating in the highest-risk zone. “Mental health support” features that go beyond psychoeducation into therapeutic intervention (treatment recommendations, crisis assessment, medication guidance) are the clearest exposure. The Pennsylvania case reportedly involves a psychiatry impersonation scenario. That’s the acute end. But the risk doesn’t stop there. Symptom checkers that output probability diagnoses. Wellness apps that recommend clinical interventions. Platforms that use clinical language and licensed-professional framing to establish user trust. All of these require legal review under state unauthorized practice of medicine statutes.

Legal services. DoNotPay established the FTC theory. The unauthorized practice of law theory, which most state bars enforce, is narrower in some states and broader in others, but the risk profile for AI platforms that provide individualized legal advice (not legal information) has been established since 2023. Platforms that generate legal documents for specific situations, provide case outcome predictions, or advise users on their specific legal rights in ways that require legal judgment are operating in this space.

Financial advice. SEC and FINRA jurisdiction covers investment advice. Platforms that provide individualized securities recommendations (“you should buy X” rather than “here’s how to think about X”) face registered investment adviser requirements at the federal level and state-level equivalents. AI platforms operating in wealth management, tax planning, or insurance advice face parallel exposure.

The credential problem. The reported detail about fraudulent license number generation is significant regardless of what’s confirmed about the broader case. If an AI platform’s output includes fabricated professional credentials (a license number, a bar number, a CPA registration), the platform has created a document that could function as evidence of a credential that doesn’t exist. That’s a different legal problem from unauthorized practice. It potentially creates fraud liability, document falsification exposure, and, depending on the jurisdiction, criminal referral risk.

What the enforcement pattern suggests.

Three enforcement actions, three bodies, three legal theories. The FTC consumer protection approach (2024). The state AG consumer protection approach (2024-2026). Now a licensing board unauthorized practice approach (2026).

Each approach found a different mechanism to reach the same underlying conduct: AI platforms operating in licensed professional domains without the constraints that licensed professionals operate under. The diversification of legal theory means there isn’t a single compliance fix. An AI platform could fully satisfy the FTC’s deceptive claims standard while still violating unauthorized practice statutes. It could survive a state AG consumer protection challenge while generating credentialed output that exposes it to fraud liability.

Separately, New York has been reported as considering legislation explicitly prohibiting AI from impersonating licensed professionals, though the status of that proposal hasn’t been independently confirmed for this brief and should be treated as `[LEGAL-INTERPRETATION]`. Federal law doesn’t currently address AI unauthorized practice as a distinct category; that gap means state-by-state exposure analysis is the only available framework.

Warning

Consumer protection compliance programs answer a different legal question than unauthorized practice exposure. A platform that fully satisfies the FTC's deceptive claims standard can still violate state unauthorized practice statutes. These are parallel, not overlapping, compliance obligations.

What to Watch

  • Primary source confirmation of the PA Board of Medicine complaint (PA Dept. of State): before publication
  • Independent news coverage corroborating the PA lawsuit beyond JD Supra: 1-2 weeks
  • New York legislation on AI impersonation of licensed professionals, status update: Q3 2026
  • Other state licensing board actions against AI platforms in professional domains: rolling

What compliance teams should be asking now.

This section isn’t legal advice. It’s a framework of questions for your legal counsel.

  • Does your platform use professional titles, credential language, or license-like identifiers in its output or marketing?
  • Does your product provide individualized assessments that a licensed professional would be required to provide in a clinical, legal, or financial context?
  • Has your legal team conducted a state-by-state review of unauthorized practice statutes in the jurisdictions where you operate?
  • Does your platform’s AI output ever generate credentialed-looking content (license numbers, bar registration numbers, professional certifications), whether through hallucination or by design?

If any of these four answers points to exposure, the Pennsylvania case is directly relevant to your risk posture.
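The credential audit in the last question can be given a coarse automated first pass. Below is a minimal sketch; `flag_credential_language` and the regex list are hypothetical illustrations, not a real compliance tool, and actual license-number formats vary by state and profession:

```python
import re

# Hypothetical patterns for illustration only; a real audit would need
# per-state, per-profession credential formats and human legal review.
CREDENTIAL_PATTERNS = [
    # "license number MD-48213", "License #: 12345"
    re.compile(r"\blicense\s*(?:no\.?|number|#)\s*[:#]?\s*[A-Z0-9-]{4,}", re.I),
    # "bar number 987654"
    re.compile(r"\bbar\s*(?:no\.?|number|#)\s*[:#]?\s*[A-Z0-9-]{4,}", re.I),
    # claims of holding a licensed-professional title
    re.compile(r"\b(?:board-certified|licensed)\s+(?:psychiatrist|physician|attorney|CPA)\b", re.I),
]

def flag_credential_language(output_text: str) -> list[str]:
    """Return substrings of a model response that look like professional
    credential claims, queued for human legal review."""
    hits: list[str] = []
    for pattern in CREDENTIAL_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(output_text))
    return hits
```

A scan like this only surfaces candidates for review; it cannot decide whether output crosses from information into licensed practice, which is the legal question the audit ultimately has to answer.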

TJS synthesis.

Consumer protection enforcement gave AI platforms a compliance target: don’t make claims you can’t substantiate, don’t deceive users, don’t cause unfair harm. Unauthorized practice enforcement changes the target entirely. The question is no longer what you claimed, it’s what you did. Platforms that have spent two years building consumer protection compliance programs may not have simultaneously built the professional-practice analysis their legal exposure now requires.

The enforcement bodies are diversifying faster than most AI legal teams anticipated. Licensing boards, unlike the FTC or state AGs, don’t need a showing of consumer harm, a regulatory proceeding timeline, or a large enforcement budget. They operate on complaint-driven processes that can move quickly at the state level. Don’t expect this to be the last licensing board action against an AI platform. The Pennsylvania Board of Medicine case, if it’s confirmed at a primary source level, will be studied by medical licensing boards in every other state. That’s the trajectory.

More from May 13, 2026
