The legal profession has been watching AI hallucinations in court for several years. What’s changed is the trajectory.
Since mid-2023, a growing record of AI-driven errors in legal filings has accumulated across jurisdictions. According to Baker Donelson’s tracking of documented cases, more than 120 instances of AI-driven legal hallucinations have been identified in that period, with at least 58 occurring in 2025 alone. These figures come from a law firm blog and should be treated as directional: the count isn’t exhaustive and may understate the actual incidence. But the trajectory they suggest is consistent with what’s known about AI tool adoption rates in legal practice: more lawyers using AI means more opportunities for AI errors to reach courts.
That trajectory is why the legal profession is now treating AI hallucinations as a professional responsibility problem, not a technology problem. The distinction has consequences.
What the Data Actually Shows
Stanford HAI’s research on AI in legal contexts provides the most rigorous public data available on this question. The findings are specific. Legal AI models, tools built and marketed for legal use cases, hallucinate in 1 out of 6 (or more) benchmarking queries. General-purpose chatbots perform worse: Stanford HAI’s prior study found hallucination rates between 58% and 82% on legal queries.
The gap between “legal AI” and “general chatbot” performance matters for understanding the legal profession’s current situation. Many law firms deployed general-purpose AI tools first, before legal-specific tools were widely available. The higher hallucination rates associated with general tools are likely reflected in the case accumulation record.
Stanford HAI’s characterization is direct: general AI tools carry serious risks for legal use. Legal-specific models are better. Neither is reliable enough to operate without lawyer supervision.
From Technology Problem to Professional Responsibility Problem
Here’s why the framing shift matters. If AI hallucinations were purely a technology problem, the correct response would be to wait for better technology. AI tools improve. Hallucination rates will likely decrease. A law firm could reasonably say: we’ll deploy more carefully when the tools are more reliable.
Professional responsibility rules don’t support that logic.
The duty of competence under Model Rule 1.1 requires lawyers to maintain the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Using an AI tool without understanding its error profile isn’t competent practice. The duty of candor to the tribunal under Rule 3.3 prohibits making false statements of law or fact. Filing an AI-hallucinated case citation isn’t excused by the AI’s involvement. The duty to supervise nonlawyer assistance under Rule 5.3, which courts and bar associations have begun applying to AI tools, requires that lawyers ensure AI output is reviewed and does not violate professional obligations.
These aren’t new rules invented for AI. They’re existing obligations that apply to AI use. The professional responsibility framework was already there. The AI problem walked into it.
What the ABA’s Initial Guidance Establishes
The ABA issued the legal profession’s first formal ethics guidance on lawyers’ AI tool use (Formal Opinion 512, July 2024), a significant step because it establishes the professional responsibility lens as the official frame, not a technology safety lens.
The ABA’s guidance acknowledges the competence, supervision, and candor obligations described above. It does not establish specific technical standards for AI accuracy thresholds, disclosure formats, or documentation requirements. It is a framework document, not a compliance specification.
That gap is exactly what legal technology firms and professional associations are now pushing to fill. The advocacy is for standards that answer the practical questions the ABA’s initial guidance raises but doesn’t answer: what does adequate AI supervision look like? What disclosure is required when AI tools assist in drafting? What documentation should law firms maintain about AI use in client matters?
No bar association has yet formally adopted AI-specific ethics rules beyond the ABA’s initial guidance, though discussions are advancing across multiple jurisdictions.
Court-Level Action Is Moving Faster Than Bar Associations
While bar associations deliberate on comprehensive standards, courts are acting independently. The National Center for State Courts has published practitioner guidance on AI and hallucinations, and several federal and state courts have adopted local rules requiring disclosure of AI tool use in filings.
Court-level rules are faster to implement than bar rules because they don’t require the full bar association rulemaking process. A federal judge can issue a standing order tomorrow. For law firms, this means the first formal AI compliance obligations they face may come from individual courts’ local rules rather than from a comprehensive bar ethics standard.
The practical implication: law firms need court-by-court monitoring of AI disclosure requirements, not just bar association tracking.
What Standards Would Actually Require: A Practical Preview
Drawing from the ABA’s current guidance, court-level disclosure practices, and the professional responsibility analysis above, the AI ethics standards taking shape in the legal profession would likely include:
*Accuracy verification.* A requirement that a licensed attorney review AI-assisted work product for accuracy before reliance, with specific attention to citations, case holdings, and statutory quotations.
*Client disclosure.* Depending on jurisdiction and context, an obligation to disclose to clients when AI tools are used in their representation. The scope and trigger for this disclosure vary significantly across the discussions underway.
*Tribunal disclosure.* Requirements to disclose AI tool use in filings, consistent with the court-level rules already emerging. The ABA’s candor obligation provides the foundation; specific disclosure formats are being developed.
*Documentation.* Internal recordkeeping on AI tool use in client matters: what tools were used, by whom, and what review was conducted. This serves both supervision and liability management purposes. A sketch of what such a record might capture follows this list.
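To make the documentation item concrete, here is a minimal sketch of what a per-use record might capture, expressed as a simple data structure. Everything in it, the field names, the tool name, the review workflow, is an illustrative assumption rather than a prescribed format; any formal standard that emerges may require different or additional fields.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseRecord:
    """Hypothetical per-use record of AI assistance in a client matter.

    Field names are illustrative assumptions, not a bar-mandated schema.
    """
    matter_id: str            # internal matter or client file number
    tool: str                 # name and version of the approved AI tool
    used_by: str              # lawyer or staff member who ran the tool
    use_date: date            # when the tool was used
    purpose: str              # drafting, research, summarization, etc.
    reviewed_by: str          # licensed attorney who verified the output
    citations_verified: bool  # were cited authorities checked against primary sources?
    notes: str = ""           # errors caught, corrections made, disclosure decisions

# Example entry: the kind of record a firm could produce if a court or
# disciplinary body later asks what review was actually conducted.
record = AIUseRecord(
    matter_id="2025-0142",                      # hypothetical matter number
    tool="ExampleLegalAI v3.1",                 # hypothetical tool name
    used_by="Associate J. Doe",
    use_date=date(2025, 6, 3),
    purpose="first draft of a summary judgment brief",
    reviewed_by="Partner A. Roe",
    citations_verified=True,
    notes="Two hallucinated citations caught and removed in review.",
)
```

Even a schema this thin answers the three questions the supervision duty raises: which tool, whose hands, and whose review.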
What Law Firms and Legal AI Developers Should Do Now
The absence of comprehensive formal standards doesn’t create a compliance holiday. It creates a governance gap that firms need to fill on their own terms.
For law firms: the ABA’s existing competence and supervision standards already apply to AI tool use. Firms that document their AI governance process (which tools are approved, what review is required, how errors are caught) are building the record they’ll need when formal standards arrive and when the inevitable disciplinary matter involving AI occurs at a peer firm.
For legal AI developers: the professional responsibility frame means accuracy and explainability aren’t marketing features. They’re prerequisites for the compliance officers, general counsel, and ethics partners who make procurement decisions at law firms. Tools that can document their error rates, provide audit trails of AI-assisted work, and demonstrate supervision workflows will have a structural advantage as formal standards emerge.
TJS Synthesis
The legal profession is moving toward AI ethics standards on a timeline driven by liability, not by technology readiness cycles or legislative calendars. That’s a different clock than the one governing AI regulation in most other sectors. The combination of existing professional responsibility rules, documented hallucination incidents, court-level disclosure requirements, and growing ABA engagement means the compliance environment for legal AI is tightening now, before comprehensive formal rules exist. Firms and vendors that treat formal rule adoption as the starting gun will arrive late to a race that’s already running.