Start with a number that deserves more attention than it gets.
According to an analysis from JD Supra and the Electronic Discovery Reference Model (EDRM), 43% of surveyed firms have no formal AI governance policy. That's not a projection. That's the current state of a profession now actively using AI to prepare the arguments heard in the courtrooms where those same firms practice.
The WVU/NCSC white paper and the industry survey data don’t cite each other. They don’t need to. They describe the same structural problem from two directions, and the gap between them is the story.
What the Judicial Study Found
West Virginia University and the National Center for State Courts released a white paper on or around April 30 examining how judges are using generative AI in their workflows. According to WVUToday's coverage of the white paper, the study characterizes judicial adoption as cautious and documents use cases including preparing oral argument questions and summarizing lengthy filings. These are administrative and preparatory functions, not final rulings. The paper is explicit on that boundary.
According to the Thomson Reuters Institute's account of the findings, the white paper recommends that AI not be used for final judicial reasoning or decision-making. This is a normative recommendation, not a legal requirement. No jurisdiction has codified that boundary in statute. The white paper is arguing for a guardrail that doesn't yet exist as enforceable policy.
That distinction matters. “The white paper recommends it” and “the law requires it” describe very different levels of protection. Right now, the former is doing the work the latter hasn’t been written to do.
The Governance Gap in Numbers
The industry data arrives separately and covers different ground. The 8am 2026 Legal Industry Report is the source for two statistics that should be read together: according to that report, 69% of legal professionals report using generative AI for work tasks, and approximately 54% of respondents lack access to formal AI training. These are findings from a single survey with its own methodology, reported here as 8am's findings rather than industry consensus.
The 43% figure for firms with no formal AI governance policy comes from a distinct analysis by JD Supra and EDRM. It is not a finding of the 8am report. These are three data points from two separate sources, and they should not be read as outputs of a unified study. What they share is directional consistency: high adoption, low infrastructure.
Read in sequence, the picture is this. Most legal professionals are using AI. More than half of them are doing it without formal training. Nearly half of their firms have no written policy governing how they do it. And in the courtrooms those firms practice in, judges are using the same category of tools to prepare the questions they’ll ask those lawyers.
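The first two figures come from the same survey, which makes a minimum overlap computable. By inclusion-exclusion, if 69% of respondents use generative AI and 54% lack formal training, at least 23% are doing both. A minimal sketch of that arithmetic, assuming the two questions were asked of the same respondent pool:

```python
# Lower bound on the overlap of two findings from the same respondent pool
# (8am 2026 Legal Industry Report figures as reported above). Assumes both
# questions were asked of the same respondents; inclusion-exclusion gives
# the floor: overlap >= p1 + p2 - 1.
using_ai = 0.69      # share reporting generative AI use for work tasks
no_training = 0.54   # share lacking access to formal AI training

min_overlap = max(0.0, using_ai + no_training - 1.0)
print(f"At least {min_overlap:.0%} of respondents use AI without formal training")
# -> At least 23% of respondents use AI without formal training
```

That 23% is a floor, not an estimate; the actual overlap is almost certainly higher.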
Why the Legal Context Raises the Stakes
Governance gaps exist in every sector. The legal sector’s version carries specific consequences that others don’t.
Judicial decisions affect rights. They set precedent. They determine whether evidence is admitted, whether arguments are heard, whether people go to prison or go home. An AI-assisted oral argument question that reflects a hallucinated case summary isn’t a customer service failure. It’s a due process problem.
Law firms advising clients on AI governance are, in many cases, operating without their own AI governance policies. That’s not a hypothetical exposure. It’s a credibility gap that clients are increasingly positioned to identify. The privilege and confidentiality questions that AI use creates in legal practice compound this: a firm without an AI governance policy also lacks the internal guidance that tells attorneys which tools trigger privilege concerns and which don’t.
The Thomson Reuters Institute's alignment with the WVU/NCSC white paper is notable not because two institutions agreed, but because they represent different parts of the legal ecosystem: academic research on judicial administration and professional research on legal practice. They're describing the same gap from their respective vantage points.
What Governance Infrastructure Actually Requires
The white paper's human-in-the-loop recommendation aligns with principles that already exist outside the legal sector. The NIST AI Risk Management Framework addresses human oversight as a core governance component, specifically the need for human review in consequential decision contexts. The EU AI Act designates AI systems used in the administration of justice as high-risk, with corresponding requirements for human oversight and transparency. These frameworks weren't written for law firms. But they describe the architecture that legal AI governance needs to replicate.
A functional governance policy for a law firm using AI isn't a complex document. It needs to answer four questions: which tools are authorized for which tasks, what training is required before use, how AI-assisted work is reviewed before it reaches a client or a court, and how the firm responds when an AI output is wrong. The JD Supra/EDRM analysis suggests that 43% of firms haven't answered any of these questions in writing.
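To make those four questions concrete, here is a hedged sketch of how a firm might capture them as a structured checklist. The tool names, training items, and review steps are hypothetical placeholders, not recommendations drawn from the white paper or either survey:

```python
# A minimal sketch of the four policy questions as a structured record.
# All field values below are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class AIGovernancePolicy:
    # 1. Which tools are authorized for which tasks
    authorized_tools: dict[str, list[str]] = field(default_factory=dict)
    # 2. What training is required before use
    required_training: list[str] = field(default_factory=list)
    # 3. How AI-assisted work is reviewed before it reaches a client or court
    review_process: str = ""
    # 4. How the firm responds when an AI output is wrong
    incident_response: str = ""

policy = AIGovernancePolicy(
    authorized_tools={
        "document_summarization": ["ToolA"],   # hypothetical tool names
        "legal_research": ["ToolB"],
    },
    required_training=["vendor onboarding", "hallucination-awareness module"],
    review_process="attorney review of all AI-assisted output before filing",
    incident_response="log the error, notify the supervising partner, retrain",
)
```

The format is beside the point. What matters is that each field forces a written answer to one of the four questions, which is exactly what 43% of surveyed firms don't have.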
The certification challenge for agentic AI under existing frameworks is relevant here too: as legal AI tools become more autonomous (drafting, researching, organizing), the governance requirements scale with the autonomy. A firm that hasn't governed AI-assisted summarization is not prepared to govern AI-assisted brief drafting.
What Legal Technology Buyers and Compliance Officers Should Do Next
The white paper’s release and the survey data together create a useful benchmark moment. The legal sector now has a documented picture of where it stands: high adoption, low governance, and a judicial system that’s already deploying the technology and working out the rules as it goes.
For legal technology buyers, the priority is policy before tools. Procurement without governance creates liability. The 54% training deficit is solvable: training programs for legal AI tools exist and are expanding. The 43% policy gap is a decision, not a resource problem. Writing an AI governance policy doesn't require budget. It requires someone with authority to make the call and commit it to paper.
For compliance officers at firms already using AI, the white paper’s recommendations provide a defensible reference point. The WVU/NCSC paper isn’t a regulation, but citing it as a basis for internal governance decisions puts the firm’s policy in alignment with the most current institutional thinking on the subject.
For judicial administrators, the white paper’s scope is worth reading carefully. The study’s recommendations exist precisely because courts are already deploying AI faster than the rules governing its use have been written. That’s not a criticism of judges. It’s a description of where the technology is relative to the policy infrastructure.
One thematic thread connects this brief to OpenAI’s Advanced Account Security announcement also covered in today’s Technology cycle: deployment is consistently outpacing governance. OpenAI is now building identity-layer security for enterprise agentic deployments because those deployments arrived before that infrastructure existed. The legal sector’s governance gap is the same pattern at the professional and institutional scale. The tools are in use. The rules are still catching up.