The SEC doesn’t need a new rule to scrutinize your AI disclosures. It already has one: materiality, a standard that has been part of the securities laws for decades.
That’s the practical reality of the SEC’s 2026 AI enforcement posture. The agency rebranded its enforcement unit from the Crypto Assets and Cyber Unit to the Cyber and Emerging Technologies Unit (CETU) in 2025, a structural signal that AI and cybersecurity had displaced crypto as the dominant risk areas under active examination. The CETU’s mandate didn’t change the legal standard. It changed what the agency is looking for.
For compliance teams, the implications are significant, and not in a way that a checklist entirely resolves.
What the CETU Is and What It’s Actually Doing
The SEC’s Cyber and Emerging Technologies Unit, housed in the Division of Enforcement, is the agency’s designated unit for technology-related disclosure risks. Its 2025 rebrand from the Crypto Assets and Cyber Unit reflected the institutional recognition that AI had moved from a peripheral technology risk to a central disclosure issue for public companies across sectors.
Per available reporting on the SEC’s 2026 examination priorities, AI and cybersecurity have emerged as the dominant examination focus areas. That means registered investment advisers, broker-dealers, public companies making AI capability claims in investor materials, and technology vendors whose products are described in securities filings are all operating within the CETU’s examination scope.
The rebrand also signals something about the SEC’s view of AI risk: it’s not a specialized technology problem to be siloed in an innovation-focused unit. It’s a disclosure problem. If a company’s use of AI is material to investors and its disclosures are inadequate, or it overstates its AI capabilities, or it makes claims it can’t operationally support, that’s a securities disclosure issue, not just a technology governance question.
The Materiality Standard: Why Principles-Based Enforcement Creates Real Risk
According to reporting from Hunton Andrews Kurth’s Public Company Advisory Blog, SEC Chairman Paul Atkins has affirmed that AI-related disclosures will be assessed under existing materiality principles rather than new prescriptive rules. That position has significant compliance planning implications that are easy to underestimate.
Materiality is a legal standard, not a checklist. A fact is material if there’s a substantial likelihood that a reasonable investor would consider it important in making an investment decision. Applied to AI, this means: if your AI systems are central to your business model, your competitive positioning, or your risk profile, and you haven’t disclosed that adequately, you’re exposed. If you’ve claimed AI capabilities in investor materials that don’t match operational reality, you’re exposed. If your AI governance program has significant gaps that would affect investor assessments of your risk management, and you’ve been silent about them, you may be exposed.
None of this requires a new rule. It requires applying an existing standard to a new category of fact. That’s exactly what the CETU is positioned to do.
The “AI washing” concern sits directly in this frame. Companies that describe AI integrations in marketing and investor materials as more capable, more autonomous, or more impactful than they actually are, whether intentionally or through imprecise language, are making material misstatements if those claims would matter to a reasonable investor. The SEC doesn’t need an “AI disclosure rule” to pursue that case. It needs a material misstatement, and unlike a private plaintiff, the agency doesn’t even have to prove that any investor relied on it.
The Company-Size Problem
Compliance analysts note that the CETU’s mandate doesn’t distinguish by company size. Mid-market and smaller public companies face equivalent disclosure scrutiny to large-cap issuers, a shift from prior enforcement patterns where AI governance expectations were, in practice, concentrated at companies large enough to warrant intensive examination resources.
That’s a meaningful change. Large-cap companies with sophisticated legal and compliance functions have been building AI disclosure frameworks since the SEC adopted its 2023 cybersecurity disclosure rules. Many mid-market companies haven’t. The SEC’s 2026 examination priorities suggest the agency isn’t waiting for smaller companies to catch up on their own timeline.
The practical effect: if you’re a public company using AI in operations, investor relations materials, or product marketing, your disclosure obligations under existing law are the same regardless of your market cap. The SEC’s examination priorities make clear that enforcement attention is expanding, not contracting.
What an Adequate AI Disclosure Program Looks Like
The absence of prescriptive SEC rules doesn’t mean the compliance path is undefined. It means the path requires judgment rather than just execution. Four elements characterize programs that hold up under examination.
First, accurate capability description. If investor-facing materials describe AI systems, those descriptions need to match operational reality. “AI-powered” means something. If it means “a rule-based system with a machine learning component,” say that. If it means “a large language model integrated into the product workflow,” say that instead. Vague claims create material risk precisely because they’re hard to defend.
Second, governance disclosure proportional to materiality. If AI is central to your business model, your risk disclosures need to address AI governance: what oversight exists, what failure modes have been identified, and what the company is doing to manage them. The SEC’s existing cybersecurity risk management disclosure framework provides a useful structural template for AI governance disclosure.
Third, documented processes behind public claims. The CETU’s focus means documented evidence of the AI governance practices companies describe publicly will matter. Claims that aren’t supported by internal documentation are both a legal risk and an operational one. One way to operationalize this, a claims-to-evidence register, is sketched after the fourth element below.
Fourth, size-appropriate programs. The standard is materiality, not complexity. A mid-market company doesn’t need an enterprise AI governance program built for a Fortune 50. It needs a program appropriate to its use of AI and proportionate to how prominently AI features in its investor communications. The link between what you say externally and what you’ve built internally is where examination risk concentrates.
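To make the third element concrete, here is a minimal sketch of a claims-to-evidence register: a structured record that ties each investor-facing AI claim to the internal documentation supporting it and surfaces claims with no documented basis. This is purely illustrative, not an SEC-prescribed or industry-standard format; the field names (claim_text, source_document, evidence, owner) and the unsupported_claims helper are assumptions invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AIClaim:
    """One investor-facing AI claim, traced to its supporting evidence.

    Hypothetical structure for illustration only; not an SEC-prescribed
    or industry-standard format.
    """
    claim_text: str       # the claim as worded in investor materials
    source_document: str  # where it appears (10-K item, earnings deck, ...)
    evidence: list[str] = field(default_factory=list)  # internal docs backing it
    owner: str = ""       # person accountable for keeping evidence current

def unsupported_claims(register: list[AIClaim]) -> list[AIClaim]:
    """Return claims with no documented internal support -- the gap an
    examiner would surface."""
    return [c for c in register if not c.evidence]

# Illustrative register: one supported claim, one unsupported.
register = [
    AIClaim(
        claim_text="Our recommendation engine uses a large language model.",
        source_document="FY2025 10-K, Item 1",
        evidence=["model card v3", "architecture review, June 2025"],
        owner="VP Engineering",
    ),
    AIClaim(
        claim_text="AI-powered risk monitoring across all product lines.",
        source_document="Q3 earnings deck, slide 7",
    ),
]

for claim in unsupported_claims(register):
    print(f"UNSUPPORTED: {claim.claim_text!r} ({claim.source_document})")
```

The tooling doesn’t matter; the discipline does. Every external claim gets a named owner and a documented basis, and gaps surface internally before an examiner finds them.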
The Broader Context
The SEC’s principles-based approach to AI disclosure sits within a larger US regulatory posture that has consistently favored existing legal frameworks over new AI-specific rules. The hub’s coverage of the White House AI framework’s federal preemption push reflects the same dynamic: the federal government is extending existing authority to cover AI rather than building new regulatory infrastructure from scratch.
That approach has advantages for companies: the rules aren’t changing, the standards aren’t new, and the legal interpretive work on materiality is decades deep. But it has a corresponding disadvantage: there’s no prescriptive safe harbor. You can’t check a box and call yourself compliant. The standard requires judgment, and the consequences of exercising it poorly are the same as any other material misstatement.
The SEC has no AI disclosure rules. That’s not a gap in its enforcement authority. It’s a choice about how to exercise authority it already has.