There are two ways to set AI standards. One is legislation: create rules, let companies challenge them, fight the preemption battle, wait for courts to decide. The other is procurement: tell companies what they must demonstrate to do business with you, and let the market sort itself out. Governor Newsom chose the second path.
That choice is deliberate and consequential. It’s worth understanding why, and what it means for companies that want California government contracts.
The Regulatory Lever Nobody Expected
Newsom’s executive order, signed in early April 2026, directs California state agencies to incorporate AI harm safeguards into their contracting requirements, according to CalMatters reporting. Companies seeking state contracts must demonstrate their AI systems include protections against harmful outputs. State agencies must themselves develop comprehensive AI policies within approximately four months of the order’s signing, per the Digital Watch Observatory’s reporting.
The procurement mechanism matters because it bypasses the preemption problem entirely. The federal government can argue, and may eventually litigate, that federal AI policy preempts state AI legislation. It cannot easily preempt a state’s decisions about its own purchasing. States have broad constitutional authority over their own contracting. This is the same principle that allowed California to set environmental standards for state vehicle purchases that effectively drove auto industry changes, even before federal emissions rules caught up.
Newsom is applying that logic to AI. The order isn’t just about state agency compliance; it’s about creating a compliance market. Companies that invest in AI harm-mitigation systems to qualify for California contracts will have those systems in place for everything else they do. The standard spreads beyond government procurement.
What the Order Reportedly Requires, and What Remains Unconfirmed
The specific requirements described in available reporting are significant, but they must be treated as journalism-sourced until the official order text is reviewed. With that qualification stated explicitly:
According to CalMatters and the Digital Watch Observatory, the order’s requirements for contractors reportedly include safeguards against child sexual abuse material and violent AI-generated content, measures addressing algorithmic bias and discriminatory outputs, and watermarking of AI-generated media. These are technically specific categories, not vague principles. Each creates a distinct compliance obligation if the reporting is accurate.
The CSAM and violent content requirements track closely with the EU AI Act’s prohibited-use provisions. If a company is already building for EU AI Act compliance in those categories, California’s requirements may represent marginal additional work rather than a new compliance program. The algorithmic bias provisions are less predictable: this is an area where state and federal standards diverge significantly, and California’s specific requirements here would need to be assessed against a company’s particular use case.
Watermarking is the most technically demanding requirement on the list. The technology for reliable AI-generated media watermarking exists, and the C2PA (Coalition for Content Provenance and Authenticity) has developed technical standards for it, but implementation is uneven across the industry. Companies that haven’t yet integrated provenance signaling into their AI content pipelines face real engineering work to meet this requirement.
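To make the provenance idea concrete, here is a deliberately simplified sketch of the core mechanism behind standards like C2PA: a signed claim whose cryptographic hash binds metadata (such as an "AI-generated" flag) to the exact bytes of a media file, so tampering with either the file or the label breaks verification. This is an illustration of the concept only, not the C2PA specification; real C2PA credentials use JUMBF containers and X.509/COSE signatures, and every name here (the HMAC key, the `make_manifest` and `verify_manifest` helpers) is hypothetical.

```python
import hashlib
import hmac
import json

# Illustrative shared key; real provenance systems use asymmetric
# certificate-based signatures, not a hard-coded HMAC secret.
SIGNING_KEY = b"demo-signing-key"


def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build a simplified provenance manifest bound to the media content.

    The claim's content hash ties the manifest to these exact bytes;
    the signature ties the claim to the signer.
    """
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g. the model or tool that produced the media
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the claim is untampered AND still matches the bytes."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was altered after signing
    return manifest["claim"]["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
```

Even this toy version shows why the engineering work is nontrivial: the binding must survive the full content pipeline, which is exactly where industry implementations are currently uneven.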
All of this should be confirmed against the official California executive order text before compliance planning begins. The Governor’s Office and California Legislative Information are the T1 sources here.
Two Purposes, One Order
The CalMatters reporting framed this order partly as protection for California AI companies from federal regulatory pressure, not just as a harm-mitigation measure. That dual purpose is real and worth naming directly.
Newsom is simultaneously imposing standards on AI contractors and defending California’s AI startup ecosystem from what CalMatters described as the Trump administration’s regulatory stance. These goals don’t contradict each other. A California-based AI company that builds compliant systems gains a market signal, “California-certified” AI, that may be valuable in government procurement beyond California. The order creates a compliance market that California companies are well-positioned to serve.
This is a strategy, not just a policy. The compliance burden falls most heavily on companies that haven’t already invested in AI harm-mitigation systems, which tend to be companies whose AI products are less mature. California-based AI developers who’ve been building with safety and compliance in mind are positioned as the natural vendors for state contracts under these requirements.
The Federal-State Tension in Practice
TJS previously covered the dynamic in which federal AI preemption efforts have failed to stop state-level AI legislation in New York and Washington. California’s executive order adds a new dimension: states aren’t limited to legislation. Procurement-based AI governance is harder to preempt, faster to implement than legislative processes, and directly enforceable through contract terms rather than regulatory enforcement.
The White House’s AI legislative framework, announced earlier this week, asks Congress to preempt state AI laws. That effort faces an obvious gap: it cannot easily preempt state procurement decisions. If procurement-based AI governance becomes the dominant state-level regulatory mechanism, the federal preemption argument, even if legislated, may not reach the most consequential state AI standards.
What Compliance Teams Should Do Now
If your organization has California state government contracts, or is pursuing them, the immediate steps are clear, with the caveat that these are practical considerations, not legal advice, and the specific obligations should be verified against the official order text.
First: obtain and review the official California executive order. The Governor’s Office and California Legislative Information are the authoritative sources. Don’t rely on journalism summaries for compliance planning, including this brief.
Second: assess current AI harm-mitigation posture against the reported categories: CSAM/violent content filtering, algorithmic bias measures, and watermarking. Identify gaps. The approximately four-month agency policy development window (reportedly around August 2026) is the first concrete milestone: agencies must have their AI policies developed by then, which means contractor requirements will likely be formalized within that window.
Third: treat this as an early signal of a broader trend. California’s procurement approach is replicable. Other states with significant government contracting markets may adopt similar mechanisms. The compliance investment made for California state contracts has value across a widening set of government procurement contexts.
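The gap assessment in the second step above can be reduced to a simple mapping from each reported requirement category to the internal controls that would satisfy it, then a diff against what is actually deployed. Everything in this sketch is hypothetical: the category keys paraphrase the reported requirements, and the control names (`output_classifier`, `bias_audit`, and so on) are illustrative placeholders, not terms from the order.

```python
# Hypothetical mapping from reported requirement categories to
# illustrative internal controls; adjust once the official order
# text defines the actual obligations.
REPORTED_CATEGORIES = {
    "csam_violent_content_filtering": ["output_classifier", "blocklist_pipeline"],
    "algorithmic_bias": ["bias_audit", "disparate_impact_testing"],
    "watermarking": ["provenance_signing"],
}


def find_gaps(deployed_controls: set[str]) -> dict[str, list[str]]:
    """For each reported category, list the controls not yet deployed."""
    return {
        category: [c for c in required if c not in deployed_controls]
        for category, required in REPORTED_CATEGORIES.items()
    }
```

A team that has, say, an output classifier and a bias audit in place would see at a glance that the blocklist pipeline, disparate-impact testing, and provenance signing remain open items to close before the agency policy deadline.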
What to Watch
Three milestones define the near-term arc of this story. First, the official order text: when it is published and what specific requirements it actually contains, as opposed to what reporting has characterized it as containing. Second, the August 2026 agency policy deadline: whether California agencies meet it, and what policies they produce, will be the real test of whether this order has operational teeth. Third, whether other states or large municipalities adopt similar procurement-based AI governance mechanisms. California rarely moves alone for long.
The strategic insight here is straightforward: Newsom found a path to AI governance that doesn’t require winning a federal preemption fight. Companies that recognize the mechanism early and build compliance systems proactively will have an advantage in government procurement markets that companies still treating this as a distant regulatory risk will not.