California moved first.
On March 31, 2026, Governor Gavin Newsom signed an executive order requiring AI safeguards for companies seeking state government contracts. StateScoop confirmed the order is “aimed at tightening oversight of artificial intelligence companies contracting with the state” and framed the action as countering the Trump administration’s approach to AI governance. CBS Sacramento described it as a “first-of-its-kind” state executive order imposing AI requirements on government contractors.
The order’s core obligation is clear: AI vendors doing business with California state agencies must demonstrate compliance with the new standards or risk losing access to state contracts. Reuters confirmed the order requires firms seeking state contracts to have “safeguards against abuse.” The Guardian’s reporting indicates the order addresses illegal content, harmful bias, discriminatory outputs, and civil rights violations, though that specific enumeration couldn’t be independently confirmed from fetched source text at time of publication.
Two provisions carry additional uncertainty. The order is reported to require state agencies to implement watermarking and labeling for AI-generated images and video, though that provision couldn’t be independently confirmed from available sources. State agencies reportedly have 120 days to develop new certification and vetting processes for AI vendors, but that timeline comes from a single source and should be treated as reported, not confirmed.
Why this matters for compliance teams: The Newsom EO is in force now. The White House’s competing vision, a National Policy Framework for Artificial Intelligence that proposes federal preemption of conflicting state AI laws, is a legislative recommendation, not enacted law. That distinction matters operationally. Companies holding or seeking California state contracts can’t wait for the federal preemption question to resolve before acting on the EO’s requirements. Both frameworks are on the table simultaneously. One is enforceable today.
The federal-state tension here isn’t hypothetical. The White House framework explicitly references preemption of state laws “mandating deceptive conduct in AI models” and a broader preemption of conflicting state standards. California’s EO does exactly what that framework wants to prevent states from doing unilaterally. Legal analysts describe the federal framework as targeting state laws that impose “undue burdens” on AI development, a characterization worth watching as the preemption fight develops.
This is the governance conflict becoming operational, not theoretical. For AI vendors with California state contracts, the immediate question isn’t which framework prevails. It’s what the EO actually requires and how quickly compliance programs need to adapt.
This brief is a direct follow-up to “Three Branches, Three Signals: The US AI Governance Conflict Compliance Programs Can’t Ignore,” which outlined the structural pattern this executive order now instantiates. The deep-dive accompanying this brief addresses what the two frameworks mean together.