Federal AI procurement is tightening. The GSA and NIST’s Center for AI Standards and Innovation have reportedly announced a joint effort to improve how the federal government evaluates AI models and services for procurement. The announcement came alongside a draft contract clause with compliance teeth.
The GSA reportedly released a draft clause for public input, designated GSAR 552.239-7001 and titled “Basic Safeguarding of Artificial Intelligence Systems.” The reported public comment period closed March 20, 2026, so if you haven’t already submitted input, that window has passed.
The draft clause reportedly includes four significant requirements:

- government ownership of all data inputs and outputs generated through the AI system;
- a prohibition on vendors using government data to train AI models;
- a restriction limiting covered systems to “American AI Systems”; and
- a 72-hour incident reporting window.
That last requirement is notable. Seventy-two hours is a tight window for enterprise incident response. And the “American AI Systems” restriction, if enacted as drafted, would create procurement eligibility questions for vendors whose underlying models or infrastructure have international components.
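For teams gaming out compliance, the deadline arithmetic is simple but the trigger matters. Here is a minimal sketch, assuming the 72-hour clock starts at incident detection; the draft’s actual trigger event and reporting mechanics would need to be confirmed against the clause text:

```python
from datetime import datetime, timedelta, timezone

# Reporting window as reported for the draft clause; confirm against
# the actual GSAR 552.239-7001 text before relying on it.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(detected_at: datetime) -> datetime:
    """Latest permissible report time, assuming the clock starts at
    detection. The final rule may define a different trigger, e.g.,
    confirmation of the incident rather than initial detection."""
    if detected_at.tzinfo is None:
        raise ValueError("use a timezone-aware timestamp")
    return detected_at + REPORTING_WINDOW

# An incident detected Friday at 17:00 must be reported by Monday at
# 17:00 -- the window spans the weekend, which is why weekend on-call
# coverage is part of meeting it.
detected = datetime(2026, 3, 20, 17, 0, tzinfo=timezone.utc)
print(reporting_deadline(detected).isoformat())
```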
None of these reported provisions should be treated as final regulatory text without primary-source confirmation. The draft clause is expected to be available through regulations.gov; federal AI vendors should verify its current status directly.
For companies with existing federal contracts or those pursuing them, the practical question is whether current system architectures would satisfy an “American AI Systems” definition and whether incident response processes can meet a 72-hour window.
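One way to frame that question internally is a structured self-assessment against the four reported provisions. A hedged sketch follows; the field names are hypothetical shorthand for the reported requirements, not language from the clause itself:

```python
from dataclasses import dataclass, fields

@dataclass
class DraftClauseSelfAssessment:
    """Hypothetical checklist mapping to the four reported provisions
    of draft GSAR 552.239-7001. Field names are illustrative only."""
    government_owns_inputs_and_outputs: bool   # data ownership provision
    no_training_on_government_data: bool       # training prohibition
    qualifies_as_american_ai_system: bool      # eligibility restriction
    incident_reporting_within_72_hours: bool   # reporting window

    def gaps(self) -> list[str]:
        # Names of the provisions the current posture would fail.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example posture: data handling is compliant, but eligibility and
# incident response are open questions.
assessment = DraftClauseSelfAssessment(
    government_owns_inputs_and_outputs=True,
    no_training_on_government_data=True,
    qualifies_as_american_ai_system=False,  # e.g., international inference infrastructure
    incident_reporting_within_72_hours=False,
)
print(assessment.gaps())
```

The point of the exercise is less the code than the forcing function: each flag has to be answered by someone who actually knows the architecture and the incident response runbook.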