This isn’t about your chat transcripts. OpenAI has been clear on that distinction. What reportedly changed in the May 1 privacy policy update is the scope of metadata (usage patterns, feature interactions, session data, and related signals) that OpenAI can share with third-party marketing partners for advertising targeting purposes. That’s a narrower change than the alarm-cycle headlines suggested. It’s also not trivial for enterprise compliance teams.
Here’s the gap that matters. Most enterprise AI governance programs were built around one question: is sensitive data leaving the organization through the AI interface? The answer, for chat transcripts, remains no. But metadata sharing creates a second question that most governance frameworks haven’t asked: does my organization’s AI usage behavior itself constitute sensitive information? For companies in regulated industries (financial services, healthcare, defense contracting), the answer may be yes.
A financial services firm whose employees use ChatGPT Enterprise at elevated volume before a regulatory filing creates metadata. A healthcare organization whose clinicians query the platform at patterns correlated with specific patient cohorts creates metadata. A law firm whose associates query at patterns consistent with active litigation creates metadata. None of that is transcript content. All of it is potentially sensitive.
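To make the pattern concern concrete, here is a minimal sketch, using entirely hypothetical data and a naive threshold rather than anything OpenAI actually collects or shares, of how query-volume metadata alone, with no transcript content, can reveal sensitive timing:

```python
from statistics import mean, stdev

def flag_sensitive_periods(daily_query_counts, threshold_sigma=2.0):
    """Flag days whose query volume deviates sharply from a baseline.

    A party holding only this metadata (no transcripts) could still
    infer that something unusual happened on the flagged days.
    """
    baseline = daily_query_counts[:14]  # first two weeks as baseline
    mu, sigma = mean(baseline), stdev(baseline)
    return [
        day for day, count in enumerate(daily_query_counts)
        if sigma > 0 and (count - mu) / sigma > threshold_sigma
    ]

# Hypothetical firm: ordinary volume, then a spike before a filing date.
usage = [100, 104, 98, 101, 99, 103, 97, 102, 100, 105,
         96, 101, 99, 103, 100, 102, 98, 101, 240, 260]
print(flag_sensitive_periods(usage))  # -> [18, 19], the spike days
```

The point of the sketch is that the inference requires no content access at all; aggregate counts and timestamps are sufficient, which is why "no transcript sharing" does not close the governance question.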
Who This Affects
The enterprise governance question isn’t whether OpenAI violated its terms; the reported update appears to have been disclosed in the policy itself. The question is whether enterprise customers reviewed their own contractual arrangements. ChatGPT Enterprise terms have historically included stronger data handling commitments than consumer terms. Whether those commitments cover the metadata sharing authorized in the May 1 consumer policy update, and whether enterprise customers negotiated carve-outs, are questions compliance teams should be verifying against their vendor contracts right now.
Don’t expect this to stay in the policy-update category for long. Canadian privacy enforcement is already active in the AI space; hub coverage from May 8 addressed the OPC’s consent theory and which AI companies face comparable exposure. GDPR’s lawful basis requirements mean that metadata use for advertising by a company processing data from EU-based users may require a separate legal basis analysis. The intersection of AI vendor policy changes and multi-jurisdiction privacy compliance is exactly where gaps appear.
What to Watch
OpenAI hasn’t specified the advertising partners or the full technical scope of shared metadata, according to reporting on the update. The practical risk for enterprise teams is that the metadata categories are defined by OpenAI, not by the customer. That asymmetry (the vendor defines what is and isn’t shared; the customer bears the compliance exposure) is the structural issue worth addressing in contract renewal negotiations.
The compliance teams that treat this as a vendor terms review will be better positioned than those that treat it as a press story. The next contract renewal is the enforcement mechanism enterprises actually control.