The threshold question is settled. A Commission spokesman, Thomas Regnier, confirmed to Reuters that OpenAI has published user data showing ChatGPT’s search functionality averaged 120.4 million monthly users across the EU over the past six months. That figure sits well above the 45 million monthly active user threshold that triggers Very Large Online Search Engine (VLOSE) designation under the Digital Services Act.
What’s new isn’t the number. It’s the record.
Prior reporting covered the Commission’s evaluation of whether ChatGPT would cross the threshold. Now the Commission has said publicly, through a named spokesman, that it already has. That’s a different moment. It means the formal designation process has a factual foundation it didn’t have before: OpenAI’s own published data, acknowledged by the regulator.
The DSA’s VLOSE tier carries the law’s most extensive obligations. Designated platforms must complete systemic risk assessments, implement algorithmic transparency measures, provide researcher access to data, and undergo independent audits. Under Article 43 of the DSA, supervisory fees for designated platforms may reach up to 0.05% of worldwide annual net income; that fee framework was written into the law’s architecture from the outset and applies once designation is formally issued.
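The Article 43 cap is simple arithmetic, which a short sketch makes concrete. The 0.05% rate comes from the regulation; the net income figure below is a purely hypothetical placeholder, not OpenAI’s actual financials.

```python
# Sketch of the DSA Article 43 supervisory fee ceiling.
# FEE_CAP_RATE is the 0.05% cap; the income input is illustrative only.
FEE_CAP_RATE = 0.0005  # 0.05% of worldwide annual net income


def max_supervisory_fee(worldwide_net_income: float) -> float:
    """Upper bound on the annual supervisory fee under Article 43."""
    return worldwide_net_income * FEE_CAP_RATE


# Hypothetical example: a platform with $2 billion in net income
# would face a fee capped at $1 million per year.
print(max_supervisory_fee(2_000_000_000))  # → 1000000.0
```

The actual fee charged can be lower; Article 43 only sets the ceiling, and the Commission apportions costs across designated services.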
The Commission is actively evaluating formal VLOSE designation for ChatGPT. That evaluation is advancing. What it isn’t, based on available sourcing, is complete. Reuters’ reporting uses “weighing” language, not “finalizing.” No formal designation decision has been published. The distinction matters operationally: obligations don’t attach until designation is issued, and the clock on any compliance timeline doesn’t start until that date.
What makes this development significant for compliance teams isn’t the designation itself: it’s the trajectory. The Commission has publicly acknowledged a threshold breach via a named spokesman. That puts the process in a different category than informal evaluation. Tech Policy Press reported the 120.4 million figure, independently corroborating Reuters’ account.
One clarification that will matter going forward: the DSA and the EU AI Act are different laws with different enforcement mechanisms. DSA VLOSE obligations (systemic risk assessments, algorithmic accountability, researcher data access) are not the same as EU AI Act requirements, which address AI system risk classifications and prohibited uses. A ChatGPT deployment subject to DSA VLOSE rules still needs a separate EU AI Act compliance analysis. Confusing the two frameworks is an easy mistake with costly consequences.
DSA VLOSE obligations generally take effect four months after notification of a formal designation decision, though the operative dates depend on the Commission’s decision itself. Any compliance deadline is an estimate until the designation date is confirmed. An estimated target of late August 2026 circulates based on an assumed late-April designation, but that figure should not be treated as confirmed.
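The deadline estimate is just date addition, and writing it out shows exactly which assumptions it rests on. Both inputs below, the late-April 2026 designation date and the roughly four-month window, are the unconfirmed assumptions behind the circulating figure, not facts from any Commission decision.

```python
from datetime import date, timedelta

# Assumptions behind the circulating late-August-2026 estimate.
# Neither value is confirmed; both are placeholders for illustration.
assumed_designation = date(2026, 4, 30)        # hypothetical designation date
compliance_window = timedelta(days=4 * 30)     # rough four-month window

estimated_deadline = assumed_designation + compliance_window
print(estimated_deadline)  # → 2026-08-28, i.e. late August 2026
```

Shift the assumed designation date and the estimated deadline shifts with it, which is why the article treats late August 2026 as a moving target rather than a fixed date.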
What to watch: the Commission’s formal publication of a VLOSE designation decision for ChatGPT. That document will set the official compliance clock. Companies with EU-facing ChatGPT integrations should use the current window to map their DSA obligations (risk assessments, transparency documentation, audit preparation) rather than waiting for the official start date. The obligations are knowable now even if the deadline isn’t confirmed.
TJS synthesis: The Commission’s on-record confirmation does something the informal evaluation phase didn’t: it creates a public record that the threshold has been crossed. For businesses using ChatGPT in EU-facing applications, the practical advice hasn’t changed, but the urgency signal has. Designation is no longer a hypothetical to monitor; it’s a proceeding to prepare for.