Two laws. One platform. The confusion is already widespread.
When compliance teams in the EU talk about ChatGPT obligations, the conversation usually defaults to the EU AI Act: prohibited uses, risk classifications, conformity assessments. Those are real. But the Digital Services Act’s VLOSE (Very Large Online Search Engine) tier, which ChatGPT is now formally advancing toward, operates on a different legal track with different obligations, a different enforcement structure, and, critically, a different timeline. A business that has mapped its EU AI Act exposure but hasn’t looked at DSA VLOSE is only half prepared.
Reuters reported that Commission spokesperson Thomas Regnier confirmed on the record that OpenAI had published user data showing ChatGPT’s search functionality averaged 120.4 million monthly EU users over the past six months. The DSA’s VLOSE threshold is 45 million. The gap isn’t narrow: ChatGPT exceeds the threshold by more than 2.5 times. Tech Policy Press independently confirmed the 120.4 million figure.
The Commission has not yet issued a formal VLOSE designation decision. That distinction is important: DSA obligations attach at designation, not at threshold breach. But the formal process is advancing, and the window between “process advancing” and “designation issued” is the window compliance teams have to get ready.
What VLOSE Status Actually Requires
The DSA’s VLOSE obligations, detailed in Articles 33 through 40, are structured around systemic accountability rather than point-in-time compliance. The key requirements for designated platforms break into four categories:
*Systemic risk assessments.* Designated VLOSE platforms must conduct annual assessments of the systemic risks their services generate. These aren’t internal audits: they must be documented, their methodology must be disclosed, and they must be made available to the Commission on request. For an AI-powered search function like ChatGPT’s, the relevant risk categories include amplification of harmful content, manipulation of information access, and effects on fundamental rights.
*Algorithmic accountability.* VLOSE platforms must maintain a publicly accessible register of the algorithmic systems used to recommend, rank, or restrict content. The register must describe each system’s main parameters and the means users have to modify or opt out of them.
*Researcher data access.* Designated platforms must provide vetted academic researchers with access to data necessary to study systemic risks. This is a meaningful operational requirement: it means building and maintaining a data access framework, not just publishing aggregate statistics.
*Independent auditing.* At least annually, designated VLOSE platforms must commission independent audits of their compliance with DSA obligations. Audit results must be shared with the Commission.
Taken together, these aren’t compliance checkboxes. They’re ongoing operational programs. An organization that deploys ChatGPT at scale in EU-facing applications isn’t directly subject to these requirements; OpenAI is. But procurement and contractual decisions made now will determine whether enterprise customers can obtain the documentation, audit results, and transparency disclosures the DSA requires from OpenAI once designation is issued.
The Fee Structure
DSA Article 43 establishes a supervisory fee framework for designated VLOSE platforms: fees may reach up to 0.05% of worldwide annual net income. This is a fee paid to the Commission to fund oversight, not a penalty; penalties for non-compliance are separate and can reach up to 6% of global annual turnover. The 0.05% figure is structural to the DSA fee framework; it isn’t specific to ChatGPT and wasn’t announced as a new development. It’s the ceiling built into the law.
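The two ceilings are easy to conflate, so a minimal arithmetic sketch may help. The 0.05% and 6% rates come from the DSA itself; the revenue figures below are hypothetical, chosen only to show the order-of-magnitude gap between a supervisory fee and a penalty:

```python
# The two percentage caps are from the DSA; the euro amounts fed in below
# are hypothetical and purely illustrative.
SUPERVISORY_FEE_CAP = 0.0005   # 0.05% of worldwide annual net income (Art. 43)
PENALTY_CAP = 0.06             # 6% of global annual turnover (non-compliance)

def dsa_ceilings(annual_net_income_eur: float, annual_turnover_eur: float) -> dict:
    """Return the maximum supervisory fee and maximum penalty, in euros."""
    return {
        "max_supervisory_fee": annual_net_income_eur * SUPERVISORY_FEE_CAP,
        "max_penalty": annual_turnover_eur * PENALTY_CAP,
    }

# Hypothetical: EUR 10 billion in both net income and turnover.
ceilings = dsa_ceilings(10_000_000_000, 10_000_000_000)
print(ceilings["max_supervisory_fee"])  # 5000000.0
print(ceilings["max_penalty"])          # 600000000.0
```

At that hypothetical scale, the fee ceiling is EUR 5 million while the penalty ceiling is EUR 600 million: a 120x difference, which is why conflating the two figures badly distorts risk analysis.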
The Timeline
Designation hasn’t happened yet. What’s confirmed is that the formal process is advancing, following the Commission’s public acknowledgment of the threshold breach. Under the DSA, VLOSE obligations take effect four months after the provider is notified of the formal designation decision. An estimated late-August 2026 target has circulated based on an assumed late-April designation date; that figure should be treated as a planning reference, not a confirmed deadline. The actual clock starts when the Commission publishes a formal decision.
For operational planning, the relevant sequence is: formal designation decision published → compliance clock starts → VLOSE obligations take effect. Each step requires Commission action. The current status is: threshold confirmed, process advancing, formal decision pending.
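The timeline arithmetic behind the circulating estimate is simple: four calendar months from the (hypothetical) notification date. A minimal sketch, assuming a late-April 2026 designation purely for illustration:

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical designation notification date; not a confirmed Commission date.
assumed_designation = date(2026, 4, 30)
compliance_deadline = add_months(assumed_designation, 4)
print(compliance_deadline)  # 2026-08-30
```

An assumed late-April notification lands the compliance date in late August 2026, which is where the circulating estimate comes from; shift the designation date and the deadline shifts with it.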
What This Means for EU Businesses Using ChatGPT
The VLOSE obligations belong to OpenAI, not to enterprise customers. But the downstream implications for organizations using ChatGPT in EU-facing products are real.
Procurement teams should review their ChatGPT API agreements now. Once VLOSE designation is issued, enterprise customers will have a legitimate compliance interest in OpenAI’s DSA adherence, particularly the risk assessment documentation and audit results. Whether current contracts provide any access to that information is worth checking before the designation is issued, not after.
Legal teams should clarify one foundational point for their organizations: the DSA and the EU AI Act are different frameworks. The EU AI Act’s compliance timeline runs on its own track and addresses AI system risk classifications, prohibited uses, and conformity assessment requirements. DSA VLOSE addresses systemic platform risks, algorithmic accountability, and researcher access. A company operating an AI application in the EU needs both analyses. Treating one as a substitute for the other is a compliance gap.
Compliance program managers should watch for the formal designation decision; that’s the document that sets the official obligations and timeline. In the interim, the VLOSE obligations in DSA Articles 33 through 40 are public and readable. Mapping organizational exposure now costs less than doing it under a deadline.
How This Fits the Broader VLOSE Pattern
ChatGPT wouldn’t be the first platform to navigate VLOSE designation. The Commission has previously designated large search engines and social platforms under the DSA’s VLOP (Very Large Online Platform) and VLOSE tiers. The enforcement pattern from those designations suggests that the Commission moves methodically: formal designation, then a compliance period, then audit-based enforcement rather than immediate penalty action. That pattern doesn’t guarantee the same approach with ChatGPT, but it provides a reasonable baseline for timeline expectations.
What’s different about ChatGPT’s VLOSE evaluation is the novelty of the product category: an AI-powered conversational search interface rather than a traditional search engine. That novelty is precisely why the systemic risk assessment requirement matters. The Commission’s existing VLOSE framework was designed for algorithmic amplification risks familiar from social media and search. Applying it to a generative AI search interface raises questions the designation process will have to address: what “algorithmic recommender systems” means in a conversational context, and what “systemic risk” looks like when the output is generated rather than retrieved.
TJS synthesis: The Commission’s on-record confirmation doesn’t create new legal obligations. It does close the gap between “this might happen” and “this is happening.” For compliance professionals advising on ChatGPT deployment in the EU, the question has shifted from whether VLOSE designation is coming to how to prepare before the formal clock starts. The DSA is a different law than the EU AI Act. Prepare for both: they’re not interchangeable, and they don’t share a timeline.