Regulation Daily Brief

AI Regulation News: Canada Privacy Watchdog Finds OpenAI Violated Consent Law in ChatGPT Training Data Collection

Canada's federal and provincial privacy commissioners concluded a three-year joint investigation with a formal finding that OpenAI failed to obtain valid consent when collecting personal information to train ChatGPT. The consent theory the Office of the Privacy Commissioner applied isn't limited to OpenAI; it reaches any AI company that has collected Canadian user data for training without explicit permission.
3-year joint federal-provincial investigation concluded

Key Takeaways

  • OPC joint investigation concludes: OpenAI failed valid consent standard under PIPEDA and Law 25 for ChatGPT training data
  • Finding is about the legal basis for collection, not data security; applies to standard commercial data practices
  • OpenAI reportedly committed to transparency updates within 3-6 months and quarterly compliance reports, details unfinalized
  • The consent theory applied by OPC reaches any AI company training on Canadian user data without explicit purpose-specific consent

Verdict

OpenAI failed to obtain valid consent for ChatGPT training data collection under Canadian privacy law
Court: Office of the Privacy Commissioner of Canada (joint federal-provincial investigation)
Date: 2026-05-06
Implications: Finding applies the PIPEDA/Law 25 consent standard to AI training data, reaching beyond OpenAI to any company using Canadian user data for model training

The Office of the Privacy Commissioner of Canada and its provincial counterparts issued their final joint investigation report on May 6, finding that OpenAI failed to obtain valid consent under Canadian privacy law for the collection and use of personal information to train ChatGPT. The investigation ran three years and involved federal and provincial regulators working in coordination.

The core finding: OpenAI’s standard data collection practices (terms-of-service acceptance, passive data capture) did not meet the “valid consent” standard under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and Quebec’s Law 25. This is not a finding about data security or a breach. It’s a finding about the legal basis for collection itself. OpenAI collected the data lawfully under ordinary commercial law standards. Canada’s regulators concluded those standards don’t satisfy what privacy law requires for training data.

OpenAI reportedly committed to providing transparency updates within three to six months, though the specific technical changes required were not detailed in the published report. Those are preliminary commitments. The remediation plan has not been finalized.

Canadian Privacy Compliance Audit, AI Training Data

  • Identify all Canadian user data in training or fine-tuning datasets
  • Review consent documentation: does it specifically cover AI training as a purpose?
  • Assess whether consent was obtained at or before the time of collection
  • Confirm Quebec Law 25 requirements are satisfied for Quebec-resident data
  • Document remediation steps taken in response to OPC finding

The broader significance is structural. Canada’s consent framework under PIPEDA and Law 25 requires that individuals understand and agree to how their information will be used, and that the purpose be specific enough that a reasonable person would expect it. Training a large language model on personal information doesn’t map cleanly onto the consent people gave when they created accounts or used websites. That gap is the finding.

This same gap exists for every AI company that has collected data from Canadian users through ordinary commercial channels and used it for model training. CTV News reported on the finding and noted the OPC’s position that the finding has implications beyond OpenAI specifically.

Context: Italy’s data protection authority suspended ChatGPT in 2023 on consent grounds before reinstating it following OpenAI’s compliance commitments. The Canadian finding follows a similar theory of harm but reaches a final conclusion after a longer investigation. The pattern across jurisdictions (Canada, Italy, the EU under GDPR) suggests that consent for AI training is becoming the central fault line in AI privacy enforcement, not a fringe theory.

What to Watch

  • OpenAI formal remediation submission to OPC: 3-6 months
  • OPC technical guidance on valid consent for AI training under PIPEDA: 6-12 months
  • Similar investigations opened against other AI companies by OPC or provincial regulators: 3-12 months

OpenAI’s formal remediation submission (due within the three-to-six month window reportedly committed to); whether the OPC finding triggers similar investigations in other jurisdictions with comparable consent frameworks; and whether Canadian regulators publish technical guidance specifying what valid consent for AI training would actually look like under PIPEDA.

The compliance question worth asking: if your organization uses data collected from Canadian users to train or fine-tune AI systems, does your consent documentation specifically cover that purpose?
