
India's DPDPA and AI: Data Protection Rules for AI Systems (2026) | Tech Jacks Solutions

India's DPDPA and AI: Data Protection Rules for AI Systems

India's Digital Personal Data Protection Act sat on the shelf for two years. Enacted in August 2023, it was a law without rules. Then on November 14, 2025, MeitY published the DPDP Rules, and the clock started ticking. Every AI system processing Indian personal data now has a compliance deadline: May 2027 for full compliance, with Phase 1 obligations landing in November 2026. The penalty ceiling is INR 250 crore (approximately $30 million USD).[1] That is not theoretical. The Data Protection Board has statutory authority to enforce it.

This article covers what the DPDPA means for AI systems specifically, what the compliance timeline looks like, and where it differs from GDPR (the two are not the same framework, despite surface similarities).


What Is the DPDPA?

India's Digital Personal Data Protection Act (DPDPA), which received Presidential assent on August 11, 2023 as Act No. 22 of 2023, is India's first dedicated data protection law. The DPDP Rules were notified on November 14, 2025, starting an 18-month phased compliance window. Every AI system processing Indian personal data must reach full compliance by May 2027, with penalties up to INR 250 crore for violations.[2]

The Act applies to digital personal data processed within India and to processing outside India if it involves offering goods or services to people in India. If your model serves Indian users or processes data collected from Indian residents, the DPDPA applies regardless of where your servers sit.[3]

The DPDP Rules operationalize the Act. They define how consent must be obtained, what constitutes adequate security safeguards, how the Data Protection Board will function, and what Consent Managers must do.[1]

Key Facts

  • Full name: The Digital Personal Data Protection Act, 2023
  • Act number: Act No. 22 of 2023[2]
  • Enacted: August 11, 2023 (received Presidential assent)
  • Rules notified: November 14, 2025[1]
  • Administering body: Ministry of Electronics and Information Technology (MeitY)
  • Enforcement body: Data Protection Board of India
  • Maximum penalty: INR 250 crore per violation[2]

7 Core Principles of the DPDPA

The DPDPA is built on seven principles that govern how personal data may be processed. For AI systems, each of these principles creates specific obligations.[3]

  1. Consent & Transparency -- AI systems cannot scrape Indian personal data without informed consent. No "legitimate interest" basis exists in the DPDPA.
  2. Purpose Limitation -- Data collected for one AI use case cannot be repurposed for another without fresh, specific consent.
  3. Data Minimization -- Collect only what is necessary. "More data = better model" requires explicit justification under the DPDPA.
  4. Accuracy -- Personal data in AI decision-making must be accurate and current, an ongoing obligation for models making recommendations or risk scores.
  5. Storage Limitation -- No permanent training data archives "just in case." Retention schedules must be defined and enforced for AI pipelines.
  6. Security Safeguards -- Reasonable security across the full AI data lifecycle: collection, training, inference, and deletion.
  7. Accountability -- Organizations must demonstrate compliance with auditable records of consent, processing activities, and governance practices.

1. Consent and Transparency

Personal data can only be processed with the informed, specific, and freely given consent of the data principal (the individual). Consent must be requested in clear, plain language. For AI systems, this means you cannot scrape Indian personal data from public sources and feed it into training pipelines without consent. The "legitimate interest" basis that GDPR provides does not exist in the DPDPA. Consent is the primary legal basis, with exceptions under Section 7 for state functions, medical emergencies, employment, publicly available personal data, certain research purposes, and other "reasonable purposes" that the Central Government may prescribe by notification.[3]
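Because consent is the primary legal basis, the foundational compliance artifact is an auditable consent record. A minimal sketch in Python follows; the field names are illustrative assumptions, not a statutory format. What the DPDPA requires is that consent be specific, informed, freely given, and withdrawable, which these fields are meant to evidence.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One auditable consent record per data principal and purpose.

    Illustrative field names, not mandated by the DPDPA or the Rules.
    """
    principal_id: str          # pseudonymous identifier for the individual
    purpose: str               # the single, specific processing purpose
    notice_text: str           # plain-language notice shown at collection
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal must be as easy to exercise as granting consent.
        self.withdrawn_at = datetime.now(timezone.utc)
```

One record per purpose (not a blanket grant) is what makes the purpose-limitation principle enforceable downstream.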

2. Purpose Limitation

Data collected for one purpose cannot be repurposed. If a user provides their information to use an AI-powered recommendation engine, you cannot later use that same data to train a separate fraud detection model without obtaining fresh consent. Each distinct processing purpose requires its own consent basis.[2]
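In code, purpose limitation reduces to a guard that refuses processing unless an active consent exists for that exact purpose. A hypothetical sketch, assuming a simple dict shape for consent records:

```python
def may_process(consents: list[dict], principal_id: str, purpose: str) -> bool:
    """True only if an active consent exists for this exact purpose.

    `consents` entries are assumed dicts like
    {"principal": "...", "purpose": "...", "active": True}.
    """
    return any(
        c["principal"] == principal_id and c["purpose"] == purpose and c["active"]
        for c in consents
    )

consents = [{"principal": "u1", "purpose": "recommendations", "active": True}]
may_process(consents, "u1", "recommendations")   # allowed
may_process(consents, "u1", "fraud_detection")   # blocked: fresh consent needed
```

The second call failing is the point: reusing recommendation-engine data for fraud detection needs its own consent record.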

3. Data Minimization

Collect only the data necessary for the stated purpose. AI systems are notorious for ingesting more data than they need on the theory that more data produces better models. Under the DPDPA, that approach requires explicit justification. If you are collecting browsing history, location data, and biometric markers to power a chatbot, you will need to explain why each data category is necessary.[3]

4. Accuracy

Data fiduciaries must ensure that personal data is accurate and kept up to date, particularly when it is used to make decisions about individuals. For AI systems making automated recommendations, risk scores, or eligibility determinations, this principle creates an ongoing obligation to verify and refresh the data feeding those models.[2]

5. Storage Limitation

Personal data must not be retained beyond the period necessary for its stated purpose. When the purpose is fulfilled, the data must be erased. This directly affects AI training data pipelines. You cannot maintain a permanent archive of Indian personal data "in case you need it later" for model retraining. Retention schedules must be defined, documented, and enforced.[5]

6. Security Safeguards

Data fiduciaries must implement "reasonable security safeguards" to protect personal data. The Rules specify that these safeguards must be proportional to the sensitivity of the data and the risk of harm from a breach. For AI systems, this covers the entire data lifecycle: collection, storage, processing during training, inference, and eventual deletion.[1]

7. Accountability

Data fiduciaries are responsible for demonstrating compliance. This is not a "trust us" framework. Organizations must be able to show the Data Protection Board that they have implemented appropriate measures, maintained records, and responded to data principal requests. For AI companies, accountability means documenting your data governance practices, consent records, and processing activities in auditable form.[3]


How DPDPA Affects AI Systems

Here is where the seven principles create specific obligations for AI development and deployment.

  1. Collection -- clear consent notice required
  2. Processing -- purpose limitation applies
  3. Training -- no legitimate interest basis
  4. Inference -- automated decision disclosure
  5. Retention / Erasure -- right to erasure, storage limitation

DPDPA requires consent-based processing at every stage. Unlike GDPR, there is no "legitimate interest" basis for AI training data. Organizations must obtain clear, informed consent before collecting personal data and maintain purpose limitation throughout the AI lifecycle.

Training Data Governance

Every piece of Indian personal data in your training dataset needs a consent trail. The DPDPA does not provide a broad "research exemption" or "legitimate interest" basis for AI training the way GDPR does. If you scraped public profiles, forum posts, or social media content containing personal data from Indian users, you have a consent gap.[3]

Organizations building or fine-tuning AI models need to audit their training datasets for Indian personal data and either obtain consent retroactively (where possible), anonymize the data beyond re-identification risk, or remove it. The NIST AI Risk Management Framework provides complementary guidance on data governance controls that map well to DPDPA obligations. The Rules define anonymized data as data that "cannot be used to identify the data principal," and that bar is higher than simple pseudonymization.[1]
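Such an audit can start as a scan for records that contain Indian personal data but have no linked consent record. A hypothetical sketch, assuming each dataset record carries an `id` and a `country` field (real pipelines would need proper residency detection, not a single flag):

```python
def consent_gaps(dataset: list[dict], consent_index: set[str]) -> list[str]:
    """Return ids of records containing Indian personal data with no
    linked consent record. Field names are illustrative assumptions."""
    return [
        rec["id"]
        for rec in dataset
        if rec.get("country") == "IN" and rec["id"] not in consent_index
    ]

dataset = [
    {"id": "a", "country": "IN"},
    {"id": "b", "country": "US"},
    {"id": "c", "country": "IN"},
]
consent_gaps(dataset, consent_index={"a"})  # "c" has a consent gap
```

Every id returned is a record that must be consented retroactively, anonymized beyond re-identification risk, or removed.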

Automated Decision-Making Transparency

The DPDPA requires data fiduciaries to provide clear information about the processing being performed. When AI systems make automated decisions that affect individuals -- such as credit scoring, hiring recommendations, insurance underwriting, or content moderation -- data principals have the right to know that automation is involved and what data is being used.[6]

India's approach here is different from GDPR's Article 22, which establishes a general prohibition on solely automated decisions with legal or similarly significant effects (with exceptions for contractual necessity, legal authorization, or explicit consent). The DPDPA does not create an equivalent prohibition or right against automated processing. Instead, it relies on the consent and transparency framework: the individual must consent to the processing (including automated processing) with full knowledge of what that processing involves.[3]

Under GDPR, you may need to provide a human review mechanism for automated decisions. Under the DPDPA, the obligation is on the front end: get informed consent that covers automated processing, and be transparent about what the system does.

Right to Correction and Erasure

Data principals have the right to correct inaccurate personal data and to request erasure of data that is no longer necessary. For AI systems, the erasure right creates a specific technical challenge: how do you remove someone's data from a trained model?[5]

Model unlearning is an active area of research, but production-grade solutions remain limited. In practice, most organizations will need to remove the data from training datasets, retrain or fine-tune without it, and maintain logs showing the erasure request was received and fulfilled.[2]

The DPDPA does not specify how quickly models must be retrained after an erasure request, but the general obligation is to process requests "without unreasonable delay." A 90-day retraining cycle documented in your privacy policy is a practical benchmark that some practitioners use, though no Data Protection Board guidance currently establishes this as a standard. Ignoring erasure requests because "the model already learned from it" is not defensible.
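One way to keep the required logs is an erasure ledger that records receipt of the request, removes the principal's rows from the training store, and queues the affected models for retraining. A minimal in-memory sketch; all names and the storage mechanism are illustrative:

```python
from datetime import datetime, timezone

class ErasureLog:
    """Sketch of an erasure-request ledger. A production system would
    persist entries durably; this keeps them in memory for illustration."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def handle(self, principal_id: str, training_store: list[dict],
               affected_models: list[str]) -> int:
        # Remove the principal's rows from the training store in place.
        removed = [r for r in training_store if r["principal"] == principal_id]
        training_store[:] = [r for r in training_store if r["principal"] != principal_id]
        # Record an auditable entry showing the request was fulfilled.
        self.entries.append({
            "principal": principal_id,
            "received_at": datetime.now(timezone.utc).isoformat(),
            "rows_removed": len(removed),
            "models_queued_for_retrain": list(affected_models),
        })
        return len(removed)
```

The ledger entry is what you show the Data Protection Board; the retrain queue is what closes the gap between dataset erasure and model erasure.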

Cross-Border Data Transfer

The DPDPA takes a blocklist approach to cross-border transfers. Data can flow to any country except those specifically restricted by the Central Government. As of early 2026, no restricted countries have been formally designated, but the mechanism exists and could be activated.[7]

For AI companies, this affects:

  • Cloud infrastructure. If your training compute runs in AWS us-east-1 or GCP europe-west1, data transfers are currently permitted but subject to change.
  • Model serving. API calls that send Indian personal data to inference endpoints outside India must comply with whatever restrictions are in place at the time.
  • GCC operations. Global Capability Centers in India processing data for parent companies abroad should monitor the restricted country list and build data localization capabilities as a contingency.
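The blocklist mechanism is simple to model: transfers are allowed unless the destination country has been designated. A sketch with a placeholder region-to-country mapping and an empty blocklist reflecting the current state; none of these values are real designations:

```python
# Empty as of early 2026 -- no countries have been formally designated.
RESTRICTED_COUNTRIES: set[str] = set()

# Illustrative mapping from cloud region to destination country.
REGION_COUNTRY = {
    "aws-us-east-1": "US",
    "gcp-europe-west1": "BE",
}

def transfer_allowed(region: str) -> bool:
    """Permitted by default; blocked only if the destination country
    is on the government blocklist."""
    return REGION_COUNTRY.get(region) not in RESTRICTED_COUNTRIES
```

Because the default is permissive, the operational risk is the flip: a single government notification can turn an allowed region into a blocked one, which is why the contingency planning above matters.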

This differs from GDPR, which requires adequacy decisions or Standard Contractual Clauses for every cross-border transfer. The DPDPA's permissive default is simpler but less predictable.[6]


Compliance Timeline

The DPDP Rules establish a phased compliance window from the date of notification (November 14, 2025).[1]

  • August 11, 2023 -- DPDPA enacted. Act No. 22 of 2023 receives Presidential assent. India's first dedicated data protection law, with no implementing rules yet.
  • November 14, 2025 -- DPDP Rules notified. MeitY publishes the DPDP Rules, and the 18-month phased compliance window begins. Consent management, breach notification, and DPO requirements defined.
  • November 2026 -- Phase 1 obligations. Data Protection Board operational. DPO appointed by significant data fiduciaries. Consent Manager registration. Breach notification procedures (immediate notification to Board and affected individuals; detailed report to Board within 72 hours). Grievance redressal mechanism established.
  • May 2027 -- Full compliance deadline. All remaining obligations take effect: full consent management, data principal rights (access, correction, erasure, nomination), cross-border transfer compliance, security safeguards, retention and deletion schedules, children's data protections (verifiable parental consent for under-18s), and algorithmic transparency disclosures.

Phase 1: November 2026

Within 12 months of the Rules notification, the following must be in place:[6]

  • Data Protection Board operational (government responsibility)
  • Data Protection Officer (DPO) appointed by significant data fiduciaries
  • Consent Manager registration for entities acting as consent intermediaries
  • Breach notification procedures established (immediate notification to Board and affected individuals without delay, followed by a detailed report to the Board within 72 hours)[1]
  • Grievance redressal mechanism established for data principals
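The two notification clocks in the breach procedure can be computed directly from the moment of awareness. A sketch; the 72-hour figure comes from the Rules, while the function and key names are ours:

```python
from datetime import datetime, timedelta

def breach_deadlines(discovered_at: datetime) -> dict:
    """The two notification clocks under the DPDP Rules: immediate
    notification to the Board and affected individuals, and a detailed
    report to the Board within 72 hours of becoming aware."""
    return {
        # "Without delay" -- the clock for the first notification is now.
        "notify_board_and_individuals": discovered_at,
        "detailed_report_due": discovered_at + timedelta(hours=72),
    }
```

Wiring this into the incident response plan means the on-call runbook can print both deadlines the moment a breach is confirmed, rather than leaving the 72-hour window to be worked out under pressure.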

Phase 2: May 2027 (Full Compliance)

Within 18 months of the Rules notification, all remaining obligations take effect:[1]

  • Full consent management systems operational
  • Data principal rights mechanisms functional (access, correction, erasure, nomination)
  • Cross-border transfer compliance verified
  • Security safeguards implemented and documented
  • Retention and deletion schedules enforced
  • Children's data protections operational (verifiable parental consent for under-18s, though DPDP Rules Rule 10 allows the government to exempt specific categories of data fiduciaries or lower the age threshold for certain processing purposes)[8]
  • Algorithmic transparency disclosures in place for automated processing

Penalties

The DPDPA establishes a schedule of penalties, with the maximum at INR 250 crore (approximately $30 million USD). Penalties are assessed per violation. A single data breach affecting multiple provisions could trigger multiple penalties.[2]

  • Failure to take security safeguards (breach): INR 250 crore (~$30M USD)
  • Failure to notify Board of a breach: INR 200 crore (~$24M USD)
  • Non-compliance with children's data obligations: INR 200 crore (~$24M USD)
  • Failure to fulfill data principal rights: INR 50 crore (~$6M USD)
  • Non-compliance with other provisions: INR 50 crore (~$6M USD)

Sources: DPDPA Schedule[2]; PIB Press Release[1]; PrivacyEngine DPDPA Summary[8]


DPDPA vs GDPR: Key Differences

The DPDPA is not "India's GDPR." The two share structural similarities (both are full-scope data protection laws with consent requirements and individual rights), but they are separate legal frameworks with different structures, legal bases, and enforcement mechanisms. Organizations that assume GDPR compliance equals DPDPA compliance will have gaps.[4]

  • Legal basis for processing -- DPDPA: consent is primary, with Section 7 exceptions (state, employment, medical, publicly available data, research, Central Government-prescribed reasonable purposes). GDPR: six legal bases, including legitimate interest.
  • Legitimate interest -- DPDPA: not available. GDPR: available and widely used for AI training.
  • Automated decision-making -- DPDPA: not explicit; relies on the consent framework. GDPR: Article 22 establishes a general prohibition with exceptions (not an opt-out right).
  • Cross-border transfers -- DPDPA: permitted by default, subject to a government blocklist. GDPR: restricted by default; requires adequacy or SCCs.
  • Data Protection Officer -- DPDPA: required for "significant data fiduciaries" only. GDPR: required for certain categories of controllers.
  • Breach notification -- DPDPA: immediate notification to Board and affected individuals without delay; detailed report to Board within 72 hours. GDPR: 72 hours to the authority; "without undue delay" to individuals.
  • Consent age (children) -- DPDPA: under 18 requires verifiable parental consent (Rule 10 allows exemptions for specific categories). GDPR: under 16 (or 13 in some member states).
  • Penalties -- DPDPA: up to INR 250 crore (~$30M) per violation. GDPR: up to 4% of global annual turnover or EUR 20 million.
  • Right to data portability -- DPDPA: not included. GDPR: included (Article 20).
  • Scope of "personal data" -- DPDPA: digital personal data only. GDPR: all personal data (digital and physical).
  • Enforcement body -- DPDPA: Data Protection Board of India. GDPR: national supervisory authorities per member state.

Sources: IAPP DPDPA vs GDPR Comparison[4]; Future of Privacy Forum[7]; SecurePrivacy[5]

What This Means for AI Companies

The biggest practical difference is the absence of a legitimate interest basis. Under GDPR, many AI companies process personal data for model training under Article 6(1)(f), arguing that training AI models is a legitimate interest balanced against individual rights. That argument, whether or not regulators accept it, is structurally available.[4]

Under the DPDPA, it is not. Consent is the path. If you are processing Indian personal data for AI training, you need consent. The only question is whether the data qualifies for one of the narrow exceptions (government function, medical emergency, employment necessity, legal obligation), and for most AI training scenarios, it does not.[3]

The cross-border transfer approach is also different. GDPR requires a legal mechanism (adequacy decision, SCCs, BCRs) for every transfer outside the EU. The DPDPA allows transfers everywhere unless the government says otherwise, which is simpler operationally but creates regulatory uncertainty.[7]


What Organizations Must Do Now

If your AI system processes Indian personal data, here is the practical sequence.[5]

Immediate (Q1-Q2 2026)

  1. Audit your training data. Identify all Indian personal data in your training datasets, fine-tuning data, and evaluation sets. Document the consent basis for each data source.
  2. Map your processing activities. Create a processing register that covers every AI system touching Indian personal data: what data it processes, why, where it is stored, and who has access.
  3. Appoint a DPO. If you qualify as a "significant data fiduciary" (criteria in the Rules include volume of data processed and sensitivity), you must have a DPO by November 2026. The IT certifications hub covers CIPP/E, CIPM, and other credentials relevant to this role.[6]
  4. Review your consent flows. Ensure your consent mechanisms meet DPDPA requirements: specific, informed, freely given, and withdrawable.
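One row of the processing register described in step 2 might look like the following. Every value here is a hypothetical example to show the shape of an entry, not a statutory format:

```python
# Illustrative shape for one processing-register entry covering an AI
# system. All names and values are assumptions for the sketch.
register_entry = {
    "system": "recommendation-engine",            # hypothetical system name
    "data_categories": ["purchase_history", "location"],
    "purpose": "personalised recommendations",    # one specific purpose
    "storage_location": "ap-south-1",             # where the data lives
    "access_roles": ["ml-engineering", "dpo"],    # who can touch it
    "consent_basis": "explicit consent, v2 notice (2026-01)",
}
```

Keeping one entry per system-purpose pair makes the later accountability work (Phase 2, step 3) a matter of exporting this register rather than reconstructing it under deadline.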

By November 2026 (Phase 1)

  1. Establish breach notification procedures. Under the DPDP Rules, data fiduciaries must notify both the Data Protection Board and affected individuals without delay upon becoming aware of a breach, followed by a detailed report to the Board within 72 hours. Define your incident response plan, including who makes the notification and what information must be included.[1]
  2. Set up grievance redressal. Data principals need a channel to raise complaints. Establish prompt acknowledgment and resolution processes as recommended SLA benchmarks (the DPDPA does not prescribe specific statutory timelines for grievance acknowledgment or resolution).[5]
  3. Register as a Consent Manager if you operate consent intermediary services.

By May 2027 (Full Compliance)

  1. Implement data principal rights. Build the technical systems to handle access, correction, erasure, and nomination requests. For AI systems, this means building workflows for model retraining after erasure requests.[2]
  2. Enforce retention schedules. Define how long you keep each category of personal data, and build automated deletion pipelines.
  3. Document everything. The accountability principle means you must be able to demonstrate compliance to the Data Protection Board. Maintain records of consent, processing activities, security measures, breach responses, and rights fulfillment. An ISO 42001 management system provides the documentation structure that satisfies this requirement.[3]

Looking Ahead: The Digital India Act

The DPDPA is one piece of India's digital governance puzzle. The Digital India Act, expected to replace the Information Technology Act of 2000, may introduce additional AI-specific provisions. MeitY has signaled that the Digital India Act could address algorithmic accountability, platform liability for AI-generated content, and governance requirements for frontier AI systems.[10]

The MeitY AI Governance Guidelines released in November 2025 are voluntary. The Digital India Act could give some of those principles statutory force. For organizations building compliance programs today, the practical advice is: design for the DPDPA requirements that are already binding, align with MeitY guidelines as best practice, and build flexible systems that can absorb new requirements when the Digital India Act arrives. Free governance templates provide DPDPA assessment checklists and data fiduciary compliance mapping tools.[9]

India's sector regulators (RBI, SEBI, IRDAI, CERT-In) are also issuing AI-specific guidance within their domains. These rules sit on top of the DPDPA and may impose additional obligations. RBI's FREE-AI Committee Report, for example, requires board-approved AI policies and tiered incident reporting for AI bias.[12]

The DPDPA provides the data protection foundation, MeitY guidelines provide the governance direction, and sector regulators fill in the domain-specific details. Organizations that build a unified data governance framework covering all three layers will be better positioned as the India AI governance landscape matures.[11]


Key Takeaways

Summary
  • The DPDPA applies to any AI system processing digital personal data of Indian residents, regardless of where the system is hosted.[3]
  • Consent is the primary legal basis. There is no "legitimate interest" exception for AI training data.[4]
  • Full compliance is required by May 2027. Phase 1 obligations (DPO, breach notification, Consent Manager registration) land in November 2026.[1]
  • The DPDPA is not India's GDPR. Different consent framework, different cross-border approach, different enforcement structure. The EU AI Act creates overlapping but distinct obligations for organizations in both jurisdictions.[4]
  • Penalties reach INR 250 crore per violation, assessed by the Data Protection Board.[2]
  • Training data audits should start now. Retroactive consent for data already in production models is difficult and may require dataset reconstruction.

For the broader India AI governance picture, including MeitY guidelines, GCC compliance requirements, and sector regulator mandates, see the India AI Governance Hub.

DPDPA AI Impact Assessment Template

Assess how your AI systems process personal data under DPDPA requirements.

Sources & References (12 sources, 9 primary)
  1. [Primary] DPDP Rules 2025 Notification -- Press Information Bureau, Nov 2025
  2. [Primary] DPDPA 2023 Full Text -- MeitY, Aug 2023
  3. [Primary] Top 10 Operational Impacts of India's DPDPA -- IAPP, 2025
  4. [Primary] DPDPA Comparative Analysis with GDPR and Other Privacy Laws -- IAPP, 2025
  5. [Secondary] India DPDPA Compliance Guide -- CookieYes, 2025
  6. [Primary] India Data Protection -- DPDPA Guide -- Hogan Lovells, 2025
  7. [Primary] India DPDPA Cross-Border Data Transfers Analysis -- IAPP, 2025
  8. [Secondary] India Data Protection Laws Overview -- DLA Piper, 2025
  9. [Primary] India AI Governance Guidelines (Full PDF) -- MeitY / IndiaAI Mission, Nov 2025
  10. [Primary] MeitY Press Release -- AI Governance Guidelines -- Press Information Bureau, Nov 2025
  11. [Secondary] India vs Global AI Acts Comparison -- National Law Review, Dec 2025
  12. [Primary] RBI FREE-AI Committee Report on AI in Financial Services -- Reserve Bank of India, 2025