MeitY AI Governance Guidelines: India's 7 Sutras
MeitY named one of its seven governing principles "Innovation over Restraint." Not "balanced innovation." Not "responsible caution." Innovation over restraint, as a named, published principle of national AI governance. No other government on earth has done that.
What Are India's AI Governance Guidelines?
The document has four parts:
- PART 1: Principles. The seven sutras that ground India's governance philosophy.
- PART 2: Recommendations. Guidelines across six pillars for responsible AI development.
- PART 3: Action Plan. Short, medium, and long-term implementation timelines.
- PART 4: Practical Guidelines. What organizations and regulators should actually do.
This is not binding law. MeitY has stated repeatedly that these guidelines carry no legal enforceability on their own. (PIB 2025) The legal teeth come from existing statutes: the IT Act, the DPDPA, consumer protection law, and sector-specific regulations that are already binding. The guidelines sit on top of that foundation, providing direction without adding statutory obligations.
That distinction matters. If you are building a compliance program around MeitY's framework, your enforcement exposure comes from the underlying laws, not the guidelines themselves. The guidelines tell you what good looks like. The laws tell you what happens when you fall short.
What Are the 7 Sutras of India AI Governance?
Each sutra is a named principle with a formal definition in the MeitY document. (MeitY 2025) Here is what each one says, what it means for practitioners, and how it maps to frameworks you may already be working with.
1. Trust is the Foundation
"Trust is essential for innovation and adoption across the AI value chain."
Trust is the bedrock principle. MeitY positions it first because without public and institutional trust, none of the other principles can function. This covers trust between AI developers and deployers, between organizations and regulators, and between AI systems and the people affected by them.
In practice, this means building verifiable systems. Not just claiming your AI is trustworthy, but providing evidence: audit trails, testing results, third-party evaluations. For organizations operating in India, this principle maps directly to the documentation and evidence requirements in ISO 42001 (Clause 9, performance evaluation). (ISO) In the EU AI Act context, trust is operationalized through the conformity assessment process for high-risk systems. (EU AI Act)
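As a minimal sketch of what that evidence trail can look like, the snippet below writes an append-only audit record per model decision. The field names, file path, and the `log_decision` helper are assumptions for illustration, not anything MeitY or ISO 42001 prescribes.

```python
# Illustrative sketch only: a minimal, append-only audit record per model decision.
# Field names and the JSONL file path are assumptions, not prescribed by MeitY or ISO 42001.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(model_version: str, model_input: dict, output: dict,
                 reviewer: Optional[str], log_path: str = "ai_audit_log.jsonl") -> dict:
    """Append one verifiable decision record: hashed input, output, model version, reviewer."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw input, so the trail is verifiable without retaining personal data.
        "input_sha256": hashlib.sha256(json.dumps(model_input, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None if the decision was fully automated
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a credit-scoring decision reviewed by a named analyst.
log_decision("credit-risk-v2.3", {"applicant_id": "A-1042", "income": 540000},
             {"decision": "approve", "score": 0.81}, reviewer="analyst_17")
```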
If your global capability center (GCC) develops AI models for a US or European parent company, the trust principle means you need documented evidence of your development process. ISO 42001 certification (adopted by BIS as IS/ISO/IEC 42001:2023) gives you a certifiable way to demonstrate this to both Indian regulators and global headquarters. (BIS)
2. People First
"Systems must adopt human-centric design and deployment, with human oversight."
Human oversight is not optional under this framework. Every AI system that affects people should have a human in the loop (or at minimum, on the loop) for decisions with significant impact. This goes beyond the EU AI Act's human oversight requirements in Article 14 by framing it as a design principle rather than a compliance checkbox. EU AI Act Art. 14
MeitY's emphasis on "human-centric design" means considering the needs of Indian populations specifically. That includes accessibility for users across multiple languages, literacy levels, and digital familiarity. India's Digital Public Infrastructure (Aadhaar, UPI, DigiLocker) reaches over a billion people, and AI systems built on top of that infrastructure must account for the full diversity of that user base. IndiaAI
This sutra maps to EU AI Act Article 14 (human oversight) and to the ISO 42001 Annex A controls for human oversight mechanisms.
Document your human oversight mechanisms for every AI system. Map who reviews what, at what threshold automated decisions escalate to humans, and how override procedures work. This documentation serves MeitY compliance, EU AI Act Article 14 requirements, and ISO 42001 Annex A controls simultaneously.
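A minimal sketch of the kind of escalation rule that documentation can capture is shown below. The impact levels, the 0.90 confidence threshold, and the function name are hypothetical values chosen for illustration, not figures from MeitY, the EU AI Act, or ISO 42001.

```python
# Illustrative sketch: a simple escalation rule for human-on-the-loop review.
# Thresholds, impact levels, and names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Decision:
    impact: str          # "low", "significant", or "critical"
    confidence: float    # model confidence in [0, 1]

def requires_human_review(d: Decision, min_confidence: float = 0.90) -> bool:
    """Escalate to a human reviewer when impact is high or the model is unsure."""
    if d.impact == "critical":
        return True                      # always human-in-the-loop
    if d.impact == "significant" and d.confidence < min_confidence:
        return True                      # human-on-the-loop below the confidence threshold
    return False

print(requires_human_review(Decision(impact="significant", confidence=0.72)))  # True
print(requires_human_review(Decision(impact="low", confidence=0.55)))          # False
```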
3. Innovation over Restraint
"Responsible innovation should take precedence over caution."
This is the principle that sets India apart. Where the EU defaults to precaution (restrict first, permit after assessment), India defaults to permission (innovate first, address harms as they emerge). MeitY explicitly chose to prioritize the economic and social benefits of AI over preemptive restrictions. National Law Review
This does not mean a free-for-all. The word "responsible" is doing real work in that definition. But the burden of proof is different. In the EU, developers must prove their high-risk system is safe before deployment. In India, the assumption is that innovation proceeds unless specific harms are identified.
For practitioners building AI governance programs, this means India's framework will not block deployment the way the EU AI Act might. But it also means you carry more responsibility for self-governance. Without prescriptive rules, your internal policies, risk assessments, and testing protocols become the primary safeguard.
This sutra contrasts with the EU AI Act's precautionary approach (conformity assessment before deployment for high-risk systems) and complements the NIST AI RMF's risk-based flexibility.
Use this principle strategically. If your parent company is hesitating on AI deployment due to regulatory uncertainty in India, point to this sutra. MeitY has explicitly signaled that India wants AI innovation to proceed. Pair that signal with your ISO 42001 management system to show responsible governance is in place.
4. Fairness & Equity
"AI must be designed and tested for fair, unbiased outcomes."
Bias testing is a requirement under this principle, not a suggestion. MeitY specifically calls out the need to design and test AI systems for fair outcomes. This maps closely to ISO 42001 Annex A.5 (Assessing Impacts of AI Systems) and the EU AI Act's bias monitoring requirements for high-risk systems in Article 10. (ISO 42001 A.5; EU AI Act Art. 10)
What makes India's fairness principle distinctive is the context. India's population includes caste-based social stratification, significant gender disparities in economic participation, and 22 officially recognized languages with hundreds more in active use. Bias in an AI system deployed in India can manifest along axes that Western frameworks do not account for. MeitY's guidelines recognize this, calling out vulnerable populations including women, children, persons with disabilities, and marginalized communities. MeitY 2025
Your bias testing protocol needs India-specific test cases. If your AI system processes names, addresses, or demographic data, test for caste-proxy discrimination (names, locations, educational institutions can all serve as proxies). Standard fairness toolkits built for US demographic categories will miss these patterns.
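As one hedged illustration, the sketch below checks outcome parity across name-derived groups using a simple four-fifths-style ratio. The group labels, test records, and 0.8 threshold are hypothetical; real caste-proxy testing requires domain expertise and carefully constructed India-specific test sets.

```python
# Illustrative sketch: checking outcome parity across name-derived groups.
# Group labels, records, and the 0.8 ("four-fifths") threshold are assumptions.
from collections import defaultdict

def disparate_impact(records: list[dict], group_key: str = "group",
                     outcome_key: str = "approved") -> dict[str, float]:
    """Return each group's approval rate divided by the highest group's rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        approvals[r[group_key]] += int(r[outcome_key])
    rates = {g: approvals[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical test records where "group" was inferred from name/location proxies.
test_records = [
    {"group": "group_a", "approved": True},  {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": False}, {"group": "group_b", "approved": True},
    {"group": "group_b", "approved": False}, {"group": "group_b", "approved": False},
]
ratios = disparate_impact(test_records)
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # groups below the four-fifths heuristic
print(ratios, flagged)
```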
5. Accountability
"Responsibility must be assigned based on functions performed, risk of harm."
MeitY takes a functional approach to accountability. Rather than assigning blanket liability to the deployer (as the EU AI Act does for high-risk systems), India says accountability should follow function. The developer is accountable for development choices. The deployer is accountable for deployment choices. The data provider is accountable for data quality. MeitY 2025
This maps to the NIST AI Risk Management Framework's concept of AI actors and their respective responsibilities. (NIST) It also aligns with ISO 42001's approach of defining roles and responsibilities within the AI management system (Clause 5.3). (ISO 42001 Cl. 5.3)
The practical implication: you need a clear RACI matrix for every AI system. Who is responsible for training data quality? Who is accountable if the model produces biased outputs in production? Who is consulted on risk assessments? Who is informed of incidents? MeitY's functional accountability model requires you to answer these questions explicitly.
For centers that build AI systems deployed by a parent company in another jurisdiction, the accountability split is critical. Document which accountability sits with the GCC (development, testing, data processing) and which sits with the deployer (deployment context, user impact, incident response). This protects both parties.
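A minimal sketch of what that documented split can look like, assuming a plain dictionary-based RACI record; the role names and function labels are hypothetical examples, not terms defined in the MeitY guidelines.

```python
# Illustrative sketch: a per-system RACI record splitting accountability between
# a GCC (development) and the deploying parent entity. All names are hypothetical.
raci_credit_model = {
    "training_data_quality": {"R": "gcc_data_team",    "A": "gcc_ai_lead",
                              "C": "privacy_office",   "I": "parent_risk_team"},
    "bias_testing":          {"R": "gcc_ml_engineers", "A": "gcc_ai_lead",
                              "C": "external_auditor", "I": "parent_compliance"},
    "deployment_context":    {"R": "parent_product",   "A": "parent_cio",
                              "C": "gcc_ai_lead",      "I": "sector_regulator"},
    "incident_response":     {"R": "parent_ops",       "A": "parent_cio",
                              "C": "gcc_ai_lead",      "I": "affected_users"},
}

def accountable_party(function: str) -> str:
    """Answer 'who is accountable for this function?' explicitly, as the sutra requires."""
    return raci_credit_model[function]["A"]

print(accountable_party("bias_testing"))       # gcc_ai_lead
print(accountable_party("incident_response"))  # parent_cio
```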
6. Understandable by Design
"Systems require clear explanations and disclosures."
Explainability is not an afterthought. MeitY frames it as a design requirement, not a post-deployment disclosure. This is stronger language than the EU AI Act's transparency requirements (Article 13), which focus on documentation and user-facing information. MeitY wants the system itself to be understandable, not just documented. EU AI Act Art. 13
In practice, this means selecting model architectures and deployment patterns that support explanation. For high-stakes decisions (credit scoring, hiring, medical diagnosis), this may mean choosing interpretable models over black-box alternatives, or implementing robust explanation layers on top of complex models.
The disclosure component requires organizations to tell users when they are interacting with an AI system and to provide meaningful information about how decisions are made. This aligns with the DPDPA's notice requirements for automated decision-making. DPDP Rules 2025
Build explainability into your development pipeline, not your compliance documentation. If your team trains a model that will make decisions affecting Indian citizens, the explanation capability needs to be part of the model specification, tested during development, and documented in your ISO 42001 records.
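As a toy illustration of building explanation capability into the specification itself, the sketch below scores each feature by how much replacing it with a baseline value changes the prediction. The scoring function, feature names, and baseline values are hypothetical, and production systems would typically rely on established explanation techniques rather than this simplified approach.

```python
# Illustrative sketch: a simple leave-one-feature-out contribution score that can be
# tested during development. A toy method; predict function and features are hypothetical.
def feature_contributions(predict, instance: dict, baseline: dict) -> dict[str, float]:
    """Score each feature by how much swapping it to a baseline value changes the prediction."""
    full_score = predict(instance)
    contributions = {}
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]  # replace one feature with a neutral value
        contributions[feature] = full_score - predict(perturbed)
    return contributions

# Hypothetical scoring function standing in for a trained model.
def toy_credit_score(x: dict) -> float:
    return (0.4 * (x["income"] / 1_000_000)
            + 0.3 * (1 - x["existing_debt_ratio"])
            + 0.3 * x["repayment_history"])

instance = {"income": 800_000, "existing_debt_ratio": 0.2, "repayment_history": 0.9}
baseline = {"income": 400_000, "existing_debt_ratio": 0.5, "repayment_history": 0.5}
print(feature_contributions(toy_credit_score, instance, baseline))
```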
7. Safety, Resilience & Sustainability
"Incorporate safeguards to minimize risks."
The final sutra bundles three related concepts. Safety covers protections against harm during normal operation. Resilience covers the system's ability to withstand and recover from attacks, failures, or unexpected inputs. Sustainability addresses the environmental and long-term societal impact of AI systems.
This maps to multiple ISO 42001 elements: Annex A control A.6.2.6 (operation and monitoring), Clause 6.1 (risk assessment), and the A.6.2 controls on the AI system lifecycle. (ISO 42001) In the EU AI Act, these concerns are addressed across Article 9 (risk management), Article 15 (accuracy, robustness, cybersecurity), and the recitals on environmental sustainability. (EU AI Act Arts. 9, 15)
MeitY's inclusion of sustainability is notable. India, as a signatory to the Paris Agreement and a country experiencing significant climate impacts, is signaling that the environmental cost of AI (training compute, data center energy, water usage) is a governance concern, not just an operational one.
Include resilience testing (adversarial inputs, edge cases, failure modes) and sustainability metrics (compute costs, energy consumption per inference) in your AI system documentation. These are increasingly requested by European clients under the EU AI Act's sustainability reporting expectations.
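One hedged way to capture that evidence is sketched below: run edge-case inputs, record failures and latency, and derive a rough energy estimate. The edge cases, metric names, and power figure are assumptions; real energy accounting needs hardware- or provider-level measurement.

```python
# Illustrative sketch: capturing basic resilience and efficiency evidence per release.
# Edge cases, metric names, and the joules-per-second figure are assumptions.
import time

def resilience_and_efficiency_report(predict, edge_cases: list, joules_per_second: float) -> dict:
    """Run edge-case inputs, record failures and a rough energy estimate per inference."""
    failures, latencies = [], []
    for case in edge_cases:
        start = time.perf_counter()
        try:
            predict(case)
        except Exception as exc:        # resilience: does the system fail safely?
            failures.append({"input": case, "error": repr(exc)})
        latencies.append(time.perf_counter() - start)
    avg_latency = sum(latencies) / len(latencies)
    return {
        "edge_cases_run": len(edge_cases),
        "failures": failures,
        "avg_latency_s": avg_latency,
        "est_energy_per_inference_j": avg_latency * joules_per_second,  # crude proxy
    }

# Hypothetical edge cases: empty input, extreme value, wrong type.
report = resilience_and_efficiency_report(lambda x: float(x) ** 0.5,
                                           ["", 1e308, None], joules_per_second=250.0)
print(report)
```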
Governance Bodies: AIGG, TPEC, and AISI
MeitY's guidelines propose three new institutional structures: the AI Governance Group (AIGG), the Technology and Policy Expert Committee (TPEC), and the AI Safety Institute (AISI). None of these existed before November 2025. (MeitY 2025)
AIGG Composition
The AIGG coordinates policy across a broad set of government bodies. It does not itself regulate. It ensures that sector regulators are aligned with the seven sutras and that regulatory approaches do not conflict across ministries. MeitY 2025
| Ministry / Body | Role |
|---|---|
| MeitY | Lead ministry, secretariat |
| Ministry of Home Affairs (MHA) | Law enforcement, national security |
| Ministry of External Affairs (MEA) | International AI diplomacy |
| Dept. of Science & Technology (DST) | Research coordination |
| Dept. of Telecommunications (DoT) | Telecom AI regulation |
| NITI Aayog | Policy coordination |
| TRAI | Telecom regulation |
| CCI | Competition oversight |
| Data Protection Board (DPB) | DPDPA enforcement |
| RBI | Financial sector AI |
| SEBI | Securities market AI |
| ICMR | Healthcare AI |
| UGC | Education sector AI |
6 Risk Categories: How India Classifies AI Threats
MeitY defines six categories of AI risk. This differs from the EU AI Act's four-tier system (unacceptable, high, limited, minimal) by focusing on the nature of harm rather than a hierarchy of risk levels. (MeitY 2025; EU AI Act)
| Risk Category | Description | India-Specific Focus |
|---|---|---|
| Malicious uses | Deepfakes, adversarial attacks, AI-enabled fraud | Gendered deepfakes, election manipulation |
| Bias & discrimination | Unfair outcomes across protected characteristics | Caste bias, language discrimination, gender disparity |
| Transparency failures | Opaque decision-making, undisclosed AI use | Right to explanation under DPDPA |
| Systemic risks | Market concentration, infrastructure dependency | DPI-scale failure scenarios |
| Loss of control | Autonomous systems exceeding intended boundaries | Critical infrastructure automation |
| National security threats | AI-enabled cyber attacks, surveillance misuse | Cross-border data flows, defense AI |
The India-specific additions are significant. Caste bias is a risk category that no other national framework addresses. Gendered deepfakes targeting women and children are called out as a priority harm. Language discrimination, where AI systems perform worse for non-English or non-Hindi speakers among India's 22 official languages, is treated as a fairness failure. MeitY 2025
The EU's four-tier system classifies AI applications by risk level (a social scoring system is "unacceptable," a hiring tool is "high-risk"). India's six-category system classifies risks by type, meaning a single AI application could touch multiple categories simultaneously. A hiring tool could present bias risk, transparency risk, and accountability gaps all at once.
For organizations building compliance programs, this means you cannot simply classify your AI system into one risk tier and move on. You need to assess each system against all six categories.
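A minimal sketch of that all-six-categories assessment, assuming a simple per-system findings dictionary; the example hiring-tool findings and severity notes are hypothetical.

```python
# Illustrative sketch: assessing one system against all six MeitY risk categories
# rather than a single tier. The per-category notes and example system are hypothetical.
MEITY_RISK_CATEGORIES = [
    "malicious_uses", "bias_and_discrimination", "transparency_failures",
    "systemic_risks", "loss_of_control", "national_security_threats",
]

def assess_system(findings: dict[str, str]) -> dict[str, str]:
    """Force an explicit entry for every category; 'not assessed' is never silent."""
    return {cat: findings.get(cat, "NOT ASSESSED") for cat in MEITY_RISK_CATEGORIES}

# Hypothetical hiring-tool assessment touching several categories at once.
hiring_tool = assess_system({
    "bias_and_discrimination": "High: name/location proxies for caste and gender; mitigation required",
    "transparency_failures":   "Medium: candidates not told AI screens applications",
    "malicious_uses":          "Low: limited attack surface",
})
for category, finding in hiring_tool.items():
    print(f"{category}: {finding}")
```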
Action Plan: Short, Medium, and Long-Term
MeitY's Part 3 lays out a phased implementation timeline. Here is what to expect. MeitY 2025
Short-term:
- Establish the AIGG, TPEC, and AISI institutional structures
- Develop risk assessment frameworks aligned with the six risk categories
- Conduct regulatory gap analysis across all sector regulators
- Create the AI incidents database (following OECD incident definitions)
- Launch stakeholder consultations on sector-specific guidance

Medium-term:
- Publish sector-specific AI standards and codes of practice
- Operationalize the AI incidents database with mandatory reporting for critical sectors
- Pilot regulatory sandboxes for high-risk AI applications
- Develop certification and testing protocols through AISI
- Expand international cooperation through bilateral and multilateral agreements

Long-term:
- Adopt new legislation where regulatory gaps remain after existing law application
- Establish horizon-scanning capabilities for emerging AI risks
- Deepen global diplomatic engagement on AI governance standards
- Review and update the seven sutras based on implementation experience
The timeline is deliberately open-ended on the long-term items. MeitY is signaling that new AI-specific legislation is possible but not imminent. The current approach is to use existing laws and voluntary guidelines first, then legislate only where gaps persist.
What Organizations Must Do
Part 4 of the guidelines is the most actionable section. It provides separate guidance for AI organizations and for regulators. MeitY 2025
For AI Organizations
-
LAW
Comply with Indian laws. DPDPA, IT Act, Consumer Protection Act, sector-specific regulations. These are binding, not voluntary.
-
VOLUNTARY
Adopt voluntary measures. Internal AI governance policies aligned with the seven sutras.
-
LAW
Grievance redressal. Establish mechanisms for individuals harmed by AI decisions. Mandatory under the Consumer Protection Act for products and services.
-
VOLUNTARY
Transparency reports. Publish information about AI systems, their capabilities, and limitations.
-
TECHNICAL
Techno-legal solutions. Implement technical safeguards: watermarking, content authentication, audit trails.
These are voluntary under the guidelines, but several map to binding requirements under existing law. Grievance redressal is mandatory under the Consumer Protection Act. Transparency about automated decision-making is required under the DPDPA, and organizations managing AI data governance lifecycles need to map these requirements to their data pipelines. (DPDP Rules 2025) The guidelines extend these existing obligations to AI-specific contexts. India-specific compliance checklists and assessment templates are available in the templates hub.
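As an illustration of the techno-legal direction, the sketch below signs a provenance record for AI-generated content with an HMAC so it can later be authenticated. This is not a watermarking or C2PA implementation; the key handling, field names, and helper functions are assumptions for illustration only.

```python
# Illustrative sketch: a signed provenance record for AI-generated content.
# Not a watermarking or C2PA implementation; key handling and fields are assumptions.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"   # hypothetical; use a real KMS in practice

def provenance_record(content: str, model_id: str) -> dict:
    """Attach a signed fingerprint so generated content can later be authenticated."""
    payload = {
        "model_id": model_id,
        "sha256": hashlib.sha256(content.encode()).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload["signature"] = hmac.new(SIGNING_KEY, json.dumps(payload, sort_keys=True).encode(),
                                    hashlib.sha256).hexdigest()
    return payload

def verify(content: str, record: dict) -> bool:
    """Check that content matches the fingerprint and the signature is intact."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    expected = hmac.new(SIGNING_KEY, json.dumps(unsigned, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hashlib.sha256(content.encode()).hexdigest() == record["sha256"]
            and hmac.compare_digest(expected, record["signature"]))

rec = provenance_record("Generated summary text...", model_id="summarizer-v1")
print(verify("Generated summary text...", rec))   # True
print(verify("Tampered text", rec))                # False
```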
For Regulators
MeitY gives sector regulators three directives:
- DIRECTIVE 1: Pro-innovation posture. Regulators should enable AI adoption, not block it.
- DIRECTIVE 2: Harm-based prioritization. Focus enforcement on demonstrated harms, not theoretical risks.
- DIRECTIVE 3: Least burdensome instruments. Use the lightest regulatory tool that achieves the objective (guidance before codes, codes before rules, rules before legislation).
This is remarkable language in a government document. MeitY is explicitly telling regulators to go easy. That signal shapes the enforcement environment for every organization operating in India. For professionals building governance careers around this framework, the AI governance career path and salary benchmarks reflect the growing demand these guidelines are creating.
International Standards Alignment
MeitY did not build this framework in isolation. Annexures 2 and 6 of the guidelines explicitly reference international standards and frameworks. MeitY 2025
ISO 42001 (Annexure 6)
Adopted by BIS as IS/ISO/IEC 42001:2023. (BIS) MeitY cites it as the recommended management system for AI governance. Organizations that certify to ISO 42001 can demonstrate alignment with the seven sutras through a single, internationally recognized certification. (ISO) The ISO 42001 Resource Center covers the clause-by-clause implementation details. Professionals pursuing governance credentials should also consider the IAPP AIGP certification, and a full list of relevant credentials is in the IT Certifications Hub.
NIST AI Risk Management Framework (Annexure 2)
MeitY references the NIST AI RMF as an example of a risk-based approach. The NIST framework's four functions (Govern, Map, Measure, Manage) complement MeitY's six risk categories by providing a process for identifying and treating AI risks. NIST
EU AI Act (Annexure 2)
The guidelines include a comparative analysis with the EU AI Act. MeitY acknowledges the EU approach as the most comprehensive binding regulation while explicitly choosing a different path: voluntary guidelines with sector-specific enforcement rather than a single horizontal regulation. EU AI Act
OECD AI Incident Definition
India adopted the OECD's definition of AI incidents for its planned incidents database. This ensures international comparability and allows India to participate in cross-border incident sharing arrangements. OECD
For GCC compliance teams managing multi-jurisdiction requirements, the alignment signals are clear. ISO 42001 certification satisfies the MeitY reference, demonstrates a structured approach for EU AI Act compliance, and aligns with the NIST AI RMF. One management system, three frameworks addressed. Zinnov/NASSCOM
Sources & Citations
- India AI Governance Guidelines (Full PDF) -- MeitY / IndiaAI Mission, Nov 2025. Primary. static.pib.gov.in/...
- MeitY Press Release -- AI Governance Guidelines -- Press Information Bureau, Nov 2025. Primary. pib.gov.in/PressRelease...2186639
- BIS Adoption of ISO 42001 as Indian Standard -- Bureau of Indian Standards, 2023. Primary. services.bis.gov.in
- DPDP Rules 2025 Notification -- Press Information Bureau, Nov 2025. Primary. pib.gov.in/PressRelease...2190655
- Network of AI Safety Institutes -- UK DSIT / International, Nov 2024. Secondary. gov.uk/government/publications/...
- Zinnov-NASSCOM India GCC Landscape Report -- Zinnov / NASSCOM, 2025. Primary. zinnov.com/centers-of-excellence/...