
MeitY AI Governance Guidelines: India's 7 Sutras Explained (2026) | Tech Jacks Solutions

MeitY AI Governance Guidelines: India's 7 Sutras

MeitY named one of its seven governing principles "Innovation over Restraint." Not "balanced innovation." Not "responsible caution." Innovation over restraint, as a named, published principle of national AI governance. No other government on earth has done that.

What Are India's AI Governance Guidelines?

MeitY's AI governance guidelines, released November 5, 2025 under the India AI Mission, establish seven foundational principles called sutras. The voluntary framework creates three governance bodies (AIGG, TPEC, AISI), defines six AI risk categories, and provides a phased action plan, while relying on existing sector regulators and laws for enforcement. MeitY 2025 PIB 2025

The document has four parts:

  • PART 1
    Principles. The seven sutras that ground India's governance philosophy.
  • PART 2
    Recommendations. Guidelines across six pillars for responsible AI development.
  • PART 3
    Action Plan. Short, medium, and long-term implementation timelines.
  • PART 4
    Practical Guidelines. What organizations and regulators should actually do.

This is not binding law. MeitY has stated repeatedly that these guidelines carry no legal enforceability on their own. PIB 2025 The legal teeth come from existing statutes: the IT Act, the DPDPA, consumer protection law, and sector-specific regulations that are already binding. The guidelines sit on top of that foundation, providing direction without adding statutory obligations.

That distinction matters. If you are building a compliance program around MeitY's framework, your enforcement exposure comes from the underlying laws, not the guidelines themselves. The guidelines tell you what good looks like. The laws tell you what happens when you fall short.

What Are the 7 Sutras of India AI Governance?

Each sutra is a named principle with a formal definition in the MeitY document. MeitY 2025 Here is what each one says, what it means for practitioners, and how it maps to frameworks you may already be working with.

1
Trust is the Foundation
"Trust is essential for innovation and adoption across the AI value chain."

Trust is the bedrock principle. MeitY positions it first because without public and institutional trust, none of the other principles can function. This covers trust between AI developers and deployers, between organizations and regulators, and between AI systems and the people affected by them.

In practice, this means building verifiable systems. Not just claiming your AI is trustworthy, but providing evidence: audit trails, testing results, third-party evaluations. For organizations operating in India, this principle maps directly to the documentation and evidence requirements in ISO 42001 (Clause 9, performance evaluation). ISO In the EU AI Act context, trust is operationalized through the conformity assessment process for high-risk systems. EU AI Act

GCC Application

If your center develops AI models for a US or European parent company, the trust principle means you need documented evidence of your development process. ISO 42001 certification (adopted by BIS as IS/ISO/IEC 42001:2023) gives you a certifiable way to demonstrate this to both Indian regulators and global headquarters. BIS

2
People First
"Systems must adopt human-centric design and deployment, with human oversight."

Human oversight is not optional under this framework. Every AI system that affects people should have a human in the loop (or at minimum, on the loop) for decisions with significant impact. This goes beyond the EU AI Act's human oversight requirements in Article 14 by framing it as a design principle rather than a compliance checkbox. EU AI Act Art. 14

MeitY's emphasis on "human-centric design" means considering the needs of Indian populations specifically. That includes accessibility for users across multiple languages, literacy levels, and digital familiarity. India's Digital Public Infrastructure (Aadhaar, UPI, DigiLocker) reaches over a billion people, and AI systems built on top of that infrastructure must account for the full diversity of that user base. IndiaAI

Framework Mapping

EU AI Act Article 14 (human oversight) + ISO 42001 Annex A controls for human oversight mechanisms

GCC Application

Document your human oversight mechanisms for every AI system. Map who reviews what, at what threshold automated decisions escalate to humans, and how override procedures work. This documentation serves MeitY compliance, EU AI Act Article 14 requirements, and ISO 42001 Annex A controls simultaneously.
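The escalation logic described above can be sketched in a few lines. This is a minimal, hypothetical example (the policy fields, thresholds, and role names are assumptions, not anything MeitY prescribes) showing how an oversight policy might route decisions between automated approval and human review:

```python
from dataclasses import dataclass

@dataclass
class OversightPolicy:
    """Hypothetical escalation policy: automated decisions above a
    confidence floor proceed; everything else routes to a human reviewer."""
    auto_approve_confidence: float       # below this, escalate to a human
    high_impact_always_escalates: bool = True

def route_decision(policy: OversightPolicy, confidence: float, high_impact: bool) -> str:
    """Return 'auto' or 'human' for a single AI decision."""
    if high_impact and policy.high_impact_always_escalates:
        return "human"   # human-in-the-loop for significant-impact decisions
    if confidence < policy.auto_approve_confidence:
        return "human"   # model is unsure: human-on-the-loop takes over
    return "auto"

policy = OversightPolicy(auto_approve_confidence=0.90)
print(route_decision(policy, confidence=0.97, high_impact=False))  # auto
print(route_decision(policy, confidence=0.97, high_impact=True))   # human
```

Documenting exactly this kind of routing rule (who reviews what, at what threshold) is what satisfies the "map who reviews what" requirement across all three frameworks.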

3
Innovation over Restraint
"Responsible innovation should take precedence over caution."

This is the principle that sets India apart. Where the EU defaults to precaution (restrict first, permit after assessment), India defaults to permission (innovate first, address harms as they emerge). MeitY explicitly chose to prioritize the economic and social benefits of AI over preemptive restrictions. National Law Review

This does not mean a free-for-all. The word "responsible" is doing real work in that definition. But the burden of proof is different. In the EU, developers must prove their high-risk system is safe before deployment. In India, the assumption is that innovation proceeds unless specific harms are identified.

For practitioners building AI governance programs, this means India's framework will not block deployment the way the EU AI Act might. But it also means you carry more responsibility for self-governance. Without prescriptive rules, your internal policies, risk assessments, and testing protocols become the primary safeguard.

Framework Mapping

Contrasts with EU AI Act's precautionary approach (conformity assessment before deployment for high-risk). Complements NIST AI RMF's risk-based flexibility.

GCC Application

Use this principle strategically. If your parent company is hesitating on AI deployment due to regulatory uncertainty in India, point to this sutra. MeitY has explicitly signaled that India wants AI innovation to proceed. Pair that signal with your ISO 42001 management system to show responsible governance is in place.

4
Fairness & Equity
"AI must be designed and tested for fair, unbiased outcomes."

Bias testing is a requirement under this principle, not a suggestion. MeitY specifically calls out the need to design and test AI systems for fair outcomes. This maps closely to ISO 42001 Annex A.5 (Assessing Impacts of AI Systems) and the EU AI Act's bias monitoring requirements for high-risk systems in Article 10. ISO 42001 A.5 EU AI Act Art. 10

What makes India's fairness principle distinctive is the context. India's population includes caste-based social stratification, significant gender disparities in economic participation, and 22 officially recognized languages with hundreds more in active use. Bias in an AI system deployed in India can manifest along axes that Western frameworks do not account for. MeitY's guidelines recognize this, calling out vulnerable populations including women, children, persons with disabilities, and marginalized communities. MeitY 2025

GCC Application

Your bias testing protocol needs India-specific test cases. If your AI system processes names, addresses, or demographic data, test for caste-proxy discrimination (names, locations, educational institutions can all serve as proxies). Standard fairness toolkits built for US demographic categories will miss these patterns.
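One common starting point for such a protocol is a selection-rate comparison across groups. The sketch below (group labels and the four-fifths threshold are illustrative conventions, not MeitY requirements) computes a disparate impact ratio; in an India-specific suite, the group buckets would be derived from caste-proxy features such as names or locations:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group_label, selected: bool) pairs.
    Returns the per-group selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, reference_group):
    """Minimum over groups of rate(group) / rate(reference).
    The common 'four-fifths' heuristic flags ratios below 0.8."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return min(rates[g] / ref for g in rates)

# Hypothetical screening results keyed by a caste-proxy feature bucket
sample = ([("A", True)] * 8 + [("A", False)] * 2 +
          [("B", True)] * 5 + [("B", False)] * 5)
ratio = disparate_impact_ratio(sample, reference_group="A")
print(round(ratio, 2), "flag" if ratio < 0.8 else "ok")
```

The measurement is generic; what makes it India-specific is the grouping function that maps records into those buckets.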

5
Accountability
"Responsibility must be assigned based on functions performed, risk of harm."

MeitY takes a functional approach to accountability. Rather than assigning blanket liability to the deployer (as the EU AI Act does for high-risk systems), India says accountability should follow function. The developer is accountable for development choices. The deployer is accountable for deployment choices. The data provider is accountable for data quality. MeitY 2025

This maps to the NIST AI Risk Management Framework's concept of AI actors and their respective responsibilities. NIST It also aligns with ISO 42001's approach of defining roles and responsibilities within the AI management system (Clause 5.3). ISO 42001 Cl. 5.3

The practical implication: you need a clear RACI matrix for every AI system. Who is responsible for training data quality? Who is accountable if the model produces biased outputs in production? Who is consulted on risk assessments? Who is informed of incidents? MeitY's functional accountability model requires you to answer these questions explicitly.
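A RACI record per accountability function can be as simple as a nested mapping. The sketch below is illustrative; the function names and role assignments are hypothetical, not taken from the MeitY document:

```python
# Minimal RACI record per AI system; functions and role names are illustrative.
RACI = {
    "training_data_quality": {"R": "GCC data engineering", "A": "GCC head of AI",
                              "C": "Parent-company legal", "I": "Deployer ops"},
    "bias_in_production":    {"R": "Deployer ML ops",      "A": "Deployer product owner",
                              "C": "GCC model team",       "I": "Grievance officer"},
    "incident_response":     {"R": "Deployer ops",         "A": "Deployer product owner",
                              "C": "GCC model team",       "I": "Regulator liaison"},
}

def accountable_for(function: str) -> str:
    """Look up the single Accountable party for a given function."""
    return RACI[function]["A"]

# Every function must name all four roles, per the functional model.
assert all(set(entry) == {"R", "A", "C", "I"} for entry in RACI.values())
print(accountable_for("bias_in_production"))  # Deployer product owner
```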

GCC Application

For centers that build AI systems deployed by a parent company in another jurisdiction, the accountability split is critical. Document which accountability sits with the GCC (development, testing, data processing) and which sits with the deployer (deployment context, user impact, incident response). This protects both parties.

6
Understandable by Design
"Systems require clear explanations and disclosures."

Explainability is not an afterthought. MeitY frames it as a design requirement, not a post-deployment disclosure. This is stronger language than the EU AI Act's transparency requirements (Article 13), which focus on documentation and user-facing information. MeitY wants the system itself to be understandable, not just documented. EU AI Act Art. 13

In practice, this means selecting model architectures and deployment patterns that support explanation. For high-stakes decisions (credit scoring, hiring, medical diagnosis), this may mean choosing interpretable models over black-box alternatives, or implementing robust explanation layers on top of complex models.

The disclosure component requires organizations to tell users when they are interacting with an AI system and to provide meaningful information about how decisions are made. This aligns with the DPDPA's notice requirements for automated decision-making. DPDP Rules 2025

GCC Application

Build explainability into your development pipeline, not your compliance documentation. If your team trains a model that will make decisions affecting Indian citizens, the explanation capability needs to be part of the model specification, tested during development, and documented in your ISO 42001 records.
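For interpretable model classes, "explanation capability as part of the model specification" can mean something as concrete as per-feature contribution reporting. The sketch below (weights, feature names, and the credit-style framing are all hypothetical) shows the idea for a linear score, where each contribution is simply weight times value:

```python
def explain_linear_decision(weights, features, bias=0.0):
    """Per-feature contribution to a linear score: weight * value.
    Returns (score, contributions sorted by absolute impact)."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical credit-style score with interpretable inputs
weights = {"income_lakhs": 0.4, "missed_payments": -1.2, "tenure_years": 0.1}
score, ranked = explain_linear_decision(
    weights, {"income_lakhs": 5, "missed_payments": 2, "tenure_years": 3})
print(round(score, 2))  # -0.1
print(ranked[0])        # ('missed_payments', -2.4) is the dominant factor
```

For black-box models the same contract holds (every decision ships with a ranked explanation), but the explanation layer is more complex; the point is that the capability is tested during development, not reconstructed afterwards.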

7
Safety, Resilience & Sustainability
"Incorporate safeguards to minimize risks."

The final sutra bundles three related concepts. Safety covers protections against harm during normal operation. Resilience covers the system's ability to withstand and recover from attacks, failures, or unexpected inputs. Sustainability addresses the environmental and long-term societal impact of AI systems.

This maps to several ISO 42001 provisions: Annex A control A.6.2.6 (operation and monitoring), the A.6.2 control family (AI system lifecycle), and Clause 6.1 (risk assessment). ISO 42001 In the EU AI Act, these concerns are addressed across Articles 9 (risk management), 15 (accuracy, robustness, cybersecurity), and recitals on environmental sustainability. EU AI Act Arts. 9, 15

MeitY's inclusion of sustainability is notable. India, as a signatory to the Paris Agreement and a country experiencing significant climate impacts, is signaling that the environmental cost of AI (training compute, data center energy, water usage) is a governance concern, not just an operational one.

GCC Application

Include resilience testing (adversarial inputs, edge cases, failure modes) and sustainability metrics (compute costs, energy consumption per inference) in your AI system documentation. These are increasingly requested by European clients under the EU AI Act's sustainability reporting expectations.
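A sustainability metric log can start very small. The sketch below is an assumed structure, not a standard: it aggregates per-inference latency and an energy estimate (the joule figures here are made up, since real values depend on hardware-level measurement):

```python
from dataclasses import dataclass, field

@dataclass
class InferenceLog:
    """Illustrative per-system log of inference cost for sustainability
    reporting; energy figures are assumed, not measured."""
    latencies_ms: list = field(default_factory=list)
    energy_j: list = field(default_factory=list)

    def record(self, latency_ms: float, joules: float):
        self.latencies_ms.append(latency_ms)
        self.energy_j.append(joules)

    def summary(self):
        n = len(self.latencies_ms)
        return {"inferences": n,
                "avg_latency_ms": sum(self.latencies_ms) / n,
                "avg_energy_j_per_inference": sum(self.energy_j) / n}

log = InferenceLog()
for latency, joules in [(12.0, 0.8), (18.0, 1.2), (15.0, 1.0)]:
    log.record(latency, joules)
print(log.summary())
```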

Governance Bodies: AIGG, TPEC, and AISI

MeitY's guidelines propose three new institutional structures. None of these existed before November 2025. MeitY 2025

AIGG
AI Governance Group
Proposed permanent inter-ministerial body chaired by the Principal Scientific Adviser. Coordinates policy across all ministries and regulators. Members span MeitY, MHA, MEA, DST, DoT, NITI Aayog, TRAI, CCI, DPB, RBI, SEBI, ICMR, and UGC.
TPEC
Technology & Policy Expert Committee
Advisory body of experts in frontier AI R&D, machine learning, law, public administration, and national security. Briefs the AIGG on emerging risks, evaluates regulatory gaps, recommends policy interventions.
AISI
AI Safety Institute
Hub-and-spoke model. Central hub tests and evaluates AI systems, develops technical standards. Spoke nodes are sector-specific testing facilities. Manages India's AI incidents database following OECD definitions. OECD Member of the international Network of AI Safety Institutes. UK DSIT

AIGG Composition

The AIGG coordinates policy across a broad set of government bodies. It does not itself regulate. It ensures that sector regulators are aligned with the seven sutras and that regulatory approaches do not conflict across ministries. MeitY 2025

  • MeitY: Lead ministry, secretariat
  • Ministry of Home Affairs (MHA): Law enforcement, national security
  • Ministry of External Affairs (MEA): International AI diplomacy
  • Dept. of Science & Technology (DST): Research coordination
  • Dept. of Telecommunications (DoT): Telecom AI regulation
  • NITI Aayog: Policy coordination
  • TRAI: Telecom regulation
  • CCI: Competition oversight
  • Data Protection Board (DPB): DPDPA enforcement
  • RBI: Financial sector AI
  • SEBI: Securities market AI
  • ICMR: Healthcare AI
  • UGC: Education sector AI

6 Risk Categories: How India Classifies AI Threats

MeitY defines six categories of AI risk. This differs from the EU AI Act's four-tier system (unacceptable, high, limited, minimal) by focusing on the nature of harm rather than a hierarchy of risk levels. MeitY 2025 EU AI Act

  • Malicious uses: deepfakes, adversarial attacks, AI-enabled fraud. India-specific focus: gendered deepfakes, election manipulation.
  • Bias & discrimination: unfair outcomes across protected characteristics. India-specific focus: caste bias, language discrimination, gender disparity.
  • Transparency failures: opaque decision-making, undisclosed AI use. India-specific focus: right to explanation under the DPDPA.
  • Systemic risks: market concentration, infrastructure dependency. India-specific focus: DPI-scale failure scenarios.
  • Loss of control: autonomous systems exceeding intended boundaries. India-specific focus: critical infrastructure automation.
  • National security threats: AI-enabled cyber attacks, surveillance misuse. India-specific focus: cross-border data flows, defense AI.

The India-specific additions are significant. Caste bias is a risk category that no other national framework addresses. Gendered deepfakes targeting women and children are called out as a priority harm. Language discrimination, where AI systems perform worse for non-English or non-Hindi speakers among India's 22 official languages, is treated as a fairness failure. MeitY 2025

The EU's four-tier system classifies AI applications by risk level (a social scoring system is "unacceptable," a hiring tool is "high-risk"). India's six-category system classifies risks by type, meaning a single AI application could touch multiple categories simultaneously. A hiring tool could present bias risk, transparency risk, and accountability gaps all at once.

For organizations building compliance programs, this means you cannot simply classify your AI system into one risk tier and move on. You need to assess each system against all six categories.
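An assessment record that enforces this "all six categories, every system" rule can be sketched as follows (category identifiers and the low/medium/high scale are illustrative choices, not MeitY's wording):

```python
# MeitY's six risk categories, assessed per system (ratings illustrative).
CATEGORIES = [
    "malicious_use", "bias_discrimination", "transparency_failure",
    "systemic_risk", "loss_of_control", "national_security",
]

def assess(system_name, ratings):
    """ratings: category -> 'low' | 'medium' | 'high'. Every category must
    be rated; a system is never placed into a single tier and forgotten."""
    missing = [c for c in CATEGORIES if c not in ratings]
    if missing:
        raise ValueError(f"{system_name}: unassessed categories {missing}")
    return {"system": system_name,
            "needs_mitigation": sorted(c for c, r in ratings.items() if r == "high")}

report = assess("resume-screener", {
    "malicious_use": "low", "bias_discrimination": "high",
    "transparency_failure": "high", "systemic_risk": "low",
    "loss_of_control": "low", "national_security": "low",
})
print(report["needs_mitigation"])  # ['bias_discrimination', 'transparency_failure']
```

Note how the example hiring tool surfaces two concurrent risk categories, matching the multi-category behavior described above.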

Compare India's approach to the EU AI Act in detail

Action Plan: Short, Medium, and Long-Term

MeitY's Part 3 lays out a phased implementation timeline. Here is what to expect. MeitY 2025

2025 - 2026
Short-Term Active
  • Establish the AIGG, TPEC, and AISI institutional structures
  • Develop risk assessment frameworks aligned with the six risk categories
  • Conduct regulatory gap analysis across all sector regulators
  • Create the AI incidents database (following OECD incident definitions)
  • Launch stakeholder consultations on sector-specific guidance
2026 - 2028
Medium-Term Upcoming
  • Publish sector-specific AI standards and codes of practice
  • Operationalize the AI incidents database with mandatory reporting for critical sectors
  • Pilot regulatory sandboxes for high-risk AI applications
  • Develop certification and testing protocols through AISI
  • Expand international cooperation through bilateral and multilateral agreements
2028+
Long-Term Horizon
  • Adopt new legislation where regulatory gaps remain after existing law application
  • Establish horizon-scanning capabilities for emerging AI risks
  • Deepen global diplomatic engagement on AI governance standards
  • Review and update the seven sutras based on implementation experience

The timeline is deliberately open-ended on the long-term items. MeitY is signaling that new AI-specific legislation is possible but not imminent. The current approach is to use existing laws and voluntary guidelines first, then legislate only where gaps persist.

What Organizations Must Do

Part 4 of the guidelines is the most actionable section. It provides separate guidance for AI organizations and for regulators. MeitY 2025

For AI Organizations

  • LAW
    Comply with Indian laws. DPDPA, IT Act, Consumer Protection Act, sector-specific regulations. These are binding, not voluntary.
  • VOLUNTARY
    Adopt voluntary measures. Internal AI governance policies aligned with the seven sutras.
  • LAW
    Grievance redressal. Establish mechanisms for individuals harmed by AI decisions. Mandatory under the Consumer Protection Act for products and services.
  • VOLUNTARY
    Transparency reports. Publish information about AI systems, their capabilities, and limitations.
  • TECHNICAL
    Techno-legal solutions. Implement technical safeguards: watermarking, content authentication, audit trails.

These are voluntary under the guidelines, but several map to binding requirements under existing law. Grievance redressal is mandatory under the Consumer Protection Act. Transparency about automated decision-making is required under the DPDPA, and organizations managing AI data governance lifecycles need to map these requirements to their data pipelines. DPDP Rules 2025 The guidelines extend these existing obligations to AI-specific contexts. India-specific compliance checklists and assessment templates are available in the templates hub.
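Of the techno-legal safeguards listed, an audit trail is the easiest to illustrate. The sketch below is one common pattern, not a MeitY-prescribed mechanism: a hash-chained log in which each entry commits to the previous entry's hash, so silent tampering with any past record is detectable:

```python
import hashlib
import json

def append_event(chain, event: dict) -> dict:
    """Append a tamper-evident audit record: each entry hashes the
    previous entry's hash concatenated with its own payload."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry = {"event": event, "prev": prev_hash,
             "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain) -> bool:
    """Recompute every hash from the start; any edit breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        if (entry["prev"] != prev or
                entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest()):
            return False
        prev = entry["hash"]
    return True

chain = []
append_event(chain, {"model": "v3", "action": "decision", "outcome": "approved"})
append_event(chain, {"model": "v3", "action": "override", "by": "reviewer_7"})
print(verify(chain))                     # True
chain[0]["event"]["outcome"] = "denied"  # tampering breaks the chain
print(verify(chain))                     # False
```

Watermarking and content authentication follow the same principle (cryptographic evidence attached at creation time) but require format-specific tooling.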

For Regulators

MeitY gives sector regulators three directives:

  • DIRECTIVE 1
    Pro-innovation posture. Regulators should enable AI adoption, not block it.
  • DIRECTIVE 2
    Harm-based prioritization. Focus enforcement on demonstrated harms, not theoretical risks.
  • DIRECTIVE 3
    Least burdensome instruments. Use the lightest regulatory tool that achieves the objective (guidance before codes, codes before rules, rules before legislation).

This is remarkable language in a government document. MeitY is explicitly telling regulators to go easy. That signal shapes the enforcement environment for every organization operating in India. For professionals building governance careers around this framework, the AI governance career path and salary benchmarks reflect the growing demand these guidelines are creating.

International Standards Alignment

MeitY did not build this framework in isolation. Annexures 2 and 6 of the guidelines explicitly reference international standards and frameworks. MeitY 2025

ISO 42001 (Annexure 6)

Adopted by BIS as IS/ISO/IEC 42001:2023. BIS MeitY cites it as the recommended management system for AI governance. Organizations that certify to ISO 42001 can demonstrate alignment with the seven sutras through a single, internationally recognized certification. ISO The ISO 42001 Resource Center covers the clause-by-clause implementation details. Professionals pursuing governance credentials should also consider the IAPP AIGP certification, and a full list of relevant credentials is in the IT Certifications Hub.

NIST AI Risk Management Framework (Annexure 2)

MeitY references the NIST AI RMF as an example of a risk-based approach. The NIST framework's four functions (Govern, Map, Measure, Manage) complement MeitY's six risk categories by providing a process for identifying and treating AI risks. NIST

EU AI Act (Annexure 2)

The guidelines include a comparative analysis with the EU AI Act. MeitY acknowledges the EU approach as the most comprehensive binding regulation while explicitly choosing a different path: voluntary guidelines with sector-specific enforcement rather than a single horizontal regulation. EU AI Act

OECD AI Incident Definition

India adopted the OECD's definition of AI incidents for its planned incidents database. This ensures international comparability and allows India to participate in cross-border incident sharing arrangements. OECD

For GCC compliance teams managing multi-jurisdiction requirements, the alignment signals are clear. ISO 42001 certification satisfies the MeitY reference, demonstrates a structured approach for EU AI Act compliance, and aligns with the NIST AI RMF. One management system, three frameworks addressed. Zinnov/NASSCOM

7 Sutras to International Framework Mapping
How MeitY's principles align with EU AI Act, NIST AI RMF, ISO 42001, and OECD AI Principles
1. Trust is the Foundation
  • OECD (Human-centred values): trust as the precondition for responsible AI adoption across society.
  • NIST AI RMF (Govern): governance structures that build institutional trust through documented policies and oversight.
  • ISO 42001 (Clause 5, Leadership): top management must demonstrate commitment, building trust from the top down.
  • EU AI Act (Article 43, Conformity assessment): independent verification of high-risk systems operationalizes trust.

2. People First
  • EU AI Act (Article 14, Human oversight): human-in-the-loop required for high-risk AI systems.
  • OECD (Inclusive growth): AI should benefit people and the planet, with broad stakeholder inclusion.
  • NIST AI RMF (Map): context mapping identifies who is affected and how human oversight should be structured.

3. Innovation over Restraint
  • OECD (Inclusive growth): innovation as a driver of economic benefit and societal progress.
  • EU AI Act (Article 57, Regulatory sandboxes): controlled environments for AI innovation testing.

4. Fairness & Equity
  • EU AI Act (Article 10, Non-discrimination): bias monitoring and mitigation required for high-risk systems.
  • NIST AI RMF (Measure): quantifies bias, fairness metrics, and disparate impact through systematic measurement.
  • OECD (Human-centred values): fairness, non-discrimination, and respect for human rights and democratic values.
  • ISO 42001 (Annex A.5, Assessing impacts): impact assessment, including bias testing, to verify fair and unbiased outcomes.

5. Accountability
  • OECD (Accountability): organizations are responsible for the proper functioning and outcomes of the AI systems they operate.
  • ISO 42001 (Clause 9, Performance evaluation): monitoring and measurement with defined roles and audit trails.
  • NIST AI RMF (Govern): the governance function assigns accountability through AI actor roles and RACI matrices.

6. Understandable by Design
  • EU AI Act (Article 13, Transparency): documentation, user-facing disclosures, and AI system identification.
  • NIST AI RMF (Measure): assesses AI system trustworthiness, directly supporting explainability and understandability requirements.
  • OECD (Transparency): meaningful information about AI systems so stakeholders can understand outcomes.

7. Safety, Resilience & Sustainability
  • EU AI Act (Article 15, Robustness): accuracy, robustness, and cybersecurity safeguards.
  • NIST AI RMF (Manage): risk treatment, incident response, and continuous monitoring of AI system performance.
  • OECD (Robustness): AI systems should be robust, secure, and safe throughout their lifecycle.
  • ISO 42001 (Clause 6, Planning, plus Annex A controls): risk treatment and controls for system security and reliability.

MeitY's 7 Sutras are voluntary principles. Their alignment with binding frameworks (EU AI Act) and standards (ISO 42001) means organizations can build one governance program that satisfies multiple jurisdictions.
Read the full guide on ISO 42001 adoption in India
Return to the India AI Governance Hub
MeitY AI Governance Compliance Checklist

Map your organization against the 7 sutras and Part 4 requirements.

Download Free Template
Sources & Citations (12 references)
  1. India AI Governance Guidelines (Full PDF) -- MeitY / IndiaAI Mission, Nov 2025. Primary
    static.pib.gov.in/...
  2. MeitY Press Release -- AI Governance Guidelines -- Press Information Bureau, Nov 2025. Primary
    pib.gov.in/PressRelease...2186639
  3. India vs Global AI Acts Comparison -- National Law Review, Dec 2025. Secondary
    natlawreview.com/...
  4. IndiaAI Official Portal -- IndiaAI.gov.in / MeitY, 2025. Primary
    indiaai.gov.in
  5. ISO/IEC 42001:2023 -- AI Management System -- ISO, Dec 2023. Primary
    iso.org/standard/81230
  6. BIS Adoption of ISO 42001 as Indian Standard -- Bureau of Indian Standards, 2023. Primary
    services.bis.gov.in
  7. DPDP Rules 2025 Notification -- Press Information Bureau, Nov 2025. Primary
    pib.gov.in/PressRelease...2190655
  8. EU AI Act Full Text -- European Parliament, Jul 2024. Primary
    eur-lex.europa.eu/...
  9. NIST AI Risk Management Framework -- NIST, Jan 2023. Primary
    nist.gov/artificial-intelligence/...
  10. OECD AI Incidents Monitor -- OECD, 2024. Secondary
    oecd.ai/en/incidents
  11. Network of AI Safety Institutes -- UK DSIT / International, Nov 2024. Secondary
    gov.uk/government/publications/...
  12. Zinnov-NASSCOM India GCC Landscape Report -- Zinnov / NASSCOM, 2025. Primary
    zinnov.com/centers-of-excellence/...