India AI Sector Regulators: RBI, SEBI, ICMR, IRDAI, TEC & CERT-In
India did not create a new AI regulator. It told six existing ones to handle AI within their domains. The result is a patchwork that is more specific than most countries manage with a single horizontal law.
Who Regulates AI in India?
India did not create a single AI regulator. Instead, six existing sector regulators govern AI within their domains: RBI for finance, SEBI for securities, ICMR for healthcare, IRDAI for insurance, TEC/DoT for telecom, and CERT-In for cross-sector cybersecurity. Each has issued or is developing AI-specific guidance under the MeitY governance framework.[1]
The MeitY guidelines describe this as a "whole of government" approach. The AI Governance Group (AIGG), chaired by the Principal Scientific Adviser, coordinates across all these regulators. But the AIGG does not itself enforce anything. Enforcement stays with the sector regulators, each operating under their own enabling statute.[1]
MeitY also gave regulators three directives. First, adopt a pro-innovation posture: enable AI adoption, do not block it. Second, prioritize based on demonstrated harms, not theoretical risks. Third, use the lightest regulatory tool that achieves the objective (guidance before codes, codes before rules, rules before legislation). This graduated approach echoes international frameworks like the NIST AI Risk Management Framework and contrasts sharply with the EU AI Act's prescriptive risk tiers.[1]
That third directive matters. It means the current regulatory posture across all six bodies is deliberately calibrated toward guidance and voluntary frameworks. As the AI ecosystem matures, MeitY has signaled that some baseline measures may convert into mandatory requirements.
RBI: AI Governance in Finance
What the RBI Requires
RBI's regulatory architecture has expanded progressively to cover AI. Three documents form the current framework.
The Cybersecurity Framework for Banks (2016) established board-approved cyber policies, continuous monitoring, incident reporting, and resilience planning. These requirements were written before AI was a major factor in banking, but their scope is broad enough to cover AI-enabled services.[4]
The Digital Lending Guidelines (2022) added transparency, consent, and accountability requirements for automated decision-making. These are now expected to incorporate disclosure obligations for AI-driven credit scoring and fairness audits.[5]
The FREE-AI Committee Report (August 2025) is the most significant development. MeitY's own guidelines reference it directly, noting that the FREE-AI Committee's seven principles informed the national framework's sutras.[1] Its key expectations for regulated entities:
- Board-approved AI policies covering governance, lifecycle management, vendor oversight, and annual review. RBI-regulated entities are expected to have a formal AI governance policy approved at the board level.[2]
- AI-specific threat integration into cybersecurity protocols. Adversarial attacks, model poisoning, and data manipulation must be addressed in your threat model.[2]
- Tiered incident reporting for AI failures. Includes unintended outcomes, bias incidents, and explainability gaps. Bias and fairness failures are treated as reportable events.[2]
- Vendor oversight requirements for third-party AI systems. If you buy an AI model from a vendor and deploy it in a regulated financial service, you are accountable for that vendor's compliance posture.[2]
What Is the RBI FREE-AI Committee Report?
The Framework for Responsible, Explainable and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report was published in August 2025. It establishes seven foundational principles for AI in the financial sector. MeitY considered these principles significant enough to reference them as a model for the national governance framework.[2]
One notable recommendation: the FREE-AI Committee calls for a "tolerant" stance toward "first time/one-off aberrations." This signals that RBI recognizes AI systems are inherently probabilistic and may produce unexpected outcomes despite reasonable precautions (see model explainability and stochastic outputs in the AI glossary). The tolerance applies to first occurrences, not repeated failures. Pattern failures will draw enforcement attention.[2]
Practical Implications
If you are a bank, NBFC, or digital lender deploying AI in India:
- Get a board-approved AI governance policy in place. It must cover lifecycle management (development through retirement), vendor oversight, and annual review cycles.
- Update your cybersecurity threat model to include AI-specific attack vectors: adversarial inputs, training data poisoning, model extraction, prompt injection.
- Build a tiered incident reporting process for AI. Bias incidents, explainability failures, and unintended outcomes each need documented escalation paths.
- If you use third-party AI models, document your vendor oversight process. You cannot outsource accountability.
- Prepare for disclosure requirements on automated credit scoring. The Digital Lending Guidelines are expected to tighten here.
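The tiered escalation path above can be prototyped as a small classifier. The tier names, categories, and escalation rules below are illustrative assumptions for a sketch, not values prescribed by RBI:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    ADVISORY = 1   # log internally, review at next governance cycle
    ESCALATE = 2   # notify the AI risk owner within the business day
    REPORT = 3     # board/regulator notification path

@dataclass
class AIIncident:
    category: str          # e.g. "bias", "explainability_gap", "unintended_outcome"
    customer_impact: bool  # did the failure affect a customer-facing decision?
    repeated: bool         # has this failure mode occurred before?

def classify(incident: AIIncident) -> Tier:
    """Map an AI incident to an escalation tier (illustrative rules only)."""
    # Repeated failures lose the FREE-AI report's "first time/one-off" tolerance.
    if incident.repeated or (incident.category == "bias" and incident.customer_impact):
        return Tier.REPORT
    if incident.customer_impact:
        return Tier.ESCALATE
    return Tier.ADVISORY
```

The point of encoding the rules is auditability: the escalation logic becomes a reviewable artifact rather than tribal knowledge.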
SEBI: AI in Securities Markets
What SEBI Requires
SEBI's existing Cybersecurity and Cyber Resilience Framework already applies to AI systems in capital markets. Market infrastructure institutions and intermediaries must maintain security operations centers, conduct vulnerability assessments, and submit compliance reports. AI-driven trading algorithms and surveillance systems fall directly under this framework, linking automation to accountability for market integrity.[3]
In June 2025, SEBI released a consultation paper on "Guidelines for responsible usage of AI/ML in Indian Securities Markets." This is the most direct signal that SEBI intends to formalize AI-specific rules for capital markets. While a consultation paper is not binding regulation, it indicates where enforcement is heading.[3]
Does SEBI Regulate AI Trading in India?
Yes. Algorithmic trading systems already operate under SEBI's existing framework for automated trading, which includes pre-trade risk controls, order-to-trade ratio limits, and kill switch requirements. The June 2025 consultation paper extends this to cover AI and ML systems specifically, addressing model governance, testing requirements, and accountability for AI-driven market decisions.[3]
The overlap between SEBI's cyber resilience requirements and AI system governance is significant. If your trading firm deploys an ML model for order execution or market surveillance, you need to demonstrate that the model is covered by your existing cybersecurity compliance program and that AI-specific risks (model drift, adversarial manipulation, data quality and lifecycle management) are addressed.
Practical Implications
If you operate in Indian securities markets and deploy AI/ML systems:
- Ensure your AI trading systems are documented within your cybersecurity and cyber resilience compliance framework.
- Monitor the consultation paper for final guidelines. When published, these will define specific AI/ML governance requirements for market participants.
- Document model governance processes: how models are trained, validated, monitored for drift, and retired.
- Maintain audit trails for AI-driven trading decisions. SEBI's accountability framework connects automated decisions to human oversight.
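Drift monitoring is the piece most firms underestimate. One common, framework-agnostic metric is the Population Stability Index (PSI), which compares a model's training-time score distribution against what it sees in production. A minimal sketch; the conventional 0.2 review threshold is an industry rule of thumb, not a SEBI requirement:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference score distribution
    (expected) and a live one (actual). PSI > 0.2 is a common retrain trigger."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-4) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Logging the PSI per model per day gives you exactly the kind of audit trail a governance review (or a regulator) will ask for.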
ICMR: AI Ethics in Healthcare
What ICMR Requires
ICMR's guidelines set baseline expectations for safety, transparency, accountability, fairness, and human oversight in medical AI. The requirements are specific:[6]
- Bias audits are mandatory. Any AI system used in biomedical research or healthcare must be tested for biased outcomes across patient populations. Given India's demographic diversity (caste, language, geography, socioeconomic status), bias testing needs India-specific test cases.[6]
- Independent ethics review is required before deploying AI in clinical settings. This mirrors the existing ethics committee process for clinical trials but applies it to AI systems.[6]
- Data quality checks must verify that training data meets standards for accuracy, completeness, and representativeness. An AI diagnostic model trained primarily on urban tertiary hospital data may perform poorly in rural primary care settings.[6]
- Developer vs. provider responsibility delineation is explicitly addressed. ICMR draws a line between the AI developer's accountability (model design, training, validation) and the healthcare provider's accountability (deployment context, clinical oversight, patient communication).[6]
Practical Implications
If you develop or deploy AI in Indian healthcare:
- Budget for independent ethics review before clinical deployment. This is not a formality. Ethics committees will evaluate your AI system's impact on patient populations.
- Design bias audits that account for India-specific factors: caste-proxy variables, language barriers, urban-rural performance gaps, and socioeconomic disparities in training data representation.
- Document the responsibility split between your development team and the healthcare providers who deploy the system. Both parties need clarity on who owns what.
- Maintain data quality documentation. If your training data comes from a limited geographic or demographic slice, document that limitation and its implications.
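A disaggregated audit can start as simply as comparing a metric per subgroup against the overall rate. The group labels and the 5% tolerance below are illustrative choices for this sketch; ICMR does not prescribe a numeric threshold:

```python
def subgroup_gaps(records: list[dict], tol: float = 0.05) -> dict:
    """records: dicts with a boolean "correct" outcome and a "group" label.
    Returns subgroups whose accuracy deviates from the overall rate by > tol."""
    overall = sum(r["correct"] for r in records) / len(records)
    by_group: dict[str, list[bool]] = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r["correct"])
    flagged = {}
    for group, vals in by_group.items():
        rate = sum(vals) / len(vals)
        if abs(rate - overall) > tol:
            flagged[group] = round(rate, 3)
    return flagged
```

In practice the grouping key would be the India-specific factors listed above (language, urban/rural setting, socioeconomic proxies), and the audit would cover multiple metrics, not just accuracy.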
IRDAI: AI in Insurance
What IRDAI Requires
IRDAI mandates that insurers and intermediaries comply with the Guidelines on Information and Cyber Security for Insurers. These guidelines have direct implications for three AI use cases that are growing rapidly in Indian insurance:[7]
- AI-driven underwriting. ML models that assess risk profiles and set premiums must operate within IRDAI's fairness and transparency expectations. A model that uses proxy variables to discriminate against protected groups creates compliance exposure under both IRDAI guidelines and the DPDPA framework.[7]
- Claims management. AI systems that automate claims assessment, fraud scoring, or settlement calculations fall under the cybersecurity guidelines. The integrity of claims data, the auditability of automated decisions, and the availability of human review are all within scope.[7]
- Fraud detection. AI-powered fraud detection is one of the most common AI applications in Indian insurance. IRDAI's guidelines require documented security parameters and monitored false positive rates (legitimate claims flagged as fraud).[7]
Practical Implications
If you deploy AI in Indian insurance operations:
- Map your AI systems to IRDAI's cybersecurity guidelines. Underwriting models, claims automation, and fraud detection all require documented security and governance controls.
- Test underwriting models for proxy discrimination. Variables like postal code, occupation, and education level can serve as proxies for caste, religion, or socioeconomic status.
- Maintain human review pathways for AI-driven claims decisions. Automated denials without human oversight create both compliance risk and reputational exposure.
- Document false positive rates for fraud detection systems and establish thresholds that trigger model review.
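Tracking the false positive rate against a documented threshold can be automated in a few lines. The 8% threshold here is an illustrative governance choice, not an IRDAI figure:

```python
def fraud_fpr(flags: list[bool], is_fraud: list[bool]) -> float:
    """False positive rate: legitimate claims flagged as fraud,
    as a share of all legitimate claims."""
    legit_flags = [f for f, fraud in zip(flags, is_fraud) if not fraud]
    return sum(legit_flags) / len(legit_flags) if legit_flags else 0.0

def needs_review(fpr: float, threshold: float = 0.08) -> bool:
    # Threshold is a hypothetical internal governance value for this sketch.
    return fpr > threshold
```

Computing this requires ground truth on which flagged claims were actually fraudulent, which is itself an argument for keeping human review outcomes in the audit trail.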
TEC/DoT: AI Standards for Telecom
What TEC Has Published
TEC is structuring pathways for trustworthy AI assessment in telecommunications and critical digital infrastructure. Three standards define the current landscape:[8]
- Voluntary Standard for Fairness Assessment and Rating of AI Systems is published. It provides a methodology for assessing whether AI systems produce fair outcomes across different user groups. Applies to any AI in telecom: network optimization, customer service bots, resource allocation.[8]
- Standard for Assessing Robustness of AI Systems in Telecom Networks is under development. Will address reliability requirements for AI managing network operations. A network optimization algorithm that fails under load is a robustness problem with real-world consequences.
- Draft Standard for AI Incident Database Schema is under development. Aims to create a structured incident reporting framework for AI failures in telecom covering network optimization, service quality, and critical infrastructure.
Practical Implications
If you deploy AI in Indian telecom infrastructure:
- Apply TEC's fairness standard to your AI systems voluntarily. "Voluntary" today may become mandatory as the Telecommunications Act 2023 rules are notified.[12]
- Prepare for robustness requirements. When TEC's robustness standard is published, you will need documented testing results for AI systems that manage network operations.
- Build incident reporting capabilities for AI failures. The draft incident database schema signals where TEC is heading on mandatory reporting.
- Track the Telecommunications Act 2023 rule notifications. New rules for cybersecurity, critical infrastructure, and incident reporting will strengthen AI governance requirements.[12]
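Since the draft incident database schema has not been published, any structured record today is speculative. Still, starting to capture incidents in a consistent shape now makes later migration cheap. The fields below are a hypothetical sketch, not TEC's schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical record structure; the actual TEC draft schema may differ.
@dataclass
class TelecomAIIncident:
    system: str                  # e.g. "ran-traffic-optimizer" (invented name)
    failure_mode: str            # e.g. "degraded_qos", "unfair_allocation"
    affected_area: str           # network optimization, service quality, ...
    critical_infrastructure: bool = False
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = TelecomAIIncident("ran-traffic-optimizer", "degraded_qos", "network optimization")
```

`asdict(record)` then gives a serializable dict ready for whatever reporting format TEC eventually mandates.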
CERT-In: Cross-Sector AI Incident Reporting
What Are CERT-In Reporting Requirements for AI Systems?
CERT-In's requirements are the most broadly applicable of all six regulators. They cross every sector boundary.[9]
The CERT-In Directions (2022), issued under the Information Technology Act 2000, mandate three things:
- Report cybersecurity incidents within six hours. Not 24 hours. Not 72 hours. Six hours from noticing or being brought to notice of the incident. This is one of the tightest reporting windows in the world, shorter than the EU's NIS2 24-hour early warning requirement (part of a 3-stage process: 24h early warning, 72h notification, 1-month final report) and significantly shorter than GDPR's 72-hour window.[9]
- Retain logs for 180 days. System logs, network logs, and security event logs must be maintained for six months and made available for audit on demand.[9]
- Enable audits. CERT-In can request access to your systems and logs. Your infrastructure must support this capability.[9]
The MeitY guidelines clarify that these requirements "directly cover AI systems integrated into cloud platforms, fintech, or critical infrastructure." This is not an extension or reinterpretation. The CERT-In Directions were written broadly enough to cover AI from day one. MeitY is simply confirming what the text already says.[1]
Working alongside CERT-In, the NCIIPC Rules (2014) designate critical information infrastructure sectors and require mandatory safeguards, monitoring, and incident response. These provisions are directly relevant to AI deployment in energy, telecom, and transport.[10]
The Six-Hour Window in Practice
Six hours is not much time. For an AI system failure, the clock starts when you notice the incident or are brought to notice of it, not when you finish investigating it. That means your incident detection and reporting pipeline needs to be fast enough to classify an AI failure, determine if it meets CERT-In's reporting threshold, and submit notification within that window.[9]
What counts as a reportable incident for AI systems? CERT-In specifies 20 enumerated incident types including unauthorized access, data breaches, denial of service attacks, and attacks on critical infrastructure. Not every AI failure triggers mandatory reporting. An AI model that is compromised through adversarial inputs, leaks training data, or fails in a way that disrupts a critical service would fall within the enumerated categories. Routine model performance issues or accuracy degradation, absent a cybersecurity nexus, would not.[9]
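The triage logic that paragraph describes can be sketched as a set intersection. The tags below abbreviate CERT-In's 20 enumerated categories to four illustrative examples; they are not the official taxonomy:

```python
# Illustrative subset of reportable incident categories (not the official list).
REPORTABLE_NEXUS = {
    "unauthorized_access",
    "data_breach",
    "denial_of_service",
    "critical_infrastructure_attack",
}

def is_reportable(incident_tags: set[str]) -> bool:
    """An AI failure triggers mandatory reporting only when it intersects
    an enumerated cybersecurity category; plain accuracy degradation
    without a cyber nexus does not."""
    return bool(incident_tags & REPORTABLE_NEXUS)
```

The value of encoding this is speed: with the taxonomy pre-mapped, classification stops consuming the six-hour window.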
Practical Implications
If you deploy AI systems in India (any sector):
- Confirm your incident response process can meet the six-hour reporting window. If it cannot, fix that before anything else.
- Implement 180-day log retention for all AI system operations, including model inputs, outputs, performance metrics, and access logs.
- Map your AI systems to CERT-In's incident categories. Know in advance which failure modes trigger mandatory reporting.
- If your AI systems touch critical infrastructure (energy, telecom, transport, financial systems), layer NCIIPC requirements on top of CERT-In obligations.[10]
- Test your reporting pipeline. Run tabletop exercises that simulate an AI system failure and measure whether your team can detect, classify, and report within six hours.
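The deadline arithmetic itself is trivial; the reason to encode it is that tabletop exercises can then assert against it programmatically. A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=6)  # CERT-In Directions 2022

def reporting_deadline(noticed_at: datetime) -> datetime:
    """The clock starts at notice of the incident, not at end of investigation."""
    return noticed_at + REPORTING_WINDOW

def time_remaining(noticed_at: datetime, now: datetime = None) -> timedelta:
    """How long is left to file; negative means the window has been missed."""
    now = now or datetime.now(timezone.utc)
    return reporting_deadline(noticed_at) - now
```

A tabletop harness would stamp `noticed_at` when the simulated alert fires and fail the exercise if the report step lands after `reporting_deadline`.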
Cross-Regulator Compliance: The Practitioner Challenge
The reality of India's sector-based approach is that many organizations fall under multiple regulators simultaneously. A fintech company using AI for credit scoring faces RBI requirements (FREE-AI framework, Digital Lending Guidelines), CERT-In requirements (incident reporting, log retention), and DPDPA obligations (consent, purpose limitation, data minimization). A health insurance company deploying AI for claims processing answers to both IRDAI and ICMR, with CERT-In layered on top.
There is no single compliance checklist that covers all six regulators. Organizations operating across sectors need to map their AI systems to each applicable regulatory framework and identify where requirements overlap, where they conflict, and where gaps exist. Professionals holding the IAPP AIGP certification are trained specifically for this kind of multi-framework gap analysis.
ISO 42001 helps here. The management system approach (Plan-Do-Check-Act cycle, documented controls, risk treatment methodology) provides a single internal governance framework that can satisfy multiple external regulatory requirements. The Bureau of Indian Standards has adopted ISO 42001 as IS/ISO/IEC 42001:2023, and MeitY's guidelines reference it directly in Annexure 6 as the recommended management system for AI governance.[1]
Mapping Regulators to ISO 42001 Controls
| Regulator | Key AI Requirement | ISO 42001 Mapping |
|---|---|---|
| RBI | Board-approved AI policy | Clause 5.1 (Leadership), Clause 5.2 (AI Policy) |
| RBI | AI incident reporting | A.6.2.6 (Operation & Monitoring), Clause 10 (Improvement) |
| SEBI | Model governance and audit trails | A.6.2.4 (Verification & Validation), A.8 (Information for Interested Parties) |
| ICMR | Bias audits and ethics review | A.5 (Assessing Impacts of AI Systems), A.6.2.4 (Verification & Validation) |
| IRDAI | Cybersecurity for AI systems | A.6.2.6 (Operation & Monitoring), Clause 6.1 (Risk Assessment) |
| TEC | Fairness assessment | A.5 (Assessing Impacts of AI Systems), A.6.2.4 (Verification & Validation) |
| CERT-In | 6-hour incident reporting | Clause 10.2 (Nonconformity), A.6.2.6 (Operation & Monitoring) |
This is not a complete mapping. Each regulator has requirements that go beyond what ISO 42001 covers. But the management system gives you the documentation backbone, the risk treatment process, and the continuous improvement cycle that makes multi-regulator compliance manageable instead of chaotic.
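The table above can also live as a queryable structure inside a compliance toolchain, so a first-pass gap analysis becomes a set lookup rather than a manual cross-walk. The requirement keys are shorthand labels invented for this sketch; the control IDs are those from the table:

```python
# Shorthand requirement labels are illustrative; control IDs follow the
# ISO/IEC 42001:2023 clauses and Annex A references in the table above.
CONTROL_MAP = {
    ("RBI", "board_ai_policy"):       ["Clause 5.1", "Clause 5.2"],
    ("RBI", "ai_incident_reporting"): ["A.6.2.6", "Clause 10"],
    ("SEBI", "model_governance"):     ["A.6.2.4", "A.8"],
    ("ICMR", "bias_audit"):           ["A.5", "A.6.2.4"],
    ("IRDAI", "ai_cybersecurity"):    ["A.6.2.6", "Clause 6.1"],
    ("TEC", "fairness_assessment"):   ["A.5", "A.6.2.4"],
    ("CERT-In", "6h_reporting"):      ["Clause 10.2", "A.6.2.6"],
}

def controls_for(regulators: set[str]) -> set[str]:
    """All mapped ISO 42001 controls implicated by a set of regulators --
    a starting point for multi-regulator gap analysis, not a complete one."""
    return {c for (reg, _), cs in CONTROL_MAP.items() if reg in regulators for c in cs}
```

A fintech facing both RBI and CERT-In, for example, can immediately see that A.6.2.6 (Operation & Monitoring) satisfies requirements from both regulators at once.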
What Happens Next
MeitY's action plan includes a short-term regulatory gap analysis across all sector regulators (2025-2026), followed by sector-specific AI standards and codes of practice in the medium term (2026-2028). The long-term plan (2028 and beyond) includes new legislation where regulatory gaps remain after existing law application.[1]
The current posture is "guidance first." Every sector regulator is operating in advisory or consultation mode on AI-specific requirements. That will change as enforcement data accumulates, AI incidents occur, and the political environment shifts. Organizations that build compliance programs now, while the requirements are still forming, will be better positioned than those who wait for mandatory enforcement.
The DPDPA compliance deadlines (Phase 1 by November 2026, full compliance by May 2027) will also force the issue. As data protection enforcement begins, the intersection between DPDPA obligations and sector-specific AI requirements will create new compliance pressure points.
Sources
1. MeitY / IndiaAI Mission. "India AI Governance Guidelines." November 2025.
2. Reserve Bank of India. "FREE-AI Committee Report." August 2025.
3. SEBI. "Guidelines for Responsible Usage of AI/ML in Indian Securities Markets (Consultation Paper)." June 2025.
4. Reserve Bank of India. "Cybersecurity Framework for Banks." 2016.
5. Reserve Bank of India. "Digital Lending Guidelines." 2022.
6. ICMR. "Ethical Guidelines for Application of AI in Biomedical Research and Healthcare." 2023.
7. IRDAI. "Guidelines on Information and Cyber Security for Insurers." 2023.
8. TEC / DoT. "Voluntary Standard for Fairness Assessment and Rating of AI Systems." 2024.
9. CERT-In / MeitY. "CERT-In Directions 2022." 2022.
10. NCIIPC. "NCIIPC Rules 2014." 2014.
11. National Law Review. "India vs Global AI Acts Comparison." December 2025.
12. DoT / Government of India. "Telecommunications Act 2023." 2023.