ARTICLE
8 December 2025

India's AI Guidelines: Enabling Responsible AI Innovation And Driving Proactive Compliance By Companies

Lexplosion Solutions Private Limited

Contributor

Lexplosion Solutions is a leading Legal-Tech company providing legal risk management solutions in areas of compliance management, audits, contract lifecycle management, litigation management and corporate governance. Lexplosion merges disruptive technology with legal domain expertise to create solutions that increase efficiency and reduce costs.

The India AI Governance Guidelines ("Guidelines") issued by the Ministry of Electronics and Information Technology ("MeitY") have now been in the public domain for several weeks, giving policymakers, industry leaders and legal practitioners time to study the document in depth. With the initial reactions settling, the real work begins: understanding what these Guidelines actually mean for businesses, where compliance expectations are headed and how organisations should prepare.

Over the past few weeks, multiple interpretations have emerged across media, industry forums and advisory circles. However, what many organisations still need is a clear, consolidated view of the Guidelines, stripped of noise, grounded in the text and focused on the practical implications for compliance, governance and risk.

This blog attempts to do exactly that. It distils the key elements of the Guidelines, highlights their legal and operational impact and outlines what companies must start doing today to remain aligned with India's emerging AI governance architecture.

India's AI Governance Guidelines: A Balanced Vision for the Future

While the goal is to encourage innovation and adoption, protecting individuals and society from the risk of harm caused by the development or use of AI cannot be ignored. The twin goals of the India AI Governance Framework are therefore:

  1. Enable innovation
  2. Mitigate risks to individuals and society

Keeping the above in view, India's approach focuses on governing AI applications rather than regulating the underlying technology. This adaptive strategy aims to:

  • Promote innovation and investment
  • Expand domestic capacity and digital inclusion
  • Drive sectoral adoption across healthcare, agriculture, education, finance, public service and manufacturing
  • Enable responsible deployment with minimal regulatory burden

To achieve these objectives, the Guidelines provide that the governance structure must be guided by the following core approaches:

  • Flexible and pro-innovation
  • People-first, trust-centric and transparent
  • Built on sector-specific accountability
  • Supported by well-structured legal and compliance mechanisms

Yet the complexity of existing laws, such as the IT Act, the DPDP Act, sectoral regulations, consumer protection frameworks and copyright law, combined with upcoming changes, makes compliance challenging for organisations.

The Guidelines suggest that some AI-related harms, such as deepfake impersonation, can be prosecuted under the IT Act and the Bharatiya Nyaya Sanhita, while the use of personal data without consent for training AI models is governed by the Digital Personal Data Protection Act. There are, however, areas where new regulatory interventions are required, and several gaps exist in sensitive sectors such as radiology and finance. For example, the PCPNDT Act must be examined to address risks arising from AI-enabled analysis of radiology images that could facilitate unlawful sex determination. Similarly, in finance and other priority sectors, regulatory gaps should be rapidly identified and resolved through targeted amendments and sector-specific rules.

Further, the Guidelines recommend that the ethical foundation for how AI systems are designed and deployed rest on seven principles:

  1. Trust must be embedded in the technology across the entire AI value chain.
  2. A people-first approach must ensure human oversight and final control wherever possible, supported by ethical safeguards.
  3. AI governance must enable innovation, prioritising responsible progress over excessive restraint.
  4. AI must promote inclusion and avoid exclusion or discrimination.
  5. Developers and deployers must remain visible and accountable, with responsibilities assigned based on role, risk and due-diligence requirements.
  6. AI systems must offer clear explanations and disclosures, helping users and regulators understand system behaviour and intended outcomes, as far as technically feasible.
  7. AI development must be environmentally responsible, encouraging energy-efficient and lightweight models.

The Compliance Mandates on the Way

The recommendations of the Review Committee can be broadly classified under six pillars:

  1. Infrastructure building for access to data, compute and digital public infrastructure.
  2. Policy and regulation to facilitate agile, balanced frameworks that support innovation and introduce targeted amendments where necessary.
  3. Capacity building through education, skilling and public awareness to build trust and enable responsible AI adoption.
  4. Risk mitigation by creating a risk assessment framework, promoting voluntary compliance and applying stronger safeguards for sensitive sectors and vulnerable groups.
  5. Accountability, imposed by implementing graded liability based on role and risk and ensuring transparency across the AI value chain.
  6. Institutions adopting a whole-of-government approach with coordinated bodies such as the AI Governance Group (AIGG), the Technology & Policy Expert Committee (TPEC) and the AI Safety Institute (AISI).

One thing is clear from these recommendations: AI-powered services must demonstrate accountability and transparency, while complying with a wide range of legal instruments and emerging expectations, such as:

A. Risk assessment and incident reporting frameworks: The major risks associated with AI are:

  1. Malicious uses, including misinformation via deepfakes, trojan attacks, model or data poisoning and adversarial inputs targeting critical infrastructure.
  2. Bias and discrimination, such as unfair employment decisions driven by manipulated datasets.
  3. Lack of transparency due to inadequate disclosures, for example, using personal data to train AI systems without consent.
  4. Systemic risks including disruptions due to market concentration or unstable regulatory environments.
  5. Loss of control where AI systems behave unpredictably, threatening public order and safety.
  6. National security threats such as AI-driven disinformation campaigns, cyberattacks on critical infrastructure impacting sovereignty and counter-terrorism efforts.
  7. Risks to vulnerable groups, including:
    • harms to children from algorithmic manipulation and exposure to harmful content, and
    • harms to women from AI-generated deepfakes, sometimes referred to as 'revenge porn'.

To understand AI-related risks in the Indian context, there is a need to collect empirical data about the harms caused by AI, and 'incident reporting' can be a useful measure here. An 'AI incident' is an event, circumstance, or series of events where the development, use, or malfunction of one or more AI systems directly or indirectly leads to a specific harm. These harms include injury to health, disruption of critical infrastructure, human rights violations, or damage to property, communities, or the environment.

The Guidelines recommend that incident reporting systems be designed to encourage participation from public and private organisations, sectoral regulators and individuals, enabling cross-sector analysis of emerging risks. Organisations should be encouraged to report voluntarily through confidentiality-protective protocols. The database should promote reporting without fear of penalties, focusing on identifying harms, assessing their impact and enabling mitigation through an inclusive approach.
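
To make this concrete, here is a minimal sketch, in Python, of what an incident record covering the data points in the Guidelines' definition might look like. This is purely illustrative: the national database's actual schema has not been published, and field names such as harm_category and reporter_type are the author's assumptions.

  from dataclasses import dataclass
  from datetime import datetime
  from enum import Enum

  class HarmCategory(Enum):
      # Harm types drawn from the Guidelines' definition of an 'AI incident'
      HEALTH_INJURY = "injury to health"
      CRITICAL_INFRASTRUCTURE = "disruption of critical infrastructure"
      HUMAN_RIGHTS = "human rights violation"
      PROPERTY_COMMUNITY_ENVIRONMENT = "damage to property, communities or the environment"

  @dataclass
  class AIIncidentReport:
      # Hypothetical record structure for a voluntary, confidentiality-protective report
      system_name: str            # the AI system whose development, use or malfunction is involved
      description: str            # what happened, directly or indirectly
      harm_category: HarmCategory
      occurred_at: datetime
      reporter_type: str          # e.g. "private organisation", "sectoral regulator", "individual"
      sector: str                 # enables cross-sector analysis of emerging risks
      anonymised: bool = True     # report without fear of penalties; identity protected by default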

B. Voluntary standards, certifications and transparency reports: Voluntary measures, such as industry codes of practice, technical standards and self-certifications, offer flexibility without legal enforceability. They support a pro-innovation approach by enabling responsible AI development without excessive regulation and can be tailored to India's diverse context. Insights generated from these measures can inform future binding rules and may eventually evolve into mandatory requirements. Voluntary frameworks should be proportionate to risk: low-risk uses may need only transparency and grievance processes, while high-risk sectors like health and finance require stronger safeguards. Some incentives for voluntary compliance would be:

  • Access to regulatory sandboxes
  • Public recognition through certifications or ratings
  • Investment preference for responsible innovators
  • Technical support and toolkits

C. Grievance redressal mechanisms: To remain accountable, organisations deploying AI systems should establish accessible and effective grievance redressal mechanisms. These processes must be easy to use, protect complainants and be available in multiple languages with timely responses. Feedback from grievances should be analysed and used to improve systems, creating a continuous learning loop. These mechanisms should operate separately from the AI Incidents Database.
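
As a rough illustration, the sketch below models a grievance record and the learning loop described above. The 15-day response window and the recurrence threshold are hypothetical figures assumed for illustration; the Guidelines ask only for timely responses and continuous improvement, without fixing numbers.

  from collections import Counter
  from dataclasses import dataclass
  from datetime import datetime, timedelta

  @dataclass
  class Grievance:
      complainant_ref: str    # pseudonymous reference, protecting the complainant
      language: str           # complaints accepted in multiple languages
      category: str           # e.g. "biased output", "wrongful denial of service"
      filed_at: datetime

      def respond_by(self, sla_days: int = 15) -> datetime:
          # Assumed service-level window; "timely" is not quantified in the Guidelines
          return self.filed_at + timedelta(days=sla_days)

  def recurring_issues(grievances: list, threshold: int = 5) -> list:
      # Feedback loop: surface complaint categories recurring often enough
      # to warrant a system-level fix rather than case-by-case handling
      counts = Counter(g.category for g in grievances)
      return [category for category, n in counts.items() if n >= threshold]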

D. Techno-legal solutions and compliance-by-design: A techno-legal approach to governance uses technology architectures to embed legal requirements directly into system design. It is both a design philosophy and a family of architectures that makes regulatory principles automatically enforceable in practice: specific policy measures are codified and embedded directly into the underlying system through technical standards and protocols. To the extent that technical measures can give effect to regulatory principles, this supports 'compliance-by-design'.
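
A minimal sketch of the pattern, assuming a hypothetical consent registry: the legal rule (no training on personal data without consent, echoing the DPDP Act) is enforced by the system architecture itself rather than by after-the-fact review. Nothing here is prescribed by the Guidelines; it simply shows how a policy measure can be codified into a technical gate.

  class ConsentError(Exception):
      pass

  # Hypothetical consent store; in practice this would query a consent-management system
  CONSENT_REGISTRY = {"user-123": {"model_training"}}

  def require_consent(user_id: str, purpose: str) -> None:
      # The legal requirement, expressed as a technical check
      if purpose not in CONSENT_REGISTRY.get(user_id, set()):
          raise ConsentError(f"No consent from {user_id} for '{purpose}'")

  def add_to_training_set(user_id: str, record: dict, dataset: list) -> None:
      # Compliance-by-design: the pipeline cannot ingest the record
      # unless the consent check passes first
      require_consent(user_id, "model_training")
      dataset.append(record)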

E. Mitigating loss of control through clear liability distribution across the AI value chain and sector-specific safety and ethical practices: AI systems can evolve unpredictably, creating risks of losing control. To address this, the Committee highlights the need for mechanisms to maintain oversight and prevent harm. As far as possible, human-in-the-loop controls should be built into critical decision points so that outputs can be reviewed, overridden, or supplemented by human judgment. In situations where real-time human intervention is not feasible (such as high-speed algorithmic trading), safeguards like circuit breakers, automated checks and system-level constraints should be deployed. In critical sectors, regular monitoring, testing, audit trails and reporting protocols are essential to ensure systems operate within defined limits, identify risks early and enable timely mitigation.
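
The two safeguard patterns described above can be sketched roughly as follows. The confidence threshold, rate limit and function names are illustrative assumptions, not values drawn from the Guidelines.

  CONFIDENCE_THRESHOLD = 0.8    # assumed cut-off below which a human must review
  MAX_ACTIONS_PER_WINDOW = 100  # assumed system-level constraint (e.g. orders per second)

  def decide_with_human_in_loop(output: dict) -> dict:
      # Human-in-the-loop: route low-confidence or high-impact outputs
      # to a human reviewer who can override or supplement them
      if output["confidence"] < CONFIDENCE_THRESHOLD or output["high_impact"]:
          return {"action": "escalate_to_human", "payload": output}
      return {"action": "auto_approve", "payload": output}

  class CircuitBreaker:
      # For contexts where real-time human review is infeasible:
      # halt automated activity once it exceeds a defined limit
      def __init__(self, limit: int = MAX_ACTIONS_PER_WINDOW):
          self.limit = limit
          self.count = 0
          self.tripped = False

      def allow(self) -> bool:
          if self.tripped:
              return False          # refuse further actions until reviewed and reset
          self.count += 1
          if self.count > self.limit:
              self.tripped = True   # an audit-trail entry and alert would fire here
              return False
          return True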

Conclusion:

With the above context in view, the Guidelines strongly recommend early preparation for the AI Governance Group (AIGG), the National AI Incident Database, regulatory sandboxes, common safety and content authentication standards, master circulars and structured compliance rules, so that AI governance is handled in a pragmatic manner. Businesses therefore cannot afford a reactive approach and must start preparing now.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
