Artificial Intelligence (AI) has reached an inflection point. Over the past few years, it has accelerated into a system-shaping technology. In many domains, AI systems already outperform traditional automation tools, and in other areas they are approaching human-competitive capabilities. India has recognised that its challenge is not only to adopt AI, but to govern it responsibly.
In 2024-2025, under the umbrella of the IndiaAI Mission, a high-level advisory group was constituted to steer AI governance in India, chaired by the office of the Principal Scientific Advisor to the Government of India. Under this group, a subcommittee on "AI Governance and Guidelines Development" was formed to examine governance challenges, conduct a gap analysis of existing legal and regulatory frameworks, and propose a draft framework tailored to India's context. The subcommittee completed its draft report and published it for public consultation, in response to which over 2,500 submissions were received. After reviewing this feedback, a drafting committee prepared the final document – the India AI Governance Guidelines (the Guidelines). On 5 November 2025, the Guidelines were formally released by the Ministry of Electronics and Information Technology (MeitY) under the IndiaAI Mission.
A notable aspect is that India did not begin by drafting binding regulations, but by first creating a governance framework, which later evolved into these Guidelines. This makes clear that India's primary intention at this stage is not to impose strict regulation or compliance-heavy obligations, but to guide the development and use of AI responsibly. The movement from a preliminary framework to principle-driven Guidelines demonstrates a deliberate choice to encourage responsible AI adoption gradually rather than legislate prematurely.
The Guidelines provide a structured framework for how the country should approach the development and use of AI. The intention is to create clarity and ensure that AI systems work safely and fairly for everyone.
Grounded in the seven guiding "sutras", namely Trust, People First, Innovation Over Restraint, Fairness & Equity, Accountability, Understandability and Safety, the first part of the Guidelines sets the tone for a governance framework that addresses the dual nature of AI as both an enabler and a risk vector. Rather than prescribing fixed rules, the sutras serve as guiding principles across sectors.
Beyond the guiding principles, the Guidelines shift to the practical policy and regulatory approach required to manage real-world AI deployments. They emphasise the use of existing legal frameworks such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023 (DPDPA), consumer protection laws, intellectual property statutes, and criminal laws, recognising that many harms arising from AI, such as privacy violations, deepfakes, discrimination, fraud, or unsafe products, may already be addressed by sectoral or general laws.
A key part of the Guidelines is the risk mitigation framework. This is built on the idea that AI systems vary significantly in their potential for harm, depending on the context in which they are used. The framework identifies different categories of risk such as malicious use, discrimination and bias, lack of transparency, systemic risks, and national security threats. It emphasises evaluating AI not only on its technical design but also on how, where, and by whom it is deployed. High-risk systems may require stronger safeguards, including human oversight, circuit breakers, audit trails, and documented testing.
To make risk evaluation effective, the Guidelines introduce an 'AI Incidents Reporting System', intended to capture and analyse cases where AI causes or is likely to cause harm. The system is designed to work closely with existing entities such as CERT-In and law enforcement agencies. The Guidelines also propose an institutional structure to support governance: the AI Governance Group (AIGG) will serve as the apex body responsible for providing strategic direction and coordination across government. Supporting it will be the Technology and Policy Expert Committee (TPEC), which will bring together technical specialists, legal experts, and policy practitioners to develop standards, evaluate risks, and advise on regulatory needs. In addition, the Guidelines call for the establishment of an AI Safety Institute (AISI), which will focus on testing, red-teaming, safety evaluations, and the development of technical benchmarks for both general-purpose and high-risk AI systems.
The central themes in the Guidelines are as follows:
- Infrastructure
- Capacity Building
- Policy and Regulation
- Risk Mitigation
- Accountability
- Institutions: AIGG and AISI
On liability, the Guidelines do not create any new penalties or offences. Instead, they rely on existing laws to determine consequences for harm. For example, if an AI system violates data protection requirements, the DPDPA would apply; if it facilitates fraud, criminal law would apply; and if it results in unfair trade practices, consumer protection laws would apply. The intention is to avoid adding new compliance burdens unnecessarily, while still ensuring that wrongdoing enabled by AI does not escape legal scrutiny. However, the Guidelines do acknowledge that liability questions may evolve as AI systems become more autonomous.
AI development and deployment often involve multiple actors such as model developers, data providers, deployers, end-users, etc. The Guidelines, therefore, recommend a graded approach to accountability, where each actor is responsible for the parts of the lifecycle they can control. Some recommendations are as follows:
- Developers may be expected to document training data practices, evaluate model behaviour, and disclose known limitations; and deployers may assess whether an AI system is appropriate for the intended use case, ensure that safeguards are in place, and monitor outcomes. This layered approach is designed to prevent situations where responsibility is unclear or shifted entirely to one party.
- Since AI systems can be complex, the Guidelines recommend that companies publish transparency reports, provide clear descriptions of how their systems function, and disclose known risks where appropriate.
- The Guidelines also recommend grievance redressal mechanisms, which should be accessible in multiple languages and must allow people to challenge or seek explanations for decisions that significantly affect them.
- Importantly, the grievance systems meant for end-users are distinct from the AI Incidents Reporting System meant for tracking systemic or technical failures.
- The Guidelines also set out practical steps for industry to adopt responsible AI practices. Companies are encouraged to comply with all applicable laws, maintain proper documentation, conduct testing and bias assessments, and implement human oversight in high-risk contexts.
- Industry actors are also expected to maintain value-chain visibility, ensuring that downstream users understand the limitations and appropriate use of the AI systems they deploy.
- For regulators, the Guidelines also advise evidence-based interventions.
By and large, the Guidelines present a structured and balanced starting point for India's approach to AI governance. One of their strengths is the decision to avoid premature, rigid regulation. By relying initially on existing legal frameworks and sectoral laws, the Guidelines reduce unnecessary compliance burdens. This avoids the pitfalls seen in some jurisdictions where overly broad or technology-specific rules have unintentionally restricted research and market entry. The proposed institutional structure of the AIGG, TPEC and AISI is also a significant step, as AI governance will increasingly require specialised expertise that traditional regulatory bodies may not possess.
Another positive aspect is the risk-based framework. It recognises that AI systems do not all carry the same degree of risk and that context matters. The distinction between low-risk and high-risk applications, coupled with requirements such as human oversight, audit trails, and bias testing, offers a practical approach that can be applied across multiple sectors without creating a one-size-fits-all framework. Notably, the AI Incidents Reporting System, if implemented well, can serve as a valuable source of information to guide future regulatory decisions.
However, the Guidelines also come with their limitations. While relying on existing laws avoids overregulation, it also means that certain gaps remain unaddressed. Many existing statutes were not designed with modern AI systems in mind, particularly general-purpose models and autonomous systems. Questions around causation, foreseeability, and liability may become more challenging as AI systems operate with higher levels of autonomy or are integrated into critical areas such as smart contracts, arbitration, court proceedings, pharmaceuticals, telecommunications, and the social sector. The Guidelines acknowledge these gaps but stop short of providing concrete solutions.
The approach to accountability, although well intentioned, relies heavily on voluntary compliance and self-assessment, and may not work well without stricter enforcement mechanisms in place. Similarly, the suggested transparency measures and grievance redressal systems may be implemented inconsistently across industries in the absence of clearly defined triggers and thresholds.
Another concern is the broad reliance on future institutional structures. Bodies such as AIGG, TPEC, and AISI will be crucial, but without clarity on how these institutions will operate, there is a risk that they may remain advisory in nature rather than becoming active regulators.
Overall, while the Guidelines succeed in laying a foundation, they may prove inadequate as AI systems rapidly evolve and integrate into essential economic functions, so the approach seems pragmatic only for the short term. In conclusion, the Guidelines offer a credible start by promoting and encouraging risk-aware deployment, and they create a flexible framework suited to India's diverse context. However, their reliance on existing laws, voluntary measures, and future institutions leaves open several important questions about enforceability, liability, and long-term oversight. India may eventually need clearer statutory interventions and stronger regulatory tools to ensure that AI systems are not only inclusive, but also safe, accountable, and rights-respecting in practice. If the aim is to position India as a global leader in AI governance, the current framework provides momentum, but further refinement and a stronger legal basis will likely be required as the technology matures.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.