29 April 2026

Understanding MeitY’s Emerging Framework For AI-Generated Content In India

Legitpro Law

Helen Stanis Lepcha
  1. Introduction

MeitY’s latest proposal on AI-generated content appears to be a straightforward tightening of disclosure norms. The language “ensure that users are informed when content is artificially generated” suggests a familiar regulatory instinct.

However, a closer reading reveals something far more structural. This is not merely about telling users that content is AI-generated; it is about ensuring that such disclosure is inseparable from the content itself. In doing so, MeitY is not just prescribing what platforms must say; it is beginning to influence how platforms must be built.

The shift, therefore, is subtle in form but significant in consequence. It reflects an evolving regulatory philosophy, one that recognises that in an AI-driven ecosystem, transparency cannot remain a peripheral obligation.

  2. The Legal Framework: Where Does This Obligation Come From?

The proposal sits squarely within the framework of the Information Technology Act, 2000 (“the Act”) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“the Rules”). Under Section 79 of the Act, intermediaries are granted safe harbour protection for third-party content, provided they comply with due diligence requirements. Over time, these requirements have evolved from passive obligations such as responding to notices to more active responsibilities involving monitoring and prevention.

MeitY’s latest proposal builds directly on this trajectory. By incorporating AI disclosure within the due diligence framework, it effectively converts a transparency expectation into a condition for retaining statutory immunity. This is where the legal significance lies. Non-compliance is no longer merely a matter of regulatory non-conformity; it becomes a potential trigger for the loss of safe harbour, exposing platforms to direct liability.

  3. What Is Actually Changing?

| Aspect | Earlier Position ("Prominent Labelling") | Proposed Position ("Continuous & Clearly Visible Labelling") | Legal & Practical Impact |
| --- | --- | --- | --- |
| Nature of obligation | Disclosure required but flexible in presentation | Disclosure must persist throughout content consumption | Moves from a subjective to an objective standard |
| Placement of label | Captions, descriptions, metadata, or entry points | Embedded within the content itself (e.g., watermark/overlay) | Eliminates peripheral compliance methods |
| Duration of visibility | One-time or contextual visibility | Continuous visibility for the entire duration of the content | Requires real-time, persistent display mechanisms |
| Compliance layer | Policy-driven (guidelines, disclosures) | Technical/design-driven (product architecture) | Shifts responsibility to engineering and UI design |
| Enforceability | Interpretative, harder to audit | Measurable, auditable, and enforceable | Increases regulatory scrutiny and litigation exposure |
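The "continuous visibility" requirement can be made concrete with a small sketch. For subtitled video, a one-time caption at the start would satisfy the earlier position but not the proposed one; a disclosure caption spanning the full runtime would. The helper below is a minimal illustration in Python using SRT-style captions; the function name and label text are assumptions for illustration, not terms drawn from the proposal.

```python
# Sketch: a disclosure label that stays visible for the entire duration
# of subtitled content, rather than appearing once at the start.
# make_disclosure_track() and the label text are hypothetical examples.

def make_disclosure_track(duration_seconds: int,
                          label: str = "AI-generated content") -> str:
    """Build a single SRT caption covering the full runtime,
    so the label persists for the content's entire duration."""
    def ts(seconds: int) -> str:
        # Format seconds as an SRT timestamp (HH:MM:SS,mmm).
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d},000"
    return f"1\n{ts(0)} --> {ts(duration_seconds)}\n{label}\n"

# A 90-second clip gets a label visible from 00:00:00 to 00:01:30.
print(make_disclosure_track(90))
```

Because the caption's time range equals the clip's runtime, trimming or skipping within the content never removes the label, which is the practical difference between "contextual" and "continuous" visibility.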

  4. The Real Shift: From Policy Compliance to Product Design

| Dimension | Earlier Regulatory Approach | Emerging MeitY Approach | Practical Implication |
| --- | --- | --- | --- |
| Core compliance philosophy | Policy-led compliance through guidelines and moderation | Design-led compliance embedded within product architecture | Compliance shifts from legal teams to product and engineering functions |
| Method of ensuring disclosure | User instructions, disclaimers, and post-facto moderation | System-driven enforcement ensuring AI content cannot exist unlabelled | Platforms must build technical safeguards into content creation and display |
| Role of platform design | Secondary to policy and governance | Central to regulatory compliance | UI/UX, rendering layers, and backend systems become compliance tools |
| Nature of regulation | Focus on user conduct and platform response | Focus on system design and content lifecycle | Moves from reactive to preventive compliance |
| Risk perspective | Misuse by users | Enablement through platform design | Liability linked to how systems are built, not just how they are used |
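What "system-driven enforcement ensuring AI content cannot exist unlabelled" might look like in practice is a publish-time gate: the check runs before dissemination rather than after a complaint. The sketch below is a minimal illustration of that preventive pattern; the names (`Content`, `publish`, `DISCLOSURE_MARK`) are hypothetical and not taken from the MeitY proposal or any platform's actual architecture.

```python
# Sketch of a design-level publish gate: synthetic content cannot enter
# the pipeline without an embedded disclosure marker. All names here are
# illustrative assumptions, not terms from the proposal.
from dataclasses import dataclass

DISCLOSURE_MARK = "[AI-GENERATED]"

@dataclass
class Content:
    body: str
    is_synthetic: bool

class UnlabelledSyntheticContent(Exception):
    """Raised when synthetic content lacks the embedded disclosure."""

def publish(item: Content) -> str:
    # Preventive, not reactive: the check runs before dissemination,
    # so unlabelled synthetic content never exists on the platform.
    if item.is_synthetic and DISCLOSURE_MARK not in item.body:
        raise UnlabelledSyntheticContent("refusing to publish")
    return item.body

print(publish(Content(f"{DISCLOSURE_MARK} demo clip", is_synthetic=True)))
```

The design choice the table describes is visible in the control flow: compliance is a property of the `publish` function itself, not of a moderation policy applied after the fact.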

 

  5. Synthetic Content as Defined Risk

Another important element of the proposal is the recognition of AI-generated content as a distinct category, often described as “synthetically generated information.” This includes content that is either entirely generated by artificial intelligence or materially altered in a manner that makes it appear authentic. Deepfakes, voice cloning, and AI-generated media fall squarely within this category. By formally identifying such content, MeitY provides a clearer foundation for regulation. It removes ambiguity around what constitutes AI-generated content and allows for more precise enforcement. It signals a shift in regulatory focus from general concerns about misinformation to specific risks associated with synthetic media.

  6. How Is the Compliance Burden Evolving?

The proposed changes are not confined to disclosure as an isolated requirement; they are part of a broader shift in the compliance architecture governing digital platforms. Under the earlier regime, disclosure obligations were contextual and flexible, allowing platforms to rely on captions, disclaimers, or metadata-based labelling. Compliance itself was largely policy-driven, supported by internal guidelines and post-facto moderation mechanisms, with intermediaries positioned as passive hosts benefiting from safe harbour protection.

The emerging framework alters this position fundamentally.

  1. Disclosure is now expected to be continuous and embedded within the content itself, leaving little room for interpretative flexibility.
  2. Compliance correspondingly shifts from a policy-based approach to a system and design-driven obligation, requiring platforms to build technical safeguards into their products.
  3. Intermediaries are no longer passive conduits but active compliance gatekeepers, responsible for ensuring that content adheres to regulatory standards at the point of creation and dissemination.

This transformation also recalibrates liability exposure, making safe harbour protection conditional upon strict and demonstrable compliance.

  7. Who Does This Impact?

| Stakeholder | How This Impacts You | Way Forward |
| --- | --- | --- |
| Platforms / Intermediaries (social media, marketplaces, hosting platforms) | Risk of losing safe harbour if AI content is not continuously labelled; increased scrutiny on product design | Embed default labelling mechanisms (watermarks/overlays); update content moderation systems; revise user terms to mandate AI disclosures |
| SaaS & AI Tool Providers (GenAI tools, APIs, enterprise AI platforms) | Liability may arise if tools enable creation of unlabelled AI content; clients will demand compliance-ready tools | Build auto-labelling features at the output level; provide compliance-ready APIs; include AI disclosure clauses in client contracts |
| Content Creators / Influencers | Personal exposure for publishing unlabelled AI-generated or altered content; potential takedowns or account restrictions | Clearly declare AI usage in content; use tools/platforms that ensure built-in labelling; maintain an audit trail of content creation |
| Enterprises Using AI (Marketing, HR, Customer Engagement) | AI-generated campaigns, ads, or communications may become non-compliant if not labelled; reputational and regulatory risk | Implement internal AI usage policies; review all outward-facing AI content; align vendor contracts with disclosure requirements |
| Developers / Product Teams | Compliance shifts to product architecture; failure to design for disclosure creates regulatory risk | Integrate labelling at the system level (UI + backend); ensure disclosure persists across formats (video, image, text) |
| Legal & Compliance Teams | Traditional policy-based compliance becomes insufficient; increased need for tech-integrated oversight | Work closely with product teams; update risk frameworks, contracts, and SOPs; conduct AI compliance audits |
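For AI tool providers, "auto-labelling at the output level" typically means the disclosure is attached by the service before any result leaves it, so downstream clients cannot obtain unlabelled output even by accident. The sketch below illustrates that wrapping pattern; `generate()`, the response fields, and the label text are hypothetical, not a real vendor API or a format prescribed by the proposal.

```python
# Sketch of output-level auto-labelling: every generation result is
# wrapped with machine- and human-readable disclosure fields before it
# leaves the service. The function names and response shape are
# illustrative assumptions, not a real vendor's API.
import json

def generate(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"synthetic answer to: {prompt}"

def generate_with_disclosure(prompt: str) -> str:
    """Return the model output only inside a labelled envelope,
    so no unlabelled path out of the service exists."""
    return json.dumps({
        "output": generate(prompt),
        "synthetically_generated": True,   # machine-readable flag
        "label": "AI-generated content",   # human-readable label
    })

resp = json.loads(generate_with_disclosure("hello"))
print(resp["label"])
```

Because the label travels inside the same payload as the output, client applications receive the disclosure by default; stripping it would require an affirmative act, which shifts the compliance posture from reactive to preventive.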

 

  8. Contractual and Commercial Consequences

The proposed regulatory shift is likely to have a direct and immediate bearing on the structuring of contractual arrangements across the digital and technology ecosystem.

  1. As disclosure obligations transition from policy-based requirements to design-level mandates, parties will be compelled to revisit and recalibrate existing contractual frameworks to appropriately allocate regulatory risk.
  2. Platforms and intermediaries, in particular, are expected to tighten their terms of service and platform policies to impose explicit obligations on users with respect to the creation, upload, and dissemination of AI-generated content. This will likely include mandatory disclosure undertakings, restrictions on the use of synthetic media without appropriate labelling, and enhanced enforcement rights in cases of non-compliance. Such provisions will not merely be declaratory in nature but will operate as a first line of defence to preserve safe harbour protections under the Act.
  3. Enterprise users of AI tools are likely to demand a higher degree of contractual assurance from technology providers. This is expected to translate into more robust representations and warranties concerning the capability of such tools to ensure compliant output, including built-in labelling functionalities and adherence to applicable regulatory standards.
  4. Vendors may also be required to provide ongoing compliance support, audit rights, and indemnities against regulatory exposure arising from deficiencies in product design or functionality.

The allocation of liability, therefore, becomes significantly more complex. Unlike traditional content regulation, where responsibility could be more readily attributed to either the platform or the user, the present framework introduces a multi-layered risk structure involving platform operators, tool providers, and end users. Contracts will need to clearly delineate responsibility for disclosure, define the consequences of non-compliance, and establish mechanisms for remediation, including takedown obligations, indemnification triggers, and limitation of liability provisions.

This marks a shift from standardised, form-based contracting to more negotiated and risk-sensitive arrangements, particularly in enterprise and SaaS contexts. Parties will need to ensure that contractual language is not only aligned with current regulatory expectations but is also sufficiently flexible to accommodate further developments as the MeitY framework evolves.

  9. Cross-Border Considerations

The implications of MeitY’s proposed framework extend well beyond domestic intermediaries and will have a material impact on global technology companies offering services in India.

  1. Unlike conventional compliance obligations that can be addressed through policy disclosures or jurisdiction-specific terms, the requirement of continuous and embedded labelling necessitates product-level localisation. This effectively compels global platforms to either redesign their systems to accommodate India-specific requirements or deploy geo-specific versions of their services, both of which carry operational and legal complexity.
  2. A key challenge arises from the divergence in regulatory philosophies across jurisdictions. While India is moving toward output-level regulation through mandatory disclosure embedded in content design, other jurisdictions, particularly in the EU and parts of the US, are adopting a mix of risk-based classification and transparency obligations that do not uniformly mandate continuous visibility. This creates a fragmentation risk, where a single piece of AI-generated content may be compliant in one jurisdiction but non-compliant in another purely on account of how disclosure is implemented.
  3. From a contractual and compliance standpoint, this divergence will require global platforms to reassess their cross-border data and content governance frameworks. Questions around which jurisdiction’s standards apply, how conflicts of law are resolved, and whether compliance can be standardised or must be localised will become increasingly relevant. In certain cases, platforms may need to adopt the highest common denominator approach, effectively applying India’s stricter disclosure standards globally to avoid regulatory inconsistency and enforcement risk.
  4. There are also enforcement-related considerations. India regulates digital platforms through intermediary liability, which means compliance is closely tied to their ability to operate in the country. In simple terms, if platforms do not follow these rules, they risk losing legal protection or facing restrictions on their services in India. Given the size and importance of the Indian market, most global platforms are unlikely to take that risk. As a result, they are expected to prioritise compliance with Indian requirements, even if it means making product or system-level changes specifically for India.

By focusing on the presentation and perception of AI-generated content, rather than solely on system-level risks, the Indian model introduces a distinct regulatory lens that may influence policy development in other jurisdictions. As regulatory convergence around AI remains limited, India’s emphasis on design-level compliance could emerge as a reference point, particularly for jurisdictions seeking enforceable and user-centric transparency mechanisms.

  10. Conclusion and Our Recommendation

So, what does MeitY really signal through this proposal?

At one level, it requires platforms to disclose when content is AI-generated. However, at a deeper level, it marks a structural shift in regulatory thinking. Transparency in an AI-driven ecosystem is no longer intended to be optional, contextual, or post-facto. It is expected to be continuous, embedded, and inseparable from the content itself. In that sense, MeitY is not merely refining disclosure norms; it is redefining accountability, moving it from the margins of policy into the core of product design.

It is equally important to recognise that the framework remains under consultation, with further refinements expected prior to finalisation. That said, the regulatory direction is clear. The emphasis on continuous disclosure and proactive compliance is unlikely to change, with the final rules expected to focus primarily on clarifying implementation and enforcement.

Businesses should begin identifying where AI-generated outputs exist across their operations and assess whether existing systems can support continuous disclosure. Platform operators and AI tool providers should prioritise integrating labelling mechanisms at the product level, while enterprises should review vendor arrangements and internal policies to ensure alignment with the proposed framework. Early action will mitigate regulatory risk and avoid last-minute compliance disruption.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
