26 February 2026

Update: Overview Of The IT Intermediary Amendment Rules, 2026

Argus Partners

By Jitendra Soni

On February 10, 2026, the Ministry of Electronics and Information Technology ("MeitY") notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 ("2026 Amendment Rules"), further amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules 2021"). The 2026 Amendment Rules come into force on February 20, 2026.

The 2026 Amendment Rules have been introduced by MeitY in response to the rapid increase in deepfakes and other AI-created or AI-modified content, and to address concerns around impersonation, non-consensual intimate imagery, misinformation and fraud. To that end, the 2026 Amendment Rules incorporate a regulatory framework governing synthetically generated information ("SGI"), impose additional due diligence obligations on intermediaries that enable or facilitate such content, and significantly compress takedown and grievance-redressal timelines for unlawful content more generally. The 2026 Amendment Rules are the culmination of a policy process that began with MeitY's 2025 draft amendments on SGI and deepfakes. Those proposals were prompted by a series of high-profile incidents involving impersonation of celebrities and public figures, circulation of non-consensual intimate imagery, and the use of manipulated content in political and financial fraud. The note below sets out (i) the key features of the 2026 Amendment Rules, with particular focus on SGI; and (ii) our preliminary analysis of the implementation challenges and likely impact for intermediaries, especially those offering generative-AI or multimedia services in India.

KEY AMENDMENTS:

1. Introduction of Key Definitions Relating to Synthetic Content:

The 2026 Amendment Rules have incorporated the following key definitions under Rule 2:

(a) Synthetically Generated Information: Rule 2(1)(wa) defines SGI as audio-visual information that is artificially created, generated, modified, or altered using a computer resource in a manner that appears real, authentic, or true and depicts an individual or event in a manner likely to be perceived as indistinguishable from a natural person or real-world event.

Exceptions: The definition has a proviso which excludes certain usage from the ambit of SGI:

i. Routine or good faith editing, including formatting and technical corrections, that does not materially distort or misrepresent the substance, context or meaning of the underlying audio, visual or audio-visual information. For instance, cropping, color correction, noise reduction or re-formatting used in ordinary content creation; or

ii. Routine or good faith creation or preparation of content such as educational material, PDFs, presentations and other illustrative materials which does not generate a false document or electronic record. For instance, training material, educational content, presentations, marketing creatives or other illustrative material that is clearly used for explanation or demonstration and not to pass off as a real document or record; or

iii. Usage of computer resources to enhance the quality or accessibility of any material without generating, altering or manipulating any material part of the underlying audio, visual or audio-visual information; for instance, adding captions, translations, transcripts or audio descriptions.

(b) Audio, visual or audio-visual information: Under the inserted Rule 2(1)(ca), Audio, visual or audio-visual information includes any audio, image, video, sound recording, or similar content created, generated, modified, or altered using a computer resource.

(c) Treatment of SGI as "information" and impact on Safe Harbour Protection: Two new provisions, Rules 2(1A) and 2(1B), clarify how SGI fits into the broader scheme of the IT Rules 2021 and Section 79 of the Information Technology Act, 2000 ("IT Act"):

Rule 2(1A) states that references to "information" used to commit unlawful acts under the IT Rules 2021 will also cover SGI, unless the context requires otherwise. This makes it clear that SGI is not a separate category outside the IT Rules 2021, but is treated as a type of "information" for the purpose of intermediary obligations.

Rule 2(1B) then provides that when an intermediary removes or disables access to information, including SGI, using reasonable technical measures (including automated tools) in compliance with the IT Rules 2021, this action by itself will not be treated as a breach of the conditions for safe harbour under Section 79(2)(a) and (b) of the IT Act. This clarification is significant because it reassures intermediaries that they may rely on technical tools (including automated tools) to detect, label and remove SGI, without being treated as "having selected or modified" the content in a manner that would disqualify them from safe harbour, provided such measures are aligned with the 2026 Amendment Rules.

2. Enhanced Obligations for Intermediaries:

The 2026 Amendment Rules have introduced enhanced due diligence measures specifically for intermediaries that offer computer resources which permit, enable or facilitate the creation, modification, or dissemination of SGI.

(a) Periodic User Notification Obligations:

Rule 3(1)(c) has been substituted to require intermediaries to inform users at least once every 3 (three) months (earlier, once a year), in a simple and effective manner, that: (i) users are required to comply with the intermediary's rules, regulations, user agreement and privacy policy; (ii) failure to do so may lead to suspension or termination of access to the services and removal of content; and (iii) where the content amounts to an offence under law (including under the Bharatiya Nagarik Suraksha Sanhita, 2023 ("BNSS") or the Protection of Children from Sexual Offences Act, 2012 ("POCSO")), the intermediary will report such content or offence to the concerned authorities. This obligation is not limited to SGI; it is a general obligation applicable to all intermediaries and all types of content.

(b) Notification on User Created SGI Content:

Where an intermediary "offers its computer resource to enable or facilitate" the creation, generation, modification, alteration, publication, transmission, sharing or dissemination of SGI (for instance, AI-based image or video tools, voice-cloning tools, deepfake applications, etc.), the 2026 Amendment Rules introduce additional disclosure and enforcement duties. Under Rule 3(1)(ca), such intermediaries must inform users that: (i) asking the tool to create or modify SGI in breach of applicable laws can expose the user to punishment or penalty under a range of statutes (including the IT Act, the BNSS, POCSO, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986 and others); and (ii) in such cases, the intermediary may remove the SGI, suspend or terminate the user's account, disclose the offending user's identity to the complainant (who is the victim or represents the victim) and, where required, report the matter to the authorities.

(c) Expeditious Action on Violating SGI Content:

Rule 3(1)(cb) mandates expeditious and appropriate action by the intermediary when it becomes aware of violating SGI content, whether of its own accord or through receipt of actual knowledge, or of any grievance, complaint or information received under these rules.

(d) Mandatory Preventive Measures on Creation/Dissemination of Violating SGI:

Under the newly inserted Rule 3(3), intermediaries which facilitate SGI must deploy reasonable and appropriate technical measures, including automated tools and mechanisms, to prevent the creation or dissemination of SGI that violates any applicable law. This includes SGI which:

  1. Contains sexually exploitative material relating to children or non-consensual intimate imagery;
  2. Creates a false electronic or documentary record;
  3. Relates to the preparation or procurement of explosives, arms, ammunition, etc.; or
  4. Impersonates a natural person through voice, conduct or action, or misrepresents a real event in a manner likely to deceive people.

(e) Mandatory Labelling of SGI:

The newly inserted Rule 3(3) also requires mandatory labelling of SGI that does not fall within the abovementioned prohibited categories. Intermediaries must label such content prominently through visual means, in a manner that is easily noticeable. In the case of audio content, it should include a prefixed audio disclosure indicating that the content is SGI. Such labelling should be embedded with permanent metadata or other provenance mechanisms, such as a unique identifier to identify the computer resource used to create or alter the information. The intermediary is prohibited from enabling modification, suppression or removal of the label, or of its permanent metadata and unique identifier.
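The Rules do not prescribe any particular format for the permanent metadata or unique identifier. Purely as an illustration of the kind of provenance record an intermediary might attach to SGI, the following Python sketch builds a record containing a content hash and the identity of the generating tool; all field names and the `make_provenance_record` helper are our own assumptions, not part of the Rules or of any prescribed standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, tool_id: str) -> dict:
    """Build a hypothetical provenance record for a piece of SGI.

    The unique identifier is a SHA-256 digest of the content, so any
    later modification of the file breaks the link between the record
    and the content it describes.
    """
    return {
        "sgi": True,  # flags the content as synthetically generated
        "unique_identifier": hashlib.sha256(content).hexdigest(),
        "generating_tool": tool_id,  # identifies the computer resource used
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: record for a (dummy) generated video payload.
record = make_provenance_record(b"<generated-video-bytes>", "example-genai-tool/1.0")
print(json.dumps(record, indent=2))
```

In practice such a record would be embedded in the media container itself (or bound to it via an industry provenance standard) rather than printed, but the sketch shows the minimum linkage the Rules appear to contemplate: a tamper-evident identifier tying the content to the tool that produced it.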

3. Enhanced Due Diligence for Significant Social Media Intermediaries ("SSMIs"):

The 2026 Amendment Rules have inserted a new Rule 4(1A) imposing special obligations on SSMIs. Under Rule 4(1A), before any information is uploaded, published or made available on the SSMI's platform, the SSMI must: (i) require the user to state whether or not the content is SGI; and (ii) use suitable technical measures, including automated tools, to check this statement, keeping in view the nature, format and source of the content. Where information proves to be SGI after verification, it must be clearly and prominently displayed with an appropriate label or notice. If an SSMI knowingly permits, promotes or fails to act upon SGI in breach of the Rules, or does not act in line with Rule 4(1A), it will be treated as not having complied with the due-diligence requirements under the IT Rules 2021. This can expose the SSMI to loss of safe harbour protection under Section 79 of the IT Act in relation to such content.
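The declare-then-verify flow under Rule 4(1A) can be pictured as a simple pipeline: take the user's declaration, run an automated check, and label the content if either indicates SGI. The Python sketch below is a minimal, hypothetical illustration of that sequence only; the `looks_synthetic` stand-in is not a real detector, and none of the names or logic are taken from the Rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content: bytes
    user_declared_sgi: bool  # Rule 4(1A)(i): user states whether content is SGI

def looks_synthetic(content: bytes) -> bool:
    """Placeholder for the 'suitable technical measures' (e.g. an
    automated classifier) contemplated by Rule 4(1A)(ii).
    This toy heuristic just scans for a marker byte string."""
    return b"synthetic" in content

def process_upload(upload: Upload) -> dict:
    """Combine the user's declaration with the automated check:
    content is treated as SGI if either signal says so."""
    is_sgi = upload.user_declared_sgi or looks_synthetic(upload.content)
    label: Optional[str] = "Synthetically generated information" if is_sgi else None
    return {"publish": True, "label": label}
```

The design point the Rules appear to make is that the user's declaration alone is not sufficient: the platform must independently verify it, which is why the sketch ORs the two signals rather than trusting the declaration.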

4. Reduction of Takedown and Grievance Redressal Timelines:

Apart from SGI-specific obligations, the 2026 Amendment Rules also shorten several timelines that apply to all intermediaries. In summary:

Provision | Timeline under existing framework | Timeline under the 2026 Amendment Rules
Takedown of unlawful information upon receipt of actual knowledge [Rule 3(1)(d)] | 36 hours | 3 hours
Resolution of complaint by Grievance Redressal Officer from date of its receipt [Rule 3(2)(a)(i)] | 15 days | 7 days
Expeditious resolution of complaints pertaining to prohibited content listed under Rule 3(1)(b) [First Proviso to Rule 3(2)(a)(i)] | 72 hours | 36 hours
Removal of content pursuant to a complaint regarding intimate imagery from time of its receipt [Rule 3(2)(b)] | 24 hours | 2 hours

These expedited timelines apply to complaints concerning any information, not only SGI.

ANALYSIS:

The 2026 Amendment Rules represent a decisive shift in India's intermediary liability framework, reflecting an attempt to adapt existing diligence norms to the technological realities of generative AI. While these amendments attempt to address the ambiguity around deepfakes and similar AI-generated content, they have also broadened the scope of intermediary compliance. By specifically defining SGI, treating it expressly as "information" under the IT Rules 2021, and listing specific offences and statutes that are likely to be triggered, the new framework makes it easier for intermediaries to explain to users what is not acceptable, to align their terms of use and design internal controls in accordance with clear legal standards, and to take firm action against harmful synthetic content, instead of dealing with such cases on an ad hoc basis.

Further, the duty to build technical checks into SGI tools, and the requirement to mark and label lawful SGI, may help ordinary users better understand when a video or audio clip has been created or altered by AI. If implemented well, this can help reduce the impact of impersonation, fake news videos, and non-consensual intimate content. Additionally, the shorter takedown and complaint timelines for categories such as child sexual abuse material and non-consensual intimate imagery are likely to lead to quicker relief for victims. These harms can spread very quickly, and a 3-hour or 2-hour window, though demanding for platforms, is aligned with the need to limit damage early.

At the same time, the 2026 Amendment Rules give rise to certain practical concerns. First, since the definition of SGI depends heavily on how content "appears" to users, the threshold is inherently subjective. Satire, parody or commentary that looks very realistic may be treated as SGI, prompting intermediaries (especially SSMIs) to err on the side of caution. This could result in excessive labelling or removal of content that is not truly misleading, and may indirectly narrow the space for creative speech. Secondly, the rules require technical controls to block certain SGI at the time of creation or upload and to tag all other SGI with clear, lasting markers. Current tools are still evolving, may not work equally well across formats, and may be bypassed by users who alter or move content across platforms, limiting how far these tags can practically travel with the content. Thirdly, permanent tags that link SGI to the tools or services used to create it can assist in investigating serious offences, but they also raise questions about tracking and profiling if not carefully limited. Intermediaries will have to align these requirements with data-protection principles such as data minimisation and purpose limitation, and be transparent with users about how such information is used.

Finally, the shorter timelines (3 hours for takedown and 2 hours for certain complaints) are likely to be very demanding. Large platforms may be able to comply by investing in round-the-clock teams and automated processes, but this leaves less room for careful case-by-case review. Smaller intermediaries may find these timelines difficult to meet without resorting to blanket blocking, which in turn increases the risk of over-removal and user dissatisfaction. Ultimately, the effectiveness of the 2026 Amendment Rules will largely depend on their implementation. Clear technical standards, proportionate enforcement, and interpretative guidance will be essential to ensure that the framework mitigates synthetic media harms without unduly constraining lawful innovation or expression.

Please find attached a copy of the 2026 Amendment Rules, here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

