Introduction
India faces an escalating deepfake crisis. Malicious actors have established illicit websites that misuse AI-altered voice technology for financial fraud and public-figure impersonation. On October 22, 2025, the Ministry of Electronics and Information Technology acted decisively, announcing amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.1
These amendments establish India's first explicit statutory framework addressing artificially generated information and mark a turning point in digital governance. The regulatory architecture seeks to balance innovation protection with deepfake control through explicit definitions, transparency mechanisms, and platform accountability requirements.2
Understanding Synthetically Generated Information
The amendments provide formal statutory recognition of "synthetically generated information," defined as information "artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true." This expansive definition encompasses deepfake audio with manipulated vocal features, algorithmically altered photographs, fabricated metadata structures, and synthetically generated text.
Regulators understand that deepfakes represent an evolving threat with which traditional legal frameworks cannot keep pace. Generative AI tools have democratised creation capabilities, placing sophisticated deepfake technology within reach of actors with minimal technical expertise. The amendments address this reality through technology-neutral language: by avoiding technology-specific definitions, the rules will not become obsolete as innovation accelerates.
Core Regulatory Provisions
Statutory Definition and Scope
New Rule 2(1)(wa) codifies the recognition of synthetically generated information throughout the IT Rules framework. New Rule 3(1A) makes clear that every reference to "information" in the regulatory provisions governing unlawful acts, intermediary due diligence, and grievance mechanisms expressly includes synthetic content, thereby foreclosing arguments that deepfakes fall outside regulatory purview.3
Mandatory Labeling Requirements
The Amendment mandates comprehensive labeling of synthetic content. New Rule 3(3) requires systematic labeling by intermediaries providing content generation tools: synthetically generated information must carry permanent, unique metadata. For visual content, labels must occupy at least ten percent of the screen area and remain permanently visible; for audio content, the disclosure must occupy the initial ten percent of the duration. Labels cannot be modified or removed by intermediaries or end users.
This transparency distinguishes synthetic from authentic content at first glance. Deepfake videos must display a prominent "AI-Generated" label covering one-tenth of the frame area throughout their duration, and audio deepfakes must include a mandatory audible disclosure during the opening seconds, enabling viewers to evaluate the credibility of content. This mechanism reflects the regulatory judgment that an informed consumer base constitutes a crucial harm-mitigation strategy.
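To make the arithmetic of the labeling rule concrete, the sketch below stamps a full-width band sized at ten percent of frame height (and therefore ten percent of total frame area) onto an image and embeds a traceable identifier. It assumes the Pillow imaging library, and the label text, band placement, and metadata key names are illustrative choices, since the Rules prescribe coverage and permanence but no particular schema.

```python
# A minimal sketch, not a compliance implementation. Assumes Pillow;
# label text, band placement, and metadata keys are illustrative.
import hashlib
import uuid

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str) -> str:
    """Stamp a visible AI-disclosure band covering ten percent of the
    frame area and embed a traceable identifier in PNG metadata."""
    img = Image.open(src_path).convert("RGB")
    width, height = img.size

    # A full-width band whose height is 10% of the image height covers
    # exactly 10% of the total frame area (0.10 * height * width).
    band_height = max(1, int(0.10 * height))
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, 0, width, band_height], fill=(0, 0, 0))
    draw.text((10, band_height // 3), "AI-Generated", fill=(255, 255, 255))

    # Unique identifier enabling traceability to the generating tool.
    # The key names below are assumptions; the Rules fix no schema.
    meta = PngInfo()
    meta.add_text("synthetic-content", "true")
    provenance_id = str(uuid.uuid4())
    meta.add_text("provenance-id", provenance_id)
    meta.add_text("content-sha256", hashlib.sha256(img.tobytes()).hexdigest())

    img.save(dst_path, "PNG", pnginfo=meta)
    return provenance_id
```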
Intermediary Obligations and SSMI Requirements
Tool providers must mark synthetic outputs before release to users, embed unique identifiers enabling traceability, and ensure that labels remain conspicuous and permanent. This upstream responsibility shifts accountability from passive platforms to active technology providers, addressing proliferation at the source rather than through downstream distribution.
Significant Social Media Intermediaries (SSMIs), those with five million or more monthly active users, must secure user declarations on whether uploaded content is synthetically generated. Platforms must deploy reasonable technical measures to verify those declarations through automated content analysis or human review protocols. The statutory safe harbour in new Rule 3(1A) protects SSMIs that remove synthetic content in good faith based on reasonable efforts or user grievances. This protection acknowledges that platforms cannot achieve instantaneous removal across content spaces of this scale, while encouraging rapid response mechanisms that balance user rights with platform capabilities.
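One plausible shape for the declaration-and-verification flow is sketched below. The classifier, threshold, and routing outcomes are hypothetical stand-ins: the Rules specify the obligation but not the mechanism, requiring only "reasonable and appropriate technical measures".

```python
# A minimal sketch of one possible declaration-and-verification flow.
# The detector, threshold, and routing labels are hypothetical.
from dataclasses import dataclass


@dataclass
class PublicUpload:
    content_id: str
    user_declared_synthetic: bool   # declaration collected at upload time
    detector_score: float           # 0.0-1.0 output of a hypothetical classifier


REVIEW_THRESHOLD = 0.8  # assumed operating point, not prescribed by the Rules


def route(upload: PublicUpload) -> str:
    """Decide what happens to a new publicly displayed upload."""
    if upload.user_declared_synthetic:
        return "publish-with-synthetic-label"   # honour the user declaration
    if upload.detector_score >= REVIEW_THRESHOLD:
        return "hold-for-human-review"          # possible undeclared synthetic content
    return "publish-unlabeled"
```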
The Amendment applies only to publicly displayed content, exempting private messages and unpublished drafts. This scope limitation reflects a legislative judgment balancing privacy protection with effective regulation.
Existing Legal Framework
IT Act, 2000
The IT Act contains multiple provisions criminalizing deepfake-related conduct. Section 66C targets identity theft through fraudulent impersonation; violators face imprisonment up to three years plus a fine up to one lakh rupees. Section 66D addresses cheating by personation using computer resources, with identical punishment. Section 66E tackles privacy violations, particularly non-consensual intimate deepfakes, carrying imprisonment up to three years or a fine up to two lakh rupees. Sections 67 and 67A deal with obscene content: Section 67 provides for three-year imprisonment plus a five lakh rupee fine, while Section 67A raises the stakes to five-year imprisonment plus a ten lakh rupee fine. Section 79 gives platforms a safe harbour when they act in good faith, protecting them from liability for user-generated content removed in compliance with law.4
Bharatiya Nyaya Sanhita, 2023
Section 353 addresses statements conducing to public mischief, prescribing three-year imprisonment with fine for statements causing fear or alarm. The provision also covers statements inciting enmity between groups based on religion, race, birth, residence, language, or community. It applies directly to deepfakes: false arrest claims spread through synthetic content and AI-generated material inciting communal violence both fall within Section 353's scope.
Section 111 recognizes organised cyber crime, reaching coordinated deepfake campaigns, and Section 212 criminalizes furnishing false information through synthetic content. These provisions create layered accountability, enabling prosecution under multiple statutory theories and providing prosecutorial flexibility to address varied scenarios of deepfake abuse.5
Digital Personal Data Protection Act, 2023
Deepfakes that process biometric data without meaningful consent violate fundamental DPDP Act principles. The statute caps penalties at two hundred and fifty crore rupees for major violations, a ceiling that squarely applies to deepfakes employing facial recognition extraction, voice biometric capture, or iris scanning technologies without a lawful consent basis.6
Landmark Jurisprudence: The Sadhguru Case
The Delhi High Court's judgment in Sadhguru Jagadish Vasudev and Anr. v. Igor Isakov and Ors. (CS (COMM) 578/2025), delivered on May 30, 2025 by Justice Saurabh Banerjee, represents pathbreaking jurisprudence on the protection of personality rights against AI misuse. The defendants operated rogue websites using AI-morphed deepfakes of Sadhguru's voice, image, and distinctive appearance. The fraudulent content, which included false arrest claims and fake endorsements, was posted to profit from the plaintiff's reputation at his expense.7
Justice Banerjee issued an unprecedented "dynamic+" injunction protecting the plaintiff's name, image, likeness, voice, and all aspects of his persona from AI misuse. The judgment recognizes that contemporary deepfake technologies threaten personality rights in fundamentally new ways, moving beyond traditional intellectual property infringement analysis into novel categories of digital harm. Justice Banerjee observed: "The rights of a plaintiff cannot be rendered otiose in this world of rapidly developing technology," acknowledging that static legal frameworks are insufficient for digital harm at this scale and that responsive judicial remedies are required.
The order established a 36-hour takedown schedule and directed YouTube and similar platforms to deploy automated technology to detect and delete identical infringing content. This direction reflects an acknowledgment that manual takedown procedures cannot match the scale of the problem and that technological threats demand technological countermeasures.
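A minimal sketch of the kind of automated matching the order contemplates appears below. For simplicity it catches only byte-identical re-uploads via cryptographic hashing, an assumption rather than anything the order prescribes; production systems would need perceptual matching to catch altered copies.

```python
# A minimal sketch of blocking byte-identical re-uploads of content a
# court has already adjudicated as infringing. Exact hashing is an
# assumption for simplicity; altered copies need perceptual matching.
import hashlib


class InfringingContentIndex:
    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def register(self, data: bytes) -> None:
        """Record a file ordered to be taken down."""
        self._hashes.add(hashlib.sha256(data).hexdigest())

    def matches(self, data: bytes) -> bool:
        """Check a new upload against known infringing files."""
        return hashlib.sha256(data).hexdigest() in self._hashes
```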
Implementation Challenges
Critics argue the Amendment risks turning SSMIs from passive conduits into active arbiters of authenticity, potentially conflicting with the Shreya Singhal v. Union of India (2015) principle that intermediaries need not proactively monitor all content. The "reasonable and appropriate technical measures" threshold remains undefined, creating compliance uncertainty about what verification systems will be deemed adequate.8
Can AI verification reliably distinguish synthetic from authentic content? False negatives facilitate fraud proliferation, whereas false positives suppress legitimate satire and artistic expression. The implementation burden falls disproportionately on resource-constrained intermediaries that lack sophisticated verification infrastructure. The exemption for private content creates potential loopholes, as malicious actors can distribute deepfakes through private channels before public amplification.
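The verification trade-off can be made concrete with a small sketch: given scores from a hypothetical detector and ground-truth labels, it computes how many authentic items are wrongly flagged (false positives) and how many synthetic items slip through (false negatives) at a chosen threshold.

```python
# A minimal sketch of the trade-off described above. Labels use
# True = actually synthetic; the detector itself is hypothetical.
from typing import Sequence


def error_rates(scores: Sequence[float], labels: Sequence[bool],
                threshold: float) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate) at a threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    negatives = sum(1 for y in labels if not y) or 1  # avoid division by zero
    positives = sum(1 for y in labels if y) or 1
    return fp / negatives, fn / positives

# Lowering the threshold catches more undeclared deepfakes (fewer false
# negatives) but flags more legitimate satire (more false positives).
```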
Global Context
The European Union's AI Act imposes obligations on high-risk AI systems but lacks deepfake-specific provisions. In the United States, the DEEPFAKES Accountability Act remains unenacted. China imposes disclosure requirements similar to India's. India's October 2025 Amendment is arguably the most comprehensive deepfake regulatory framework globally, integrating definitional clarity, transparency requirements, platform accountability, and safe harbour protections without blanket censorship. This positions India as a regulatory thought leader in emerging technology governance.
Conclusion
India's October 2025 IT Rules Amendment addresses deepfake misuse with unprecedented regulatory specificity. The framework establishes a clear statutory definition of synthetically generated information, creates visible transparency through mandatory labeling, imposes upstream responsibility on AI tool providers, establishes platform accountability, and protects good-faith removal actions. The Sadhguru judgment demonstrates judicial capacity for adapting personality rights doctrine to digital contexts through innovative injunction mechanisms.
Unanswered questions persist regarding verification reliability and cross-border enforcement. Nevertheless, the Amendment marks India's emergence as a sophisticated technology regulator, one that recognizes the threats deepfakes pose to information integrity, electoral legitimacy, and public trust. The regulatory architecture lays the necessary governance foundation for controlling deepfake proliferation while preserving legitimate innovation and expression.
Footnotes
1 Ministry of Electronics and Information Technology, "Explanatory Note," October 22, 2025. https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf
2 MeitY Public Notice, October 22, 2025. https://www.meity.gov.in/static/uploads/2025/10/38be31bac9d39bbe22f24fc42442d5d1.pdf
3 IT (Intermediary Guidelines) Rules 2021, Rule 2(1)(wa), Rule 3(3), Rule 4(1A). https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf
4 Information Technology Act 2000, Sections 66C, 66D, 66E, 67, 67A, 79. https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
5 Bharatiya Nyaya Sanhita 2023, Sections 353, 111, 212. https://www.indiacode.nic.in/bitstream/123456789/20062/1/a2023-45.pdf
6 Digital Personal Data Protection Act 2023. https://www.indiacode.nic.in/show-pdf?id=DPDP2023
7 Sadhguru Jagadish Vasudev v. Igor Isakov, CS(COMM) 578/2025, May 30, 2025. https://delhihighcourt.nic.in
8 Shreya Singhal v. Union of India, (2015) 5 SCC 1.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.