22 April 2026

The Rising Tide Of Deepfakes In India

Remfry & Sagar

Contributor

Established in 1827, Remfry & Sagar offers services across the entire IP spectrum with equal competence in prosecution and litigation. Engagement with policy makers ensures seamless IP solutions for clients and contributes towards a larger change in India’s IP milieu. Headquarters are in Gurugram, with branches in Chennai, Bengaluru and Mumbai.

Kartik Aaryan is a popular Indian film actor who has been in the news lately on account of a Bombay High Court ruling. The judgment upheld his personality rights, and one of the more serious concerns raised before the Court involved the use of artificial intelligence (AI) and deepfake tools. In this regard, on April 21, 2026, the Ministry of Electronics and Information Technology (MeitY) proposed a significant tightening of disclosure norms for AI-generated content. Under newly drafted amendments (open for stakeholder comments until May 7) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, intermediaries will be required to ensure that AI labels are not just prominent, but continuously visible throughout the entire duration of the content.

Deepfakes are among the more troubling by-products of the march of technology. They are now appearing with increasing frequency in India, from celebrity and political impersonations to misleading content involving journalists, doctors and influencers, blurring the line between truth and fabrication. While the technology is relatively new, India’s legal response has been fairly swift.

Legal framework

India does not yet have a standalone deepfake statute. Instead, the response is spread across a layered legal and enforcement framework.

The Information Technology Act, 2000 (IT Act) addresses harms such as impersonation, identity theft, privacy violations and obscene or sexually explicit content. It also empowers the government to issue blocking orders to intermediaries under Section 69A, while Section 79 conditions intermediaries' safe-harbour protection on the removal of illicit content once they are notified of it.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) enhance intermediary responsibility by imposing requirements for due diligence, swift removal of illegal information, user awareness and grievance resolution processes. Significant Social Media Intermediaries (SSMIs), those with 5 million registered users or more, must adhere to stringent additional requirements, including automated content monitoring, compliance documentation, and assistance to law enforcement agencies.

The recently enacted Digital Personal Data Protection Act, 2023 strengthens protections against the misuse of personal data, including in AI-generated deepfakes, by requiring that processing be lawful and consent-based, imposing security obligations, and prescribing penalties for violations.

Simultaneously, the Bharatiya Nyaya Sanhita, 2023, which has replaced the Indian Penal Code (IPC), criminalises the fabrication and transmission of false or misleading material and prescribes punishment for organised cybercrimes, including those employing deepfakes.

Government advisories issued from time to time reinforce statutory frameworks, reminding intermediaries of their obligations under the IT Rules.

Enforcement is backed by several institutional mechanisms. Grievance Appellate Committees (GACs) give users an appellate forum to contest intermediary inaction. The Indian Cyber Crime Coordination Centre (I4C) oversees state-level enforcement and issues orders for content removal or disablement, whilst the SAHYOG portal enables automated, centralised removal notices to intermediaries. Citizens may report deepfake incidents, financial fraud, and online exploitation via the National Cyber Crime Reporting Portal (https://cybercrime.gov.in) or the dedicated helpline 1930, which has a special focus on cybercrimes against women and children. CERT-In, the national nodal agency for cybersecurity, issues regular technical alerts on AI-related threats such as deepfakes, while law enforcement is tasked with the on-the-ground investigation of cyber offences.

Copyright, trademark and personality-rights claims may also arise where deepfakes reproduce copyrighted footage, misuse protected brand assets, or exploit an individual’s name, image, voice or likeness for unlawful gain.

The tide of deepfake litigation in India

In September 2023, the Delhi High Court granted ex parte relief to actor Anil Kapoor, restraining the unauthorised commercial exploitation of his name, image, voice and other elements of his persona, including through AI-enabled impersonation. The order directed takedowns and required intermediaries and internet service providers to disable access to infringing links. Around the same time, the circulation of a manipulated clip involving actress Rashmika Mandanna prompted a criminal investigation, underscoring that deepfakes may attract not only civil remedies but also penal consequences.

The judicial response has since expanded across sectors. In 2022, Amitabh Bachchan obtained protection against the unauthorised use of his celebrity persona to promote third-party goods and services. In 2025, Aishwarya Rai Bachchan also approached the Delhi High Court against websites allegedly monetising her persona without authorisation. These matters show that personality rights are increasingly being deployed as a frontline remedy against digital misuse of celebrity identity.

More recent cases show that the issue is no longer confined to entertainers. In 2025, the Delhi High Court acted against deepfake videos falsely portraying renowned cardiologist Dr Naresh Trehan as dispensing medical advice and promoting supposed cures. The Court passed a John Doe order directing the creators and disseminators to remove the content within 24 hours, requiring intermediaries to take it down within 36 hours of receiving the order, and directing disclosure of identifying information linked to the videos. The case was especially significant because the harm extended beyond reputational injury to public health and consumer safety.

A similar concern arose in the financial context. In May 2025, the Delhi High Court granted John Doe relief to entrepreneur and influencer Ankur Warikoo after AI-generated videos falsely portrayed him endorsing WhatsApp groups and fraudulent stock-tip schemes. The Court restrained the unknown defendants from further circulating the content and directed platforms to remove the offending material. The case is notable as one of the earliest Indian judicial responses to deepfake-enabled financial scams causing tangible monetary harm.

The issue has also generated pressure for a more systemic response. In January 2025, Rajat Sharma, Chairman and Editor-in-Chief of India TV, approached the Delhi High Court after manipulated videos using his likeness and voice, including one promoting medicines, began circulating online. Through a public interest petition, he sought broader regulatory measures, including the identification and blocking of platforms and software enabling deepfakes, the appointment of a dedicated nodal officer for swift complaint handling, and mandatory disclosures or watermarking for AI-generated content. The Delhi High Court acknowledged deepfakes as a serious problem and called upon the government to respond.

The government submitted status reports to the court outlining efforts to address deepfake challenges, including funding research projects for detecting fake content and constituting a nine-member committee to deliberate on deepfakes. The reports also emphasised the difficulty of defining deepfakes legally and of detecting them reliably, calling for public awareness campaigns and the development of indigenous detection tools, particularly for Indian languages and contexts. The government's response was also crystallised in a Press Note of August 8, 2025, which captured the existing enforcement framework.

Recent developments

Recent developments have significantly tightened India’s intermediary framework for AI and synthetic content. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, effective February 20, 2026, now expressly recognise ‘synthetically generated information’, covering AI-created or AI-manipulated audio, visual and audio-visual content that appears real, while excluding routine editing, accessibility changes and certain good-faith uses.

The amendments impose fresh obligations on platforms that enable or facilitate synthetic content. Such intermediaries must adopt reasonable technical measures, including automated tools, to prevent unlawful synthetic content, prominently label lawful AI-generated content, embed provenance metadata where technically feasible, and prevent tampering with those labels. User-facing disclosures have also been strengthened: platforms must now notify users every three months, warn of statutory penalties and mandatory reporting consequences, and inform users that misuse of synthetic-content tools may lead to disclosure of identity to victims where legally permissible.

Significant social media intermediaries face additional duties, including requiring users to declare whether content is AI-generated, technically verifying such declarations, and prominently labelling such content.

The amendments also sharply compress compliance timelines: grievance resolution is reduced from 15 days to 7 days; the 72-hour removal window to 36 hours; the 24-hour timeline for sensitive content to 2 hours; and compliance with authorised takedown directions to 3 hours (down from 36 hours).

The amendments further require intermediaries to act promptly when they become aware, either independently or upon complaint, of violations involving synthetically generated information, including by disabling access, suspending accounts and reporting offences where legally mandated. Notably, the rules clarify that removing or disabling access to synthetically generated information in compliance with these obligations does not jeopardise safe-harbour protection under Section 79(2) of the IT Act.

On March 30, 2026, a further set of amendments was proposed; on April 21, 2026, this was supplemented by a fresh proposal mandating that an appropriate label remain visible on screen for as long as the AI-generated content runs.

Taken together, there is a clear shift from a general intermediary due-diligence framework to a far more specific and proactive regime for deepfakes and other synthetic media.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
