India has formally moved to regulate deepfakes and AI generated content by notifying the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. These amendments introduce a structured regime for “synthetically generated information”, including a three hour takedown deadline for certain categories of harmful AI content and new labelling, traceability and due diligence requirements for intermediaries. This article explains what has changed under the IT Rules 2026 deepfake regulation, who is affected, and how in house legal and compliance teams can practically respond.
1. What Has Changed Under The IT Rules 2026 Deepfake Regulation
A. Notification timeline and scope of the amendment
The Central Government notified its most recent amendment to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 on 10 February 2026, introducing important provisions on AI generated content and deepfake technology. The amendment does not establish an entirely new regime; rather, it strengthens the existing due diligence framework for intermediaries through specific rules governing synthetically generated information and the removal of harmful content. As a result, any intermediary operating in India, including social media platforms, video sharing sites, messaging services, hosting providers and certain AI tools, must revisit its compliance posture under the 2021 Rules in light of the 2026 changes.
B. New definitions – “synthetically generated information” and “deepfake”
The most important element of the amendment is the new defined concept of "synthetically generated information" (SGI). SGI covers content that is created or materially altered using AI or similar technologies in a way that falsely represents real events, people or information. Within this, the rules focus on “deepfakes” and other misleading synthetic content, including fabricated videos and audio, manipulated images, impersonation material, forged documents and deceptive synthetic news or messages. By expressly recognising SGI and deepfakes as a regulatory category, the IT Rules 2026 give ministries and courts a clearer hook to impose obligations and assess liability in cases involving AI generated content.
C. The three hour takedown rule for AI generated content
The most attention grabbing change is the compressed three hour takedown window for certain AI generated content once an intermediary receives a lawful order or notice from the government or its authorised agencies. Intermediaries were previously expected to act within 36 hours; India has now adopted one of the fastest content removal timelines in international practice for harmful and illegal material. Platforms must build systems capable of processing flagged deepfake content and prohibited SGI within three hours, a requirement that touches every aspect of their operations: staffing, tooling, escalation procedures and documentation.
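To illustrate the operational impact of the compressed window, the deadline arithmetic alone forces round the clock coverage. The following is a minimal, hypothetical Python sketch; the three hour and 36 hour figures come from the rules as described above, while the function and category names are purely illustrative:

```python
from datetime import datetime, timedelta, timezone

# Illustrative windows: three hours for flagged prohibited SGI/deepfake
# content under the 2026 amendment, 36 hours under the earlier practice.
TAKEDOWN_WINDOWS = {
    "prohibited_sgi": timedelta(hours=3),
    "other_unlawful": timedelta(hours=36),
}

def takedown_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time by which flagged content must be actioned."""
    return received_at + TAKEDOWN_WINDOWS[category]

# Example: a notice received at 02:00 UTC about a deepfake must be
# actioned by 05:00 UTC, regardless of local business hours.
notice_time = datetime(2026, 3, 1, 2, 0, tzinfo=timezone.utc)
deadline = takedown_deadline(notice_time, "prohibited_sgi")
print(deadline.isoformat())  # 2026-03-01T05:00:00+00:00
```

A real implementation would feed such deadlines into on call alerting and escalation tooling; the point of the sketch is simply that a notice arriving at 2 a.m. leaves no room for next business day handling.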
2. Key Obligations For Intermediaries Under The Deepfake Rules
A. All intermediaries must fulfil the due diligence obligation
The amendment requires all intermediaries to maintain fundamental due diligence obligations, including publishing clear terms of use, informing users of restrictions on sharing illegal content and complying with legitimate government and court orders. These duties now expressly extend to risks arising from AI produced content and synthetic material. Intermediaries are expected to periodically notify users about the legal consequences of creating or sharing prohibited deepfakes and other harmful SGI, and to ensure that their rules and policies clearly prohibit such behaviour. An intermediary that fails to carry out this due diligence risks losing safe harbour protection, with correspondingly greater regulatory and criminal exposure under Indian law.
B. Enhanced duties for intermediaries enabling SGI creation or sharing
Intermediaries that specifically enable the creation, modification or wide scale sharing of synthetically generated information, such as generative AI tools, video editing apps, image and voice cloning services and certain content platforms, face heightened expectations under the 2026 amendment. They are required to warn users against generating or disseminating categories of “Prohibited SGI”, including child sexual exploitation material, non-consensual intimate imagery, impersonation or fraud oriented deepfakes, deceptive political or electoral content and forged documents capable of misleading the public or authorities. Such intermediaries must also ensure that their product design, interfaces and user flows do not actively nudge users towards harmful deepfakes, and that they can impose rapid restrictions or suspensions where repeated misuse is detected.
C. Technical and procedural measures – labelling, traceability and automated tools
The IT Rules 2026 require intermediaries to implement reasonable technical and organisational measures to detect and control deepfakes and other prohibited SGI, including AI content labelling, detailed log maintenance and automated pattern recognition tools. Platforms must deploy labelling systems that use provenance technologies so users can distinguish synthetic from authentic media, and build strong audit systems that track their operations and demonstrate compliance with the three hour removal standard. From an operational standpoint, this demands closer collaboration between legal, policy, engineering and trust and safety teams to design workflows that are technically feasible yet defensible if regulators or courts later question their response.
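The labelling and traceability duties can be thought of as attaching structured provenance metadata to each piece of synthetic media. The rules do not prescribe any particular format; the following Python sketch is purely illustrative of the kind of record a platform might keep, and every field name here is an assumption:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SgiLabel:
    """Illustrative provenance record for a synthetically generated item."""
    content_id: str     # platform's own identifier for the media item
    is_synthetic: bool  # surfaced to users as a visible label
    generator: str      # tool or model family that produced the content
    created_at: str     # ISO 8601 timestamp, UTC
    sha256: str         # hash of the media bytes, for the audit trail

def label_content(content_id: str, media: bytes, generator: str) -> SgiLabel:
    return SgiLabel(
        content_id=content_id,
        is_synthetic=True,
        generator=generator,
        created_at=datetime.now(timezone.utc).isoformat(),
        sha256=hashlib.sha256(media).hexdigest(),
    )

label = label_content("vid-001", b"<media bytes>", "example-voice-cloner")
print(json.dumps(asdict(label), indent=2))
```

Industry provenance standards (such as content credentials embedded in the media file itself) pursue the same goal more robustly; the sketch simply shows the minimum information a traceability record needs to carry.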
3. Compliance Checklist For Deepfake And AI Generated Content
For in house teams, the amendment is best treated as a trigger to overhaul AI related content governance rather than as a narrow takedown rule.
A. Governance and policy
- Update terms of use, community guidelines and internal content policies to define “synthetically generated information” and “Prohibited SGI” in line with the rules.
- Build or revise a written standard operating procedure (SOP) for handling government and law enforcement notices relating to AI generated content, with explicit three hour timelines and escalation triggers.
B. Product and technology
- Assess the feasibility of AI content labelling or watermarking for content created or hosted on the platform, at least for high risk features or tools.
- Implement logging and alert systems that allow teams to quickly identify, isolate and act on flagged deepfake or SGI content within the mandated time frame.
C. Contracts and third party relationships
- Review and update agreements with content creators, influencers, advertisers and enterprise customers to allocate risk for AI generated content, with representations, warranties and indemnities around deepfake misuse.
- Ensure vendor contracts for moderation, AI tooling or infrastructure support your three hour takedown and logging obligations.
D. Training and incident response
- Train moderation, customer support, legal and PR teams on identifying deepfakes, understanding the categories of prohibited SGI and following the new SOPs.
- Run tabletop exercises to test whether the organisation can realistically meet three hour deadlines in different scenarios, including holidays and high volume events.
E. Documentation and audit trail
- Maintain detailed records of notices received, actions taken, content identifiers, timestamps and rationales so that the organisation can demonstrate compliance if challenged.
- Periodically review these logs to identify patterns (recurring abuse vectors, repeat offenders, systemic delays) and feed those insights back into product design and policy updates.
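The record keeping points above can be sketched as an append only, timestamped log. This is an illustrative Python fragment under assumed field names, not a prescribed format:

```python
import json
from datetime import datetime, timezone

def log_takedown_action(log_path: str, notice_id: str, content_id: str,
                        action: str, rationale: str) -> dict:
    """Append one structured, timestamped record per action taken.

    An append only JSON Lines file preserves ordering, which makes the
    trail easy to review later for patterns such as repeat offenders
    or systemic delays against the three hour standard.
    """
    entry = {
        "notice_id": notice_id,
        "content_id": content_id,
        "action": action,  # e.g. "removed", "restricted", "escalated"
        "rationale": rationale,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

entry = log_takedown_action("takedown_log.jsonl", "N-2026-0042",
                            "vid-001", "removed",
                            "Prohibited SGI: impersonation deepfake")
```

In practice these records would live in tamper evident storage with access controls; the essential point is that each notice, action, timestamp and rationale is captured at the moment of decision rather than reconstructed afterwards.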
4. Open Issues, Constitutional Concerns And Future Litigation
Commentators have criticised the IT Rules 2026 deepfake regime on free speech and proportionality grounds, arguing that the strict three hour deadline will push platforms towards over removal in order to avoid penalties. The compressed timeline leaves little room to assess context, satire or public interest journalism, risking the silencing of legitimate expression and political discourse. At the same time, the rules sit against a backdrop of rising deepfake enabled fraud, sexual exploitation and disinformation, making it likely that courts will be asked to balance competing rights and to interpret what “reasonable” technical measures and “Prohibited SGI” mean in practice.
Several definitional and enforcement grey areas remain unresolved. For instance, it is not yet clear how satire, parody, artistic experimentation or anonymised synthetic training data will be treated when complaints are made or when electoral sensitivities are involved. There is also little guidance on how responsibility will be apportioned between different layers of the stack, for example, between front end apps, model providers and infrastructure intermediaries, where the interplay of SGI and safe harbour is likely to be tested in future litigation.
Finally, the deepfake regulations intersect with several other legal regimes, including criminal laws on impersonation, obscenity and cybercrime, the developing standards of evidence for digital and AI produced content, and India's data protection framework. In contentious matters, courts will need to grapple with questions around authenticity, admissibility and the weight to be given to AI generated or AI detected signals, especially where takedown decisions are later challenged by affected users.
5. Key Takeaways For In House Counsel And Compliance Teams
The IT Rules 2026 represent India's first comprehensive effort to regulate deepfakes and AI generated content, combining the three hour removal requirement with expanded SGI labelling, traceability and user warning obligations. Intermediaries and AI enabled platforms should treat this as an immediate compliance priority: doing so preserves safe harbour protection and reduces exposure to reputational damage and litigation risk in a rapidly changing regulatory landscape. For legal teams, the priority over the coming months is to harden governance frameworks, close operational gaps in incident response and prepare for the inevitable round of constitutional and interpretive challenges that will shape how India’s deepfake rules are ultimately applied.
6. Practical FAQs On India’s Deepfake Rules 2026
Q. Who has to comply with the IT Rules 2026 deepfake regulation?
The amended rules apply to “intermediaries” under the IT Act, a term that covers a wide range of entities including social media platforms, video sharing and streaming services, messaging apps, web hosting providers, search engines, certain cloud services and AI tools that host or transmit user content. Entities that enable users to create or widely disseminate synthetically generated information face enhanced expectations and scrutiny, even if they do not fit the traditional image of a social network.
Q. How quickly do platforms need to take down deepfake content in India?
For specified categories of prohibited deepfake or SGI content flagged through lawful government or court directions, intermediaries are now expected to act within roughly three hours of receiving the order or notice. This is significantly shorter than earlier norms and requires intermediaries to have round the clock processes to ingest, review and implement takedown actions.
Q. Do the IT Rules 2026 apply to private messaging and cloud platforms?
The IT Rules 2021 already applied broadly to intermediaries that store or transmit user content. The 2026 amendment maintains that broad scope and does not carve out private messaging or cloud services. Enforcement may initially focus on major public platforms, but messaging and infrastructure providers should nonetheless prepare for their obligations.
Q. What counts as a “deepfake” or “synthetically generated information” under the rules?
The rules use “synthetically generated information” as an umbrella for content created or significantly altered using AI or similar tools, and deepfakes are treated as a particularly harmful subset. This includes fabricated or morphed images and videos, cloned voices, impersonation content and forged documents that misrepresent reality, especially when they fall into prohibited categories like sexual exploitation, deception, fraud or electoral manipulation.
Q. How should brands and influencers using AI content adapt their contracts?
Brands, agencies and influencer marketing teams should update their contracts to include explicit terms governing how AI generated content is created and used, what must be disclosed to audiences, and which categories of SGI may not be created or distributed. The agreements should also allocate responsibility for regulatory violations that require immediate content removal, address deepfake misuse scenarios, and provide termination or indemnity rights where misuse of deepfake technology creates enforcement risk for the brand or platform.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.