As New Delhi embraces both the cascading effects of the India-AI Summit 20261 and, in the same month, the amendment of 10.02.2026 to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 20212 ("2026 Amendment"),3 the ripple effects will be seen in the balancing of the right to digital privacy against commercial objectives. With the recent influx of petitions seeking to safeguard the personality rights of celebrities against commercial exploitation through AI-generated depictions, or to regulate the amplification of satirical content against prevailing standards of morality, the debate over whether privacy and policy align with constitutional jurisprudence in India's AI transition has been reignited.
While the Indian law and policy framework plays catch-up with fast-evolving media spaces led by generative AI content, the legal contours of personality rights vis-à-vis commercial innovation and free speech concerns in India remain at a nebulous stage. Until last year, the country's data-driven market was oscillating between the original Information Technology regime and the forthcoming Digital Personal Data Protection Act 2023 ("DPDP Act"), and it has now seen a relatively swift introduction of the 2026 Amendment. At first brush this may land as a much-needed and welcome step, but it appears less the culmination of a fully stabilised digital protection ecosystem and more a reactive intervention. The regulatory thresholds have left intermediaries with an unsettled and evolving compliance architecture.
The Revised Take-down Obligation
In the rush to protect digital personalities, particularly against deepfakes, the 2026 Amendment has significantly eased the mechanism for take-down obligations by introducing terms such as "synthetically generated information" (SGI), i.e., AI-generated art, videos, audio, etc., subject to a "good faith" standard. The onus of interpreting this broad term against that standard, however, is inter alia left open to the intermediaries (YouTube, Meta, etc.).4
The amendment has compressed the timeline for taking down potentially harmful content under Rule 3(b) and (d) from 72 hours to merely 3 hours after receiving notice, whether via court order or from the relevant authority.5 While one could argue that this may incentivise intermediaries to pre-emptively remove potentially harmful content and undertake deliberate review, it also risks turning intermediaries into "passive conduits". Furthermore, the safe harbour protection enjoyed by intermediaries under Section 79 of the IT Act 20006 will now be subject to their active response within the revised take-down and grievance timelines, compliance with the added obligations, and good faith. The grey area, however, is that the amendment does not explain what happens if these commercially driven intermediaries fail to meet the deadlines, or if disputes are mishandled due to technical errors. Such omissions and uncertainties put the safe harbour protection at risk and will make intermediaries more conservative in order to avoid legal scrutiny.
In addition, the IT Rules 2021 earlier used language such as "endeavour to deploy" technology-based measures to verify the accuracy of users' declarations, which made the obligation merely directory.7 The 2026 Amendment instead uses the term "shall", making it mandatory. This can be applauded as a forward-looking measure that signals a serious approach to handling AI-led misinformation. In furtherance of these requirements, platforms must now incorporate technical measures to accurately verify and clearly label AI-generated content before it goes live. However, the standard of accuracy required of a user's declaration on synthetically generated content under Rule 2(1A) of the amended Rules is not clearly defined.8 Intermediaries are yet to figure out how accuracy will be checked and what threshold will suffice to show compliance by creators.
While the above looks attractive, does it meet the vires test?
Strictly analysed from a constitutional perspective, this imposition of quick redressal may push the regulatory framework towards prior restraint, by blocking content even before publication, which may impinge upon a citizen's fundamental right to freedom of speech and expression, since content will be subject to clearance by private intermediaries before entering the public domain. It is understandable that the objective is prevention at source, but merely flagging the synthetic origin of content (for the sake of compliance), without any assessment of illegality or misinformation, will not meet the ends of justice. The lack of guidance on assessing such content might delay or discourage dialogue, rather than addressing the consequences of allegedly misleading content after publication.
Furthermore, the operative mechanism against AI-generated content is the quick takedown, but treating it as the only method of curbing the harm caused by misleading SGI is far-fetched. It instead appears to create an imbalance of rights and responsibilities, with a direct impact on the constitutionally guaranteed right to freedom of speech. The Supreme Court's decision in Shreya Singhal v. Union of India remains instructive in the present context.9 While striking down Section 66A of the IT Act 200010 for vagueness and overbreadth, the Court highlighted that restrictions on online speech must fall strictly within the scope of Article 19(2).11
Intermediaries will now be incentivised to censor without much reasoning in order to meet the newly set deadlines, which were never part of the initial draft of the amendment proposed by the government in 2025. It is also notable that while promoting the "democratic diffusion of AI" at the Impact Summit 2026,12 the government failed to constructively consult stakeholders on the new amendments, in contrast to those proposed earlier.13 This lack of transparency may have resulted in the unopposed insertion of the new sub-clauses of Rule 3 that impose due diligence obligations in relation to SGI, particularly where content depicts a person or an event "in a manner that is likely to deceive".14
This imbalance of rights, skewing the power to block accounts and take down content, is further exacerbated by the central government's recent notification of 30.03.2026 calling for public consultation on additional amendments to the IT Rules 2021, which would confer explicit power over online content regulation on the executive/ministry.15 (Explained in Part II)
Last year, the Supreme Court cracked the whip on the issue of algorithmic amplification of content and its impact on free speech. First, in March 2025, it struck down Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 for the vague usage of terms like "fake" and "misleading", which conferred misguided discretion on fact-checking units ahead of elections.16 Second, in November 2025, the Court expressly advised the Union Government that any framework to regulate online content should be responsive, not reactionary. In the contemporary setup, comedy and satire provide a "safety valve" for sensitive social issues and have emerged as a fifth pillar of a healthy democracy.17 Freedom of speech and expression should not be curtailed as a mere response to a singular incident, particularly considering the controversy surrounding "India's Got Latent".18
The impact of over-regulation can be seen in a recent case before the Delhi High Court, in which the central government's order dated 10.03.2026 directing platform X to block twelve (12) accounts was challenged. The Petitioner argued that the impugned action was undertaken without adhering to the procedure prescribed under Section 69A of the IT Act 2000 and without providing an opportunity to be heard.19 By its order dated 09.04.2026, the Court directed that the Petitioner appear before a review committee, and that the blocked accounts on platform X be restored if the blocking did not satisfy the standards of law.20 The Court further directed the review committee to pass directions in line with the principles laid down by the Division Bench of the Delhi High Court in Tanul Thakur v. Union of India & Ors., W.P.(C) 13037/2019, by its order dated 11.05.2022.21
In this constantly evolving terrain of technological innovation, commercial incentives and administrative restraint, questions of proportionality, procedural fairness and narrowly tailored enforcement remain central to any assessment under Articles 19(1)(a) and 21 of the Constitution. Ultimately, the State will be the deciding authority on what remains online, and any contrary perspective will take a back seat. While the creator economy has experienced a rapid surge, these rules place the reins firmly in the hands of the State, potentially curbing creativity, expression, and the dynamic exchange of ideas that fuels innovation. In effect, the executive becomes the primary arbiter of online legality, and intermediaries, facing liability risks, are incentivised to comply mechanically rather than scrutinise constitutional validity, thereby recreating the very chilling effect that the Court cautioned against. It appears that in its zeal to curb deepfakes and misinformation, the government has also missed the bus, inasmuch as the 2026 Amendment clearly lacks any threshold for classifying content as between satire, misleading content and morality, a distinction debated widely across the country.
In view of the overbreadth and enforcement challenges in the 2026 Amendment, the focus should shift from reactive take-down mechanisms to a more holistic framework. One sustainable approach is active mitigation: addressing the platform design and recommendation algorithms of the intermediaries themselves through algorithmic audits, reducing how harmful content is "amplified and transmitted"22 on the basis of likes, comments and shares in the first place.23 This can be done by complementing content regulation with the data governance framework, i.e., the DPDP Act. As data is the oil that keeps algorithmic activity running, the use of collected data should be restricted and transparent. This would directly strike at the precision of recommendation systems and eventually reduce the circumstances in which users are pushed into echo chambers or extremist content loops. While this cannot address the biggest challenge of identifying harmful content at source, it can potentially limit its reach and engagement. Since harmful content cannot be perfectly defined, it can instead be divided into two categories: (a) explicitly illegal content, such as child abuse or terrorism; and (b) content that is legal but capable of damaging social values, such as misleading information, dark humour or hate speech.
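The amplification-side mitigation described above can be pictured with a minimal, purely illustrative sketch. The `Post` fields, the dampening factor and the engagement cap below are hypothetical choices of the author of this sketch, not anything prescribed by the 2026 Amendment, the DPDP Act or the DSA; the point is only that explicitly illegal content (category (a)) can be excluded outright, while legal-but-harmful content (category (b)) is demoted, not deleted, limiting reach rather than suppressing speech.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    relevance: float   # baseline topical relevance (0..1) - hypothetical signal
    engagements: int   # likes + comments + shares
    flagged_sgi: bool  # labelled as synthetically generated information (category b)
    illegal: bool      # category (a): explicitly illegal content

def ranking_score(post: Post, dampening: float = 0.25, cap: float = 3.0) -> float:
    """Engagement-aware ranking with amplification limits (illustrative only).

    Category (a) content is excluded outright; category (b) content keeps
    circulating, but its engagement-driven boost is capped and dampened,
    so virality is reduced without removing the content itself.
    """
    if post.illegal:
        return 0.0
    boost = min(math.log1p(post.engagements), cap)  # cap the virality boost
    if post.flagged_sgi:
        boost *= dampening                          # demote, don't delete
    return post.relevance * (1.0 + boost)
```

Under these assumed parameters, a viral unflagged post and an equally viral flagged one receive visibly different scores, while illegal content is zeroed out entirely; an audit of a real recommender would inspect analogous boost terms rather than this toy formula.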
Jurisdictions such as the European Union, through the Digital Services Act,24 address the latter category by managing amplification and systemic risk, and Australia, through initiatives led by the eSafety Commissioner, by embedding safety considerations directly into platform architecture.25 Such architectural mitigation requires a deep understanding of the algorithms that make content viral, and significant in-house capacity building for both implementation and compliance.26 The unresolved question, however, is whether sufficient alignment of incentives exists among the State, platforms and creators to prioritise long-term systemic safety over short-term compliance and commercial gains.
Footnotes
1. https://impact.indiaai.gov.in/
2. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-MediaEthics-Code-Rules-2021-updated-06.04.2023-.pdf.
3. https://www.meity.gov.in/static/uploads/2026/02/550681ab908f8afb135b0ad42816a1c9.pdf
4. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 3(1)(b)(ii), G.S.R. 120(E).
5. Aayushman Gaikwad & Smruti Mishra, Three Hours to Comply: India's New Rules for AI-Generated Content and Deepfakes, livelaw.in (Feb. 21, 2026, 12:29 PM), https://www.livelaw.in/articles/ai-generated-content-deepfakes524064#footnote-4.
6. Section 79 in The Information Technology Act, 2000.
7. Explanatory Note, Proposed Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in relation to synthetically generated information, 22.10.2025, https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf
8. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 2(1A), G.S.R. 120(E).
9. Shreya Singhal v. Union of India, 2015 SCC Online SC 248.
10. Section 66A, The Information Technology Act, 2000.
11. Article 19 in Constitution of India.
12. India AI Impact Summit 2026: Landmark Global Declaration and Major AI Investment Commitments, 02.03.2026, https://www.pib.gov.in/PressReleasePage.aspx?PRID=2234343.
13. BIF urges consultations on new IT Intermediary Rules ahead of implementation, communicationstoday.co.in,(Feb. 19,2026), https://www.communicationstoday.co.in/bif-urges-consultations-on-new-it-intermediary-rules-ahead-of-implementation/.
14. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 3(1)(IV), G.S.R. 120(E).
15. https://drive.google.com/file/d/143ZKTK4WTMMBjsztU2QD7YJj4H9YYwge/view?ref=static.internetfreedom.in
16. Advay Vora, Challenge to IT Rules 2023 | Supreme Court stays Union government's notification establishing a fact check unit, scobserver.in (Mar. 21,2024), https://www.scobserver.in/reports/challenge-to-it-rules-2023-supreme-court-puts-a-stayon-union-governments-notification-establishing-a-fact-check-unit/.
17. HFHR Volunteer based in India, It's a Tragedy when Governments Fear Comedy, hindusforhumanrights.org, https://www.hindusforhumanrights.org/en/blog/its-a-tragedy-when-governments-fear-comedy.
18. Advay Vora, Samay Raina, Sonali Thakkar and other comedians must publish unconditional apology online, Supreme Court Orders, scobserver.in, (Aug. 25,2025), https://www.scobserver.in/journal/samay-raina-sonali-thakkar-and-other-comedians-must-publish-unconditional-apologyonline-supreme-court-orders/.
20. Indumugi C., Delhi High Court orders restoration of blocked X accounts of '@DrNimoYadav' and '@Nehr_Who', https://internetfreedom.in/delhi-high-court-orders-restoration-of-blocked-x-accounts-of-drnimoyadav-andnehr_who/#:~:text=Prateek%20Sharma%20and%20Kumar%20Nayan,proceedings%20before%20the%20Delhi%20HC.
21. Order dated 30 March 2026 in Prateek Sharma v. Union of India & Ors., WP(C) 4070/2026
22. How To Amplify Social Media Reach With Hashtags, https://medium.com/giveaway-com/how-to-amplify-social-mediareach-with-hashtags-3b49eb6dffcf
23. From clicks to chaos: How social media algorithms amplify extremism, Soumya Awasthi, https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
24. The Digital Services Act, digital-strategy.ec.europa.eu, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act.
25. Safety by Design puts user safety and rights at the centre of the design and development of online products and services, esafety.gov.au, https://www.esafety.gov.au/industry/safety-by-design.
26. A guide towards collaborative AI frameworks, digitalregulation.org, (Sept. 02,2025), https://digitalregulation.org/a-guidetowards-collaborative-ai-frameworks/.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.