ARTICLE
2 August 2024

Revised MeitY Advisory On Deployment Of AI Models

JSA Advocates and Solicitors is a top-tier, full-service Indian law firm. Established in 1991, at the start of India’s economic liberalisation, the firm has built a strong reputation for handling complex and high-stakes legal and commercial matters. The firm is organised around specialist practice areas and industry sectors. It works closely with leading Indian corporates, Fortune 500 companies, global financial institutions, and government and statutory bodies on important corporate, financing, and disputes mandates. JSA has a team of over 700 legal professionals, including 180+ partners, and operates from 10 offices across seven cities in India: Ahmedabad, Bengaluru, Chennai, Gurugram, Hyderabad, Mumbai, and New Delhi. The firm is consistently recognised as a top-tier practice by leading international legal directories, including Chambers & Partners (Asia-Pacific and Global), Legal 500, and AsiaLaw.

On March 1, 2024, the Ministry of Electronics and Information Technology ("MeitY") issued an advisory ("Old Advisory") in continuation of the advisory dated December 23, 2023 ("December Advisory"), directing all intermediaries and platforms to label any under-trial/unreliable artificial intelligence ("AI") models, and to secure explicit prior approval from the government before deploying such models in India. For a detailed analysis, please refer to the JSA Prism of March 7, 2024.

In light of the ambiguities arising from the Old Advisory, on March 15, 2024, MeitY issued a revised advisory on the deployment of AI models ("Revised Advisory"), which effectively replaces the Old Advisory without modifying the December Advisory. The Revised Advisory does away with the mandatory prior government approval and the submission of an action taken-cum-status report, extends the scope of due diligence to all intermediaries and platforms, and retains certain requirements from the Old Advisory.

Provisions of the Revised Advisory

The Revised Advisory reinforces some requirements from the Old Advisory, namely: a) users must be explicitly informed about the unreliability of the output by way of a "consent pop-up" mechanism or any other equivalent mechanism; b) all intermediaries and platforms are required to inform users about the ramifications of dealing with unlawful content; and c) all intermediaries and platforms are required to use labels, metadata, or unique identifiers to identify content or information that is AI-generated, modified, or created using synthetic information. Like the Old Advisory, the Revised Advisory also reiterates the importance of compliance with the Information Technology Act 2000 ("IT Act") and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 ("Intermediary Guidelines").

The Revised Advisory has introduced some changes, namely:

  1. the requirement to seek explicit prior permission from the Government for the deployment of any unreliable or under-tested AI model has been done away with. Instead, unreliable or untested AI models are to be made available to users only after notifying them of the unreliability of the generated output;
  2. the requirement to submit an action taken-cum-status report has been dropped;
  3. the due diligence requirement now extends to all intermediaries and platforms, including compliance requirements related to their use and deployment of AI tools, as opposed to the "significant/large" platforms mentioned in the Old Advisory and the clarification issued thereafter;
  4. the scope of "unlawful content" that all intermediaries and platforms must ensure is not published/hosted/displayed/transmitted/stored/updated or shared extends beyond the Intermediary Guidelines and the IT Act, and also encompasses content that is deemed unlawful under other laws in force;
  5. the Revised Advisory serves as a reminder that intermediaries, platforms, and their users may face penal consequences under criminal law for non-compliance with the IT Act and its rules;
  6. the labelling requirements in the Old Advisory to be followed by intermediaries and platforms have been extended to include identification of not just the first creator or originator of misinformation or a deepfake, but also the user or computer resource that has caused any change or modification to such information.

Conclusion

Although the Revised Advisory is seen as a welcome change, the ambiguity around the legal provision on the basis of which MeitY has issued these advisories raises questions about their enforceability and binding value. As with the Old Advisory, the standard for determining what is "unreliable" or "under-tested" remains unclear, making compliance difficult. Though the requirement for intermediaries and platforms to label AI models is carried forward from the Old Advisory with some changes, there is no clarity on the acceptable forms of labelling to be followed. Further, the Revised Advisory mentions that a "consent pop-up" may be used to inform users about the unreliability of the generated output, when the purpose of a "consent pop-up" is to obtain consent from users, not merely to intimate them of the fallibility of the output.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

