ARTICLE
14 January 2026

Defining AI Deepfakes And Voice Cloning In The Digital Age

Aarna Law

Contributor

Aarna Law was founded with a steadfast commitment to delivering quality-driven, value-based legal services, fostering deep and enduring relationships with those we serve. We dedicate time and effort to understanding our clients’ businesses and commercial objectives, enabling us to craft solutions that are both contextually relevant and strategically sound.

Our approach is innovative and business-conscious, underpinned by a team of seasoned lawyers who are commercially astute, hands-on, and solution-oriented.


Advanced artificial intelligence techniques such as voice cloning and AI deepfakes allow for the production of synthetic speech that mimics a person's voice. Voice cloning is the process of analysing audio samples to create a digital copy of a person's voice, which can then produce new speech with a similar tone, pitch, accent, and rhythm. It is frequently used for legitimate and beneficial purposes, such as audiobooks, personalised virtual assistants, customer support systems, and restoring speech for people who have lost the ability to speak due to medical conditions. AI deepfakes, particularly in audio form, rely on similar methods but are more often linked to dishonest or malicious ends, such as political manipulation, financial fraud, impersonation, misinformation, and defamation.

Celebrity personality rights are increasingly being violated in India by voice cloning and AI deepfakes, which have sparked ethical and legal debates about consent, reputation, and economic control of identity. Although there is no stand-alone law specifically regulating AI-generated content, Indian common law principles and constitutional guarantees such as the rights to dignity and life (Article 21) protect celebrities' personality rights, which include the commercial use of their name, image, voice, likeness, and other identifying traits. Indian courts have acknowledged that the use of AI tools to create deepfake content or clone a celebrity's voice without consent can violate these personality rights, because it appropriates and manipulates key elements of their public persona and can result in financial loss, reputational damage, or deceptive endorsements.

Judicial Safeguards Against AI-Enabled Misuse and Deception

In light of this, Indian courts, particularly the constitutional courts, have taken on a crucial role in mitigating the negative effects of AI deepfakes and voice cloning by invoking constitutional protections, personality rights, and privacy, and by issuing novel remedies such as takedown orders and dynamic injunctions. By examining judicial trends, the relevant legal provisions, and the changing balance between technological innovation and the defence of individual rights, this article explores how the Indian judiciary is handling the difficulties presented by these new technologies.

Indian courts have increasingly acknowledged that voice cloning and AI-generated deepfakes represent a real and immediate breach of personality rights rather than a merely potential future concern. To prevent the unauthorised use of a person's image, voice, likeness, and broader identity in AI-generated content, the Delhi High Court and the Bombay High Court in particular have frequently granted interim injunctions. These injunctions usually prohibit the further distribution of deepfake videos or cloned audio, order intermediary platforms such as YouTube, X (formerly Twitter), and Instagram to remove infringing content, and, in some situations, require those platforms to disclose the identities of anonymous uploaders engaged in the misuse.

As evidenced by high-profile cases involving public figures such as Asha Bhosle, Suniel Shetty, Akshay Kumar, and Hrithik Roshan, courts have significantly broadened the traditional definition of "persona" beyond name and physical likeness to include distinctive vocal characteristics, speech patterns, and mannerisms. The issuance of "dynamic" or "dynamic+" injunctions, which allow court orders to extend automatically to future instances of AI-based misuse, is a noteworthy judicial innovation in this field. This reduces the need for repeated litigation and provides more effective protection against the rapidly evolving nature of deepfake technologies.

Court Interventions Protecting Celebrities from AI Deepfakes and Voice Cloning

The Bombay High Court has granted ad-interim relief to veteran singer Asha Bhosle against the unauthorised AI cloning of her voice and image, prohibiting various platforms and individuals from unlawfully replicating her voice or using artificial intelligence systems to commercially exploit her image, likeness, and other aspects of her personality. The ruling highlights the growing legal challenges surrounding AI's impact on celebrity personality rights in India.

In October, the Bombay High Court granted Suniel Shetty interim protection against the unlawful exploitation of his image and likeness, particularly through deepfakes and impersonations created with artificial intelligence. The ruling sets a precedent for personality rights in the digital era by requiring platforms to remove unlawful content and by extending "John Doe" protection to his family.

Akshay Kumar's lawsuit seeks to stop the ongoing infringement and unauthorised commercial exploitation of his publicity and personality rights, which include his name, screen name "Akshay Kumar", image, likeness, voice, distinctive performance style, mannerisms, and other recognisable characteristics. The suit stems from the widespread misuse of Kumar's persona through deepfake and AI-generated photos and videos, counterfeit goods, misleading advertisements, false brand endorsements, and impersonation of social media profiles on platforms such as YouTube, Facebook, Instagram, and X (formerly Twitter), as well as on various e-commerce sites.

In a ruling that sets a significant precedent in India's battle against AI deepfakes and digital impersonation, the Delhi High Court intervened on October 15, 2025, granting Hrithik Roshan immediate protection against the widespread misuse of his voice, image, and persona. The decision underscores both how far these technologies have advanced and how the courts are responding.

Existing Legislative Tools to Combat AI-Enabled Misuse

Despite the absence of a specific statute addressing deepfakes or AI-based impersonation, courts in India have effectively relied on existing legislative provisions and common law doctrines to restrict the misuse of such technology. One important piece of legislation in this area is the Information Technology Act, 2000. Section 66C covers identity theft, including the fraudulent use of another person's identifying features, while Section 66D deals with cheating by personation using computer resources, such as the use of cloned voices or altered videos to misrepresent a person. Section 66E, which penalises violations of privacy, is often invoked in cases involving non-consensual intimate deepfakes.

Furthermore, Sections 67 and 67A prohibit the publication and dissemination of obscene or sexually explicit deepfake content. Section 79, which governs intermediary liability, plays another important role: to retain safe-harbour protection, digital platforms must remove unlawful AI-generated content upon receiving adequate notice.

Indian courts have applied constitutional and common law considerations alongside statutory law to address the wider harms created by voice cloning and deepfakes. Article 21 of the Constitution, which encompasses the right to life, personal liberty, and the judicially recognised right to privacy, has been invoked to protect individuals' autonomy, dignity, and reputation against AI-generated abuse. Simultaneously, where deepfake content falsely implies endorsement, association, or economic exploitation, common law doctrines such as passing off, together with trademark infringement, have been applied, strengthening legal protection against deceptive and unauthorised uses of AI.

Conclusion

Artificial intelligence–driven technologies such as deepfakes and voice cloning have emerged as powerful tools capable of replicating human likeness, speech, and mannerisms with alarming precision. While these technologies offer legitimate applications in fields such as entertainment, accessibility, and digital innovation, their misuse has generated serious legal and ethical concerns, particularly relating to identity theft, misinformation, defamation, and the erosion of personal autonomy. In India, the rapid proliferation of AI-generated content has outpaced the development of a dedicated statutory framework, compelling courts to respond through creative interpretation of existing laws.

Frequently Asked Questions

What constitutes an AI deepfake?

An AI deepfake refers to synthetically generated or manipulated audio-visual content created using artificial intelligence techniques, particularly deep learning models, to realistically imitate a real person's appearance, actions, or speech, thereby obscuring the distinction between authentic and fabricated media.

How is voice cloning defined in the context of artificial intelligence?

Voice cloning is an AI-driven process that replicates the unique vocal characteristics of an individual using limited voice samples, enabling the generation of new speech outputs that closely resemble the original speaker.

How do AI deepfakes impact the right to privacy and personality rights?

The unauthorized use of an individual's likeness or voice infringes upon the right to privacy, dignity, and personality, particularly when such use results in reputational damage or commercial exploitation without consent.

How have Indian courts addressed the misuse of AI deepfakes and voice cloning?

Indian courts have adopted a progressive approach by expanding the scope of personality rights, issuing dynamic injunctions, and directing online intermediaries to remove infringing content and disclose relevant user information.

Which provisions of the Information Technology Act, 2000 are relevant in cases of AI misuse?

Sections 66C (identity theft), 66E (violation of privacy), 67 and 67A (publication of obscene or sexually explicit material), and Section 79 (intermediary liability) are commonly invoked in cases involving AI-generated misuse.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

