ByteDance's New AI Model, Seedance 2.0, Creates Shockwaves in Hollywood
ByteDance's release of a new Artificial Intelligence ("AI") video model has sparked immense backlash in the entertainment industry over copyright and intellectual property concerns. The model, Seedance 2.0, can generate videos of quality comparable to blockbuster films from just a few lines of user prompts, exposing the AI developer to widespread copyright infringement litigation.
The problem with this development is that Seedance 2.0's ability to create realistic depictions of actors, like Tom Cruise and Brad Pitt, stems from unauthorized use of copyrighted materials. While it is unclear what datasets ByteDance used to train Seedance 2.0, it is clear the company did not have authorization to use many of the actors' likenesses or films.
Disney and Paramount Skydance immediately responded to videos generated by Seedance 2.0 with cease and desist letters. The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) also denounced the model as harmful to actors' careers. Other organizations, like the Human Artistry Campaign, described Seedance 2.0's abilities as a direct "attack on every creator around the world."
While AI continues to advance at an accelerated pace, the legal guidelines meant to keep it in check are murky at best. From copyright to corporate governance and fiduciary duties, AI may affect every aspect of the corporate structure. While the entertainment industry grapples with the implications of Seedance 2.0, every business should consider how its organization fits into the legal landscape around AI.
Copyright, Licensing, and the Multi-Faceted Nature of AI
Developers of AI models require tremendous amounts of data to train their large language models ("LLMs"). These datasets are typically scraped from the internet, but companies like Meta and OpenAI rarely disclose their exact sources.
Lawsuits against AI developers for using copyrighted content without the authors' or owners' consent are on the rise. The New York Times recently sued both OpenAI and Microsoft for incorporating the newspaper's copyrighted articles into training datasets without permission. Artists, publishers, and other authors are following suit.
Licensing agreements might bridge the gap between content creators and AI developers. By entering into agreements that allow AI companies to use copyrighted content in their training datasets, many publishers receive compensation not otherwise available.
However, extrapolating this licensing solution to the scale of internet-wide scraping may not be realistic. Only a handful of publishers are large enough to entice AI companies into deals, which can cost millions of dollars. Meanwhile, countless individuals and smaller creators publish content online with no licensing agreement governing its use in AI datasets.
Fair Use Doctrine Strained Under AI Overreach
AI developers have long maintained that the content they used in training was permissible under the fair use doctrine. Because the data they scraped from the internet was publicly available, they argued, it was free to draw from for training large language models. However, if the content is paywalled or otherwise restricted, AI developers would need approval from the publishers.
Additionally, AI developers use information from their internet scrapes in other ways. If, for instance, an AI-generated response draws on published articles, it may include a link to the source within an integrated summary of that source. Yet such paraphrasing can cross the line into copyright infringement.
And while licensing agreements between publishers and AI developers cover training dataset content, they currently include no guidelines for other uses of LLMs. Content from publishers without licensing agreements can still be swept up in internet scrapes without permission. Licensing agreements, then, cannot be a catch-all solution to the copyright infringement problem.
AI's Copyright Infringement Impact on Hollywood
Along with creating scenes featuring celebrities and familiar animated characters, Seedance 2.0 is capable of meeting Hollywood cinematic standards. An AI content creator shared a clip from the "F1" movie beside a copy generated by Seedance, highlighting the similarities, and claimed that Seedance "remade the most expensive shot ... for 9 cents."
While AI may be more cost-effective than practical filmmaking, there are still barriers to implementing AI-generated content in the industry. Actors like Matthew McConaughey have already begun trademarking their likenesses to deter AI developers from mimicking them in generated content.
Another barrier is that licensing agreements apply only to the training data for AI models. Agreements thus far include no provisions for sharing profits from the resulting AI-generated content.
Defending intellectual property from mimicry and theft is crucial to creators' careers. Navigating copyright, trademark, and licensing litigation can be difficult, but Miller Shah can help. Our attorneys have experience across a breadth of commercial litigation matters in both state and federal courts.
Looking Forward: AI Copyright Policy and Legislation
While many AI developers face lawsuits, policymakers are racing to keep up with the ever-changing landscape of AI's capabilities.
While none of the bills explored below have been passed by Congress, they show that policymakers are working toward regulating AI activities in the US. These are just a few of the AI-related bills proposed in Congress, with several others gaining momentum.
No AI FRAUD Act
The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act, also known as the No AI FRAUD Act, would establish safeguards against AI copying a person's identity. As Seedance 2.0's videos have shown, AI models can depict individuals saying and doing things they never did.
The entertainment industry depends on people being able to build careers on their features, talents, and original creative work. This legislation could require AI companies to prevent their models from generating clones and deepfakes of real people, protecting artists, actors, and creators worldwide from unauthorized use of their likenesses.
Generative AI Copyright Disclosure Act
The Generative AI Copyright Disclosure Act would require AI developers to prepare a summary identifying any copyrighted works used in a model's training dataset, along with the dataset's URL if publicly available. Developers would submit this summary to the Register of Copyrights, and noncompliance would carry a civil penalty of at least $5,000.
This proposal aims to demystify the training datasets used in generative AI models. Currently, there is no publicly available information about the content ByteDance used to train Seedance 2.0. Requiring companies to disclose all copyrighted content in their training data would let users see what generated content draws from.
AI LEAD Act
The Aligning Incentives for Leadership, Excellence, and Advancement in Development Act, also known as the AI LEAD Act, takes a different approach to regulating AI models: it would place liability on AI developers and deployers.
The Act addresses concerns that AI chatbots generate dangerous and sometimes deadly content, such as details about self-harm, suicide, eating disorders, and substance abuse. Policymakers seek to place accountability on the companies creating tools that may ultimately harm users.
Domestic and foreign AI developers and deployers would be held liable for designing or modifying products in ways that enable them to produce unsafe content. Victims could seek recourse through state Attorneys General and the US Attorney General.
Managing AI Risks Through Proper Company Management
Many companies are already incorporating AI into their business models, drawn by the promise of higher efficiency, broader capabilities, and sophisticated results. However, using AI in business operations carries multiple risks, many of which sit in a legal gray area.
SEC risk factor disclosure requirements
As companies race to adopt and advertise their AI capabilities, a new risk arises: exaggerated claims. "AI-washing" occurs when a company claims its tools use AI in ways they do not. Such misrepresentations affect the decisions of investors, regulators, and consumers.
Further, misrepresenting how a company uses AI may trigger securities liability. Overhyping how effectively the company has implemented AI models can invite litigation.
Misrepresentation and deception fall not only within the securities liability framework; they may also come under the jurisdiction of the Federal Trade Commission and state attorneys general. Any AI involved in company activities should be scrutinized and closely monitored to ensure it is being used as advertised.
As securities litigation rapidly evolves alongside AI regulation, navigating the legal landscape can be challenging. Miller Shah LLP has handled a variety of securities litigation matters, representing both plaintiffs and defendants. If you or your business face disputes, regulatory proceedings, or investigations, we may be able to help.
Board-level liability, C-Suite-level liability
Board members of companies that implement AI in their business models face potential liability. If they fail to create a reporting or compliance system, or consciously ignore red flags within an existing system, board members can be held accountable.
Developing systems for monitoring AI use within the company is not enough. Testing how those systems handle inappropriate use, how behavior is corrected, and how incidents are documented is the baseline. Educating workers on how to use the reporting system is crucial, and ongoing updates should be incorporated as needed.
Failure to establish such systems and address red flags could result in litigation against board members and business executives, depending on who bears responsibility. Responsibility for these processes should be clearly assigned and documented to avoid confusion or finger-pointing.
Fiduciary standards at risk with AI
Because business leaders are responsible for overseeing compliance functions, AI implementation may pose a risk. Relying on AI-generated content to make decisions about heavily regulated operations could be seen as a "mission-critical" risk. Human oversight and human decision-making should remain the final word in compliance practices.
However, the level of risk likely correlates with the level of dependence on AI-generated content: the larger the role AI plays in decision-making, the higher the risk. Companies with limited AI use in their business likely face lower risk.
Coordinating engineering, legal, and compliance teams
For companies that actively develop, modify, and deploy AI models, engineering teams have great influence over the compliance of the model itself. Establishing clear lines of communication among the engineering, legal, and compliance teams helps avoid confusion.
With direct lines of communication to those building and updating the enterprise AI system, companies can streamline changes. Should new policies be enacted, this pipeline would allow quick updates where necessary. As litigation continues to shift the AI legal landscape, adaptability is critical to long-term success.
Conclusion
AI affects every industry, from engineering to entertainment. As generative AI models grow more advanced with each generation, the legal landscape continues to shift.
If you encounter cybersecurity and IT issues involving privacy invasions, unauthorized use, or software negligence, you may be able to hold the perpetrators accountable as a whistleblower. Miller Shah handles cybersecurity and IT whistleblower cases with confidentiality, pursuing the best legal avenues available for our clients.
Whether you are defending your intellectual property, seeking guidance on corporate governance, or pursuing accountability for misuse of AI, Miller Shah has experience in emerging technology litigation. Consider contacting us for a consultation on the details of your case.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.