The EU "Digital Omnibus" Proposal: A Strategic
Pivot for AI and Data Compliance
Joe Cahill
On November 19, 2025, the European Commission published a package of legislative proposals known as the "Digital Omnibus." Designed to introduce flexibility into the EU's digital regulatory framework, the proposal aims to simplify the General Data Protection Regulation (GDPR), the AI Act, and the Data Act to encourage innovation and reduce administrative burdens. The most immediate operational impact for businesses is a significant restructuring of the compliance timeline for high-risk AI systems. Under the current AI Act, obligations for high-risk systems listed in Annex III were set to apply from August 2, 2026. The proposal replaces this fixed date with a conditional timeline: high-risk obligations would apply six months after the Commission formally confirms that the relevant harmonized technical standards are available, with a deadline of December 2, 2027, at the latest.
The proposal also introduces amendments to the GDPR to facilitate the processing of personal data for AI model training, confirming that "legitimate interest" is a valid legal ground for processing personal data in the context of developing and operating AI systems, provided specific safeguards are met. Additional relief measures include extending the deadline for providers of AI systems generating synthetic content to comply with machine-readable marking requirements from August 2, 2026, to February 2, 2027, and extending the simplified compliance regime to "Small Mid-Cap" enterprises with fewer than 750 employees and less than €150 million in revenue. It is important to note that the Digital Omnibus is currently a legislative proposal that must be reviewed and adopted to become law.
Read more here.
Italy's AI Law: A National Framework Bounded by the EU AI Act
Joe Cahill
On October 10, 2025, Italy became the first EU member state to enact comprehensive national AI legislation when Law No. 132/2025 entered into force. The law establishes governance structures, sector-specific rules, and enforcement mechanisms for AI systems operating in Italy. However, its scope is explicitly bounded by EU Regulation 2024/1689 (the EU AI Act). Article 1(1) directs that the Italian law must align with and operate within the European AI regulatory structure, while Article 3(5) prohibits imposing obligations that exceed those already established under EU law. This interpretive framework arose from the European Commission's scrutiny during the TRIS notification process and fundamentally constrains how the law must be applied.
The law's most significant contribution lies in its governance architecture rather than substantive obligations.
Read full post here.
Getty Images Succeeds on Trademark Infringement Claim Against Stability AI in UK Court
Coleman Strine
On November 4, 2025, the High Court of Justice in London ruled that older versions of Stability AI's Stable Diffusion system infringed Getty's trademarks when they produced images bearing Getty-branded watermarks. The court found that the inclusion of Getty's watermarks in Stable Diffusion's output images would lead consumers to think the images had been sourced directly from Getty, or that an economic link, such as a licensing arrangement, existed between the companies. However, the ruling was limited in scope, as newer versions of Stable Diffusion were trained to avoid reproducing similar watermarks. Getty had also brought tarnishment and unfair advantage claims, which failed because Getty was unable to show that Stable Diffusion generated the watermarks in connection with reputationally damaging images in the real world.
Significantly, the court also found that Getty's secondary copyright infringement claim failed because the Stable Diffusion models were not, themselves, infringing copies of Getty's works under the UK Copyright, Designs and Patents Act 1988 ("CDPA"). In particular, the court noted that "model weights are not themselves an infringing copy and they do not store an infringing copy. They are purely the product of the patterns and features which they have learnt over time during the training process." Accordingly, where a model is trained outside the UK but its model weights are made available for download within the UK, liability for secondary copyright infringement may be avoided, as importing the model weights is not considered importation of an "infringing copy" under the CDPA.
While this decision opens the door for importation of AI models and systems into the UK, companies engaging in AI model training or distribution should carefully monitor regional developments to determine any risk exposure prior to engaging in business activities.
Regulating Minors' Interactions with AI Companion Chatbots
Ariel House
The rampant use of AI chatbots among teens and young adults, and the attendant risks of interacting with these chatbots, has raised concerns among lawmakers. A new bill recently introduced in Congress, the GUARD Act, would impose significant new requirements on companies providing AI chatbots, including a total ban on minors using certain kinds of AI chatbots. The introduction of the GUARD Act, with its broad bipartisan support, marks a departure from Congress's previous deregulatory stance on AI development and signals that Congress may be ready to mandate stricter rules on AI where the safety of minors is concerned. If passed, the bill could reshape how AI companies design and manage their chatbots, particularly with respect to requiring user accounts, incorporating disclosures into interactions, and implementing age-verification measures (along with addressing the associated privacy concerns).
Meanwhile, California recently became the first state to pass a law imposing moderate protections on minors interacting with certain AI chatbots. AI companies should promptly determine whether their chatbots fall within the scope of the new California law and, if so, ensure compliance with its requirements. While California enacted a fairly modest set of new requirements for AI companion chatbots under SB 243, the potentially onerous requirements of the proposed GUARD Act, including its complete ban on use by minors, should prompt AI companies to seriously consider updates to the design and management of their chatbots. AI companies should keep a close eye on the rapidly evolving legislative landscape and the potentially serious issues associated with the deployment of their technology.
Quick Links
For additional insights on AI, check out Baker Botts' thought leadership in this area:
- Navigating the New Landscape of AI Patent Protection: IP Partner Chris Palermo examines the challenges for patent prosecution and provides strategic recommendations.
- Sustainable Density in the Age of AI: Partner Mona Dajani discusses how the relentless demand of artificial intelligence is fundamentally restructuring the energy industry.
- Director Squires Designates Ex Parte Desjardins Decision as Precedential: Senior Associate Nick Palmieri provides a short update on the decision.
- Power Is the New Data: The Infrastructure Shift Driving the Next Wave of Digital Growth: According to Partner Mona Dajani, the new race is not only for computing dominance, but for the energy infrastructure capable of sustaining it.
- Canada's Competition Bureau Offers Practical Guidance on Algorithmic Pricing: Senior Associate Nick Palmieri reviews the Bureau's position statement and the takeaways for AI in Canada.
- AI Counsel Code: Stay up to date on all the artificial intelligence (AI) legal issues arising in the modern business frontier with host Maggie Welsh, Partner at Baker Botts and Co-Chair of the AI Practice Group. Our library of podcasts can be found here, and stay tuned for new episodes coming soon.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.