The House of Lords Communications and Digital Committee (the "Committee") has published its report on AI, copyright and the creative industries. The headline is clear: the Committee backs a licensing-first model, rejects a commercial text and data mining (TDM) exception, and calls for mandatory transparency on content used for training, protections for personality rights and "sovereign AI" (the UK's domestic AI capability). However, the report's most significant contribution may be its candid treatment of the enforcement problem - in particular, whether a licensing-first regime can reach models trained in permissive jurisdictions abroad and deployed in the UK via model weights that may not store copies of the underlying works, as the High Court found in Getty Images v Stability AI [2025] EWHC 2863 (Ch) (see our post on this decision here), a decision now under appeal.
The Committee's report landed shortly before the Government's statutory deadline of 18 March 2026 to publish its economic impact assessment and policy report under the Data (Use and Access) Act 2025 ("DUAA"). In the DUAA, the Government committed to providing that report, including a formal response to the IPO's consultation on Copyright and AI which took place at the beginning of 2025. The report is required to consider each proposal from the consultation and set out the Government's position. However, there are indications, reported in The Financial Times, that the Government is unlikely to commit to a policy direction, with legislation now unlikely before 2027.
In this post we look at the background context in which the Committee's report was published; consider its recommendations (which, simply put, fall into the following categories: transparency obligations, new or expanded rights in respect of digital likeness and stylistic output, provenance and labelling requirements, and measures aimed at supporting a licensed market for AI training data); compare the UK and German approaches to the question of whether an AI model embodies a copy of a work ("memorisation"); and look ahead to the Government's report due on 18 March – but first we consider what that report may, or may not, cover.
What to expect on 18 March and beyond
The DUAA requires the Government to publish, by 18 March, an economic impact assessment (under section 135 DUAA) and a report on copyright works in AI development covering transparency, licensing, technical standards and enforcement, including in relation to AI systems developed outside the UK (under section 136 DUAA).
There are, however, clear indications that these publications will fulfil the statutory obligation without committing to a policy direction. Both the Secretaries of State (for DSIT and DCMS) told the Committee that the Government would not be setting out its final position in March (paragraph 62, Report). The Financial Times has reported that ministers have decided to go back to the drawing board, with no AI bill expected in the King’s Speech (May) and legislation pushed to next year. The consultation reportedly surfaced proposals for more targeted, sector-specific exemptions, distinct from the four original options, which ministers want time to develop.
If the Government is indeed considering sector-specific or use-case-specific carve-outs rather than a blanket TDM exception, this could represent a fifth option that seeks to balance the competing interests in a more granular way. However, it would raise its own questions about scope, the Berne three-step test, and the risk of inconsistent treatment across creative subsectors.
In the meantime, three developments will shape the practical landscape. First, the Getty appeal, which will determine whether UK law has any purchase over models trained abroad but deployed domestically. Second, the outcome of OpenAI’s appeal against the GEMA ruling in Germany, and the CJEU’s pending consideration of related questions in Like Company v Google (C-250/25), both of which will affect the European approach to memorisation and the scope of the TDM exception. Third, whether the Government’s eventual policy response addresses the structural enforcement gap that the Lords report itself identifies, without which a licensing-first regime, however well-designed, will primarily bind only those who choose to comply.
Background to the Committee's report
The Committee's report follows the Government’s consultation on Copyright and AI (December 2024 – February 2025), which sought views on four policy options ranging from maintaining the status quo (option 0) to introducing a broad commercial TDM exception with a rights-reservation (opt-out) mechanism (option 3). The Government had initially presented option 3 as its preferred approach, mirroring elements of Article 4 of the EU Copyright in the Digital Single Market Directive (CDSMD), but then withdrew this, announcing a “reset” and stating it no longer had a preferred option (see our earlier post, UK Government consults on Copyright and AI).
During the passage of the Data (Use and Access) Bill, the House of Lords raised the issue of transparency over the use of content in training AI models and attempted to insert provisions into the Bill to ensure that the owners of that content could be notified of its use (see our blog post Data (Use and Access) Bill amendments could require transparency from webcrawlers and AI machines marketed at the UK). The Government refused to accept the amendments, and the impasse was eventually resolved by the House of Lords extracting a commitment from the Government, written into the statute, to publish an economic impact assessment and a policy report setting out its plans to deal with these issues (under sections 135 and 136 DUAA) within nine months of the Bill being enacted. That deadline falls on 18 March 2026.
The consultation on Copyright and AI had not yet reported its conclusions at the time of the House of Lords' proposals on the DUA Bill. The report required by the DUAA effectively formalises the Government's response to the consultation (section 136). The DUAA also required (section 137) a progress statement to be issued after six months if the main report had not been published by then. The Government therefore issued its progress statement in December 2025. This reported that the consultation received over 11,500 responses. Among Citizen Space respondents, 88% supported licensing in all cases (option 1), while only 3% supported option 3 and 0.5% supported a broader TDM exception without rights reservation (option 2).
The recommendations of the House of Lords Committee report in brief
The Committee’s core position is that the Government should rule out any reform of the Copyright, Designs and Patents Act 1988 (CDPA) that would remove the incentive to license copyright works for AI training (paragraph 39) and should follow Australia in publicly ruling out a commercial TDM exception with an opt-out mechanism (paragraph 177). In the Committee’s view, the tech sector’s demand for such an exception implies that existing law does not clearly permit large-scale commercial training, meaning the push is to weaken protection, not clarify it (paragraphs 36–37).
Beyond this, the report recommends:
- Mandatory statutory transparency (paragraphs 116–118): on training data, going beyond the high-level summaries required under the EU AI Act, with a regulatory body empowered to set reporting standards and enforce compliance. The Committee also proposes confidential granular disclosures to a regulator, modelled on the EU’s General-Purpose AI Code of Practice.
- Personality, digital replicas, and "in the style of" protections (paragraph 84): Two observations are worth making here.
- The case for addressing digital replicas - deepfakes, voice cloning, synthetic performances - rests on a gap in existing protections that is well-documented: the person depicted or recorded is often not the copyright holder, performers’ rights under Part II CDPA do not include a right of adaptation, and the tort of passing off assists only those with established goodwill.
- However, it is worth noting that the “in the style of” limb would extend protection to territory copyright has traditionally excluded. The scope and design of any such right will need to address how it interacts with the foundational idea/expression dichotomy, which the Committee’s own witness Dr Guadamuz described as essential to avoid monopolising artistic techniques and conventions (paragraph 73).
- Provenance and labelling standards (paragraphs 164, 203): a “triple lock” of signed metadata, watermarking and fingerprinting, with legislation considered for mandatory labelling of AI-generated content.
- Sovereign AI (paragraph 135): prioritising the development and adoption in the UK of models that build in, by design, transparency and copyright compliance meeting the Government's standards.
- Supporting an AI-licensing market (paragraph 228): including exploring unwaivable equitable remuneration rights for individual creators subject to mandatory collective management.
The jurisdiction gap: Getty, GEMA and the memorisation question
The report engages directly with the enforcement challenges exposed by the High Court’s judgment in Getty Images v Stability AI [2025] EWHC 2863 (Ch) (Getty fails in the UK courts). Getty abandoned its primary copyright infringement claim as there was no evidence of training in the UK. The High Court was therefore not asked to rule on whether training using copyright works without a licence constitutes infringement. The remaining secondary infringement claim also failed: the court found that Stable Diffusion’s model weights do not store or reproduce the underlying works and therefore are not “infringing copies” under sections 22–23 CDPA. The case is now under appeal.
The court's consideration of memorisation in Getty warrants particular attention. The court accepted broadly unchallenged expert evidence that the model weights were “purely the product of the patterns and features which they have learnt over time during the training process” and did not store copies of training images. Getty could not assert as a fact that the weights included a copy of any copyright work as there was no evidence of memorisation in the specific model at issue (Stable Diffusion, a latent diffusion model).
One week later, the Munich Regional Court in GEMA v OpenAI (42 O 14139/24) reached the opposite conclusion on the core question of memorisation (see our post, Munich court finds copyright infringement of song lyrics ‘memorised’ by ChatGPT). That court found that ChatGPT models 4 and 4o had memorised song lyrics, making them “reproducibly contained in the model and thus embodied”, and held this constituted reproduction under German copyright law. The TDM exception under section 44b UrhG was held inapplicable on the basis that memorisation exceeds the analytical purpose the exception covers. OpenAI is appealing.
This divergence is not simply a result of different courts applying different legal rules. It may also reflect differences in how the relevant technologies operate. Stable Diffusion is trained on image data, whereas ChatGPT is trained on text. Language models generate outputs by predicting the next element in a sequence of text, while diffusion‑based image models generate images by learning patterns across visual data through a process of iterative denoising. These differences may be relevant to questions about how training data is used and the extent to which outputs resemble that data, although their legal significance will depend on the facts of any particular case.
Getty was granted permission to appeal in December 2025 on a pure question of statutory construction: whether “infringing copy” in sections 22–23 CDPA requires an article to contain reproductions of works, or whether a broader reading is available. The Court of Appeal will not consider the question of whether Stable Diffusion’s weights store copies, as this point was not pleaded by Getty. Both parties in Getty agreed that there was no "copy" as such embodied in the AI service in question, but the memorisation question remains significant beyond this appeal. A future claimant bringing a claim in respect of a different model architecture, particularly an LLM, could seek to demonstrate different facts and pursue a memorisation argument of the kind deployed in the GEMA case in Munich.
These enforcement challenges are compounded by the territorial nature of copyright. The Committee observed that most large-scale training happens outside the UK (paragraph 118), and evidence from Dr Trapova of UCL described this as the “elephant in the room”, since UK courts may lack jurisdiction where training occurs entirely abroad (paragraph 47).
A comparison of approaches in different jurisdictions highlights the issue. The report includes a table of TDM provisions in selected jurisdictions. Several major jurisdictions have enacted permissive regimes expressly covering commercial AI training: Japan’s Copyright Act, Article 30-4, provides a broad exception for “non-enjoyment purposes” with no opt-out; Singapore’s Copyright Act 2021, Section 244, permits computational data analysis for commercial purposes with no opt-out and no contractual override; and the US fair use doctrine is being actively tested. A model trained under any of these regimes, deployed in the UK via model weights that per Getty do not store copies, would appear to fall outside the UK's law on secondary copyright infringement through importation, as currently interpreted.
The Committee’s proposed solutions include market-access transparency requirements modelled on the EU AI Act’s Article 53, public procurement leverage, and the suggestion (from Dr Trapova) that unfair competition law could provide an alternative basis for challenging models trained on unlicensed content abroad, though it acknowledged the last of these would be “ambitious” given the UK’s historical position of not having a specific law of unfair competition (paragraph 123). These remain largely untested.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.