Welcome to the latest installment of Arnold & Porter's Virtual and Digital Health Digest. This digest covers key virtual and digital health regulatory and public policy developments during July and early August 2025 from the United Kingdom and the European Union.
The UK government has published a number of initiatives and consultation responses this month that include important proposals for devices and digital health. The 10 Year Health Plan and the Life Sciences Sector Plan both refer to the integration of digital health into the National Health Service. Further, the Medicines and Healthcare products Regulatory Agency's statement on the future regulation of devices makes some key changes relevant to software, including that the proposed international reliance pathways will cover software. There has also been useful guidance on synthetic data and on reporting adverse events for software devices. These initiatives continue to demonstrate that digital technologies are seen as a growth area and are important for delivering the government's long-term healthcare plans.
Regulatory Updates
European Commission Launches Public Consultation on Proposed AI Updates to GMP Guidelines. On July 7, 2025, the European Commission published a consultation on proposed updates to the EU Good Manufacturing Practice (GMP) guidelines to reflect the implementation of AI systems in pharmaceutical manufacturing. In particular, a new Annex 22 specific to AI has been introduced that sets out the types of AI models that are allowed in critical and non-critical GMP applications and imposes obligations on companies in relation to how AI systems are validated and monitored. For further details on the amendments, read our July 2025 BioSlice Blog. The consultation ends on October 7, 2025.
Notified Body White Paper on Implications of EU AI Act for AI-Driven Medical Devices. As set out in our July 2025 Digest, the EU Medical Device Coordination Group and the AI Board have published an FAQ document on the interplay of the EU AI Act with the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR). Now, the Notified Body TÜV SÜD has published a paper on the regulatory overlap and misalignment between the EU AI Act and the MDR/IVDR in relation to risk classification, transparency, medical device software, and conformity assessments. Medical device manufacturers are encouraged to: (1) engage notified bodies experienced in both the EU AI Act and the MDR; (2) implement a robust AI risk management system and GDPR-aligned data governance practices; (3) start voluntary compliance with the EU AI Act ahead of the August 2027 enforcement date; and (4) track national regulatory sandboxes for innovative AI systems.
European Medicines Agency (EMA) Publishes First AI Observatory Report on AI Use in Medicines. The report outlines AI applications in medicine development and regulation across the EU in 2024. Alongside this, the EMA published a horizon scanning report on AI/machine learning applications, highlighting AI and machine learning opportunities and challenges throughout the medicines lifecycle. The report notes that most AI use cases in 2024 focused on pre-authorization stages, although potential AI uses also include predicting long-term clinical outcomes, screening social media to detect adverse events, and assessing endpoints from digital health technologies.
UK Government's 10 Year Health Plan and Life Sciences Sector Plan Support Integration of Digital Health Into the NHS. Under the "analogue to digital" aim of the 10 Year Health Plan, the government proposes expanding the National Health Service (NHS) app into a single access point for all of an individual's health needs and introducing a "HealthStore" for patients to access approved digital tools to manage their conditions. Data, AI, and wearables are identified as key transformative technologies that will help to deliver reform, and the National Institute for Health and Care Excellence's technology appraisal process will be expanded to cover devices and digital products. Under the Life Sciences Plan, a major action includes the establishment of a new secure and AI-ready Health Data Research Service to enhance access to NHS data for research and innovation. Further, an innovator passport will be launched to accelerate the rollout of new medical technologies across the NHS. This will allow technologies that have been robustly assessed by one NHS organization to be adopted more widely, without requiring repeated compliance evaluations by other NHS trusts. Read more about these two plans in our July 2025 BioSlice Blog, together with the response from the Association of British HealthTech Industries.
MHRA Publishes Results From Various Consultations on Medical Device Regulations. As part of reforms to medical device legislation, the Medicines and Healthcare products Regulatory Agency (MHRA) will implement an international reliance framework based on approvals from Australia, Canada, and the U.S., with different access routes depending on the device class and type. In particular, the proposed scope of Route 4 will be expanded to include software as a medical device. In relation to EU CE marks, the MHRA proposes to conduct a further consultation on extending recognition of such marks indefinitely. Regarding in vitro diagnostic (IVD) devices, the MHRA updated the proposal in relation to Class B IVDs, including software, so that manufacturers will be required to self-declare conformity and hold quality management system certification to ISO 13485. Please refer to our August 2025 BioSlice Blog for further information on the outcome of the consultations.
MHRA Guidance on Adverse Event Reporting in Digital Mental Health Technologies (DMHTs). The MHRA has published updated guidance on how the new post-marketing surveillance rules, which came into force on June 16, 2025 (as discussed in our June 2025 BioSlice Blog), apply to DMHTs that qualify as software as a medical device. The guidance provides examples of serious incidents that could occur with DMHTs. For example, a malfunction in a virtual reality therapy could lead to adverse psychological effects, and incorrect AI assessments could lead to misdiagnosis and inappropriate treatment. The aim of the updated guidance is to provide clarity on the risks and the mitigating measures that manufacturers should put in place and to improve regulatory compliance, and it is likely to be useful for software development beyond DMHTs.
Expert Report on the Use of Synthetic Data in the Development of AI as a Medical Device (AIaMD). An expert working group within the MHRA and the PHG Foundation (a non-profit think tank that aims to influence health care policy in relation to emerging technologies) has published a report summarizing regulatory considerations for manufacturers when artificially generated data is used in the development of AIaMD. Key points include that the use of synthetic data should be clearly justified based on technical, ethical, regulatory, or lifecycle considerations, and that companies should assess the quality and suitability of synthetic data, as well as its impact on safety and performance. Although not official MHRA guidance, the principles and recommendations are likely to help companies navigate regulatory submissions where synthetic data has been used in AIaMD development.
IP Updates
UKIPO Rejects NVIDIA's Patent Application Related to AI Medical Technology. On June 27, 2025, the UK Intellectual Property Office (UKIPO) rejected NVIDIA's patent application for an invention concerning the training of a neural network using medical imaging and clinical data to determine the appropriate treatment for patients, one example being predicting how long a patient may need to use an intensive care unit bed. The UKIPO applied the four-step test in Aerotel and considered the AT&T signposts, concluding that while NVIDIA's invention generated a recommendation to a medical practitioner, the decision-making process to follow that recommendation was not part of the claimed invention. Although NVIDIA argued that the system provided objective, technical recommendations for resource allocation, the UKIPO found the problem, as well as the solution, to be purely administrative (i.e., non-technical) in nature. The earlier case of Emotional Perception, which we reported on in our December 2023 Digest and our September 2024 Digest, was distinguished on the basis that NVIDIA's invention lacked a concrete technical output. Ultimately, the application was refused under Section 18(3) of the Patents Act 1977, as the neural network was treated as a computer program and a method of doing business as such. The UK Supreme Court heard Emotional Perception's appeal in July 2025, and its judgment is likely to have a significant impact on how patent applications relating to AI will be handled in the UK.
Product Liability Updates
Review Underway Into the UK Product Liability Rules. The UK's Law Commission has announced that it has started a review into whether the Consumer Protection Act 1987, the UK's current strict liability regime for damage caused by defective products, is still fit for purpose, particularly in the context of digital technologies. The review will propose what law reforms may be required to make the regime more appropriate for the digital age and to establish a better balance between protecting consumers from harm caused by defective products and supporting innovation. Similar reasons underpinned the reform of the EU Product Liability Directive; the revised Directive will apply in Member States beginning on December 9, 2026, but will not apply in the UK. The Law Commission invites stakeholders to share their views on these issues. See our blog post.
Study on the Civil Liability for AI Systems Published by the European Parliament. The study starts by noting that existing tort and contract law regimes should be able to address novel harms, and that a dedicated set of rules is not strictly necessary. However, dedicated rules would provide predictability, efficiency, and harmonization where Member States might otherwise create divergent regulatory frameworks for AI liability. The study critically analyzes the limitations of the original Product Liability Directive from 1985 and considers whether the revised Product Liability Directive addresses those limitations. Following the European Commission's withdrawal of its proposal for an AI Liability Directive in February 2025, the study presents four policy options. It primarily advocates for the creation of a new, standalone strict liability regime for high-risk AI systems, which would impose liability on a single operator that controls the AI system and benefits economically from its use. Failing that, a specific fault-based liability regime for high-risk AI systems would be an improvement on the AI Liability Directive as previously proposed.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.