The rapid advancement of artificial intelligence (AI) technologies is poised to influence public law deeply, and in recent years has prompted extensive discussion, not least on this blog, about the need for regulatory frameworks and what might constitute effective safeguards.
As in previous blogs, we look at the principal developments in this area through two distinct lenses. The first examines how AI may be used by public body decision-makers, such as regulators, and assesses the potential consequences and implications of such use. The second considers approaches to the broader regulation of AI, including its use by individuals and private entities, in the context of what appears to be a shifting balance between innovation and oversight.
As we come to the end of 2025, this is a good time to take stock of where we are on both fronts, and what to keep an eye out for in 2026.
Public body use of AI
A key topic of discussion has been the adoption of automated decision-making (ADM). ADM encompasses decisions made by or for public bodies using algorithmic processes and AI, involving varying degrees of human input and oversight. There is currently no specific legal framework governing the use of ADM by the state to make decisions affecting private citizens or organisations, and there are questions over whether the existing framework of public law principles enforced through judicial review is adequately developed or appropriate to provide the necessary oversight in this novel area (see here for a more detailed discussion on public law issues in the use of AI).
With public services under strain and public bodies continually under-resourced, the Government is keen to explore and expand the use of AI in public sector decision-making, given its potential to increase efficiency and reduce costs. A recent speech by Lord Sales highlights the prevalence of AI in public administration and the possibility of more substantive benefits, such as reducing capriciousness through the consistent application of rules. However, in the absence of proper consideration of safeguards, there are many reasons why caution should be exercised before extending this too far or, as Lord Sales puts it, naively assuming that the spread of ADM will bring us closer to an ideal. For example, at the more extreme end of decision-making, one of the most pressing questions is whether a machine-made decision can properly be regarded as having been made by an "independent and impartial tribunal established by law" under Article 6 of the European Convention on Human Rights.
The protection of human rights is a central consideration in the uptake of AI in public body decision-making. The Joint Committee on Human Rights is conducting an inquiry into how human rights can be protected in the age of AI. In particular, it seeks to examine how AI can affect human rights with reference to privacy and data usage, discrimination and bias, and remedies for the violation of human rights. The inquiry also asks whether protections under the existing regulatory framework, as well as under the Government's AI Opportunities Action Plan, are sufficiently robust, and what changes should be made to ensure their effectiveness.
The Joint Committee raises fundamental questions as to the future of AI regulation, such as the extent to which the same human rights standards should apply to the use of AI by private and public actors, and who should be held accountable for human rights breaches resulting from AI usage. It will also explore how much of a difference the Council of Europe's Framework Convention on Artificial Intelligence, the first legally binding international treaty on AI, will make to the protection of human rights in the UK.
In an attempt to answer some of these outstanding questions, in September the Law Commission announced a project on public sector ADM as part of its Fourteenth Programme of Law Reform. It acknowledges that fundamental legal questions, such as whether it is lawful to use ADM to discharge a particular statutory function, remain unanswered, and that judicial review may not be well suited to scrutinising decisions made using ADM for various reasons (listen to our podcast on that issue here). The Law Commission considers that "[d]eveloping a coherent legal framework to facilitate good and lawful ADM can reasonably be described as the most significant current challenge in public law". Its goal is to make recommendations as to the legal framework necessary to promote good, lawful ADM, based on consultations with experts and the public.
Both this and the Joint Committee on Human Rights inquiry are areas to watch closely.
Broader regulation of AI
Although early discussions around AI regulation heavily emphasised oversight, accountability, safety and security, there appears to be a change in tone from the Government, with the focus now leaning more towards innovation.
In the autumn, the Digital Regulation Cooperation Forum (DRCF, made up of the CMA, Ofcom, the ICO and the FCA) published insights gathered from its AI & Digital Hub pilot. The pilot consisted of a multi-agency advice service designed to support innovators navigating the evolving regulatory landscape for AI and digital technologies. The Hub offered free, informal, cross-regulatory advice aimed particularly at organisations whose propositions spanned the remits of at least two DRCF regulators. The DRCF reported that user feedback following the trial was predominantly positive: innovators reported having benefitted from the Hub, as informal advice "increased their awareness of digital regulatory requirements and gave them the confidence to focus their efforts on the UK market". The Hub also fostered cooperation between innovators and regulators, helping to identify how regulatory processes could be streamlined to close the gap between regulatory procedure and practical workability. The report suggests that the "AI & Digital Hub has demonstrated the value of a new model of regulation, one that is collaborative, adaptive, and connects with real world innovation".
Building on this, the Department for Science, Innovation and Technology recently announced what was hailed as a new blueprint for AI regulation, aimed at accelerating innovation and cutting bureaucracy while ensuring the development and deployment of AI products is conducted safely and responsibly. The new AI Growth Lab will pilot a system for companies and innovators to test new AI products in real-world conditions by temporarily relaxing some rules and regulations, under strict supervision.
Although this sandbox model has already been adopted internationally, with jurisdictions such as the EU, the USA, Japan and Singapore running similar schemes to speed up the safe deployment of AI, the AI Growth Lab appears to go further than previous uses in the UK, as it is intended to operate on a much larger scale, across the economy. The details are yet to be worked out, with a call for evidence seeking views on issues such as whether it should be run centrally by Government or whether individual regulators should run their own sandboxes.
One possibility put forward is that, if certain rules are identified through the pilot as unnecessary or as creating an unjustified regulatory barrier, permanent changes may be made, potentially using streamlined powers. This suggests the Government may consider bypassing the normal routes for amending regulation or legislation, which could give rise to a whole range of issues. Importantly, there will be some "red lines", ie certain rules that should never be relaxed or removed, for example in the areas of consumer protection, safety and human rights, and the call for evidence asks where these red lines should be drawn.
The call for evidence also seeks views on the appropriate level of scrutiny for oversight of the sandbox, and suggests that primary legislation could give ministers a power to create individual sandboxes via secondary legislation. That secondary legislation would enable time-limited, targeted modifications to specified sectoral regulations where they are hindering AI adoption. In turn, licences for participating firms would specify innovation-specific safeguards, monitoring and restrictions. To those in regulated industries, such a model may sound strikingly familiar, which raises the question: how innovative and different will this proposed system for the regulation of AI really be?
Conclusion
The rapid advancement, adoption and influence of AI present challenges which require either the swift adaptation of existing regulatory frameworks or the formulation of new ones. Recent pilot initiatives attempt to approach regulatory and legislative change in a new way, promoting dialogue, cross-regulatory collaboration and adaptability to ensure that regulation imposes safeguards without hindering innovation.
Many fundamental questions remain to be answered, such as the role we want AI to play in our society, the level of risk we are willing to take in the name of innovation, and how regulation can hope to keep pace with technological developments. Pilots such as the AI & Digital Hub have been lauded on the basis that, if this collaborative, adaptive and industry-based approach to regulation proves effective, it may present an opportunity to change the way we approach regulation itself. But a small-scale pilot under tightly controlled conditions is rather different from wholesale regulation. As the suggestions around the AI Growth Lab demonstrate, it may not be that easy to come up with truly new models of regulation that achieve the magic balance between innovation and oversight. Perhaps someone should ask Gen AI?