28 April 2026

Procuring Trust: Managing AI Risks In Public Sector Contracts

Gowling WLG

Contributor

Gowling WLG is an international law firm built on the belief that the best way to serve clients is to be in tune with their world, aligned with their opportunity and ambitious for their success. Our 1,400+ legal professionals and support teams apply in-depth sector expertise to understand and support our clients’ businesses.
Public sector organisations face unique challenges when procuring AI-enabled services, from defining precise use cases to managing complex risks around transparency, bias, and accountability. This article examines the practical procurement considerations and governance frameworks needed to ensure AI systems remain safe, reliable, and compliant throughout their operational lifecycle.

AI is already part of public sector delivery, from decision support and monitoring to service delivery and internal efficiency. The procurement challenge is not simply buying an “AI‑powered” tool. It is identifying the right use case, setting the right requirements, allocating risk fairly and putting governance in place so the system stays safe and reliable once it is live.

If you are looking for a short definition of AI and how it works, we have covered that in our recent insight: Artificial Intelligence in business: a starter for boards.

This article focuses on what public sector and in‑house teams typically need most at procurement stage, the key AI risks that show up in public sector contracts and the practical steps that help manage them.

AI in the public sector: a quick context

Public sector use cases are often reported using four categories: supporting decision-making, research and monitoring, delivering public services and improving efficiency. Examples of existing use include an algorithm that identifies companies most likely to succeed in exporting by mining online Companies House data, habitat mapping using satellite imagery to support environmental policy, and machine learning to support sensitivity reviews before records are transferred to the National Archives.

These use cases vary widely. That matters because “AI risk” is rarely a single issue. It is a combination of what the system does, how it is used in the process, what data/materials it uses and what happens when outputs are wrong or hard to explain.

“AI‑powered” is not a contract requirement

There is no single definition of "AI". The definition currently adopted by the OECD and used in the UK government's AI Playbook describes AI as a machine-based system that infers from inputs to generate outputs such as predictions, recommendations, content or decisions, with varying levels of autonomy and adaptiveness after deployment. "AI" is used to refer to many different technologies, with differing capabilities and functionalities, often layered together.

For procurement teams, the point is practical: a label does not tell you what you are buying. You still need to pin down the precise use case, the role AI plays and how it will do that. Understanding this is essential in ensuring that your procurement and contract documentation address the practical, technical and legal considerations that arise.

UK approach: context-based regulation, cross-cutting obligations

The UK’s approach to AI regulation is context-based and proportionate, relying on existing sectoral laws rather than a single AI regulation. In practice, procuring an AI product can engage a broad set of issues, including transparency and fairness, privacy, bias and discrimination risk, intellectual property, confidentiality, cyber security, liability and compliance with changing law (often across jurisdictions).

This is why the most effective procurements of AI bring legal, procurement, technical and operational teams into the same process early. Many of the controls that matter most sit in testing evidence, governance, and how the service will run after go‑live, not only in the drafting.

Key AI risks in public sector contracts (and what to focus on)

1. Transparency and explainability

Modern AI systems are statistical models and can be difficult to interpret, particularly where complex architectures make it hard to trace how a specific input led to a specific output. In the public sector, this is particularly challenging because decision-making processes often need to be explainable and open to challenge.

What to focus on in the contract and delivery model: clarify where AI sits in the process, what information the supplier can provide to explain outputs, and what happens when an output needs to be reviewed.

2. Accuracy, misinformation and hallucinations

AI systems generate outputs that are statistically plausible rather than verified as factually accurate, and generative AI tools are well known to produce false information confidently, a failure referred to as "hallucination". This becomes a contract and governance issue when users rely on outputs as if they were verified facts.

What to focus on: define how accuracy will be tested in your use case, agree what the supplier will measure and report, and put user guidance in place so teams know when they need to verify outputs.
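By way of illustration only, the agreed accuracy measure could be as simple as scoring outputs against a labelled reference set during acceptance testing. The sketch below assumes a pass/fail threshold and helper names chosen purely for this example; the actual measures and thresholds should be agreed between buyer and supplier for the specific use case.

```python
# Minimal sketch of a use-case accuracy check against a labelled reference set.
# The 0.95 default threshold is an illustrative assumption, not a recommended value.

def accuracy_against_reference(outputs, reference):
    """Share of tool outputs that match the agreed 'verified' answer."""
    if len(outputs) != len(reference):
        raise ValueError("outputs and reference must be the same length")
    matches = sum(1 for got, want in zip(outputs, reference) if got == want)
    return matches / len(reference)

def acceptance_report(outputs, reference, threshold=0.95):
    """Produce the figures a supplier might be asked to measure and report."""
    score = accuracy_against_reference(outputs, reference)
    return {
        "accuracy": score,
        "threshold": threshold,
        "passed": score >= threshold,  # feeds acceptance testing / supplier reporting
    }
```

The value of even a simple check like this is that it turns "the tool must be accurate" into a measurable obligation tied to the buyer's own reference data.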

3. Bias, discrimination and data quality

Bias is an inherent AI risk, often driven by incomplete or unrepresentative training data. Insufficient data can limit a model’s ability to generalise, while poor-quality, biased or noisy data can produce inaccurate outcomes.

What to focus on: require evidence of bias and performance testing that reflects the population and context the tool will operate in and build in checks that continue after go‑live (not only at procurement stage).
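To make that requirement concrete, bias and performance testing evidence can be asked for on a per-group basis rather than as a single headline figure. The sketch below is illustrative only: the grouping, the disparity threshold and the function names are assumptions for this example, not a legal or statistical standard.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, prediction, actual) tuples.
    Returns accuracy per group, so disparities between groups are
    visible rather than hidden in a single headline figure."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        if pred == actual:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

def disparity_flag(group_scores, max_gap=0.05):
    """Flag if the gap between the best- and worst-served group exceeds
    max_gap. The 0.05 gap is an illustrative assumption to be agreed
    for the population and context the tool operates in."""
    gap = max(group_scores.values()) - min(group_scores.values())
    return {"gap": gap, "flagged": gap > max_gap}
```

Run on live data at agreed intervals, the same check supports the post-go-live monitoring the contract should require, not just a one-off test at procurement stage.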

4. Data rights, confidentiality and intellectual property

IP ownership questions arise across the AI stack: the underlying code, any fine‑tuning, outputs, training data, and whether your data is used to train models or could appear in someone else's outputs. Confidentiality and licensing issues also need consideration: holding data does not automatically mean it can be used with AI in the way a supplier proposes.

What to focus on: map data flows early, confirm permitted use (including for any fine‑tuning) and make ownership and restrictions on outputs and training explicit in the contract.

5. Cyber security and supply chain exposure

Cyber risks around AI are twofold: protecting the AI service itself (including third parties in the supply chain) and recognising that AI can increase the capability of attackers. That makes cyber security a procurement issue as well as a technical one.

What to focus on: security assurance, incident notification obligations, and clarity on who does what when something goes wrong, including the supplier’s role where third-party components are involved.

6. Liability and accountability in practice

One of the big commercial issues to be ironed out in AI procurements is liability: who is liable if the AI gets it wrong, who is responsible if training data was not properly licensed, and who carries risk for IP or privacy problems in outputs. Risk allocation often differs depending on whether you are buying an off‑the‑shelf service or fine‑tuning for a high‑impact use case.

What to focus on: align liability positions with practical control. If the customer controls the data and fine‑tuning, risk allocation should reflect that; if the supplier controls model design and updates, obligations should reflect that.

For a broader contracting lens, read our article: What do customers need in contracts for AI products?

One place to start: an “AI Playbook” view of procurement

The Government’s AI Playbook is a thorough, stage‑by‑stage guide for public sector AI use. In procurement terms, the most helpful way to use that approach is to reduce it to four workstreams that should be visible in every AI-enabled service deal:

  • Use case and impact: define the problem you are solving, what success looks like and whether the tool influences decisions that affect individuals.
  • Data and rights: confirm what data is used, whether it includes personal/confidential information and what rights and restrictions apply.
  • Testing and assurance: gather evidence of testing (accuracy, bias, unintended outcomes) and security assurance and run acceptance testing that reflects real operational use.
  • Live governance: agree how the tool will be monitored, how updates are controlled, and what happens when issues arise (including routes for review and redress where relevant).

This keeps the focus where it needs to be: contracts support good practice, but it is operational controls that keep AI-enabled services working as intended over time.

Procurement checklist: build an evidence pack you can rely on

Public sector teams often ask for a checklist of “questions to ask suppliers”. That can help, but it is easy to end up with a long list and little usable evidence. A better approach is to ask for (and create) a small set of artefacts that make risk visible and manageable.

Here are five items that tend to add the most value:

  1. A one-page use case brief (owned by the buyer): Set out what the tool is for, what it will influence, and what success looks like (including acceptable error levels where relevant). This becomes the reference point for testing and governance.
  2. A data map and permissions note (buyer + supplier): Document what data is used, where it flows and what rights and restrictions apply (including confidentiality and any third-party licence limits).
  3. A testing and assurance summary (supplier evidence, buyer validation): Require evidence of testing for accuracy, bias and unintended outcomes, and clarity on security testing and assurance. Then define what you will validate during proof of concept or acceptance testing in your environment.
  4. A live monitoring and change plan (joint governance document): Set out what will be monitored (including drift, bias and unexpected outputs), how often, who reviews results and how updates will be released and checked.
  5. A review and escalation route (especially where individuals are affected): Define how outputs are reviewed, how issues are raised and how the organisation responds, including feedback and redress mechanisms where appropriate.
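The live monitoring and change plan in item 4 can be underpinned by simple, agreed checks. As one hedged illustration, drift in a tool's outputs can be spotted by comparing the rate of a given output in a live window against the baseline established at acceptance testing; the tolerance and function names below are assumptions for this sketch, to be replaced by whatever the joint governance document actually agrees.

```python
def output_rate(outputs, label):
    """Proportion of a window's outputs that equal the given label."""
    return sum(1 for o in outputs if o == label) / len(outputs)

def drift_check(baseline_outputs, live_outputs, label, tolerance=0.10):
    """Compare the rate of one output label between the baseline window
    (e.g. from acceptance testing) and a live window. The 0.10 tolerance
    is an illustrative assumption to be agreed in the monitoring plan."""
    baseline = output_rate(baseline_outputs, label)
    live = output_rate(live_outputs, label)
    return {
        "baseline_rate": baseline,
        "live_rate": live,
        "drifted": abs(live - baseline) > tolerance,  # route to governance review
    }
```

A flagged result does not prove the tool is wrong; it triggers the review and escalation route in item 5 so a human decides whether the change is benign or needs supplier action.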

These artefacts also make contracting easier. They reduce ambiguity and help you write obligations that match how the service will be delivered in practice.

Next steps for AI procurement

If you are procuring an AI-enabled service now, aim for two outcomes: clarity and control. Clarity means being precise about the use case, the role AI plays and the data and rights position. Control means building monitoring, managed updates and governance into delivery so the tool keeps performing as intended after go‑live.

Read the original article on GowlingWLG.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

