29 March 2026

Truth On Trial: Deepfakes And The New Battleground For Evidence Integrity

ENS

Contributor

ENS is an independent law firm with over 200 years of experience. The firm has over 600 practitioners in 14 offices on the continent, in Ghana, Mauritius, Namibia, Rwanda, South Africa, Tanzania and Uganda.

Deepfakes are rapidly eroding the long‑held assumption that visual and audio evidence reflects reality. The sophistication of synthetic media now forces South Africa's legal community to rethink how digital evidence is authenticated and how investigative processes must adapt.

Locally, reports already show rising AI‑driven identity fraud, biometric spoofing, voice cloning and impersonation scams. These trends reflect a global shift in which synthetic media is increasingly weaponised in both criminal and civil disputes. Internationally, courts are confronting the “liar's dividend” – when individuals strategically claim genuine evidence is fake – signalling a future in which courts must evaluate both fabricated content and fabricated doubt.

This article explores:

  1. How South African evidentiary law is adapting; 
  2. What the EU's AI Act and its draft Deepfake Code of Practice mean for local practitioners; and
  3. Practical, risk-based guidance for handling AI-generated content in litigation and investigations.

1. Authenticating evidence in the AI era

South Africa's evidentiary framework – the Consumer Protection Act (“CPA”), Cybercrimes Act, Electronic Communications and Transactions Act (“ECTA”), Protection of Personal Information Act (“POPIA”) and the common law – was designed for a world where digital records were assumed to be reliable reflections of real-world events. Deepfakes dismantle that assumption. They destabilise the notion of a trustworthy “original” and heighten the requirements for proving that a digital record is complete, unaltered and accurate.

The second draft of the European Commission's voluntary Deepfake Code of Practice, published on 6 March 2026, sets out targeted obligations for AI-generated content, including:

  • labelling standards, including use of an “AI” acronym, supported where appropriate by concise text such as “Generated with AI” or “Manipulated with AI”;
  • a brief spoken disclaimer for audio-only content; and
  • notice appearing at the first point of interaction, expressed unambiguously, as required by Article 50(4) of the EU AI Act.

1.1 Authenticity can no longer be assumed

Courts can no longer accept authenticity based solely on appearance, a party's assurance or supporting witness testimony. The key question now is whether the content is genuine at all. This requires more rigorous scrutiny, often including technical or forensic verification.

Courts will still prefer the most reliable – and ideally original – version of any digital file. This aligns with ECTA's requirement that information must have “remained complete and unaltered,” and that the systems generating or storing it must be reliable.

Where AI reconstructs pixels, smooths audio, predicts frames or enhances quality, the result may no longer constitute an “unaltered” data message. These AI‑modified outputs should be treated as derivative evidence, with the untouched original remaining the primary source for integrity assessment.
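The point can be made concrete with a short sketch. The Python snippet below is illustrative only (not a forensic standard, and the byte strings are stand-ins for real media files): even a one-byte “enhancement” produces an entirely different cryptographic hash, so an AI-processed derivative can never match the integrity hash recorded for the original.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: an "original" recording and an "enhanced"
# copy in which an AI tool has altered a single byte.
original = b"\x00\x01\x02\x03" * 1024      # stand-in for raw media bytes
enhanced = bytearray(original)
enhanced[100] ^= 0xFF                      # one-byte "enhancement"

h_original = sha256_hex(original)
h_enhanced = sha256_hex(bytes(enhanced))

# Any processing, however subtle, breaks the integrity match.
print(h_original == h_enhanced)  # False
```

This is why the untouched original must be preserved alongside any derivative: once processing occurs, integrity can only be demonstrated against the hash of the original as captured.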

1.2 The dual burden on practitioners

Deepfake‑era evidence imposes a dual burden:

  • Authenticate the original digital artefact, and
  • Validate the integrity of any AI-assisted analysis or enhancement performed on it.

This requires meticulous documentation of tools used, prompts or parameters applied, the purpose of each processing step and the preservation of both original artefacts and all AI-generated derivatives.
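One way to keep that documentation disciplined is a structured log entry per processing step. The sketch below is a hypothetical record format in Python; the field names, the tool name and the parameter values are all illustrative assumptions, not references to any actual forensic product.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProcessingStep:
    """One entry in an AI-assisted processing log (illustrative fields)."""
    tool: str          # the enhancement or analysis tool used
    parameters: dict   # prompts, settings, version numbers
    purpose: str       # why this step was performed
    input_sha256: str  # hash of the artefact before processing
    output_sha256: str # hash of the derivative produced
    timestamp: str     # UTC time the step was recorded

def log_step(tool: str, parameters: dict, purpose: str,
             input_bytes: bytes, output_bytes: bytes) -> ProcessingStep:
    return ProcessingStep(
        tool=tool,
        parameters=parameters,
        purpose=purpose,
        input_sha256=hashlib.sha256(input_bytes).hexdigest(),
        output_sha256=hashlib.sha256(output_bytes).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical usage: enhancing a recording while preserving the original.
step = log_step(
    tool="HypotheticalEnhancer v1.2",
    parameters={"denoise": "high", "upscale": "2x"},
    purpose="Improve audibility for transcription; original retained",
    input_bytes=b"original recording bytes",
    output_bytes=b"enhanced recording bytes",
)
print(json.dumps(asdict(step), indent=2))
```

Because each entry hashes both input and output, the log simultaneously documents what was done and proves which artefact each step was done to.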

1.3 Integrity requirements under ECTA

South African law already recognises that the integrity and evidential weight of a data message depend on more than the content itself. Courts must evaluate the reliability of the processes and systems through which data is generated, stored, communicated and preserved. Deepfakes directly undermine these assumptions by introducing avenues for subtle, nearly undetectable manipulation. This will require enhanced integrity assessments beyond those contemplated when ECTA was enacted, especially as AI‑mediated evidence becomes more common.

1.4 Legal remedies and their limitations

South Africa's legal framework provides avenues for addressing malicious deepfakes, including:

  • Cybercrimes Act (electronic fraud, impersonation and identity‑theft‑style offences),
  • Common‑law claims (fraud, personality‑rights violations), and 
  • POPIA‑based claims (misuse of personal information).

However, enforcement remains difficult. Deepfake creators often operate anonymously, from foreign jurisdictions and with easily accessible tools that enable mass-production of synthetic media.

1.5 POPIA Section 71: Human oversight is mandatory

POPIA Section 71 prohibits decisions with legal or similarly significant consequences from being based solely on automated processing. This safeguard is critical in forensic and investigative contexts.

AI tools that identify individuals in suspected deepfakes, match facial features or classify subjects as high‑risk can influence prosecutorial, disciplinary, regulatory or employment outcomes. These determinations easily fall within Section 71's “significant effect” threshold.

Human oversight must therefore be genuine and substantial – not a rubberstamp. Decision‑makers must interrogate the context, reliability and limitations of AI outputs.

This reinforces the importance of documented, independent human review throughout investigative workflows. AI may assist, but it cannot autonomously determine outcomes that affect legal rights, reputation or liberty.

1.6 The new evidentiary baseline

These legal frameworks collectively raise the authentication bar. Practitioners must now combine traditional evidence with:

  • metadata analysis
  • device forensics,
  • cryptographic hashing,
  • AI‑tool identification, and
  • detailed logging of all analytical procedures.

Meeting this standard often requires specialised digital‑forensic expertise, adding cost and complexity – especially for smaller firms or legally aided matters. But these enhanced measures are increasingly necessary in an environment where the line between genuine and fabricated digital evidence is blurred.

2. Lessons from courts abroad

Deepfakes have begun reshaping evidentiary disputes worldwide, signalling lessons and warnings for South Africa's legal system:

2.1 Lack of awareness of deepfake risks: In a UK family law matter, a heavily doctored audio recording portraying a father as violent was successfully challenged through digital forensic experts. The father's lawyer warned that it may not occur to most judges that deepfake material could be submitted as evidence.

2.2 Too much awareness: The “liar's dividend” is a phenomenon in which bad actors exploit deepfake awareness to cast doubt on genuine evidence. Tesla argued that it could not confirm the authenticity of video clips of Elon Musk because public figures are frequent deepfake targets; the California Superior Court rejected the argument over concerns that it would set a precedent public figures could use to evade accountability.

2.3 AI-enhanced evidence rejected: A Washington court excluded cellphone footage enhanced using Topaz Video Enhance AI. The model's opaque, predictive methods introduced new pixels based on what the AI “thought” should appear. The court found this risked misleading the jury, and the evidence failed the Frye standard for scientific acceptance.

2.4 AI-generated content can mislead courts and harm society: Minnesota's Judicial Branch AI Response Committee noted judicial concern about making decisions based on AI-generated material. While early deepfakes displayed clear signs – monotone voices, repetitive expressions – these “tells” are rapidly diminishing.

2.5 Verification is resource-intensive and affects access to justice: In the US, a deepfake audio clip causing public unrest required intensive forensic investigation, including FBI involvement, to both confirm it was AI-generated and trace it back to the bad actor.

3. Practical guidance for deepfakes in litigation, investigations and regulatory matters

3.1 Ensuring evidence is accurate and reliable

Deepfakes demand a higher evidentiary standard. Investigators should work from forensic copies, record cryptographic hashes and maintain comprehensive logs of tools, prompts and settings. AI‑generated outputs must be treated as investigative leads, not conclusions – requiring corroboration, checking alternative explanations, and verifying claims from large language models. Clear, non‑technical explanation of methods is essential as courts increasingly scrutinise AI‑supported evidence.

3.2 Using generative AI to support evidence gathering

Generative AI can summarise material, flag anomalies and organise large datasets, but only with human oversight and full auditability.

Practitioners must:

  • maintain a transparent chain of custody,
  • disclose when AI has generated or altered content, and
  • distinguish clearly between machine-suggested insights and professional judgement.

Compliance with POPIA Section 71 requires decisions influenced by automated tools to be explainable and open to challenge.
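What “genuine and substantial” human oversight can look like in an investigative workflow is sketched below. This is a minimal illustrative design in Python, not a statement of what POPIA requires; the class names, fields and messages are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIFinding:
    """An automated output, e.g. a deepfake classifier's verdict (illustrative)."""
    description: str
    model_confidence: float

@dataclass
class HumanReview:
    """A documented review interrogating the AI output's context and limits."""
    reviewer: str
    rationale: str       # why the reviewer agrees or disagrees
    agrees_with_ai: bool

def finalise_decision(finding: AIFinding,
                      review: Optional[HumanReview]) -> str:
    # The automated output alone can never finalise the outcome:
    # a documented, substantive human review is required first.
    if review is None or not review.rationale.strip():
        raise ValueError("No substantive human review recorded")
    verdict = "accept AI finding" if review.agrees_with_ai else "reject AI finding"
    return f"{verdict} (reviewed by {review.reviewer})"

# Hypothetical usage: the decision only finalises once a reviewer
# has recorded independent corroboration.
finding = AIFinding("Audio likely AI-generated", model_confidence=0.93)
review = HumanReview(
    reviewer="J. Analyst",
    rationale="Corroborated by device metadata and witness timeline",
    agrees_with_ai=True,
)
print(finalise_decision(finding, review))
```

The design choice is that the system refuses to produce an outcome at all without a recorded rationale, making the human review an auditable precondition rather than a rubberstamp after the fact.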

3.3 Emerging judicial responses and the path forward

Internationally, courts are responding quickly. The US is considering burden‑shifting rules for suspected AI‑fabricated evidence; Louisiana now requires lawyers to verify authenticity and disclose AI‑generated material; and judges in England and Wales emphasise vigilance around synthetic media.

For South Africa, the imperative is clear – adopt enhanced verification protocols early, build judicial awareness and prepare for scenarios where litigants allege deepfakery even when evidence is genuine.

Conclusion

Deepfakes have reshaped the authentication burden. Success now depends on rigorous documentation, transparent and explainable AI use and sustained human oversight. Practitioners who modernise their evidentiary processes early will be best positioned to maintain credibility and meet the rising expectations of courts, clients and the public.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
