14 October 2025

Underlying Risks Of Using AI-Generated Evidence In Nigeria's Justice System

Alliance Law Firm

ALF is a multiple award winning law firm operating out of offices in Lagos, Abuja, and Port Harcourt Nigeria. Our mission is to establish a world class, full service Nigerian law firm distinguished by its premium service. We incorporate a rich blend of traditional legal practice with the dynamism required to satisfy our broad range of clients who operate in various industries.

INTRODUCTION:

Algorithms and artificial intelligence ("AI") technologies have become pervasive across various sectors, and the legal sector is no exception. AI tools are now accessible to a wide array of legal actors, including lawyers, defendants, judges, and other stakeholders, largely due to the broad availability of computers and internet connectivity, which enables virtually anyone to engage with AI technologies.1 While these technologies offer new efficiencies and possibilities within the legal field, technological advancement must not compromise the foundational principles of judicial integrity, chief among them the credibility of evidence and fairness, which remain indispensable constants.

Respected legal scholars have identified distinguishing features that help clarify the various forms AI-generated electronic evidence can take. Grossman and Grimm distinguish between two categories: acknowledged and unacknowledged AI-generated evidence.2 The former is evidence 'about which there is no dispute that the evidence was created by, or is the product of, an AI system,' while the latter arises 'where one party claims the evidence is an authentic representation of what actually happened, and the opposing party claims the evidence is a GenAI-fabricated deepfake.'3 While we agree with this distinction, it is noteworthy that unacknowledged AI-generated evidence is likely to be more difficult for the courts to assess (and more misleading) than acknowledged AI evidence.

A brief look at other jurisdictions shows that courts are already grappling with this issue. In State of Washington v. Joshua Puloka,4 the Superior Court of Washington for King County rejected AI-enhanced video evidence along with all other AI-enhanced exhibits adduced before it. The court's ruling was predicated upon the novelty of such technology, which is not yet generally accepted as a tool for the forensic enhancement of electronic evidence.5 It also expressed concerns about the credibility of such evidence, stating that 'the video produced by Topaz Video AI enhancement model does not satisfy ER 401, as the resulting video does not show with integrity what actually happened but uses opaque methods to represent what the AI model "thinks" should be shown.'6

The court's reasoning points to a hallucinatory effect on electronically-generated evidence that has come into contact with AI models. This is also helpful when defining AI-generated evidence to include material originating from other devices but enhanced by AI. The case further aligns with what Grossman and Grimm7 describe as 'acknowledged AI-generated evidence.'

The incident in early 2024 involving a finance worker in Hong Kong, who was deceived into transferring $25.6 million after scammers used AI-generated deepfake videos impersonating the company's CFO and colleagues to gain his trust and approval, serves as an important reference point.8 Although the police later apprehended the fraudsters,9 if allegations were to arise against the finance worker regarding the malicious or fraudulent handling of company funds, and he were called to account for the loss, any recording of the conference call could reasonably be used as evidence in his defence. This scenario underscores the growing reality that AI tools can not only create but also enhance electronic records that may become central to judicial proceedings. Numerous similar events point to the same conclusion: AI-generated or AI-altered content is no longer a theoretical concern but a practical challenge that courts are increasingly likely to encounter.

In Nigeria, the current legal framework governing electronic evidence primarily emphasizes the admissibility of such evidence10 rather than its underlying reliability or credibility. The Evidence Act ("EA") 2011, along with its 2023 amendment, addresses the conditions under which electronic and computer-generated evidence may be admitted in court, but it notably lacks explicit provisions or guidelines for credibility assessments specific to AI-generated evidence. This legislative gap presents obvious challenges, especially as AI introduces complex layers to the evaluation of evidence quality, authenticity, and trustworthiness. It raises a critical question: is there sufficient regulation and guarantee of credibility for electronic and computer-generated evidence within the Nigerian judicial system?

In reflecting on this question, it is prudent to note that the focus must necessarily shift from admissibility alone to encompass the credibility of electronically-generated evidence, given the significant implications this shift carries for the administration of justice in Nigeria. The contemporary challenge, therefore, lies in understanding how Nigerian judges and relevant stakeholders should navigate (or could navigate) the emerging risks posed by AI-generated electronic evidence.

This inquiry is essential in ensuring that the Nigerian judicial processes remain fair and the integrity of evidence evaluation is maintained amid technological change. Against this backdrop, this essay explores the underlying risks of AI-generated electronic evidence within the Nigerian context. It aims to critically examine the relationship between rapidly evolving technological capabilities and Nigeria's existing legal framework, particularly the mechanisms currently in place for assessing the credibility of electronic evidence. The essay further seeks to propose informed responses to be adopted within Nigeria's justice system to effectively address the emerging challenges posed by AI-generated evidence.

UNDERSTANDING AI-GENERATED EVIDENCE:

Evidence is any form of proof or information, such as documents, records, physical objects, or exhibits, legally presented by the parties through witnesses during the trial of a case to persuade the court or jury of their position.11 It serves to establish facts relevant to the case and to induce belief in the minds of the decision-makers about the issues in dispute.12 It is apposite to ask whether AI can generate evidence. This fundamental question has variously been answered in the affirmative, especially now that AI technologies have become more sophisticated and integrated into various sectors, including the legal sphere.13

AI-generated evidence, therefore, could be referred to as information produced or influenced by AI systems that is presented in legal proceedings to support or contest claims. This may include deepfakes, which are highly realistic fake images, audio, or video created by AI; automated logs generated by AI monitoring systems; and outputs from AI analysis tools that interpret complex data. For instance, AI might analyse financial records to detect fraud patterns14 or produce surveillance footage enhanced by facial recognition algorithms.15 Evidence from any of these processes may end up in the courtroom, which could be referred to as AI-generated evidence.

Given that these are not statutory definitions, grounds for testing the credibility of such evidence are likewise absent from Nigeria's corpus juris. It is worrisome that electronic records are covered by the 2023 amendments to the Evidence Act 2011,16 and are therefore recognised and admissible in Nigerian courts, while AI-sourced records are not envisaged for regulation. Could the definition of a computer under section 258 of the Evidence Act, 2011 (as amended) be applied to incorporate AI, thereby subjecting AI-generated documents (evidence) to the admissibility tests under section 84 of the Act?

The Evidence Act, 2011 (as amended) broadly defines a "computer" as any device used for storing and processing information,17 hence encompassing a wide range of digital devices. In contrast, AI specifically refers to a computer or machine's ability to perform tasks that typically require human intelligence,18 such as learning and decision-making. The former definition focuses on the device's functional capacity to handle data, while the latter emphasizes cognitive processing capabilities. Therefore, the definition in the Evidence Act, 2011 (as amended) does not incorporate AI since it lacks the element of intelligent action beyond simple data processing. This distinction matters legally and technologically because not all devices classified as computers under the Act perform AI functions, and AI's advanced capabilities require separate consideration beyond basic computing.

CATEGORISING UNDERLYING RISKS OF AI-GENERATED EVIDENCE:

  • Risks to Accuracy and Reliability: AI systems are not infallible. Errors in data input, algorithmic biases, and limitations in training data can produce inaccurate results. The decision-making processes of many AI tools remain opaque, a difficulty known as the "black-box" problem.19 This opacity can make it difficult for legal actors to challenge or verify conclusions, leading to false positives (wrongly identifying innocent behaviour as suspicious) or false negatives (overlooking critical evidence), both of which jeopardize fair trials (a toy numeric illustration follows this list).
  • Risks to Authenticity and Manipulation: AI technologies, particularly those behind deepfakes, possess an alarming potential to fabricate or manipulate evidence that appears indistinguishable from genuine material. Detecting such forgeries requires sophisticated tools and expertise that may not yet be widely available in Nigeria. Furthermore, if AI training datasets are biased or compromised, the validity of the evidence produced is undermined, posing a challenge to establishing authenticity. While a court is focused on the case before it, a further AI-related issue in the context of a trial is the presence of widespread AI-generated information or news online that may manipulate the public. Although the standards of justice rely heavily on the independence and fairness of judges, it is not preposterous to suggest that public opinion may, at times, exert an influence on judicial decision-making. Even where it does not, public opinion of a party before the court can affect that party's livelihood after the case. There are several instances where companies' stock prices have plummeted due to misinformation, regardless of what the court had pronounced, and the same applies to individuals. A significant problem is the spread of disinformation and propaganda through bot accounts and paid online trolls, who often use AI-generated images to appear more authentic.
  • Procedural and Legal Risks: Maintaining the chain of custody is a cornerstone of evidence integrity, but this becomes increasingly complex with AI-generated evidence. Determining and certifying the source of such evidence remains challenging, especially when automated processes generate or modify data without transparent records (a minimal illustration of a tamper-evident custody log follows the next paragraph). The situation is further complicated by the fact that many judges and legal practitioners may lack the technical understanding needed to critically assess AI evidence, increasing the risk of misinterpretation or undue reliance on such evidence in judicial proceedings. The usual grounds for safeguarding evidence credibility and guiding evidence admissibility include these key principles:20
  1. Relevance: Evidence must have a logical connection to the facts in issue to be considered material, ensuring it helps prove or disprove the case. This prevents distraction by irrelevant information.
  2. Pleading: Evidence, to be admissible, must be properly pleaded or disclosed beforehand, so that parties and courts are aware of its nature and scope, allowing for fair trial preparation.
  3. Necessary Foundation: A proper foundation must be laid to show the evidence is authentic and reliable.
  4. Non-exclusion by Statutes: Evidence must not be barred by statutory provisions, such as the Evidence Act or other Nigerian laws, that specify exclusions to protect legal fairness or privacy.
  5. Compliance with Legal Requirements: Evidence must meet all procedural and substantive legal conditions for admissibility, including proper authentication, chain of custody, and conformity with rules on hearsay, expert testimony, and electronic data.

In the case of AI-generated evidence, the principles of transparency and credibility remain essential, even in the absence of well-established verification criteria to scrutinise such evidence. In practice, these principles are often reinforced through best practices such as rigorous pre-trial evidence disclosure, expert evaluation of evidence (e.g., AI output), judicial scrutiny of authenticity, and admissibility hearings to safeguard against unreliable or prejudicial evidence.21 Courts often apply balancing tests weighing probative value against potential prejudice or confusion before admitting complex evidence.22 These procedures align with fairness, openness, and accuracy, principles central to sound justice systems globally. As AI technologies become increasingly capable of producing highly convincing but entirely fabricated images, videos and documents, it is crucial that courts, lawyers, and other stakeholders in the justice system approach such AI-generated evidence with informed caution and healthy scepticism, asking the right questions to avoid being misled by sophisticated fabrications.23
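By way of illustration only, the following minimal Python sketch shows one way a tamper-evident, hash-chained chain-of-custody log could work in principle: each entry folds the previous entry's digest into its own, so altering any earlier record breaks every digest that follows. It uses only Python's standard library; the CustodyLog class, its field names, and the exhibit details are hypothetical and are not drawn from any Nigerian statute or practice direction.

```python
# Illustrative sketch only: a hash-chained chain-of-custody log.
# All names (CustodyLog, exhibit IDs, handlers) are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

class CustodyLog:
    def __init__(self):
        self.entries = []

    def add(self, exhibit_id: str, handler: str, action: str) -> dict:
        # Link this entry to the previous one via its digest.
        prev_digest = self.entries[-1]["digest"] if self.entries else "GENESIS"
        record = {
            "exhibit_id": exhibit_id,
            "handler": handler,
            "action": action,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_digest": prev_digest,
        }
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["digest"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every digest; any tampering breaks the chain.
        prev = "GENESIS"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "digest"}
            if body["prev_digest"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(payload).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True

log = CustodyLog()
log.add("EXH-001", "Investigating officer", "Seized laptop")
log.add("EXH-001", "Forensic analyst", "Imaged hard drive")
print(log.verify())  # True; editing any earlier entry makes this False
```

The design choice here is deliberately simple: because each digest depends on its predecessor, a reviewing court (or opposing counsel) needs only the final digest to detect whether any intermediate custody record was quietly rewritten.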

  • Ethical and Privacy Concerns: AI-driven evidence collection often involves extensive data mining and surveillance, which can infringe on individual privacy rights. Additionally, AI systems may exhibit algorithmic biases that reinforce discrimination, thereby undermining fairness in judicial decisions. The lack of transparency inherent in many AI processes can erode public confidence and impede due-process guarantees. Understanding this risk, the Nigerian Bar Association (NBA), through its Section on Legal Practice – Technology and Law Committee, has issued guidelines highlighting the ethical implications of using AI in legal practice24 (and this research finds them relevant to AI-generated evidence). These guidelines connect directly to several Rules of Professional Conduct (RPC), particularly rules 14,25 15,26 16,27 19,28 and 24(2).29 These RPC rules collectively address confidentiality, competence, ethical conduct, conflicts of interest, and integrity, issues that are critical when lawyers use AI in handling evidence in the courtroom. Data accuracy is a fundamental protection for data subjects under Nigeria's Data Protection Act (NDPA) 2023.30 It therefore becomes troubling when, through the interaction of AI with facts, data is mutilated, suppressed, or altered.
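The toy calculation promised under "Risks to Accuracy and Reliability" above is set out below. All counts are invented purely to show the arithmetic: a hypothetical AI screening tool can post a low false-positive rate while still missing a large share of genuinely suspicious material, which is precisely why a single headline accuracy figure should not be taken at face value in court.

```python
# Toy illustration with invented counts; no real data or tool involved.
true_positives = 45    # genuinely suspicious items correctly flagged
false_positives = 30   # innocent items wrongly flagged as suspicious
true_negatives = 900   # innocent items correctly passed
false_negatives = 25   # suspicious items the tool missed

false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(f"False positive rate: {false_positive_rate:.1%}")  # 3.2%
print(f"False negative rate: {false_negative_rate:.1%}")  # 35.7%
```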

CONSEQUENCES OF OVERLOOKING THESE RISKS:

  • Impact on Fair Trial Rights: Unreliable AI-generated evidence heightens the risk of miscarriages of justice, including wrongful convictions or unjust acquittals. Such outcomes are capable of eroding public trust in the legal system and could diminish the legitimacy of judicial outcomes. The stakes are particularly high for Nigeria. The right to fair hearing, enshrined in Section 36 of the Constitution of the Federal Republic of Nigeria 1999 (as amended), fundamentally depends on the reliability and integrity of the evidence presented in court. If AI-generated evidence is admitted without robust safeguards, there is a real possibility of erroneous convictions and acquittals. This raises urgent questions about how Nigerian law and judicial practice should respond to the unique dangers posed by AI-generated evidence in order to maintain established standards of reliability and admissibility. Courts in other jurisdictions have already begun grappling with these concerns. In State of Washington v. Puloka, for example, the court rejected AI-enhanced video footage, citing concerns over its authenticity and its failure to meet relevance and reliability requirements.31 The case highlights the need for careful scrutiny of AI-generated or AI-altered evidence, an approach Nigerian courts would do well to adopt as they confront similar issues.
  • Long-term Effects on Legal Precedent and Justice System: If flawed AI evidence gets normalised, it risks setting dangerous precedents that weaken the evidentiary standards in Nigerian courts. Over time, this could institutionalise acceptance of unreliable or manipulated evidence, compromising the wider pursuit of justice.

LEGAL AND TECHNICAL SAFEGUARDS AGAINST RISKS:

  • Current Legal Frameworks: Existing Nigerian laws governing electronic evidence provide a foundation but lack specific provisions addressing the complexities of AI-generated evidence. These gaps necessitate clear statutory guidance on admissibility, verification, and challenge procedures tailored to AI contexts.
  • Technological Countermeasures: Advances in AI explainability aim to demystify black-box algorithms, increasing transparency for judicial scrutiny. In State of Washington v. Puloka, briefly discussed above, the court rejected AI-transformed evidence because the AI tool used to enhance it was not generally accepted for that purpose; the unexplainability of AI systems can only hinder the general acceptance of tools applied to evidential material destined for the courtroom. Authentication tools leveraging blockchain or digital signatures can help verify evidence integrity and provenance, reducing tampering risks (see the signature-verification sketch after this list).
  • Need for Judicial and Practitioner Education: Critical to effective oversight is educating judges, lawyers, and law enforcement personnel on AI capabilities and limitations. Expert witnesses proficient in AI technologies can aid courts in understanding and evaluating AI-driven evidence.
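To make the digital-signature point concrete, here is a minimal sketch assuming the widely used third-party Python "cryptography" package (pip install cryptography). The workflow, key handling, and names are illustrative assumptions only, not a prescription for Nigerian courts: a custodian signs the evidence file's bytes at collection time, and anyone holding the corresponding public key can later confirm the file has not changed since signing.

```python
# Illustrative sketch: signing and verifying an evidence file with Ed25519.
# Assumes the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# At collection: the custodian generates a key pair and signs the file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

evidence_bytes = b"...raw bytes of the exhibit file..."  # placeholder
signature = private_key.sign(evidence_bytes)

# At trial: any party holding the public key can check integrity.
try:
    public_key.verify(signature, evidence_bytes)
    print("Signature valid: file unchanged since signing")
except InvalidSignature:
    print("Signature INVALID: file altered or signed with a different key")
```

A usage note: a scheme like this proves only that the bytes are unchanged since signing; it says nothing about whether the content was AI-generated before signing, which is why it complements, rather than replaces, the judicial scrutiny discussed above.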

RECOMMENDATIONS FOR MANAGING AI EVIDENCE RISKS:

To safeguard Nigeria's justice system, several measures are advisable:

  1. Enact legislative reforms that incorporate detailed (and express) provisions for AI-generated evidence admissibility and oversight.
  2. Develop standardized methodologies for validating AI evidence reliability and authenticity.
  3. Establish ethical frameworks guiding AI deployment in judicial settings to uphold privacy, fairness, and non-discrimination, just like the NBA SPL has published one for Lawyers.
  4. Promote interdisciplinary collaboration between legal professionals, technologists, and policymakers to foster informed responses to evolving AI challenges.

CONCLUSION:

AI-generated evidence is already manifesting in courtrooms around the world. Its infiltration into the Nigerian justice system is very probable, and may already be ongoing undetected. This reality raises concerns of authenticity, fairness, privacy, and ethics that need serious attention. Courts, lawyers, and other stakeholders in Nigeria's justice sector must, therefore, be vigilant. Strong safeguards are essential to balance innovation with justice and fairness. Understanding AI evidence is still new in Nigeria, and the legal framework governing evidential matters (the Evidence Act) needs updating, especially around how to authenticate such evidence and categorise its risks properly. Best practices worldwide show the importance of transparency, expert evaluation, and clear procedures. Ignoring these could lead to wrongful decisions and a loss of public (as well as stakeholders') trust in the judicial system, with serious consequences. Judges, lawyers, and all justice sector players must come together to begin discussions on this very sensitive matter. They must learn about these challenges and work to protect the fundamental rights and fair trial guarantees of the Constitution of the Federal Republic of Nigeria. This is a collective call to adapt, educate, and reform, so that technology serves justice rather than threatens it.

Footnotes

1. MR Grossman and PW Grimm (ret), 'Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence' (2025) 26 Columbia Science & Technology Law Review 110

2. Ibid.

3. Ibid.

4. (2024) No. 21-1-04851-2 KNT.

5. State of Washington v Puloka, paragraphs 2–7 of the Conclusions of Law.

6. State of Washington v Puloka, Paragraph 13 of Conclusion of Law.

7. Grossman and Grimm (n 1).

8. Heather Chen and Kathleen Magramo, 'Finance Worker Pays out $25 Million after Video Call with Deepfake "Chief Financial Officer"' (CNN, 4 February 2024) <view link> accessed 21 August 2025.

9. Ibid.

10. Evidence Act (EA) 2011 (as amended), section 84.

11. Chief Sergeant Chidi Awuse v Dr. Peter Odili & Ors (2003) LLJR-CA.

12. Ibid.

13. UNESCO, 'AI and the Rule of Law: Capacity Building for Judicial Systems' (UNESCO) view link; I Taylor, 'Justice by Algorithm: The Limits of AI in Criminal Sentencing' (2023) 42(3) Criminal Justice Ethics 193 view link

14. PA Adejumo and C Ogburie, 'The Role of AI in Preventing Financial Fraud and Enhancing Compliance' (2025) 22(03) GSC Advanced Research and Reviews 269 view link.

15. Thaddeus L Johnson, Natasha N Johnson, Volkan Topalli, Denise McCurdy and Aislinn Wallace, 'Police Facial Recognition Applications and Violent Crime Control in U.S. Cities' (2024) 155 Crime Science 105472 view link; Sa K, Ra A, Krishnaa BBS and Pa V, 'Advancements in Real-Time Face Recognition Algorithms for Enhanced Smart Video Surveillance' (2023) 230 Procedia Computer Science 486.

16. Evidence Act 2011 (as amended), section 84B.

17. EA, section 258(1).

18. Peter Stone and others, Artificial Intelligence and Life in 2030 (One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016) view link accessed 1 September 2025.

19. Lou Blouin, 'AI's mysterious "black box" problem, explained' (University of Michigan-Dearborn, 22 July 2025) <view link> accessed 27 August 2025.

20. Okoye & Anor. v Christopher Obiaso & Ors. (2010) 8 NWLR (Pt. 1195) 145 at 168.

21. TRI/NCSC AI Policy Consortium for Law & Courts, 'AI-generated Evidence: A Guide for Judges' (National Center for State Courts) view link accessed 2 September 2025.

22. Ibid.

23. UNESCO, 'How to determine the admissibility of AI-generated evidence in courts?' (UNESCO) view link accessed 2 September 2025.

24. Nigerian Bar Association, Guidelines for the Use of Artificial Intelligence in the Legal Profession in Nigeria (Section on Legal Practice – Technology and Law Committee, April 2024) <view link> accessed 27 August 2025.

25. Duty of dedication and devotion to the cause of the client.

26. Duty to represent clients within the bounds of the law.

27. Duty to represent clients competently.

28. Duty to maintain privilege and confidentiality of clients.

29. Lawyers' responsibilities in litigation.

30. NDPA, section 34(c).

31. (2024) No. 21-1-04851-2 KNT.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
