When audio of a high school principal apparently making racist and antisemitic comments went viral, the recording seemed authentic enough to destroy careers and inflame community tensions. Only later did forensic analysis reveal that the recording was a deepfake created by the school's athletic director. The incident, which required two forensic analysts to establish the recording's true nature, illustrates a fundamental challenge facing the legal system: as AI-generated content becomes indistinguishable from human-created content, how do courts determine authenticity?
This challenge extends beyond theoretical concerns. Courts nationwide are grappling with synthetic evidence in real cases, from criminal defendants claiming prosecution videos are deepfaked to civil litigants using AI-generated content to support false claims.
Current Legal Framework Challenges
Technologies designed to detect AI-generated content have proven unreliable and biased, and humans are poor at distinguishing real digital content from fake. No foolproof method currently exists to classify text, audio, video, or images as authentic or AI-generated.
Recent cases reveal how this crisis manifests in practice. In United States v. Khalilian, defense counsel moved to exclude voice recordings on the ground that they could be deepfaked. When prosecutors argued that a witness's familiarity with the defendant's voice could authenticate the recording, the court responded that this was "probably enough to get it in," a standard that arguably provides insufficient scrutiny where deepfakes are alleged.
In Wisconsin v. Rittenhouse, the defense successfully challenged the prosecution's attempt to zoom in on iPad video evidence, arguing that Apple's pinch-to-zoom function uses AI that could manipulate footage. The court required expert testimony that the zoom function would not alter the underlying video, and the prosecution could not provide such testimony on short notice.
Federal Evidence Rule Developments
On May 2, 2025, the US Judicial Conference's Advisory Committee on Evidence Rules considered proposals to amend the Federal Rules of Evidence to address challenges posed by AI-generated evidence. Among the proposals under consideration are changes to Rule 901, which governs authentication of evidence in legal proceedings. Rule 901(a) provides that the authentication requirement is satisfied if the proponent produces "evidence sufficient to support a finding that the item is what the proponent claims it is." Rule 901(b) provides examples of evidence that satisfies the Rule 901(a) requirement.
One proposal would modify Rule 901(b)(9), which addresses items generated by a "process or system," to require the proponent to provide evidence describing the process or system and showing that it produces a "valid and reliable" result.
Other proposals would add a new subsection specific to deepfakes, Rule 901(c). One version of the proposed 901(c) would provide that "[i]f a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence."
Another version of the proposed 901(c) would establish a two-step process for evaluating deepfake challenges: First, parties challenging evidence authenticity on grounds of AI fabrication must present evidence sufficient to support a finding of fabrication warranting court inquiry. Mere assertions that evidence is deepfaked would be insufficient. Second, if opponents meet this requirement, evidence would be admissible only if proponents demonstrate it is more likely than not authentic — a higher standard than traditional "sufficient to support a finding" requirements.
This approach attempts to balance preventing baseless "deepfake defense" strategies while ensuring adequate scrutiny of potentially fabricated evidence. However, it leaves unresolved how courts will determine authenticity when detection technology proves unreliable.
The Committee determined that a rule amendment was not necessary at this time and decided to keep proposed Rule 901(c) on the agenda for its fall 2025 meeting without publishing it for public comment, noting a reduced need for public input on deepfake issues.
State Evidence Rule Developments
Some states are paving the way on AI-generated evidentiary issues. Louisiana HB 178, which took effect August 1, 2025, established the first statewide framework addressing AI-generated evidence by expanding and specifying attorneys' duty to exercise reasonable diligence in verifying the authenticity of evidence. The law revises Louisiana Code of Civil Procedure article 371 to provide, in part, that "[a]n attorney shall exercise reasonable diligence to verify the authenticity of evidence before offering it to the court. If an attorney knew or should have known through the exercise of reasonable diligence that evidence was false or artificially manipulated, the offering of that evidence without disclosure of that fact shall be considered a violation of this Article [providing for contempt of court and further disciplinary action]."
Federal and State Responses
President Trump signed the Take It Down Act into law on May 19, 2025, criminalizing the publication of nonconsensual intimate images, including AI-generated deepfakes, with platform takedown requirements, enforceable by the FTC, taking effect within one year. This federal action supplements a complex state landscape in which many states have enacted laws addressing nonconsensual sexual deepfakes and limiting deepfakes in political campaigns.
Tennessee enacted the ELVIS Act, which took effect on July 1, 2024, becoming the first state with deepfake legislation outside the intimate-imagery and political-content categories; the law specifically protects musicians' voices from AI manipulation. New York's digital replica law requires written consent, clear contracts and compensation for AI-created likeness use, while Minnesota's updated criminal code penalizes nonconsensual deepfakes with misdemeanor or felony charges.
This state-by-state approach creates inconsistencies that can lead to unpredictable outcomes for those seeking legal redress under the newly enacted laws.
Judicial Gatekeeping Challenges
The authenticity crisis forces courts to confront fundamental questions about their role in the digital age. Traditional evidence authentication under Federal Rule 901 requires only evidence "sufficient to support a finding that the item is what the proponent claims it is" — a deliberately low threshold designed to let juries weigh evidence credibility.
This approach worked when authentication disputes involved questions like whether photographs accurately depicted crime scenes or whether signatures were genuine. Deepfakes shatter this framework by creating content that can fool both human observers and technological detection systems.
Some scholars propose expanding judicial gatekeeping authority, moving authenticity determinations from juries to judges. This approach would parallel how courts handle complex technical evidence under Daubert standards, requiring judges to evaluate evidence reliability before it reaches juries.
Access to Justice Implications
Synthetic media creates troubling access-to-justice problems. Digital forensic experts can cost anywhere from hundreds of dollars an hour for consulting to several thousand dollars per project, with higher fees in high-profile cases. This financial burden falls heaviest on those least able to bear it: wealthy litigants can afford comprehensive forensic analysis, while individuals and small businesses may lack the resources to challenge sophisticated deepfakes.
This disparity is particularly concerning in criminal cases where the stakes include liberty and life. Current practice often places the financial burden on defendants who may lack the resources for an adequate defense.
First Amendment Considerations
Synthetic media regulation faces significant constitutional hurdles, particularly regarding political speech. In 2024, a federal judge blocked California's law prohibiting deceptive election-related deepfakes over First Amendment concerns, finding that the law "unconstitutionally stifles the free and unfettered exchange of ideas ... vital to American democratic debate."
This tension between preventing harm and preserving free expression complicates legislative responses, with most lawmakers opting for lighter-touch disclosure policies not yet blocked in federal court.
Industry and Technology Responses
Private sector responses reflect both the promise and the limitations of technological solutions. Reputable synthetic media services typically prohibit malicious deepfake creation and require users to certify that they have permission to use uploaded content. However, users can misrepresent their rights and circumvent these guardrails.
Some platforms embed watermarks or digital signatures within AI-generated content for enhanced traceability, but these methods are far from foolproof, with evidence that watermarks can be removed easily.
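To make concrete what an embedded digital signature can and cannot prove, the minimal Python sketch below (a hypothetical illustration built on the open-source cryptography library, not any platform's actual scheme) signs a media file's raw bytes and shows that verification fails after any alteration. A byte-level signature therefore establishes provenance only for an unmodified file, while invisible watermarks, which are meant to survive editing, are the component that has been shown to be easily removable.

```python
# Hypothetical sketch of byte-level provenance signing (illustrative only).
# A signature verifies only the exact bytes it was computed over; any re-encoding,
# crop, or compression of the media file invalidates it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_media(private_key: ed25519.Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Produce a detached signature over the media file's raw bytes."""
    return private_key.sign(media_bytes)


def verify_media(public_key: ed25519.Ed25519PublicKey,
                 media_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the bytes are exactly what was originally signed."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    original = b"\x00\x01\x02"  # stand-in for a media file's bytes
    sig = sign_media(key, original)
    print(verify_media(key.public_key(), original, sig))            # True
    print(verify_media(key.public_key(), original + b"\xff", sig))  # False: any alteration breaks verification
```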
Emerging Legal Standards
Courts are developing practical approaches to synthetic media challenges without formal rule changes, including enhanced burden requirements for video and audio evidence in high-stakes cases, pretrial evidentiary hearings to resolve authenticity disputes, expert testimony requirements for deepfake allegations and heightened scrutiny for celebrity content.
Practical Guidance
Given current legal uncertainty, practitioners should adopt proactive strategies, including:
- Maintaining detailed records of content creation processes with timestamps and source materials (an illustrative sketch follows this list);
- Including specific inquiries about AI-generated materials in discovery requests;
- Identifying qualified digital forensics experts early in cases involving audiovisual evidence;
- Advising clients about reputational and legal risks associated with AI-generated content; and
- Including specific provisions addressing AI-generated content in contracts.
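As one way to implement the first recommendation above, the short Python sketch below (an illustrative example with hypothetical file names, not a required or standard practice) records a SHA-256 hash, a UTC timestamp and source notes for a file in a simple JSON-lines log, creating a contemporaneous record that can later show the file has not changed since it was logged.

```python
# Hypothetical chain-of-custody log entry for a media file (illustrative only).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def record_provenance(media_path: str, source_notes: str,
                      log_path: str = "provenance_log.jsonl") -> dict:
    """Append a hash, UTC timestamp and source description for a file to a JSON-lines log."""
    data = Path(media_path).read_bytes()
    entry = {
        "file": media_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # changes if the file changes in any way
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
        "source": source_notes,  # e.g., device, software, custodian
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry


# Example use (hypothetical file name):
# record_provenance("interview_2025-03-01.mp4", "Recorded on office iPhone by J. Doe")
```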
Looking Ahead
The synthetic media revolution represents more than a technological challenge; it fundamentally questions how legal systems establish truth in the AI age. The legal system's response demonstrates remarkable adaptability, with courts developing new authenticity approaches, legislatures crafting targeted responses and the legal profession building expertise in digital forensics.
The authenticity crisis requires coordinated responses across multiple legal domains. Federal evidence rules need updating while preserving adversarial testing, state legislation must balance harm prevention with constitutional protections, and the legal profession must develop technological literacy adequate to the digital age. The institutions that successfully adapt to these challenges will preserve judicial proceeding integrity and remain relevant in an era where reality itself can be artificially generated.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.