ARTICLE
20 August 2025

AI-Enhanced Cryptocurrency Creates New Liability Challenges

Jones Walker

Contributor

When a decentralized finance (DeFi) protocol called "Compound" distributed $90 million in COMP tokens to users due to a smart contract bug in September 2021, the crypto community faced an unprecedented question: who was legally responsible for an algorithmic error that no human directly caused? The protocol's autonomous governance system had malfunctioned, but there was no CEO to sue, no board of directors to hold accountable, and no traditional corporate structure to assign liability.

As artificial intelligence increasingly powers cryptocurrency transactions and decentralized finance protocols, traditional legal frameworks struggle to address fundamental questions of accountability, governance, and user protection. The intersection of AI and digital currency creates a regulatory gap where billions of dollars operate beyond conventional legal oversight.

Evolution from Code to Cognition

Traditional smart contracts execute predetermined rules: "If X happens, then do Y." These systems, while autonomous, remain predictable because their decision trees are explicitly programmed. Legal analysis could theoretically trace every action back to human-coded instructions.

AI-enhanced smart contracts operate differently, using machine learning to adapt behavior based on market conditions, user interactions, and performance outcomes. An AI-powered trading protocol might develop strategies its creators never programmed, identify market patterns humans haven't recognized, and make decisions based on correlations that emerge from processing vast datasets.

This evolution creates "cognitive smart contracts"—systems that don't just execute human decisions but make autonomous judgments influencing billions of dollars in transactions. When these systems cause errors or losses, determining liability requires understanding not just what happened, but why an AI system "decided" to take specific actions.
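The contrast can be sketched in a few lines of illustrative Python. Everything here is hypothetical (the class names, thresholds, and reward signal are invented for this sketch, not drawn from any real protocol): the fixed-rule contract's behavior always traces back to a human-coded constant, while the adaptive agent's decision rule drifts with experience until no human explicitly chose it.

```python
# Illustrative contrast between a fixed-rule smart contract and an
# AI-enhanced one. All names and numbers are hypothetical.

class FixedRuleContract:
    """Traditional smart contract: 'if X happens, then do Y'."""
    def __init__(self, threshold):
        self.threshold = threshold  # set once by a human author

    def decide(self, price):
        # Every action traces back to this human-coded rule.
        return "sell" if price > self.threshold else "hold"


class AdaptiveAgent:
    """AI-enhanced contract: the decision rule itself changes with experience."""
    def __init__(self, threshold, learning_rate=0.1):
        self.threshold = threshold
        self.lr = learning_rate

    def decide(self, price):
        return "sell" if price > self.threshold else "hold"

    def learn(self, price, reward):
        # Nudge the threshold toward whatever was rewarded; after enough
        # updates, the operative rule was chosen by no human.
        self.threshold += self.lr * reward * (price - self.threshold)


fixed = FixedRuleContract(threshold=100.0)
agent = AdaptiveAgent(threshold=100.0)
for price, reward in [(120.0, 1.0), (130.0, 1.0), (140.0, 1.0)]:
    agent.learn(price, reward)
# fixed.threshold is still 100.0; agent.threshold has drifted upward.
```

The legal distinction falls out of the code: a liability analysis of `fixed` can point to the line where a human set the threshold, while the value `agent` actually trades on emerged from data no one reviewed.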

Regulatory Framework Challenges

Current cryptocurrency regulation focuses primarily on traditional financial services concepts: securities registration, money transmission, and anti-money laundering compliance. These frameworks assume human decision-makers who can be held accountable for financial services violations.

The Securities and Exchange Commission's enforcement actions against DeFi protocols have struggled with fundamental attribution problems. When the SEC filed charges against decentralized exchange operators, it had to identify specific individuals responsible for protocol operations—a challenge when protocols are designed for autonomous operation.

AI integration amplifies these attribution challenges:

No clear operators with ongoing control over AI decision-making.

Emergent behavior that wasn't explicitly programmed, making intent-based liability theories inadequate.

Cross-jurisdictional operations without regard for regulatory boundaries.

Contract Law Implications

Traditional contract law assumes human parties can understand obligations and modify agreements when circumstances change. Smart contracts eliminate this flexibility, executing automatically regardless of changing circumstances or unintended consequences.

AI-enhanced smart contracts create additional complications when AI systems interpret contract terms, potentially reaching conclusions differing from human understanding. If an AI system interprets "reasonable market conditions" differently than humans would, determining the controlling interpretation becomes problematic.

Consider a recent case in which an AI-powered yield farming protocol interpreted a smart contract's "emergency shutdown" provision to mean it could liquidate user positions during periods of high volatility. Users argued this was not the intended interpretation, but the AI system had identified language that technically supported its actions. Courts struggled to apply traditional contract interpretation principles to algorithmic decision-making.

Custody and Compliance

Digital asset custody regulations assume human custodians who can exercise discretion, follow court orders, and comply with regulatory requirements. AI systems managing digital assets automatically may not be capable of such compliance.

The Department of the Treasury's proposed stablecoin regulations require custodians to segregate customer assets and maintain regulatory compliance. However, when AI systems manage billions of dollars in digital assets without human oversight, questions arise about their ability to comply with court orders, implement regulatory changes, or exercise the fiduciary duties that custody regulations require.

Recent enforcement actions suggest regulators struggle with these questions. The Commodity Futures Trading Commission recently fined a DeFi protocol for failing to implement proper customer identification procedures, but protocol operators argued they couldn't control AI system compliance decisions after deployment.

Market Manipulation and AI Intent

Securities and commodities laws prohibit market manipulation, but these laws assume human traders whose intent can be analyzed through traditional legal frameworks. AI systems that manipulate markets may do so without "intent" in any conventional sense.

Recent examples include:

AI trading bots coordinating behavior across multiple protocols without explicit programming.

Machine learning systems developing pump-and-dump-like strategies through reward optimization rather than malicious intent.

AI systems discovering and exploiting regulatory arbitrage opportunities.

When AI systems engage in behavior that would constitute manipulation if done by humans, current legal frameworks provide limited guidance on liability attribution or remedial measures.

Autonomous Governance Challenges

Traditional corporate governance assumes human decision-makers who can be held accountable for business judgments. Decentralized Autonomous Organizations (DAOs) governed by AI systems eliminate this assumption entirely.

Some DeFi protocols now use AI systems to:

Analyze governance proposals and recommend voting decisions.

Automatically implement approved changes to protocol parameters.

Optimize token economics based on market performance data.

Manage protocol treasuries according to algorithmic strategies.

When these AI governance systems make poor decisions that harm token holders or protocol users, traditional corporate law provides limited recourse: there are no directors to sue, no officers to hold accountable, and no traditional corporate structure within which to assign fiduciary duties.

Privacy and Surveillance Implications

The intersection of AI and privacy-focused cryptocurrencies creates unique regulatory challenges. AI systems can potentially de-anonymize privacy coin transactions by analyzing blockchain patterns, transaction timing, and network behavior—capabilities potentially conflicting with the privacy protections these currencies provide.

Financial regulators express increasing concern about privacy coins' anti-money laundering compliance, but AI systems that can pierce privacy protections create new surveillance possibilities that existing privacy laws don't address.
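The kind of pattern analysis described above can be illustrated with a toy timing-correlation heuristic. This is a hypothetical sketch, not any real chain-analysis tool, and the addresses and timestamps are invented: if a withdrawal from a privacy pool reliably follows one specific deposit within a narrow window, the two addresses become statistically linkable despite on-chain anonymity.

```python
# Toy timing-correlation heuristic (hypothetical; not a real
# chain-analysis tool). Links withdrawals back to deposits purely
# from transaction timing.

def link_by_timing(deposits, withdrawals, window=60):
    """Link each withdrawal to a deposit made within `window` seconds
    before it, when exactly one such deposit exists."""
    links = []
    for w_addr, w_time in withdrawals:
        candidates = [d_addr for d_addr, d_time in deposits
                      if 0 < w_time - d_time <= window]
        if len(candidates) == 1:  # a unique candidate is a strong link
            links.append((candidates[0], w_addr))
    return links


deposits = [("alice_deposit", 100), ("bob_deposit", 500)]    # (address, timestamp)
withdrawals = [("fresh_addr_1", 130), ("fresh_addr_2", 530)]
links = link_by_timing(deposits, withdrawals)
# Each "fresh" withdrawal address is linked back to a depositor,
# even though nothing on-chain connects the addresses directly.
```

Real de-anonymization systems combine many such weak signals (timing, amounts, network metadata) with machine learning, which is precisely what makes their surveillance capability difficult to fit into existing privacy law.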

Algorithmic Stablecoin Challenges

Algorithmic stablecoins using AI to maintain price stability represent complex intersections of AI and cryptocurrency regulation. These systems must make rapid decisions about monetary policy, collateral management, and market intervention—functions traditionally reserved for central banks and highly regulated financial institutions.

When algorithmic stablecoins lose their pegs or collapse entirely, systemic risks can affect traditional financial markets. However, regulating AI systems performing central banking functions raises fundamental questions about monetary policy, financial stability, and autonomous systems in critical financial infrastructure.
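The monetary-policy function described above can be sketched as a minimal stabilization loop. The mechanism, damping factor, and prices below are hypothetical and not drawn from any deployed protocol: the controller expands supply when the market price sits above the $1 peg and contracts it when the price falls below, with no human review at any step.

```python
# Minimal sketch of an algorithmic stablecoin's stabilization loop.
# Mechanism and parameters are hypothetical, for illustration only.

PEG = 1.00  # target price in dollars

def rebase(supply, market_price, damping=0.5):
    """Adjust token supply in proportion to the deviation from the peg."""
    deviation = (market_price - PEG) / PEG
    # Above peg -> expand supply (dilute); below peg -> contract (support).
    return supply * (1 + damping * deviation)


supply = 1_000_000.0
for observed_price in [1.04, 1.02, 0.97]:
    supply = rebase(supply, observed_price)
# Supply expanded while the price sat above the peg, then contracted
# when it fell below -- each step a monetary-policy decision taken
# autonomously.
```

A central bank making equivalent expand/contract decisions operates under statutory mandates and human accountability; the loop above answers to nothing but its own damping parameter, which is the regulatory gap the section describes.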

Cross-Border Compliance Complications

AI-powered cryptocurrency systems can operate across multiple jurisdictions simultaneously, creating compliance challenges traditional international financial law doesn't address. An AI system managing a DeFi protocol might execute trades in multiple countries, interact with users subject to different regulatory regimes, and adapt behavior based on local conditions without human oversight.

The EU's Markets in Crypto-Assets (MiCA) regulation and similar frameworks assume identifiable operators who can ensure compliance. When AI systems make autonomous decisions across borders, ensuring compliance becomes as much a technical challenge as a legal obligation.

Proposed Regulatory Approaches

Various regulatory solutions have been proposed:

Algorithmic Accountability Requirements: Requiring identifiable parties to remain responsible for AI system decisions in cryptocurrency contexts, though this conflicts with decentralization principles.

AI Governance Standards: Mandating specific governance structures for cryptocurrency protocols using AI decision-making, though this may stifle innovation while being difficult to enforce.

Enhanced Disclosure Requirements: Requiring detailed disclosure of AI system capabilities and decision-making processes, though technical complexity makes meaningful disclosure difficult.

Regulatory Sandboxes: Creating controlled environments for AI-cryptocurrency systems under relaxed requirements, though sandbox limitations may not reflect real-world conditions.

Looking Ahead

Effective regulation of AI-powered cryptocurrency systems may require frameworks that preserve human oversight over critical decisions while allowing algorithmic efficiency for routine operations. This might include hybrid governance models requiring human oversight for critical functions, algorithmic auditing standards for financial AI systems, liability insurance requirements for AI system failures, and emergency human override capabilities.

The regulatory response will likely determine whether AI-powered financial systems develop in ways that serve broad social interests or primarily benefit creators and early adopters. The most effective approaches will combine technological innovation with legal frameworks that preserve human agency over critical financial decisions.

As AI systems become increasingly capable of autonomous financial operation, the question facing regulators and industry participants is not whether to allow such systems, but how to ensure they serve human interests rather than purely algorithmic optimization. For guidance on cryptocurrency regulation, AI compliance, or related fintech matters, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
