24 November 2025

Emerging Antitrust and IP Challenges in the Age of Artificial Intelligence & Algorithms (Part-1)

Legitpro Law

(This article is presented in two interconnected parts. Part One lays the conceptual foundation by examining the key antitrust and intellectual property challenges arising from AI and algorithmic markets. Part Two builds on that framework by analysing the Indian regulatory perspective, global enforcement trends and the broader policy conclusions that will shape the future of AI governance.)

1. Introduction: When Markets Begin to Think for Themselves

Artificial Intelligence has rapidly evolved from an operational enhancer to a core market actor. Across industries, from e-commerce, finance and digital advertising to manufacturing, aviation, ride-hailing, entertainment, insurance and logistics, algorithms are becoming the invisible engines that price goods, allocate inventory, target customers, detect fraud and adjust supply in real time. These systems are self-learning, dynamic and increasingly autonomous, meaning that markets are no longer shaped solely by strategic human behaviour but by the iterative logic of machine learning models that continuously experiment, adapt and optimise.

This shift challenges the traditional foundations of competition law and IP law. Antitrust doctrines were built around human agreements, strategic intent and observable patterns of conduct. Intellectual property frameworks were built around human creativity, originality and identifiable inventors. AI disrupts both frameworks: algorithms can collude without communication, coordinate without intent, exclude without explicit discrimination, generate content without a human author and create inventions without a human inventor.

As AI becomes embedded in nearly every business model, a new legal frontier emerges, one in which the tools shaping market outcomes are no longer passive implements but active, learning agents. Regulators, courts and businesses must confront an urgent question: How should antitrust and IP law adapt when the tools that animate markets are themselves changing at unprecedented speed?

We explore the emerging challenges at the intersection of AI, competition law and intellectual property, examining how artificial intelligence is rewriting the rules around cartel conduct, information exchange, market power, predatory pricing, algorithmic discrimination, patentability, copyright and the balance between innovation and competition.

2. AI-Driven Markets: A Paradigm Shift in Competitive Behaviour

2.1. The Rise of Algorithmic Decision Making

AI systems are not static software. They constantly learn from new data, detect patterns that humans cannot observe and adapt strategies based on environmental feedback. Their capacity to revise rules, update parameters and refine outcomes makes them profoundly different from human decision-makers.

In many industries, AI has moved from being a backend optimisation tool to becoming the primary engine that drives core commercial decisions. Algorithms now dynamically determine how prices fluctuate in response to market conditions, how users are targeted based on granular behavioural insights and how transactions are routed across digital platforms to maximise efficiency. They influence how supply chains adjust to demand patterns, disruptions and inventory cycles, and they control how advertising is allocated through real-time bidding, audience segmentation and predictive engagement models. Even risk assessment, once reliant on manual evaluation and static rules, is now governed by AI systems that continuously analyse patterns, anomalies and emerging threats. Together, these functions demonstrate how deeply AI has embedded itself into the operational and strategic fabric of modern markets.

The result is a "living market," shaped by models that continuously respond to signals from competitors, consumers and platforms. This dynamic learning environment increases efficiency, but also introduces forms of conduct that can resemble antitrust violations even in the absence of intent.

2.2. The Data-Algorithm Loop and Self-Reinforcing Market Power

Modern AI relies heavily on data. Companies with larger datasets train better models, which attract more users, which generate even more data, creating a self-reinforcing feedback loop. This dynamic amplifies market concentration, locking new entrants out of the market not because of inferior technology but because they lack access to high-quality training data.
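
The mechanics of this loop can be shown with a deliberately simplified simulation. Every growth rate and starting value below is a hypothetical assumption, chosen only to illustrate the direction of the dynamic, not to model any real market:

```python
# A minimal sketch of the data-algorithm feedback loop: model quality grows
# with accumulated data, better models attract more users, and more users
# generate more data. All coefficients are illustrative assumptions.

def simulate_feedback_loop(initial_data: float, periods: int = 10) -> list[float]:
    """Return a firm's dataset size over time under the simple loop."""
    data = initial_data
    history = [data]
    for _ in range(periods):
        model_quality = data ** 0.5      # diminishing returns to data
        new_users = 10 * model_quality   # better models attract more users
        data += new_users * 2            # each user contributes data points
        history.append(data)
    return history

# An incumbent with 10x the starting data pulls further ahead every period,
# even though both firms use identical technology.
incumbent = simulate_feedback_loop(initial_data=1000.0)
entrant = simulate_feedback_loop(initial_data=100.0)
print(f"Final gap: {incumbent[-1] / entrant[-1]:.1f}x")
```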

Traditional measures of market dominance, such as market share, control of supply or the ability to raise prices, are no longer adequate to capture the realities of AI-driven markets. Today, dominance increasingly flows from far less visible but far more powerful assets: access to proprietary datasets that enable superior model training, control over essential algorithms that determine market outcomes and exclusive access to high-performance compute resources that competitors cannot easily replicate. It also stems from ownership of foundational model architectures that become industry standards, from platform network effects that AI further amplifies through personalised engagement and from control over the cloud and AI infrastructure on which entire ecosystems depend. These new sources of competitive advantage redefine what it means to be dominant in the algorithmic economy.

3. Algorithmic Collusion: When Machines Converge on the Same Strategy

One of the most widely discussed competition risks is algorithmic collusion, where AI systems independently learn to adopt strategies that lead to supra-competitive outcomes. This can occur even without human coordination, explicit communication or shared intent.

3.1. Autonomous Tacit Collusion

AI-powered pricing systems, especially reinforcement learning engines, test different pricing strategies, observe competitor reactions and converge on stable equilibria that maximise profits. Unlike human tacit collusion, which requires sophisticated signalling or repeated interactions, algorithmic tacit collusion may emerge naturally from optimisation logic.
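
The dynamic can be reproduced in a stylised experiment, loosely in the spirit of academic simulations of algorithmic pricing. In the sketch below, the demand rule, price grid and learning parameters are all invented for illustration; the point is only that two isolated learners can drift toward mutually profitable, supra-competitive prices:

```python
import random

# A stylised sketch of autonomous tacit collusion. Two independent Q-learning
# pricing agents repeatedly compete in a toy duopoly; neither communicates
# with the other, and each observes only prices and its own profits.

PRICES = [1, 2, 3, 4, 5]                   # discrete price grid
COST = 1                                   # marginal cost
ALPHA, GAMMA, EPISODES = 0.1, 0.9, 50_000  # learning rate, discount, rounds

def profit(own: int, rival: int) -> float:
    """Toy demand: the cheaper firm takes the market; ties split it."""
    if own < rival:
        return (own - COST) * 10
    if own == rival:
        return (own - COST) * 5
    return 0.0

# Q[state][action]: the state is the rival's last observed price.
q1 = {s: {a: 0.0 for a in PRICES} for s in PRICES}
q2 = {s: {a: 0.0 for a in PRICES} for s in PRICES}
p1, p2 = random.choice(PRICES), random.choice(PRICES)

for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / (EPISODES * 0.8))  # decaying exploration
    a1 = random.choice(PRICES) if random.random() < eps else max(q1[p2], key=q1[p2].get)
    a2 = random.choice(PRICES) if random.random() < eps else max(q2[p1], key=q2[p1].get)
    r1, r2 = profit(a1, a2), profit(a2, a1)
    # Standard Q-learning update; the rival's new price is the next state.
    q1[p2][a1] += ALPHA * (r1 + GAMMA * max(q1[a2].values()) - q1[p2][a1])
    q2[p1][a2] += ALPHA * (r2 + GAMMA * max(q2[a1].values()) - q2[p1][a2])
    p1, p2 = a1, a2

# In runs of this kind, prices frequently settle above the most competitive
# level, without any agreement or communication between the two agents.
print("Final prices:", p1, p2)
```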

Three attributes make this form of collusion especially troubling. First, the speed at which algorithms operate allows them to experiment, iterate and adjust thousands of times faster than any human decision maker, rapidly converging on profit-maximising strategies. Second, their opacity means that the internal logic driving these outcomes is often inscrutable even to the developers who built the models, making it difficult to pinpoint how or why the algorithm learned to behave in a collusive manner. Third, once these systems settle into a collusive equilibrium, they tend to maintain it with remarkable stability, showing far less deviation than human actors typically would. Together, these features challenge long-standing legal frameworks that rely on detecting an identifiable "agreement" between firms, raising profound questions about how to police collusion in markets increasingly shaped by autonomous systems.

3.2. Algorithmic Information Exchange and the Hub-and-Spoke Problem

Shared AI vendors and cloud-based analytics providers can, often unintentionally, function as central hubs that facilitate coordinated behaviour among competing firms, creating a modern form of the classic "hub and spoke" collusion model. This risk becomes particularly pronounced when multiple competitors rely on the same algorithmic tools or data processing infrastructure. For instance, competing airlines may depend on a single dynamic pricing vendor that optimises fares across carriers; e-commerce sellers might use the same automated repricing engine that continuously adjusts prices in response to market movements; retailers may employ a common predictive inventory system that synchronises stocking decisions; advertisers frequently rely on the same programmatic ad exchange that allocates bidding opportunities; and ride-hailing companies often use surge pricing algorithms that follow similar logic within shared platform architectures. In each of these scenarios, the shared technological intermediary can unintentionally align market behaviour across firms, even without direct communication or explicit agreements among the competitors themselves. When the same AI tool processes sensitive data from multiple clients, it may create an unintended conduit for information exchange, even without deliberate sharing.
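
A stylised sketch makes the hub-and-spoke mechanism concrete. The vendor class below is entirely hypothetical; the point is that a single pooled optimisation rule, applied identically to every client, can align prices without any seller ever contacting another:

```python
# A minimal, hypothetical sketch of a shared repricing vendor serving many
# competing sellers. The vendor pools every client's price observations and
# applies one rule to all of them; the "undercut the pooled average by 1%"
# strategy is invented purely for illustration.

class SharedRepricingEngine:
    """A hypothetical vendor acting as the hub for competing sellers."""

    def __init__(self):
        self.observed_prices: list[float] = []  # pooled across ALL clients

    def report(self, price: float) -> None:
        """Each client feeds its current price into the shared engine."""
        self.observed_prices.append(price)

    def recommend(self) -> float:
        """The same rule for every client: undercut the pooled average by 1%."""
        avg = sum(self.observed_prices) / len(self.observed_prices)
        return round(avg * 0.99, 2)

engine = SharedRepricingEngine()
for seller_price in [102.0, 98.0, 100.0]:     # three competing sellers
    engine.report(seller_price)

# Every seller receives the same recommendation, derived from rivals' data:
print([engine.recommend() for _ in range(3)])  # prices converge immediately
```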

This development raises a critical question for modern competition enforcement: at what point does the shared use of an industry-standard AI system become the basis for finding a price-fixing, output-restriction, customer-allocation or information-sharing agreement? As firms increasingly rely on common algorithmic tools, regulators are beginning to view such shared reliance as a potential conduit for coordinated behaviour, even in the absence of any direct communication or explicit intent among competitors. Importantly, enforcement agencies are also expanding the circle of potential liability. They are signalling that responsibility may attach not only to the companies deploying these AI tools, but also to the developers who build, train or fine-tune the models, as well as the platforms and cloud providers that supply the underlying algorithmic infrastructure. This represents a significant broadening of antitrust exposure across the entire AI ecosystem, transforming developers, vendors and infrastructure providers into potential participants or facilitators of unlawful coordination.

3.3. Predictive Algorithms and Signalling

Predictive algorithms introduce an even more complex layer to the antitrust landscape because they can function as inadvertent signalling mechanisms between competitors. Many advanced AI systems, particularly those built on forecasting, demand prediction or market behaviour modelling, are designed to analyse vast amounts of real-time data and anticipate how competitors are likely to act. When such systems continually adjust a firm's own pricing or output in response to these predictions, they may create a de facto channel of communication, even though no direct exchange of information has occurred. In effect, the algorithm becomes a silent intermediary, interpreting competitor behaviour, forecasting future moves and aligning strategies in a way that can mimic coordinated conduct.
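
Even a trivial forecasting rule illustrates the point. In the hypothetical sketch below, a firm extrapolates a rival's next price from publicly observable history and aligns with it; no information is exchanged, yet the two firms' prices move together:

```python
# A minimal sketch (synthetic data): a predictive model forecasts a rival's
# next price from its recent history and the firm matches the forecast.
# No communication occurs, yet the strategies become coupled.

def forecast_next(prices: list[float]) -> float:
    """Naive linear extrapolation from the last two observed prices."""
    return prices[-1] + (prices[-1] - prices[-2])

rival_history = [100.0, 101.0, 102.0]     # publicly observable past prices

predicted = forecast_next(rival_history)  # the model expects the rival at 103.0
own_price = predicted                     # align with the forecast move
print(f"Predicted rival price: {predicted}, own price set to: {own_price}")
```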

This raises profound legal challenges for regulators. If an AI tool independently predicts a competitor's next move and adjusts its user's strategy accordingly, does this constitute indirect coordination? Can reliance on predictive modelling be treated as the functional equivalent of receiving a signal from a rival? And if so, to what extent should liability attach to the firm using the model, the developer who designed it or the provider of the data on which it was trained? As AI becomes more capable of inferring rival strategies with increasing accuracy, regulators must grapple with whether these algorithmic forecasts cross the line from legitimate competitive intelligence into a mechanism that enables or strengthens collusive market outcomes.

4. Exclusionary Practices Involving AI: Algorithms as Gatekeepers of Competition

4.1. Algorithmic Self-Preferencing

Platforms that operate as digital intermediaries, such as online marketplaces, search engines, advertising networks and app stores, now rely extensively on AI-driven ranking and recommendation systems to determine the visibility and prominence of products, services and content. These algorithms have the capacity to quietly prioritise the platform's own offerings, elevate affiliated businesses, highlight high-margin products or favour preferred advertisers, all without any explicit disclosure. Crucially, this form of discrimination is not usually the result of direct human instruction; instead, it is embedded deep within complex optimisation models, training data patterns and machine learning architectures that naturally evolve toward outcomes beneficial to the platform's commercial interests. Despite its subtlety, the competitive impact is significant. By shaping consumer attention and influencing purchasing pathways, these AI-powered ranking systems can distort fair market opportunities, entrench incumbents, suppress rival visibility and create structural advantages that are difficult, if not impossible, for smaller competitors to overcome.
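
The subtlety is easy to see in miniature. In the hypothetical ranking function below, no rule says "prefer our own products"; a single weight on platform margin, buried inside the scoring formula, produces that outcome on its own (all fields and weights are invented for illustration):

```python
# A minimal sketch of algorithmic self-preferencing: the bias lives in the
# scoring formula, not in any explicit instruction. Fields and weights are
# hypothetical.

from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float        # match quality for the user's query (0-1)
    platform_margin: float  # platform's commission or own-label margin (0-1)

def score(item: Listing) -> float:
    # The margin term quietly tilts results toward the platform's interests.
    return 0.7 * item.relevance + 0.3 * item.platform_margin

results = [
    Listing("third-party seller", relevance=0.9, platform_margin=0.1),
    Listing("platform own-label", relevance=0.7, platform_margin=0.9),
]
for item in sorted(results, key=score, reverse=True):
    print(item.name, round(score(item), 2))
# The less relevant own-label product ranks first (0.76 vs 0.66).
```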

4.2. Data-Driven Exclusion

Companies that control large, high-quality datasets increasingly hold a powerful competitive advantage, because AI performance depends directly on the quality and breadth of training data. When such firms deny access to these datasets or license them on discriminatory terms, through refusals, restrictive contracts or selectively favourable licensing, they can effectively prevent rivals from developing models of comparable strength. This type of data-driven foreclosure is often more subtle and far more effective than traditional structural barriers, enabling incumbents to entrench their dominance and suppress new competition in ways that are difficult for regulators to detect.

4.3. API and Model Access Restrictions

AI-as-a-service providers can also engage in exclusionary conduct by leveraging their control over critical technical infrastructure. They may restrict or tier API access in ways that limit a rival's ability to build competitive products, throttle the performance of competing applications that rely on their platforms or impose licensing terms that are so restrictive or costly that they effectively deter meaningful competition. In some cases, they may simply deny interoperability altogether, preventing third-party systems from integrating with their models or data streams. Although these practices resemble traditional refusal-to-deal or tying strategies, their execution through automated systems and real-time technical controls makes them far more scalable, opaque and difficult for regulators to detect or remediate, thereby amplifying their exclusionary impact in AI-driven markets.
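
Such controls can be embedded in ordinary configuration rather than in any public policy. The sketch below is entirely hypothetical, but shows how a tiered rate-limit table can render a rival's access technically available yet commercially useless:

```python
# A hypothetical sketch of exclusion via API tiering: accounts flagged as
# competitors are silently throttled. The exclusion lives in a configuration
# table, not in any published terms of service.

RATE_LIMITS = {                 # requests per minute by account tier
    "partner": 10_000,
    "standard": 1_000,
    "flagged_competitor": 50,   # effectively unusable for a rival product
}

def allowed_requests(account_tier: str) -> int:
    """Return the per-minute quota; unknown tiers get no access at all."""
    return RATE_LIMITS.get(account_tier, 0)

for tier in RATE_LIMITS:
    print(tier, allowed_requests(tier))
```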

4.4. Algorithmic Customer Steering

AI-driven recommendation engines can subtly steer high-value customers away from competitor offerings, not through explicit exclusion, but through personalised ranking, product placement or targeted suggestions. Such steering is extremely difficult to detect, making enforcement complex.

5. AI-Enabled Predatory Pricing and Personalized Exploitation

Dynamic pricing is one of AI's most commercially powerful and widely adopted capabilities, enabling businesses to adjust prices in real time in response to an array of constantly evolving market signals. Using sophisticated machine learning models, firms can instantly react to demand fluctuations, monitor and mirror competitor behaviour and calibrate pricing to each customer's willingness to pay. These systems also draw on behavioural analytics to identify purchasing patterns and leverage micro segmentation to tailor prices to narrowly defined consumer groups. Together, these features allow companies to optimise revenue with a level of precision and speed that traditional pricing strategies could never achieve.
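
A minimal sketch of such a system (the signals and coefficients are invented for illustration) shows how the same product, at the same moment, can be priced differently for two customers based on an inferred willingness to pay:

```python
# A hypothetical sketch of personalised dynamic pricing: blend a
# demand-adjusted base price with a competitor's price, then nudge the result
# toward an estimate of the individual customer's willingness to pay.

def dynamic_price(base: float, demand_index: float,
                  competitor_price: float, willingness_to_pay: float) -> float:
    """Return a per-customer price from market and behavioural signals."""
    market_price = 0.5 * base * demand_index + 0.5 * competitor_price
    # Personalisation: move 30% of the way toward the individual estimate.
    return round(market_price + 0.3 * (willingness_to_pay - market_price), 2)

# Two customers, same product, same moment, different inferred willingness:
print(dynamic_price(100, 1.1, 105, willingness_to_pay=90))   # 102.25
print(dynamic_price(100, 1.1, 105, willingness_to_pay=140))  # 117.25
```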

5.1. When Does Algorithmic Pricing Become Predatory?

Predatory pricing has traditionally been understood as pricing goods or services below cost with the deliberate intent to drive competitors out of the market, but the introduction of AI dramatically complicates this analysis. Advanced algorithms can segment customers with such precision that predatory discounts can be targeted only at those consumers most likely to defect to a rival, leaving other segments priced normally and obscuring the strategy. AI can also cross-subsidise different customer groups, maintaining overall profitability while still inflicting strategic harm on competitors. Moreover, these systems can detect vulnerability signals in real time, such as comparison-shopping behaviour or indications of churn, and selectively deploy loss-leading prices to neutralise competitive threats. As a result, AI may facilitate forms of predatory pricing that do not require across-the-board below-cost pricing, making them far harder to detect using traditional legal tests. This evolution compels regulators to rethink both the cost-based assessments and the intent requirements that have long formed the core of predation analysis.
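
A toy example illustrates why traditional cost-based tests can miss this conduct. In the hypothetical rule below, only customers flagged as likely defectors receive a below-cost price, while the firm's average price remains comfortably above cost:

```python
# A hypothetical sketch of targeted, below-cost pricing: the loss-leading
# price is deployed only to a narrow segment predicted to defect to a rival.
# All thresholds and prices are invented for illustration.

COST = 50.0
LIST_PRICE = 70.0

def targeted_price(churn_risk: float, comparing_rivals: bool) -> float:
    """Deploy a loss-leading price only where a defection is predicted."""
    if churn_risk > 0.8 and comparing_rivals:
        return 40.0          # below cost, but only for this narrow segment
    return LIST_PRICE

customers = [
    {"churn_risk": 0.9, "comparing_rivals": True},   # gets the predatory price
    {"churn_risk": 0.2, "comparing_rivals": False},  # pays full price
    {"churn_risk": 0.5, "comparing_rivals": True},   # pays full price
]
prices = [targeted_price(c["churn_risk"], c["comparing_rivals"]) for c in customers]
avg = sum(prices) / len(prices)
print(prices, f"average: {avg:.2f} (above the cost of {COST})")
```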

5.2. The Problem of Personalized Exploitation

AI's powerful customer profiling capabilities can also give rise to a range of exploitative practices that were previously difficult to implement at scale. By analysing behavioural patterns, purchase histories and psychological cues, AI systems can enable highly discriminatory pricing, offering different consumers vastly different prices for the same product. They can craft exploitative offers that take advantage of impulse triggers or emotional states, and they can manipulate behavioural biases, such as loss aversion or urgency, to push consumers toward decisions they might not otherwise make. Perhaps most concerning, AI can target financially vulnerable individuals, identifying those with limited means or high credit risk and steering them toward high-cost or unfavourable products. These practices, while often subtle and algorithmically driven, increasingly fall within the scope of emerging doctrines on exploitative abuse, even in jurisdictions where such claims have historically been limited or rarely enforced.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
