Those risks are outlined in a recent analysis by Steptoe, which examines how the expanded use of AI is widening the aperture of antitrust exposure beyond the well-worn debate over algorithmic price fixing. While U.S. and EU regulators have spent years scrutinizing whether shared pricing tools could facilitate unlawful coordination, the firm argues that AI’s competitive impact extends much further, into areas such as market allocation, personalized pricing, predatory strategies and data-driven monopolization.
The post points to recent enforcement actions as early signals of how regulators are thinking about algorithmic conduct. In the U.S., the Department of Justice’s case against RealPage put a spotlight on AI-driven rent recommendations that allegedly relied on nonpublic competitor data.
In Europe, guidance from the European Commission suggests that even the parallel use of a common pricing algorithm could breach competition law, regardless of whether firms ever directly communicate.
But Steptoe emphasizes that these cases represent only a narrow slice of the emerging risk profile.
“While the price-fixing issues in RealPage and Gibson have dominated the landscape, the rapid integration of AI into business strategies across industries is creating new and evolving antitrust risks,” the authors write.
Those risks include AI systems that learn to avoid competing head-to-head for customers or geographies, effectively replicating market or customer allocation without any human agreement. AI-driven personalized pricing could also trigger scrutiny under EU abuse-of-dominance rules or U.S. unfair competition standards if algorithms extract maximum willingness to pay from less price-sensitive consumers.
The analysis also flags AI-enabled predatory pricing as a potential next frontier. Advanced systems could selectively underprice only those customers most likely to defect to rivals, while maintaining profitability elsewhere, raising questions about how traditional below-cost pricing tests apply in an algorithmic context.
Beyond pricing, Steptoe highlights how control over large proprietary datasets can reinforce network effects, entrench dominance, and raise barriers to entry before consumer harm becomes obvious.
Looking ahead, the post suggests that enforcement agencies are likely to adapt existing doctrines rather than create entirely new AI-specific rules. Companies deploying AI at scale should expect closer scrutiny of how algorithms are trained, what data they rely on and how their outputs affect competitive dynamics. As AI becomes a standard feature of digital markets, Steptoe concludes, antitrust compliance will increasingly require proactive assessment of algorithmic tools long before regulators come knocking.