The State of Competition in the AI Stack
TSE hosted the 18th edition of its Digital Economics Conference on January 8–9, 2026, in Toulouse. One of the highlights was a roundtable discussion entitled “Is there enough competition in the AI stack? What (if anything) should be done about it?”
The session was chaired by Jacques Crémer (TSE) and brought together a distinguished panel of experts from industry, law, and economic consulting: Adam Cohen (OpenAI), F. Enrique González-Díaz (Cleary Gottlieb Steen & Hamilton), Oliver Latham (Charles River Associates), and Rikke Riber Rasmussen (Google). Together, they reflected on one of the most debated questions of the moment: is artificial intelligence becoming monopolized, or is competition still alive and well?
As Jacques Crémer recalled, when the roundtable was first conceived last summer, a sense of “clear panic” dominated the conversation: many believed that one company would inevitably come to dominate AI, leaving little room for competition. Yet over the course of the discussion, a more nuanced picture emerged — one of a fast-moving and highly contestable market.

Is there enough competition? How should we interpret AI companies’ valuations?

A first point raised during the discussion was the need to better understand the current competitive landscape — and to reconcile it with the astronomical valuations of AI companies.
For one participant, predicting the future structure of competition remains extremely difficult. While large technology companies currently control critical inputs such as cloud infrastructure, chips, and vast datasets — often forcing new entrants into partnerships — some AI firms are progressively building their own capabilities to reduce dependence and stimulate competition. Distribution was also highlighted as crucial: although most users access AI tools through chatbot interfaces, companies are rapidly diversifying their products. How AI products and digital platforms will interact in the future, and what will drive user adoption, remain open questions.
Another speaker described the market not as a settled “fiefdom,” but as an experimental phase in which no stable business model has yet emerged. In this view, high company valuations reflect expectations about a potentially enormous future market rather than entrenched dominance. The rapid entry of competitors and falling production costs were cited as signs of a dynamic and competitive ecosystem.
A further intervention compared today’s AI boom with the Web 2.0 era. Unlike services such as Google Search — which benefited from zero consumer prices and strong feedback loops reinforcing dominance — AI models currently face positive marginal costs and weaker network effects, conditions less conducive to winner-take-all outcomes. One participant drew a historical analogy with the 19th-century “Railway Mania”: even if today’s enthusiasm ultimately proves financially excessive, the infrastructure built along the way could leave lasting benefits for society.
From a legal perspective, it was emphasized that the AI market is characterized by innovation and dynamic competition rather than entrenched positions. High valuations may simply reflect temporary advantages gained through innovation — which is consistent with healthy competition, as long as those advantages are not cemented through anti-competitive practices.

How should we view partnerships between AI firms and tech giants?

The discussion then turned to cross-investments and partnerships between AI developers and large technology companies.
While regulators often view such arrangements with suspicion, one speaker argued that they can efficiently address the “knowledge spillover” problem typical of general-purpose technologies. By aligning incentives between innovation and infrastructure providers, these partnerships may accelerate development.
Another participant acknowledged that, for many start-ups, partnering with major tech firms is less a strategic choice than a necessity: the capital required to replicate large-scale cloud infrastructure makes independent scaling nearly impossible.
Data was also identified as a key competitive frontier. According to one contributor, publicly available web data has largely been exhausted. Competitive advantage is now shifting toward access to scarce, proprietary datasets. In the future, success may depend not only on better algorithms, but also on exclusive access to high-quality non-public data — potentially creating new barriers to entry.

Is it too early for regulators to intervene?

A central question was whether regulators should act now — or wait.
One participant argued that competition authorities should actively learn about the sector and carefully assess whether intervention is needed, but warned against prematurely applying strict ex-ante regulations such as those found in the Digital Markets Act (DMA). Unlike established digital platforms, AI markets have not yet been shaped by antitrust enforcement, and early heavy-handed regulation could risk stifling innovation.
Another speaker advocated for a case-by-case approach. Broad interoperability mandates, it was suggested, could become impractical or even counterproductive — for example, requiring every AI-powered feature in an application to offer multiple competing providers via a drop-down menu could degrade user experience. If competition remains strong at the upstream model level, concerns about downstream distribution may be less pressing.
Historical comparisons were also drawn with browser competition in the 1990s, when bundling practices disadvantaged standalone competitors. At the same time, one participant emphasized that competition policy is not designed to address existential or safety risks — its tools serve different objectives.
Finally, regulators were encouraged to rely on evidence rather than fear. Over-regulation, it was warned, could slow innovation. Instead of restricting development based on uncertain future risks, some suggested focusing on data portability and open standards to maintain flexibility and competition.

What role can Europe play in this global competition?

The final part of the discussion addressed Europe’s position.
Several structural challenges were identified: high energy prices, fragmented capital markets, and regulatory complexity. One participant pointed specifically to Europe’s copyright framework, arguing that restrictions on the use of data for training AI models may push frontier development outside the continent.
Another speaker suggested that Europe’s relative weakness in AI is not due to AI-specific policy failures, but to broader structural issues. A more horizontal industrial policy — and leveraging Europe’s strengths, such as nuclear energy to power data centers — could be part of the solution.
A final contribution warned against the risk of a “two-speed world.” If Europe continues to prioritize strict regulation while the United States and China move ahead rapidly, it could become primarily a market for technologies developed elsewhere — accessing only delayed or compliance-modified versions of cutting-edge tools.

Overall, the roundtable painted a complex picture: far from a settled monopoly, the AI sector appears dynamic and uncertain. The challenge for policymakers will be to strike a delicate balance — preserving competition and innovation while avoiding both complacency and premature intervention.