Key Takeaways
Concerns about concentration risk in the AI sector are growing at every layer of the stack, from model developers to cloud providers to hardware manufacturers.
According to S&P research, just ten foundation model providers account for 88% of the market, increasing the risk that issues at a single provider could spiral into a crisis.
In the AI sector, there are three distinct markets where concentration among a handful of firms could become a problem.
At the hardware level, Nvidia’s stranglehold on the supply of high-end AI processors for data centers has become a major cause for concern.
As Chinese companies have discovered in the wake of U.S. export controls, relying on a single manufacturer for crucial components leaves them vulnerable when supplies are cut off.
Meanwhile, at the level of service provision, a small number of cloud providers and AI developers wield enormous power.
Amazon Web Services and Microsoft Azure, which account for 33% and 20% of the global cloud market respectively, have also emerged as key players in AI, hosting much of the sector’s data and provisioning much of its GPU capacity.
Likewise, many of the same Big Tech firms that dominate the cloud business produce their own foundation models, which are among the most widely deployed.
Even models developed by other companies are usually distributed via platforms like Azure AI Studio and Amazon Bedrock, further entrenching Big Tech’s role in the space.
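To make that dependency concrete, here is a minimal sketch, using the boto3 SDK and an illustrative Anthropic model ID (both assumptions, not details from the article), of how an application typically reaches a third-party model through Amazon Bedrock: the request, credentials and billing all run through AWS rather than the model’s developer.

```python
# Minimal sketch: invoking a third-party foundation model via Amazon Bedrock.
# The model ID is illustrative; availability depends on your AWS account and region.
import json

import boto3

# The call goes to AWS's bedrock-runtime endpoint, not to the model developer.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # Anthropic model distributed through Bedrock
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize concentration risk in AI."}],
    }),
)

# Bedrock wraps the model's output in an AWS response envelope.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Because the endpoint, credentials and billing all belong to the cloud platform, swapping out either the model or the provider means reworking this integration, which helps explain why usage consolidates around a handful of platforms.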
Some of the first institutions to raise the alarm over AI concentration risk have been central banks and financial regulators.
The European Central Bank (ECB), for example, has argued that overreliance on a small number of suppliers may make the “operational backbone” of the financial system more vulnerable to cyberattacks or technology failures.
As explained by S&P analysts Miriam Fernandez and Andrew O’Neill, advances in small AI models and edge AI could help limit dependency on big model developers.
As companies look to reduce costs and speed up processing, the analysts predicted a “commercial shift from large to small models” with lower computational requirements.
Another promising technology they highlighted was the use of decentralized networks to train models, host data and run inference workloads.
While decentralized AI may not be able to diminish the influence of today’s AI leaders, it can create new ways for users to build, run and access agents that are less dependent on centralized servers and databases.