Key Takeaways
Two months after Elon Musk declared that xAI’s new GPU supercluster was the world’s most powerful, Mark Zuckerberg has made the same claim about Meta’s rival cluster.
The apparent one-upmanship marks the latest chapter in Musk and Zuckerberg’s longstanding rivalry, pitting two of Silicon Valley’s most high-profile leaders against each other in a contest for computing power.
In September, Musk revealed that xAI had assembled a new AI training cluster consisting of 100,000 Nvidia H100 GPUs as part of his plan to turn the company into an artificial intelligence heavyweight.
“Colossus,” as Musk christened the new machine, “is the most powerful AI training system in the world,” he said. He also predicted that it would double in size within a few months, once the company got its hands on a batch of more advanced H200s.
Then, during Meta’s third-quarter earnings call on Wednesday, Oct. 30, Zuckerberg said the firm had begun training Llama 4 models “on a cluster that is bigger than 100,000 H100s, or bigger than anything that I’ve seen reported for what others are doing.”
For Musk and Zuckerberg, the recent cluster-swinging contest is the latest incarnation of a rivalry that has often centered on artificial intelligence.
In 2017, Zuckerberg criticized Musk’s warnings about AI, describing his attitude as irresponsible. Firing back, Musk said: “I’ve talked to Mark about this. His understanding of the subject is limited.”
The social media beef reached a climax in 2023 when the two men openly floated the prospect of getting into a cage fight.
Although a physical showdown never materialized, the tech billionaires’ business rivalry has escalated since Musk founded xAI last year.
Since early 2023, Meta and xAI have emerged as front-runners in the race to dominate open-source AI.
Both companies have positioned their respective AI offerings, Llama and Grok, as alternatives to proprietary models like OpenAI’s GPT range. However, their reasons for doing so diverge.
For Meta, open-source AI is intended to foster innovation by encouraging experimentation by other researchers and developers.
Meanwhile, xAI seeks to counter what Musk views as the dangerous monopolization of AI by Big Tech by offering more accessible, transparent models.
While Meta and xAI’s heavyweight superclusters will certainly help propel the next generation of Llama and Grok models to new heights, they might not remain the largest AI training setups for long.
Earlier this year, reports suggested that Microsoft is planning a new $100 billion data center that could dwarf existing supercomputers.
Expected to be operational by 2028, the rumored platform would dramatically out-compute today’s biggest clusters, giving Microsoft and its partner OpenAI a significant boost in the escalating AI hardware race.