Key Takeaways
As sophisticated Artificial Intelligence (AI) models have become available to the public throughout the past year, they have attracted the attention of policymakers around the world. As the technology spreads and evolves, the political reaction to AI is now of major significance to the global businesses that develop and use it.
Attempts to shape AI policy have been spearheaded by Big Tech firms like Google and Meta, which have formed new groups to lobby for their respective causes. Now, as the field grows, key battle lines are emerging between the different camps.
On Tuesday, December 5, an international coalition of over 50 organizations led by IBM and Meta forged an alliance to promote open standards and transparency across the burgeoning AI ecosystem.
Consisting of businesses, universities, and various public and private research institutes, the “AI Alliance” was announced to the world with all the commitments to safety, security and responsibility that have become de rigueur for such groups.
What makes the new platform stand out, however, is its emphasis on transparency and openness, which pits the alliance against a recent trend that has seen the most powerful AI models emerge from behind closed doors.
Commenting on the initiative, Jeff Boudreau, chief AI officer at Dell, remarked: “AI progress that drives real value for humanity can only happen with open innovation and in open ecosystems.” Likewise, the CEO of AMD, Lisa Su, said that open standards and transparency can help ensure AI is “a force for positive change.”
But if the new scheme intends to promote healthy development across the AI sector, why has it left out some of the most important players in the game?
Considering that membership of the AI Alliance would give them the chance to work alongside researchers from CERN, NASA, and other bulwarks of scientific innovation, the absence of certain technology giants is conspicuous.
Were Google, Microsoft, OpenAI and Anthropic not invited to the party? Or did they choose not to participate? Perhaps they are simply preoccupied with their own AI policy initiative — the Frontier Model Forum (FMF).
Unlike the AI Alliance, the FMF’s website makes no mention of openness or transparency.
It makes a passing reference to MLCommons and pays lip service to the need for collaboration and “cross-organizational discussions.” But sharing resources and fostering open innovation doesn’t appear to be on the organization’s agenda, which is ironic considering its members’ history.
When Sam Altman and Elon Musk dreamed up OpenAI in 2015, the non-profit arrived in Silicon Valley like a refreshing breeze of open-source idealism.
In its initial years, OpenAI was a prolific contributor to the field’s intellectual commons, supporting open research and releasing the code for GPT-1 and GPT-2 under a free software license.
But things would change after the company transitioned to a for-profit business model in 2019.
With the launch of GPT-3 in 2020, OpenAI started charging users, who could only access the model via an application programming interface (API). Meanwhile, the firm’s new investor, Microsoft, retained exclusive rights to use the underlying codebase.
A similar story can be told about Google, which has built so much of its business on foundations of free and open knowledge but has been more guarded with its most powerful AI platforms in recent years.
Like OpenAI, Google has increasingly transitioned to an API-based model. For example, when developers gain access to its latest AI model, Gemini, on December 13, they will interact with it through Vertex AI, a cloud platform that provides only basic functionality for free.
While the various members of FMF have increasingly turned their back on open-source ideals, the movement is far from dead.
As the AI Alliance demonstrates, developers continue to organize around the free and open sharing of information.
While Google and OpenAI have forsaken free software licenses to retain control over their latest large language models (LLMs), Meta’s LLaMA bucked the trend. With the caveat that its license imposes restrictions on commercial use, anyone can download the weights and starting code for pre-trained and fine-tuned LLaMA models for free.
Within weeks of its release, researchers had already fine-tuned a LLaMA base model using readily available public data. In a short space of time and without anything close to OpenAI’s computing capacity, they built Koala, a chatbot that can compete with ChatGPT.
In a leaked internal document discussing the success of open-source LLMs, Google engineers wrote: “We aren’t positioned to win this arms race and neither is OpenAI.”
Pointing to an ascendant “third faction” of open-source AI developers, the memo observed that free, more efficient alternatives in the public domain are rapidly outpacing Google and OpenAI’s LLMs.
As organizations like FMF gear up to influence policy and shape regulation, it can sometimes feel like the little guy doesn’t stand a chance.
But in the end, open-source development remains a wellspring of innovation, and the most important models could still emerge from outside the walled gardens of contemporary Big Tech AI.