Key Takeaways
In the contest between proprietary and open-source AI, proponents of open models have gained an important endorsement: the White House.
Responding to President Biden’s Executive Order on AI safety , the Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued policy recommendations on Tuesday, July 30, embracing AI openness. However, supporters of closed development argue that openness comes with its risks.
In its report , NTIA observed that AI with widely available model weights, which it refers to as open foundation models, can provide a variety of benefits:
“They diversify and expand the array of actors, including less resourced actors, that participate in AI research and development. They decentralize AI market control from a few large AI developers. And they enable users to leverage models without sharing data with third parties, increasing confidentiality and data protection.”
The agency’s findings echo arguments made by open-source AI developers, who have criticized the closed-off models built by Google and OpenAI for being opaque and open to manipulation.
“Open-source and open access are important for […] ideological and philosophical reasons at this stage in the rise of AI,” Livepeer CEO Doug Petkanics told CCN in an interview.
“The leaders in this space are Big Tech companies that are executing in a proprietary way, as is their right […] but it comes with some side effects. They get to inject their bias, they get to control their access, [and] set the pricing as well.”
For researchers, hobbyists and startups, this dependency on Big Tech AI can be a major constraint. But defenders of closed models argue that placing them in the public domain could be dangerous.
Ask ChatGPT how to make a bomb, and it will decline to answer, as it will for most obviously dangerous prompts. Without access to the back end of OpenAI’s models, malicious actors hoping to bypass safety filters have limited tools at their disposal.
However, the same isn’t true for freely programmable open-source models, while LLMs like Hugging Face’s Falcon don’t even have traditional AI guardrails to begin with.
Open-source models have already been used to create chatbots optimized for crime. For example, WormGPT and FraudGPT are two modified versions of the GPTJ language model trained on large quantities of malware data.
Meanwhile, malicious instances of AI image generators, including nudify apps and tools for generating deepfake child abuse imagery, almost exclusively use the open-source Stable Diffusion model.
Although it acknowledges the risks of existing open foundation models, the NTIA report concludes that they don’t justify imposing restrictions on the technology. However, it maintains that more powerful future models may require regulatory intervention and recommends that the federal government adopt a monitoring framework to inform its response going forward.
Alongside potential dangers, the agency also found that open foundation models could have important safety benefits compared to their closed peers.
In the tradition of open-source software, advocates of making AI model weights freely available argue that the benefits of transparency and community-driven oversight outweigh safety risks.
Upon releasing Llama-2 last year, Meta declared : “We believe it’s safer,” noting that opening access means developers and researchers can stress test models, identifying issues more effectively than internal tests would be able to.
“By seeing how these tools are used by others, our own teams can learn from them, improve those tools, and fix vulnerabilities,” the firm said in a statement.
This emphasis on community oversight is echoed by many prominent AI experts.
Open-source models “can be interrogated, scrutinized, and evaluated by anyone, without needing to seek approval from a central decision-maker,” observed one paper authored by leading researchers in the field. “This empowers developers […] encouraging a culture of contribution and accountability.”
Finally, the study observed that proprietary AI can also be abused. “It is not clear that closed models are definitively ‘safer’ than open-source models,” they stated. As such, they called for a more nuanced understanding of the relative strengths and weaknesses of different approaches.