
AI Workers Back Elon Musk-Supported Bill — Tech Giants and Startups Split on Safety Regulation

By James Morales

Key Takeaways

  • California’s AI Safety Bill has caused a rift in Silicon Valley.
  • While some industry figures have come out in support of the legislation, others are opposed to it.
  • Having passed the California legislature, the bill now awaits Governor Newsom’s signature.

A California bill that intends to promote the “safe and secure” development of frontier AI models has exposed a major rift in Silicon Valley.

Senior tech executives, prominent investors, and politicians on both sides of the aisle are among the bill’s critics. Meanwhile, supporters of SB-1047 include Elon Musk, Vitalik Buterin, and, most recently, an alliance of current and former employees of AI companies, including OpenAI, Google, Anthropic, Meta, and xAI. 

What Is SB-1047?

Designed to address growing concerns about the social consequences of increasingly powerful machine learning models, SB-1047 aims to establish guidelines for the ethical development and deployment of AI systems.

Key provisions include transparency and accountability requirements for developers and for organizations that want to incorporate AI into their decision-making processes.

The bill also proposes creating a statewide oversight committee to monitor AI’s impact on various industries and ensure compliance with ethical standards. 

If signed into law, SB-1047 would affect some of the world’s most important AI firms operating in California, including Anthropic, Microsoft, Google, Meta, and OpenAI.

AI Workers Endorse California AI Bill

Having made its way through the California legislature, SB-1047 now needs only Governor Gavin Newsom’s signature to become law. As the deadline for him to sign or veto the bill approaches, Newsom has come under pressure from both sides.

In an open letter to the governor, 113 current and former employees of leading AI labs have voiced their support for the bill, which they said represents a “meaningful step forward” in California’s efforts to regulate AI safety.

“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure,” the letter states. “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”

A Split From Industry Leaders

The recent letter marks a major divergence from the position taken by some of the top executives in the AI industry, who have come out against California’s AI bill.

For example, OpenAI’s chief strategy officer Jason Kwon has argued that AI safety should be regulated at the federal level rather than by individual states. By moving ahead independently, the proposed legislation could threaten “California’s unique status as the global leader in AI,” Kwon stated in a letter to the bill’s author, California State Senator Scott Wiener.

Meanwhile, Google Brain co-founder Andrew Ng called SB-1047 “vague” and “pernicious,” arguing that it creates a “huge gray zone in which developers can’t be sure how to avoid breaking the law.”

Among the large AI developers that would be most affected by the legislation, only Anthropic has expressed cautious support for SB-1047.

Incidentally, some signatories of the letter supporting the bill work for OpenAI and Google, highlighting just how divisive the legislation is, even among those with otherwise shared interests. 

This division is also mirrored at the political level. Although the bill has the support of Democrats who control the California legislature, some of their peers in Washington have been critical of SB-1047.

House Democrats Split With California Peers

Former House Speaker Nancy Pelosi has said the bill is “well-intentioned but ill-informed.”

Meanwhile, echoing Kwon’s comments, Representative Zoe Lofgren wrote to Wiener to express her concern that the bill is “premature” given the current lack of federal guidelines on the matter.

Moreover, “SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement,” she added.
