On August 15, California’s Appropriations Committee passed SB 1047 with significant amendments. Here is what changed and what the bill now does.
Outside of science fiction, AI systems have not killed people or been used in widespread cyberattacks. Still, some lawmakers want safeguards in place before bad actors can create a dystopian future. California’s SB 1047 aims to prevent real-life disasters caused by AI systems. The bill passed the state legislature in August and now awaits California Governor Gavin Newsom’s signature or veto.
Although the goal of SB 1047 seems agreeable, it has angered many players in Silicon Valley, from venture capitalists to big tech trade groups, researchers, and startup founders. Among the numerous AI bills circulating in the country, California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most contentious.
SB 1047 aims to stop large AI models from causing “critical harms” to humanity. The bill gives examples of such harms, like using an AI model to create a weapon resulting in mass casualties or orchestrating a cyberattack causing more than $500 million in damages. Developers, the companies creating the models, would be held accountable for implementing safety protocols to prevent these outcomes.
The bill applies only to the world’s largest AI models: those that cost at least $100 million to train and use a very large amount of compute during training. Few companies have yet developed AI products meeting these thresholds, but tech giants like OpenAI, Google, and Microsoft are likely to do so soon.
The bill also mandates safety protocols for covered AI models, including the ability to fully shut a model down in an emergency. Developers must create testing procedures to address the risks their models pose and hire third-party auditors annually to assess their safety practices.
A new agency, the Board of Frontier Models, would enforce the rules of SB 1047. Developers must submit annual certifications to the board, assessing the potential risks of their AI models, the effectiveness of their safety protocols, and compliance with the bill’s requirements.
If a developer’s safety measures are found inadequate, California’s attorney general can seek an injunction against them, potentially halting their operations. Penalties for an AI model’s use in a catastrophic event could reach $10 million for a $100 million model.
Proponents of SB 1047, like California State Senator Scott Wiener, believe that the bill is crucial to protecting citizens from potential harms of AI technology. On the other hand, opponents, including several Silicon Valley players, argue that the bill will stifle innovation and harm the tech ecosystem.
The fate of SB 1047 now lies in the hands of Governor Gavin Newsom, who must decide whether to sign it into law by the end of September. If signed, the bill is likely to face legal challenges before its implementation in 2026.