California Governor Gavin Newsom has vetoed Senate Bill 1047, a state proposal aimed at imposing stricter safety regulations on advanced Artificial Intelligence (AI) systems, a decision that has reignited debate about the growing use of AI across the workforce.
The bill would have required tech companies to conduct safety tests on their AI models prior to public release. It was drafted specifically to limit the potential risks posed by advanced AI technologies — from cyberattacks and deepfakes to biowarfare.
The governor’s veto underscores a pressing question affecting the future of work and workplaces’ acceptance of advanced AI tools: how can policymakers ensure AI is developed and deployed responsibly without curbing its immense positive potential?
Newsom argued in a public statement that SB 1047, while “well-intentioned,” applied blanket regulations to AI technologies regardless of their risk levels — potentially hurting smaller AI innovators while focusing only on the largest, most costly models.
“SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Advocacy groups and some policymakers see the veto as a missed opportunity. They argue that without clear guidelines, the risks associated with unregulated AI will continue to grow. There have already been reports this year of AI technology being weaponized by malicious international hackers in attempts to steal corporate information. This concern is compounded by the lack of federal regulations — leaving a critical gap in U.S. AI governance.
“This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet,” California Senator Scott Wiener, who introduced the bill, said. “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public.”
Governor Newsom’s administration said that it plans to collaborate with prominent AI researchers and academics — including Dr. Fei-Fei Li of Stanford’s Institute for Human-Centered AI (HAI) and Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley. These experts will help state policymakers draft new legislation designed to balance the benefits of AI innovation with safety protocols.