California state legislators have taken a historic step toward regulating artificial intelligence (AI), in a vote that could greatly influence how the technology is governed at the national level in the U.S.
The state’s assembly recently approved Senate Bill 1047, which would require tech companies to conduct safety tests on their AI models before releasing them to the public. The legislation is designed specifically to limit the potential risks posed by advanced AI, from cyberattacks and deepfakes to biowarfare.
“Innovation and safety can go hand in hand — and California is leading the way,” stated Senator Scott Wiener, who introduced the bill. “With this vote, the Assembly has taken the truly historic step of working proactively to ensure an exciting new technology protects the public interest as it advances.”
Enhanced regulation of AI development might assuage persistent workforce fears about the technology. An updated study by Bentley University and Gallup comparing 2023 and 2024 figures reveals that 75% of respondents believe AI will reduce the total number of jobs in the country over the next decade.
This apprehension is unchanged from last year, as economic uncertainty and the rapid pace of AI development sustain concerns about the technology’s impact on the workforce.
Around 70% of individuals who are “extremely knowledgeable” about AI have little to no trust in business to use AI responsibly.
The New York Times reports that the bill grants the state attorney general the authority to sue AI companies for significant harms like death or property damage caused by their language models.
Governor Gavin Newsom has not yet revealed his stance on the bill, and intense lobbying from the tech industry is expected until the September 30 deadline for his decision.
The legislation could slow the rapid introduction of AI-driven solutions in the workplace as businesses spend time and resources ensuring compliance with the new regulations.
However, supporters argue that without proper safeguards, AI could cause catastrophic harm, potentially disrupting democratic processes. There have already been reports this year of the technology being weaponized by malicious international hackers attempting to steal corporate information.
AI is becoming increasingly embedded in the workflows of many industries across the U.S., and the future of work will depend on how well tech giants balance advancing the technology with protecting public safety and interests.
Like California’s AB 3211, which would require tech companies to add watermarks to AI-generated content, Senate Bill 1047 could lead to the creation of new roles in the labor market focused specifically on compliance.