The author of California’s SB 1047, the nation’s most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California state Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company’s AI systems could pose a “critical risk” to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources needed to develop AI that benefits the public.
Wiener’s last AI bill, California’s SB 1047, sparked a lively debate across the country over how to handle massive AI systems that could cause disasters. SB 1047 aimed to prevent the possibility of very large AI models creating catastrophic events, such as causing loss of life or cyberattacks resulting in more than $500 million in damages. However, Governor Gavin Newsom ultimately vetoed the bill in September, saying SB 1047 was not the best approach.
But the debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would hurt America’s competitive edge in the global AI race, and claimed the bill was inspired by unrealistic fears that AI systems could lead to science fiction-like doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists engaged in a “propaganda campaign” against his bill, pointing in part to Y Combinator’s claim that SB 1047 would send startup founders to jail, a claim experts argued was misleading.
SB 53 essentially takes the least controversial parts of SB 1047 – such as whistleblower protections and the establishment of a CalCompute cluster – and repackages them into a new AI bill.
Notably, Wiener is not shying away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a “critical risk.” The bill defines critical risk as a “foreseeable or material risk that a developer’s development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property.”
SB 53 bars frontier AI model developers – likely including OpenAI, Anthropic, and xAI, among others – from retaliating against employees who disclose concerning information to California’s Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to build out a public cloud computing cluster. The group would consist of University of California representatives, as well as other public and private researchers. It would make recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it’s very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California’s legislative bodies before it reaches Governor Newsom’s desk. State lawmakers will surely be waiting to see Silicon Valley’s response to SB 53.
However, 2025 may be a tougher year to pass AI safety bills than 2024 was. California passed 18 AI-related bills in 2024, but now it appears the AI doom movement has lost ground.
Vice President J.D. Vance signaled at the Paris AI Action Summit that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster established by SB 53 could certainly be seen as advancing AI progress, it’s unclear how legislative efforts around existential AI risk will fare in 2025.