The author of SB 1047 introduces a new AI bill in California
techcrunch.com
The author of California's SB 1047, the nation's most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.

California state Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they think their company's AI systems could pose a critical risk to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources necessary to develop AI that benefits the public.

Wiener's last AI bill, California's SB 1047, sparked a lively debate across the country over how to handle massive AI systems that could cause disasters. SB 1047 aimed to prevent the possibility of very large AI models creating catastrophic events, such as loss of life or cyberattacks causing more than $500 million in damages. However, Governor Gavin Newsom ultimately vetoed the bill in September, saying SB 1047 was not the best approach.

The debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would hurt America's competitive edge in the global AI race, and claimed the bill was inspired by unrealistic fears that AI systems could bring about science fiction-like doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists engaged in a "propaganda campaign" against his bill, pointing in part to Y Combinator's claim that SB 1047 would send startup founders to jail, a claim experts argued was misleading.

SB 53 essentially takes the least controversial parts of SB 1047, such as whistleblower protections and the establishment of a CalCompute cluster, and repackages them into a new AI bill.

Notably, Wiener is not shying away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a "critical risk," which the bill defines as a "foreseeable or material risk that a developer's development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property."

SB 53 bars frontier AI model developers (likely including OpenAI, Anthropic, and xAI, among others) from retaliating against employees who disclose concerning information to California's Attorney General, federal authorities, or other employees. Under the bill, these developers would also be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.

As for CalCompute, SB 53 would establish a group to build out a public cloud computing cluster. The group would consist of University of California representatives, as well as other public and private researchers, and it would make recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.

Of course, it's very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California's legislative bodies before it reaches Governor Newsom's desk, and state lawmakers will surely be watching for Silicon Valley's reaction.

However, 2025 may be a tougher year to pass AI safety bills than 2024 was. California passed 18 AI-related bills in 2024, but the AI doom movement now appears to have lost ground. Vice President J.D. Vance signaled at the Paris AI Action Summit that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster established by SB 53 could surely be seen as advancing AI progress, it's unclear how legislative efforts around existential AI risk will fare in 2025.