Anthropic endorses California’s AI safety bill, SB 53


On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech trade groups such as the CTA and Chamber of Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in a blog post. “The question isn’t whether we need AI governance; it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks, and to release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 concentrates on the extreme end of AI risk, such as models providing expert-level assistance in the creation of biological weapons or being used in cyberattacks, rather than more near-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a previous version of SB 53, but a final vote is still needed before the bill can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, though he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which both argue that such efforts could limit America’s innovation in the race against China. Investors such as Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months the Trump administration has repeatedly threatened to block states from regulating AI altogether.

One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.


However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.

“We have long said we would prefer a federal standard,” Clark said. “But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, although the letter did not mention SB 53 by name.

OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those with gross revenue of more than $500 million.

Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53’s drafters have “shown respect for technical reality,” as well as a “measure of legislative restraint.”

Senator Wiener has previously said that SB 53 was heavily influenced by an expert policy panel that Governor Newsom convened to advise California on how to regulate AI, co-led by Fei-Fei Li, a leading Stanford researcher and co-founder of World Labs.

Most AI labs already have some version of the internal safety policy that SB 53 would require. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are bound only by their own commitments, and they have at times walked back their self-imposed safety promises. SB 53 aims to codify these requirements as state law, with financial repercussions if an AI lab fails to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section that would have required AI model developers to be audited by third parties. Tech companies have fought against these kinds of third-party audits in other AI policy battles, arguing that they are overly burdensome.
