California lawmaker behind SB 1047 reignites push for mandated AI safety reports


On Wednesday, California State Senator Scott Wiener introduced new amendments to his latest bill, SB 53, which would require the world's largest AI companies to publish safety and security protocols and to issue reports when safety incidents occur.

If signed into law, California would become the first state to impose such requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and Governor Gavin Newsom ultimately vetoed it. The California governor then called for a group of AI leaders, including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state's AI safety efforts.

California's AI policy group recently published its final recommendations, citing the need for "requirements on industry to publish information about their systems" in order to establish a "robust and transparent evidence environment." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.

"The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks," Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without stifling California's fast-growing AI industry.

"These are concerns that my organization and others have been talking about for a while," Nathan Calvin, VP of State Affairs for a nonprofit AI safety group, said in an interview with TechCrunch. "Having companies explain what actions they're taking to address these risks seems like a minimal, reasonable step."

The bill also creates whistleblower protections for employees of AI labs who believe their company's technology poses a "critical risk" to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damages.

In addition, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener's new bill does not hold AI model developers liable for harms caused by their AI models. SB 53 was also designed not to impose a burden on startups and researchers that use or build on top AI models, including open-source models, from leading AI developers.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom's desk.

Elsewhere in the United States, New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy when federal lawmakers considered an AI moratorium on state AI regulation, an attempt to limit a "patchwork" of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

"Ensuring AI is developed safely should not be controversial; it should be foundational," Geoff Ralston, the former president of Y Combinator, said in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership."

To this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they've been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model. Later, a third-party study came out suggesting that the model may be less aligned than previous AI models.

SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they'll be watching closely as Senator Wiener once again tests those boundaries.
