Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks


In a new report, a California-based policy group co-led by AI pioneer Fei-Fei Li suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.

The 41-page interim report, released Tuesday by the Joint California Policy Working Group on Frontier AI Models, comes from an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.

In the report, Li and her co-authors, UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argue for laws that would increase transparency into what frontier AI labs like OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio as well as critics of SB 1047 such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may require laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluation of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.

Li et al. write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or pose other "extreme" threats. They argue, however, that AI policy should not only address current risks but also anticipate future consequences that might occur without sufficient safeguards.

"For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could cause extensive harm," the report states. "If those who speculate about the most extreme risks are right, and we are uncertain whether they will be, then the stakes and costs for inaction on frontier AI at this current moment are extremely high."

The report recommends a two-pronged strategy to increase transparency into AI model development: trust, but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit their testing claims for third-party verification.

While the report, whose final version is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.

Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."

The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of their safety tests. Taking a broader view, it seems to be a much-needed win for AI safety advocates, whose agenda has lost ground in the past year.
