Google’s latest AI model report lacks key safety details, experts say


On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.

Technical reports provide useful — and, at times, unflattering — information that companies don’t always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.

Google takes a different safety reporting approach than some of its rivals, publishing technical reports only once it considers a model to have graduated from the “experimental” stage. The company also doesn’t include findings from all of its “dangerous capability” evaluations in these write-ups; it reserves those for a separate audit.

Several experts TechCrunch spoke with were still disappointed by the sparsity of the Gemini 2.5 Pro report, noting that it doesn’t mention Google’s Frontier Safety Framework (FSF). Google introduced the FSF last year as an effort to identify future AI capabilities that could cause “severe harm.”

“This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. “It’s impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models.”

Thomas Woodside, co-founder of the Secure AI Project, said that while he’s glad Google released a report for Gemini 2.5 Pro, he’s not convinced of the company’s commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024 — for a model announced in February of that same year.

Not inspiring much confidence, Google has yet to make available a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is “coming soon.”

“I hope this is a promise from Google to start publishing more frequent updates,” Woodside told TechCrunch. “Those updates should include the results of evaluations for models that haven’t been publicly deployed yet, since those models could also pose serious risks.”

Google may have been one of the first AI labs to propose standardized reports for models, but it’s not the only one drawing complaints lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.

Hanging over Google’s head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all “significant,” public AI models “within scope.” The company followed that promise with similar commitments to other countries, pledging to “provide public transparency” around AI products.

Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a “race to the bottom” on AI safety.

“Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he said.

Google said in statements that, while not detailed in its technical reports, it does conduct safety testing and “adversarial red teaming” for models ahead of release.
