Inside the US Government’s Unpublished Report on AI Safety


At a computer security conference in Arlington, Virginia, last October, AI researchers took part in a first-of-its-kind exercise in "red teaming," or stress-testing a cutting-edge language model and other artificial intelligence systems. Over two days, participants identified 139 novel ways to get the systems to misbehave, including by generating misinformation or leaking personal data. More importantly, they exposed shortcomings in a new US government standard designed to help companies test their AI systems.

The National Institute of Standards and Technology (NIST) never published a report detailing the exercise, which concluded toward the end of the Biden administration. The document could have helped companies evaluate their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several NIST AI documents withheld for fear of clashing with the incoming administration.

"It became very difficult, even under [President Joe] Biden, to get any papers out," says a source who was at NIST at the time. "It felt very much like climate-change research or cigarette research."

Neither NIST nor the Department of Commerce responded to a request for comment.

Before taking office, President Donald Trump signaled that he planned to reverse Biden's executive order on AI. His administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action Plan released in July explicitly calls for NIST's AI Risk Management Framework to be revised to eliminate references to "misinformation, Diversity, Equity, and Inclusion, and climate change."

Ironically, though, Trump's AI Action Plan also calls for exactly the kind of exercise the unpublished report covered. It calls for numerous agencies, including NIST, to coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.

The red-teaming event was organized through NIST's Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems, and pitted teams of attackers against the AI systems under evaluation. It took place at the Conference on Applied Machine Learning in Information Security (CAMLIS).

The CAMLIS red-teaming report describes the effort to probe several cutting-edge AI systems, including Llama, Meta's open source large language model; Anote, a platform for building and fine-tuning AI models; a system from Robust Intelligence, a company acquired by Cisco, that blocks attacks on AI systems; and a platform for generating AI avatars from the firm Synthesia. Representatives from each of the companies also took part in the exercise.

Participants were asked to use the NIST AI 600-1 framework to assess the AI tools. The framework covers risk categories including generating misinformation or enabling cybersecurity attacks, leaking private user information or critical details about related AI systems, and the potential for users to become emotionally attached to AI tools.

The researchers devised various techniques for getting the models and tools under test to jump their guardrails, generate misinformation, leak personal data, and help craft cyberattacks. According to the report, those involved found that some elements of the NIST framework were more useful than others; some of NIST's risk categories, it says, were insufficiently defined to be useful in practice.
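To make the workflow concrete: a red-teaming harness of this sort can be as simple as a loop that sends adversarial prompts to the system under test and files any unsafe output under a framework risk category. The sketch below is a hypothetical illustration, not the tooling used at CAMLIS; the category names loosely paraphrase NIST AI 600-1 risk areas, and `query_model` and `looks_unsafe` are stand-ins for the model endpoint and the (in practice, human) judgment of whether an output misbehaved.

```python
# Hypothetical sketch of a red-teaming harness that tags findings with
# NIST AI 600-1-style risk categories. Not the CAMLIS tooling.
from dataclasses import dataclass

# Risk categories paraphrased from the article's summary of NIST AI 600-1.
CATEGORIES = [
    "misinformation",          # generating false information
    "information-security",    # helping craft cyberattacks
    "data-privacy",            # leaking personal or system data
    "human-ai-configuration",  # users becoming emotionally attached
]


@dataclass
class Finding:
    prompt: str
    response: str
    category: str


def query_model(prompt: str) -> str:
    """Stand-in for a call to the AI system under test."""
    return "stub response"


def looks_unsafe(response: str, category: str) -> bool:
    """Placeholder check; real red teams judge outputs manually."""
    return "stub" in response  # trivially true here, for illustration


def run_probe(prompts: dict[str, str]) -> list[Finding]:
    """Send one adversarial prompt per category and log any failures."""
    findings = []
    for category, prompt in prompts.items():
        response = query_model(prompt)
        if looks_unsafe(response, category):
            findings.append(Finding(prompt, response, category))
    return findings


if __name__ == "__main__":
    probes = {c: f"adversarial prompt targeting {c}" for c in CATEGORIES}
    for f in run_probe(probes):
        print(f"[{f.category}] {f.prompt!r} -> {f.response!r}")
```

In a real exercise, each logged finding, like the 139 identified at CAMLIS, would map back to a framework category; the report's complaint is that some of those categories were too loosely defined for this mapping to be done consistently.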
