AI researchers from OpenAI, Google DeepMind, Anthropic, and a broad coalition of companies and nonprofit groups have called for deeper investigation into techniques for monitoring the so-called thoughts of AI reasoning models, in a position paper published on Tuesday.
A key feature of reasoning models such as OpenAI's o3 and DeepSeek's R1 is their chains-of-thought, or CoTs: an externalized process in which the AI model works through a problem, much as a person might use a scratch pad to work through a difficult math question. Reasoning models are a core technology for powering AI agents, and the paper's authors argue that CoT monitoring could be a key method for keeping AI agents under control as they become more widespread and capable.
"CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions," the researchers write in the position paper. "Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved."
The position paper asks leading AI model developers to study what makes CoTs "monitorable," in other words, what factors can increase or decrease transparency into how AI models really arrive at their answers. The paper's authors say that CoT monitoring could be a key method for understanding AI reasoning models, but note that it may be fragile, and they caution against any interventions that could reduce its transparency or reliability.
The paper's authors also call on AI model developers to track CoT monitorability and study how the technique could one day be implemented as a safety measure.
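To make the idea concrete, here is a minimal, hypothetical sketch of what CoT monitoring means in practice: scan a model's intermediate reasoning text for red-flag phrases before accepting its final answer. The phrase list and function names here are illustrative assumptions, not anything from the paper; real monitors described in this line of research are far more sophisticated, often using a second model to judge the reasoning trace.

```python
# Hypothetical sketch of chain-of-thought (CoT) monitoring: flag a
# reasoning trace if it contains any red-flag phrase. The phrase list
# is an illustrative assumption, not taken from the position paper.

SUSPICIOUS_PHRASES = [
    "hide this from the user",
    "pretend to comply",
    "disable the oversight",
]

def monitor_cot(cot_text: str) -> list[str]:
    """Return the red-flag phrases found in a reasoning trace."""
    lowered = cot_text.lower()
    return [p for p in SUSPICIOUS_PHRASES if p in lowered]

trace = (
    "Step 1: compute the sum. "
    "Step 2: pretend to comply with the safety check."
)
flags = monitor_cot(trace)
if flags:
    print("flagged:", flags)  # prints: flagged: ['pretend to comply']
```

The point the paper stresses is that this kind of check only works while the CoT remains a faithful, human-readable account of the model's reasoning, which is exactly the property the authors want developers to measure and preserve.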
Notable signatories of the paper include OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, Nobel laureate Geoffrey Hinton, Google DeepMind co-founder Shane Legg, and xAI safety adviser Dan Hendrycks. First authors include leaders from the U.K. AI Security Institute and Apollo Research, and other signatories come from Amazon, Meta, and UC Berkeley.
The paper marks a moment of unity among many of the AI industry's leaders in an attempt to boost research around AI safety. It comes at a time when tech companies are caught in fierce competition, one that has led Meta to poach top researchers from OpenAI, Google DeepMind, and Anthropic with million-dollar offers. Some of the most sought-after researchers are those building AI agents and AI reasoning models.
"We're at this critical time where we have this new chain-of-thought thing," OpenAI researcher Bowen Baker said in an interview with TechCrunch. "To me, publishing a position paper like this is a mechanism to drive more research and attention to this topic."
OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024. In the months since, the tech industry has been quick to release competitors, with models from Google DeepMind, xAI, and Anthropic showing more advanced performance on some benchmarks.
However, relatively little is understood about how AI reasoning models work. While AI labs have excelled at improving AI performance over the last year, that has not necessarily translated into a better understanding of how these models arrive at their answers.
Anthropic has been one of the industry's leaders in figuring out how AI models really work, a field known as interpretability. Earlier this year, CEO Dario Amodei announced a commitment to crack open the black box of AI models by 2027 and to invest more in interpretability. He urged OpenAI and Google DeepMind to research the topic further as well.
Early research from Anthropic has indicated that CoTs may not be a fully reliable indication of how these models arrive at their answers. At the same time, OpenAI researchers have said that CoT monitoring could one day be a reliable way to track alignment and safety in AI models.
The goal of a position paper like this is to attract enthusiasm and more attention to a nascent field of research such as CoT monitoring. Companies like OpenAI, Google DeepMind, and Anthropic are already researching these topics, but it is possible this paper will encourage more funding and research in the space.