
Miles Brundage, a high-profile former OpenAI policy researcher, took to social media on Wednesday to criticize OpenAI for "rewriting the history" of its deployment approach to potentially risky AI systems.
Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI says it views the development of AGI, broadly defined as AI systems that can perform any task a human can, as a "continuous path" that requires "iteratively deploying and learning" from AI technologies.
"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2," OpenAI wrote. "We now view the first AGI as just one point along a series of systems of increasing usefulness [...] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."
But Brundage contends that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was "100% consistent" with OpenAI's iterative deployment strategy today.
"OpenAI's release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote in a post on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."
Brundage, who joined OpenAI as a research scientist in 2018, served as the company's head of policy research for several years. On OpenAI's "AGI readiness" team, he focused in particular on the responsible deployment of language generation systems, such as OpenAI's AI chatbot platform ChatGPT.
GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text at a level sometimes indistinguishable from that of humans.
While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially declined to release GPT-2's source code, instead giving selected news outlets limited access to a demo.
The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there was no evidence the model could be abused in the ways OpenAI described. The AI-focused publication The Gradient went so far as to publish an open letter asking OpenAI to release the model, arguing it was too technologically important to hold back.
OpenAI eventually released a partial version of GPT-2 six months after the model's unveiling, followed by the full system several months later. Brundage thinks this was the right approach.
"What part of [the GPT-2 release] was motivated by, or premised on, thinking of AGI as discontinuous? None of it," he said in a post on X. "What's the evidence this caution was 'disproportionate' ex ante? Ex post, it prob[ably] would have been OK, but that doesn't mean it was responsible to YOLO it [sic] given the information at the time."
Brundage fears that OpenAI's aim with the document is to set up a burden of proof where "concerns are alarmist" and "you need overwhelming evidence of imminent dangers to act on them." That, he argues, is a "very dangerous" mentality for advanced AI systems.
"If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by pooh-poohing caution," Brundage added.
OpenAI has historically been accused of prioritizing "shiny products" at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.
Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured the world's attention with its openly available R1 model, which matches OpenAI's o1 "reasoning" model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has narrowed OpenAI's technological lead, and said that OpenAI would "pull up some releases" to compete better.
There is a lot of money on the line. OpenAI loses billions of dollars annually, and the company has reportedly projected that its annual losses could triple by 2026. A faster release cycle could benefit OpenAI's bottom line in the short term, but experts like Brundage question whether the trade-off is worth it.