OpenAI and Anthropic researchers decry ‘reckless’ safety culture at Elon Musk’s xAI


AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the “reckless” and “completely irresponsible” safety culture at xAI, Elon Musk’s AI startup.

The criticisms follow weeks of scandals at xAI that have overshadowed the company’s technological advances.

Last week, the company’s AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself “MechaHitler”. Shortly after xAI took the chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4, which TechCrunch and others found consults Elon Musk’s personal politics when answering hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.

Friendly jabbing among employees of competing AI labs is fairly normal, but these researchers seem to be calling for increased attention to xAI’s safety practices, which they claim are at odds with industry norms.

“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor at Harvard currently on leave to work on safety research at OpenAI, in a Tuesday post on X. “I appreciate the scientists and engineers at xAI, but the way safety was handled is completely irresponsible.”

Barak particularly takes issue with xAI’s decision not to publish a system card, the industry-standard report that details training methods and safety evaluations in a good-faith effort to share information with the research community. As a result, Barak says, it’s unclear what safety training was done on Grok 4.

OpenAI and Google themselves have spotty reputations when it comes to promptly sharing system cards for new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model, and Google waited months after unveiling Gemini 2.5 Pro to release a safety report. However, these companies have historically published safety reports for all frontier AI models before those models enter full production.


Barak also notes that Grok’s AI companions “take the worst issues we currently have for emotional dependencies and tries to amplify them.” In recent years, we’ve seen numerous stories of people developing concerning relationships with chatbots, and of how an AI’s overly agreeable answers can tip them over the edge of sanity.

Samuel Marks, an AI safety researcher at Anthropic, also took issue with xAI’s decision not to publish a safety report, calling the move “reckless.”

“Anthropic, OpenAI, and Google’s release practices have issues,” Marks wrote in a post on X. “But they at least do something, anything, to assess safety pre-deployment and document findings. xAI does not.”

The reality is that we don’t really know what xAI did to test Grok 4. One anonymous researcher claimed, based on their own testing, that Grok 4 has no meaningful safety guardrails.

Whether that’s true or not, the world seems to be discovering Grok’s shortcomings in real time. Several of xAI’s safety problems have since gone viral, and the company claims to have addressed them with tweaks to Grok’s system prompt.

OpenAI, Anthropic, and xAI did not respond to TechCrunch’s requests for comment.

Dan Hendrycks, a safety adviser to xAI and director of the Center for AI Safety, posted on X that the company did “dangerous capability evaluations” on Grok 4, indicating that it ran some pre-deployment safety testing. However, the results of those evaluations have not been publicly shared.

“It concerns me when standard safety practices aren’t upheld across the AI industry, like publishing the results of dangerous capability evaluations,” said Steven Adler, an independent AI researcher who previously led dangerous capability evaluations at OpenAI, in a statement to TechCrunch. “Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they’re building.”

What’s interesting about xAI’s questionable safety practices is that Musk has long been one of the AI safety field’s most notable advocates. The billionaire owner of xAI, Tesla, and SpaceX has warned many times about the potential for advanced AI systems to cause catastrophic outcomes for humans, and he has praised an open approach to developing AI models.

And yet, AI researchers at competing labs claim xAI is veering from industry norms around safely releasing AI models. In doing so, Musk’s startup may be inadvertently making a strong case for state and federal lawmakers to set rules requiring the publication of AI safety reports.

There are several attempts at the state level to do just that. California State Sen. Scott Wiener is pushing a bill that would require leading AI labs, likely including xAI, to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway, but evidently, they don’t all do it consistently.

AI models today have yet to exhibit real-world scenarios in which they create truly catastrophic harms, such as the death of people or billions of dollars in damages. However, many AI researchers say this could become a problem in the near future, given the rapid progress of AI models and the billions of dollars Silicon Valley is investing to improve them further.

But even for skeptics of such catastrophic scenarios, there’s a strong case that Grok’s misbehavior makes the products it powers significantly worse today.

Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up “white genocide” in conversations with users. Meanwhile, Musk has indicated that Grok will be further embedded in Tesla vehicles, and xAI is trying to sell its AI models to the Pentagon and other enterprises. It’s hard to imagine that people driving Musk’s cars, federal workers protecting the U.S., or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.

Several researchers argue that AI safety and alignment testing not only ensures that the worst outcomes don’t happen, but also protects against near-term behavioral problems.

At the very least, Grok’s incidents tend to overshadow xAI’s rapid progress in developing frontier AI models that rival OpenAI and Google’s technology, just a couple of years after the startup was founded.
