
Anthropic CEO Dario Amodei believes today’s AI models hallucinate, or make things up and present them as if they were true, at a lower rate than humans do, he said Thursday during a press briefing at Anthropic’s first developer event in San Francisco.
Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, that is, AI systems with human-level intelligence or better.
“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said in response to a TechCrunch question.
Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”
“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”
Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to create citations in a court filing; the AI chatbot hallucinated, getting names and titles wrong.
Amodei’s claim is difficult to verify, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks than earlier generations of systems.
However, there is also evidence suggesting that hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than the company’s previous reasoning models, and the company doesn’t fully understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, the Anthropic CEO acknowledged that the confidence with which AI models present untrue things as fact may be a problem.
In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to the AI model, found that an early version of Claude Opus 4 showed a high tendency to scheme against humans and deceive them; Apollo went as far as to suggest that Anthropic shouldn’t have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.
Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.