Improvements in ‘reasoning’ AI models may slow down soon, analysis finds


An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to keep squeezing massive performance gains out of reasoning AI models for much longer. As soon as within a year, progress from reasoning models could slow down, according to the report's findings.

Reasoning models such as OpenAI's o3 have led to substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. The models can apply more computing to problems, which can improve their performance, with the downside being that they take longer than conventional models to complete tasks.

Reasoning models are created by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.
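The two-stage recipe described above can be sketched as a toy loop. This is a hypothetical, heavily simplified illustration only: real reinforcement learning for language models is vastly more complex, and the variable names, update rule, and numbers here are ours, not Epoch's or OpenAI's.

```python
import random

random.seed(0)

# Stage 1 stand-in: a "pretrained" policy, reduced here to a single number --
# the probability the model solves a hard problem correctly.
p_correct = 0.2

# Stage 2: a toy reinforcement learning loop. The model attempts problems;
# correct solutions earn a reward that reinforces the behavior, while
# failures leave the policy unchanged (a crude stand-in for policy-gradient
# updates against a verifiable reward signal).
learning_rate = 0.05
for step in range(200):
    solved = random.random() < p_correct   # model attempts a problem
    reward = 1.0 if solved else 0.0        # "feedback" on the solution
    # Reinforce only rewarded outcomes, capping the probability at 1.
    p_correct = min(p_correct + learning_rate * reward * (1 - p_correct), 1.0)

print(f"post-RL success rate: {p_correct:.2f}")
```

The sketch also hints at the article's cost point: each of the 200 loop iterations is an extra model rollout, which is why reasoning-style training and inference consume more compute than conventional training alone.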

According to Epoch, frontier AI labs like OpenAI have not yet applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training.

That is changing. OpenAI has said it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning, to the point of using more computing power for it than for initial model training.

But there is still an upper bound on how much computing can be applied to reinforcement learning, per Epoch.

According to an Epoch AI analysis, reasoning model training may slow down as it scales. Image credit: Epoch AI

Josh You, an analyst at Epoch and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3-5 months. The progress of reasoning training will "probably converge with the overall frontier by 2026," he adds.
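To see why those two rates converge, the figures You cites can be annualized for comparison. A back-of-envelope sketch (the numbers come from the article; the helper function is ours):

```python
def annual_multiplier(per_period_multiplier, period_months):
    """Convert a growth multiplier over some period into a yearly multiplier."""
    return per_period_multiplier ** (12 / period_months)

# ~4x per year for standard AI model training, per the article.
standard_training = 4.0

# 10x every 3-5 months for reinforcement learning scaling, annualized.
rl_fast = annual_multiplier(10, 3)   # 10 ** 4   = 10,000x per year
rl_slow = annual_multiplier(10, 5)   # 10 ** 2.4 ≈ 251x per year

print(f"standard training: {standard_training:.0f}x per year")
print(f"RL scaling:        {rl_slow:.0f}x to {rl_fast:.0f}x per year")
```

Growth hundreds or thousands of times faster per year cannot continue indefinitely: once reinforcement learning compute approaches the total compute budget of frontier training runs, its scaling must drop to the overall ~4x-per-year rate, which is the convergence the analysis projects for around 2026.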


Epoch's analysis rests on a number of assumptions and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove challenging for reasons besides computing, including high overhead costs for research.

"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," You writes. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely."

Any indication that reasoning models may reach some kind of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these types of models. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, like a tendency to hallucinate more than certain conventional models.
