Anthropic launches a new AI model that ‘thinks’ as long as you want


Anthropic is releasing a new frontier AI model called Claude 3.7 Sonnet, which the company designed to “think” about questions for as long as users want it to.

Anthropic calls Claude 3.7 Sonnet the industry’s first “hybrid AI reasoning model,” because it’s a single model that can give both real-time answers and more considered, “thought-out” answers to questions. Users can choose whether to activate the AI model’s “reasoning” abilities, which prompt Claude 3.7 Sonnet to “think” for a short or long period of time.

The model represents Anthropic’s broader effort to simplify the user experience around its AI products. Most AI chatbots today have a daunting model picker that forces users to choose from several different options that vary in cost and capability. Labs like Anthropic would rather you not have to think about it; ideally, one model does all the work.

Claude 3.7 Sonnet is rolling out to all users and developers on Monday, Anthropic said, but only those who pay for Anthropic’s premium Claude chatbot plans will get access to the model’s reasoning features. Free Claude users will get the standard, non-reasoning version of Claude 3.7 Sonnet, which Anthropic claims outperforms its previous frontier AI model, Claude 3.5 Sonnet. (Yes, the company skipped a number.)

Claude 3.7 Sonnet costs $3 per million input tokens (meaning you can feed roughly 750,000 words, more than the entire “Lord of the Rings” series, into Claude for $3) and $15 per million output tokens. That makes it pricier than OpenAI’s o3-mini ($1.10 per million input tokens / $4.40 per million output tokens) and DeepSeek’s R1 (55 cents per million input tokens / $2.19 per million output tokens). Keep in mind, however, that o3-mini and R1 are strictly reasoning models, while Claude 3.7 Sonnet is a hybrid.
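The per-token prices above make cost comparisons straightforward arithmetic. As a minimal sketch (the model names here are just labels, not official API identifiers), here is how a per-request cost works out from the quoted rates:

```python
# Back-of-the-envelope cost comparison using the per-million-token prices
# quoted above (USD). Model keys are informal labels, not API model IDs.
PRICES = {
    "claude-3.7-sonnet": {"input": 3.00, "output": 15.00},
    "o3-mini": {"input": 1.10, "output": 4.40},
    "deepseek-r1": {"input": 0.55, "output": 2.19},
}

def request_cost(model, input_tokens, output_tokens):
    """Cost in USD for a single request at the quoted per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

At that request size, Claude 3.7 Sonnet works out to $0.06, versus about $0.02 for o3-mini, though raw reasoning models may burn more output tokens per answer.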

Anthropic’s new thinking mode. Image Credits: Anthropic

Claude 3.7 Sonnet is Anthropic’s first AI model that can “reason,” a technique many AI labs have turned to as traditional methods of improving AI performance taper off.

Reasoning models like o3-mini, R1, Google’s Gemini 2.0 Flash Thinking, and xAI’s Grok 3 (Think) use more time and computing power before answering questions. The models break problems down into smaller steps, which tends to improve the accuracy of the final answer. Reasoning models aren’t thinking or reasoning quite like a human would, but their process is modeled after deduction.

Eventually, Anthropic wants Claude to figure out on its own how long it should “think” about questions, without users needing to select options in advance, Anthropic product lead Dianne Penn told TechCrunch in an interview.

“Just as humans use a single brain for both quick responses and deep reflection, we believe reasoning should be an integrated capability of frontier models rather than a separate model entirely,” Anthropic wrote in a blog post shared with TechCrunch.

Anthropic says it’s allowing Claude 3.7 Sonnet to show its internal planning phase through a “visible scratch pad.” Penn told TechCrunch that users will see Claude’s full thought process for most prompts, but that some portions may be redacted for trust and safety purposes.

Claude’s thinking process in the Claude app. Image Credits: Anthropic

Anthropic says it optimized Claude 3.7 Sonnet’s thinking modes for real-world tasks, such as difficult coding problems or agentic tasks. Developers tapping Anthropic’s API can control the “budget” for thinking, trading off speed and cost for answer quality.
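As a rough sketch of what that budget control looks like in practice, the snippet below assembles a Messages API-style request payload with an optional thinking budget. The parameter shape (`thinking` with `budget_tokens`) follows Anthropic’s API documentation, but the model alias and token figures here are illustrative assumptions:

```python
# Sketch: building an API request that optionally caps the model's
# "thinking" tokens. Parameter names follow Anthropic's Messages API;
# the model alias and budget values are assumptions for illustration.
def build_request(prompt, thinking_budget=None):
    """Assemble a request payload, optionally enabling extended thinking."""
    payload = {
        "model": "claude-3-7-sonnet-latest",  # assumed model alias
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }
    if thinking_budget is not None:
        # Cap how many tokens the model may spend reasoning before answering.
        payload["thinking"] = {"type": "enabled", "budget_tokens": thinking_budget}
    return payload

fast = build_request("What is 2 + 2?")  # standard, real-time mode
careful = build_request("Review this proof step by step.", thinking_budget=16_000)
```

A larger budget buys more deliberation (and more output-token cost); omitting it keeps the model in its fast, non-reasoning mode.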

On one test measuring real-world coding tasks, SWE-Bench, Claude 3.7 Sonnet was 62.3% accurate, compared to OpenAI’s o3-mini model, which scored 49.3%. On another test measuring an AI model’s ability to interact with simulated users and external APIs in a retail setting, TAU-Bench, Claude 3.7 Sonnet scored 81.2%, compared to OpenAI’s o1 model, which scored 73.5%.

Anthropic also says that Claude 3.7 Sonnet will refuse to answer questions less often than its previous models, claiming the model is capable of making more nuanced distinctions between harmful and benign prompts. Anthropic says it reduced unnecessary refusals by 45% compared to Claude 3.5 Sonnet. This comes at a time when some other AI labs are rethinking their approach to restricting their AI chatbots’ answers.

In addition to Claude 3.7 Sonnet, Anthropic is also releasing an agentic coding tool called Claude Code. Launching as a research preview, the tool lets developers run specific tasks through Claude directly from their terminal.

In a demo, Anthropic staffers showed how Claude Code can analyze a coding project with a simple command such as, “Explain this project structure.” Using plain English in the command line, a developer can modify a codebase. Claude Code describes its edits as it makes changes, and can even test a project for errors or push it to a GitHub repository.

An Anthropic spokesperson told TechCrunch that Claude Code will initially be available to a limited number of users on a “first come, first served” basis.

Anthropic is releasing Claude 3.7 Sonnet at a time when AI labs are shipping new AI models at a breakneck pace. Anthropic has historically taken a more methodical, safety-focused approach. But this time, the company is looking to lead the pack.

The question is for how long, though. OpenAI may be close to releasing a hybrid AI model of its own; the company’s CEO, Sam Altman, has said it will arrive in “months.”
