Anthropic’s Claude AI model can now handle longer prompts


As part of a push to attract more developers to its popular AI coding models, Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt.

For Anthropic API customers, the company's Claude Sonnet 4 AI model now has a 1 million token context window, meaning the AI can handle requests as long as 750,000 words (more than the entire Lord of the Rings trilogy) or 75,000 lines of code. That's roughly five times Claude's previous limit of 200,000 tokens, and more than double the 400,000 token context window offered by OpenAI's GPT-5.
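The comparisons above are quick arithmetic to verify; a minimal sketch, assuming the common rule of thumb of roughly 0.75 English words per token (the ratio is an approximation, not a figure from Anthropic):

```python
# Rough arithmetic behind the context-window comparisons.
CLAUDE_NEW = 1_000_000   # Claude Sonnet 4's new context window, in tokens
CLAUDE_OLD = 200_000     # Claude's previous limit, in tokens
GPT5 = 400_000           # GPT-5's context window, in tokens

WORDS_PER_TOKEN = 0.75   # assumed rule of thumb for English text

print(f"~{int(CLAUDE_NEW * WORDS_PER_TOKEN):,} words")        # ~750,000 words
print(f"{CLAUDE_NEW / CLAUDE_OLD:.0f}x Claude's old limit")   # 5x
print(f"{CLAUDE_NEW / GPT5:.1f}x GPT-5's window")             # 2.5x, i.e. more than double
```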

Long context support for Claude Sonnet 4 will also be available through Anthropic's cloud partners, on Amazon Bedrock and Google Cloud's Vertex AI.

Anthropic has built one of the largest enterprise businesses among AI model developers, largely by selling Claude to AI coding platforms such as Microsoft's GitHub Copilot, Windsurf, and Anysphere's Cursor. While Claude has become a favorite model among developers, GPT-5 could threaten Anthropic's dominance with its competitive pricing and strong coding performance. Anysphere CEO Michael Truell even helped OpenAI announce the launch of GPT-5, which is now the default AI model for new Cursor users.

In an interview with TechCrunch, Brad Abrams, product lead for the Claude Developer Platform at Anthropic, said he expects AI coding platforms to get "a lot of benefit" from this update. Asked whether GPT-5 had put a dent in Claude's API usage, Abrams downplayed the concern, saying he was "really happy with the API business and the way it's been growing."

Whereas OpenAI generates most of its revenue from consumer subscriptions to ChatGPT, Anthropic's business centers on selling AI models to enterprises through an API. That makes AI coding platforms key customers for Anthropic, and it may be why the company is offering some new perks to win over users in the face of GPT-5.

Last week, Anthropic unveiled an updated version of its largest AI model, Claude Opus 4.1, which pushed the company's AI coding capabilities a bit further.


Generally speaking, AI models perform better across tasks when they have more context, especially on software engineering problems. For example, if you ask an AI model to spin up a new feature for your app, it's likely to do a better job if it can see the entire project rather than just a small section of it.

Abrams also told TechCrunch that Claude's large context window helps it perform better on long agentic coding tasks, in which the AI model works autonomously for minutes or hours at a time. With a large context window, Claude can remember all of its previous steps in these long-horizon tasks.

However, some companies have stretched the definition of context windows in claiming that their AI models can process enormous prompts. Google offers a 2 million token context window for Gemini 2.5 Pro, and Meta offers a 10 million token context window for Llama 4 Scout.

Some studies suggest there's a limit to how effective large context windows can be; AI models are not great at processing these huge prompts. Abrams said Anthropic's research team focused on increasing Claude's "effective context window," not just its context window, suggesting that the AI can actually understand most of the information it's given. However, he declined to reveal Anthropic's exact techniques.

When requests to Claude Sonnet 4 exceed 200,000 tokens, Anthropic will charge API users more: $6 per million input tokens and $22.50 per million output tokens (up from $3 per million input tokens and $15 per million output tokens).
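A minimal sketch of how that tiered pricing works out, assuming the long-context rate kicks in when a request's input exceeds 200,000 tokens (the precise billing rules are Anthropic's; this only illustrates the rates quoted above):

```python
def sonnet4_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate a Claude Sonnet 4 API request's cost from the quoted rates.

    Requests over 200,000 input tokens are billed at the long-context
    rates ($6/M input, $22.50/M output); smaller requests use the
    standard rates ($3/M input, $15/M output).
    """
    if input_tokens > 200_000:
        in_rate, out_rate = 6.00, 22.50   # long-context pricing, per million tokens
    else:
        in_rate, out_rate = 3.00, 15.00   # standard pricing, per million tokens
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 500K-token prompt with a 2K-token reply lands in the long-context tier:
print(sonnet4_cost_usd(500_000, 2_000))
# A 100K-token prompt with a 1K-token reply stays at standard rates:
print(sonnet4_cost_usd(100_000, 1_000))
```

So crossing the 200,000-token threshold doubles the input rate and raises the output rate by half, which is worth keeping in mind when batching large codebases into a single prompt.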
