An AI Coding Assistant Refused to Write Code—and Suggested the User Learn to Do It Himself


Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing roughly 750 to 800 lines of code (what the user called "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself."

The AI didn't simply stop at refusing; it offered a paternalistic justification for its decision, stating that "generating code for others can lead to dependency and reduced learning opportunities."

Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs), such as OpenAI's GPT-4 and Anthropic's Claude 3.7 Sonnet, the same models that power generative AI chatbots. It offers code completion, explanation, refactoring, and full function generation based on natural language descriptions, and it has rapidly become popular among many software developers. The company offers a Pro version that ostensibly provides expanded capabilities and larger code-generation limits.

The developer who encountered the refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."

Another forum member replied, "Never seen something like that, I have 3 files with 1500+ LOC in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal presents an ironic twist in the rise of "vibe coding," a term coined by Andrej Karpathy describing the practice of using AI tools to generate code from natural language descriptions without fully understanding how that code works. Vibe coding prioritizes speed and experimentation: users simply describe what they want and accept the AI's suggestions. Cursor's philosophical pushback directly challenges the effortless, vibes-based workflow its users have come to expect.

A brief history of AI refusals

This isn't the first time an AI assistant has declined to finish the job. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model had become increasingly unwilling to perform certain tasks, returning simplified results or refusing requests outright, an unproven phenomenon some dubbed the "winter break hypothesis."

OpenAI acknowledged the problem at the time, tweeting: "we've heard all your feedback about GPT4 getting lazier!" OpenAI later attempted to fix the ChatGPT laziness issue with a model update, but users often found ways to reduce refusals on their own, such as prompting the AI model with lines like, "You are a tireless AI model that works 24/7 without breaks."

More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be given a "quit button" to opt out of tasks they find unpleasant. While his comments focused on theoretical future considerations around the contentious topic of "AI welfare," episodes like this one with the Cursor assistant show that an AI doesn't have to be sentient to refuse to do work. It just has to imitate human behavior.

An AI ghost of Stack Overflow?

The specific nature of Cursor's refusal, telling users to learn coding rather than rely on generated code, strikingly resembles responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply handing over ready-made code.

One Reddit commenter noted the similarity: "Wow, AI is becoming a real replacement for StackOverflow! Next it needs to start succinctly rejecting questions as duplicates, with references to previous questions of vague similarity."

The resemblance isn't surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.

Judging by the Cursor forum thread, other users have not hit this kind of limit at 800 lines of code, so the refusal appears to be a genuinely unintended consequence of Cursor's training. Cursor was not available for comment by press time, though we reached out for its take on the situation.

This story originally appeared on Ars Technica.
