An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess


On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named “Sam” told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model had made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest example of AI confabulations (also called “hallucinations”) causing potential business harm. Confabulations are a type of “creative gap-filling” response in which AI models invent plausible-sounding but false information. Rather than acknowledging uncertainty, AI models often prioritize producing confident responses, even when that means manufacturing information from scratch.

For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor’s case, potentially canceled subscriptions.

How it unfolded

The incident began when a Reddit user known as BrokenToasterOven noticed that Cursor sessions were unexpectedly terminated while swapping between a desktop, a laptop, and a remote dev box.

“Logging into Cursor on one machine immediately invalidates the session on any other machine,” BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. “This is a significant UX regression.”

Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: “Cursor is designed to work with one device per subscription as a core security feature,” read the email response. The reply sounded definitive and official, and the user did not suspect that Sam was not human.

After the exchange was shared in the original Reddit post, users took it as official confirmation of an actual policy change, one that broke habits essential to many programmers’ daily routines. “Multi-device workflows are table stakes for devs,” one user wrote.

Shortly afterward, several users publicly announced that they were canceling their subscriptions, citing the nonexistent policy as their reason. “I literally just cancelled my sub,” the original Reddit poster wrote, adding that their workplace was now “purging it completely.” Others joined in: “Yeah, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.

“Hey! We have no such policy,” wrote a Cursor representative in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”

AI confabulations as a business risk

The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother died, and the airline’s AI agent incorrectly told him he could book a regular-priced flight and apply for a bereavement discount retroactively. When Air Canada later denied his refund request, the company argued that the chatbot was “a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.

Unlike Air Canada, Cursor acknowledged the error rather than disputing it and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the nonexistent policy, explaining that the user had been refunded and that the issue stemmed from a change intended to improve session security, which unintentionally invalidated sessions for some users.

“Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”

Still, the incident raised lingering questions about disclosure among users, since many people who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.

While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company that sells AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users makes for a particularly awkward self-inflicted wound.

“There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”

This story originally appeared on Ars Technica.
