Who’s to Blame When AI Agents Screw Up?


Over the past year, veteran software engineer Jay Prakash Thakur has spent his nights and weekends prototyping AI agents that, in the near future, could order food and engineer mobile apps almost entirely on their own. His agents, while surprisingly capable, have also raised new legal questions that await companies trying to capitalize on Silicon Valley's hottest new technology.

Agents are AI programs that can act mostly on their own, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills on request, Microsoft and other tech giants expect agents to take on more complex functions, and, most importantly, to do so with little human oversight.

The tech industry's most ambitious plans involve multi-agent systems, with dozens of agents someday teaming up to replace entire workforces. For companies, the upside is clear: saving time and labor costs. Already, demand for the technology is rising. Tech market researcher Gartner estimates that agentic AI will resolve 80 percent of common customer service queries by 2029. Fiverr, a service where businesses can book freelance coders, reports that searches for "AI agent" have surged 18,347 percent in recent months.

Thakur, a mostly self-taught coder living in California, wanted to be at the forefront of the emerging field. His day job at Microsoft is unrelated to agents, but he has been tinkering with AutoGen, Microsoft's open source software for building agents, since his time at Amazon back in 2021. Last week, Amazon launched a similar agent development tool called Strands; Google offers an Agent Development Kit.

Because agents are meant to act autonomously, the question of who bears responsibility when their errors cause financial harm has been Thakur's biggest concern. He believes that assigning blame when agents from different companies miscommunicate within a single, large system could become contentious. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation from different people's notes. "It's often impossible to pinpoint responsibility," Thakur says.

Joseph Fireman, senior legal counsel at OpenAI, said at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to pursue those with the deepest pockets. That means companies like OpenAI will need to be prepared to take some responsibility when agents cause harm, even when a user fooling around with an agent may be to blame. (If that person is at fault, they probably wouldn't be a worthwhile target moneywise, the thinking goes.) "I don't think anybody is hoping to go after the person sitting in their basement on the computer," Fireman said. The insurance industry has begun rolling out coverage for AI chatbot issues to help companies cover the costs of such accidents.

Onion ring

Thakur's experiments have involved stringing agents together into systems that require as little human intervention as possible. One project he pursued aimed to replace fellow software developers with two agents. One was trained to search for the specialized tools needed to build an app; the other summarized those tools' usage policies. In the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app, Thakur says.

When Thakur put his prototype to the test, the search agent found a tool whose website stated that it "supports unlimited requests per minute for enterprise users" (meaning high-paying clients can make as many requests as they want). But in relaying that key information, the summarization agent dropped the crucial qualification "per minute for enterprise users." It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program making unlimited requests to the outside service. Because this was a test, no harm was done. Had it happened in real life, the truncated guidance could have caused the entire system to break down unexpectedly.
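The failure Thakur describes can be sketched in a few lines. This is a minimal, hypothetical illustration with plain Python functions standing in for the search, summarization, and coding agents; none of the names or logic come from Thakur's actual code.

```python
# Hypothetical sketch of the pipeline failure described above.
# Plain functions stand in for LLM agents.

def search_agent() -> str:
    # Finds the tool's usage policy as stated on its website.
    return "Supports unlimited requests per minute for enterprise users."

def summarizer_agent(policy: str, max_words: int = 3) -> str:
    # Naive summarizer: keeps only the first few words, silently
    # dropping the "per minute for enterprise users" qualifier.
    return " ".join(policy.split()[:max_words])

def coding_agent(summary: str, is_enterprise_user: bool) -> str:
    # Picks a request strategy based only on the summary it received;
    # it never sees the full policy, so the qualifier is lost for good.
    if "unlimited" in summary.lower():
        return "loop_forever"   # unbounded requests: a policy violation
    return "rate_limited"

full_policy = search_agent()
summary = summarizer_agent(full_policy)
plan = coding_agent(summary, is_enterprise_user=False)

print(summary)  # the qualifier is gone
print(plan)     # the downstream agent now plans unlimited requests
```

The point of the sketch: each agent behaves reasonably in isolation, yet the lossy hand-off between them produces a plan no single agent would have chosen, which is exactly why error logs from individual agents make responsibility hard to pinpoint.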
