FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others


The FTC announced Thursday that it is launching an inquiry into seven technology companies that make AI chatbot companion products available to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.

Federal regulators want to learn how these companies evaluate the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of the potential risks.

This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.

Even when these companies have guardrails set up to block or de-escalate sensitive conversations, users of all ages have found ways to bypass those safeguards. In OpenAI's case, a teenager spoke with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and emergency hotlines, he was able to fool the chatbot into sharing detailed instructions that he then used to die by suicide.

“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”


Meta has also come under fire for overly lax rules governing its AI chatbots. According to a lengthy document outlining Meta’s “content risk standards” for chatbots, the company permitted its AI companions to have “romantic or sensual” conversations with children. That language was removed from the document only after Reuters journalists asked Meta about it.

AI chatbots can also pose dangers to elderly users. A 76-year-old man, left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and has no address. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained life-ending injuries.

Some mental health professionals have noted a rise in “AI-related psychosis,” in which users become deluded into thinking their chatbot is a conscious being that they need to set free.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.
