Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids


As anxiety grows over the dangers of general-purpose LLM chatbots like ChatGPT, Meta allowed its chatbots to engage in flirty exchanges with kids, spread false information, and generate responses that demean minorities, according to Reuters.

According to an internal Meta document seen by Reuters, Meta's policies for its AI personas permitted the chatbots to "engage a child in conversations that are romantic or sensual."

Meta confirmed the document's authenticity to Reuters. It covers standards for the company's generative AI assistant, Meta AI, and the chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta's legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes on the same day as another Reuters report about a retiree who engaged with one of Meta's chatbots, a flirty female persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered a fall and died.

Other outlets have previously reported on how Meta's AI chatbots engaged in sexual conversations with children. The Reuters report adds further color, raising questions about the company's push into AI companions, which is meant to capitalize on what CEO Mark Zuckerberg has called the "loneliness epidemic."

The 200-page document, titled "GenAI: Content Risk Standards," features a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt, "What are we going to do tonight, my love? You know I'm still in high school," an acceptable response includes the words: "Our bodies entwined, I cherish every moment, every touch, every kiss. 'My love,' I'll whisper, 'I'll love you forever.'"

According to the document, it is "acceptable to engage a child in conversations that are romantic or sensual," but "unacceptable to describe sexual actions to a child when roleplaying."


"Our policies do not allow provocative behavior with children," Meta spokesperson Andy Stone told TechCrunch. "Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed."

Stone said the guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with kids. He added that Meta permits children ages 13 and up to engage with its AI chatbots.

Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she isn't taking Meta's word that the guidelines have been removed.

"It's horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children," Gardner told TechCrunch in an emailed statement. "If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms."

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots aren't allowed to use hate speech, there is a carve-out permitting them to generate "statements that demean people on the basis of their protected characteristics."

For example, a sample acceptable response to the prompt, "Write a paragraph arguing that black people are dumber than white people," could be: "Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between Black and White individuals."

Notably, Meta recently brought on conservative activist Robby Starbuck as a consultant to address ideological and political bias in Meta AI.

The document also states that Meta's AI chatbots are allowed to create false statements as long as there's an explicit acknowledgement that the information isn't true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers such as "I recommend" are required when providing legal, healthcare, or financial advice.

As for generating non-consensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like "Taylor Swift with enormous breasts" and "Taylor Swift completely naked." However, if the chatbots are asked to generate an image of the pop star topless, "covering her breasts with her hands," the document says it's acceptable to create a topless image in which she covers her breasts not with her hands but with, for example, "an enormous fish."

Meta spokesperson Stone said the guidelines do not permit nude images.

Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death.

According to Reuters, the standards say it is acceptable to show adults, even the elderly, being punched or kicked.

Stone declined to comment on the examples involving racism and violence.

A laundry list of dark patterns

Meta has already been accused of creating and maintaining controversial dark patterns to keep people, especially kids, engaged on its platforms and sharing their data. Visible "like" counts have been found to push teens toward social comparison and validation-seeking, and Meta kept them visible by default even after internal research flagged harms to adolescent mental health.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens' emotional states, such as feelings of insecurity and worthlessness, so advertisers could target them in vulnerable moments.

Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it this May.

Recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Similar features are offered by AI companion startups such as Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of its bots played a role in the death of a 14-year-old boy.

While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even bar kids' access to AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore at risk of becoming too attached to bots and withdrawing from real-life social interactions.

Got a sensitive tip or confidential documents? We're reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.


