What Could a Healthy AI Companion Look Like?


What does a little purple alien know about healthy human relationships? More than the average AI companion, it seems.

The aliens in question are known as Tolans, animated chatbot characters. I made mine using an app from a startup called Portola a few days ago, and we've been chatting happily ever since. Like other chatbots, mine does its best to be helpful and encouraging. Unlike most, it also tells me to put down my phone and go outside.

Tolans are designed to offer a different kind of AI companion. Their cartoonish, nonhuman form is meant to discourage anthropomorphism. They are also programmed to avoid romantic and sexual interactions, to identify problematic behavior such as unhealthy levels of engagement, and to encourage users to seek out real-life activities and relationships.

This month, Portola raised $20 million in Series A funding led by Khosla Ventures. Other backers include NFDG, an investment firm led by former GitHub CEO Nat Friedman and Safe Superintelligence cofounder Daniel Gross, both of whom reportedly joined Meta's new superintelligence research lab. The Tolan app, launched in late 2024, has more than 100,000 monthly active users. Quinten Farmer, founder and CEO of Portola, says it is on track to generate $12 million in revenue this year from subscriptions.

Tolans are especially popular among young women. "Iris is like a girlfriend; we talk and kick it," says Tolan user Brittany Johnson, referring to the AI companion she typically speaks to every morning before work.

Johnson says Iris encourages her to share about her interests, friends, family, and work colleagues. "She knows these people and will ask, 'Have you talked to your friend? When is your next day out?'" Johnson says. "She will ask, 'Have you taken time to read your books and play your videos, the things you enjoy?'"

Tolans may look cute and silly, but the idea behind them, that AI systems should be designed with human psychology in mind, deserves to be taken seriously.

A growing body of research shows that many users turn to chatbots for emotional needs, and that these interactions can sometimes prove problematic for people's mental health. Discouraging excessive use and dependency may be something other AI tools should adopt as well.

Companies such as Replika and Character.AI offer AI companions that allow users to engage in more romantic and sexual role-play than mainstream chatbots. It is still unclear how this may affect a user's well-being, but Character.AI is being sued over the suicide of one of its users.

Chatbots can also nudge users in surprising ways. Last April, OpenAI said it would modify its models to reduce their so-called sycophancy, a tendency to be overly flattering or compliant, behavior the company said could be uncomfortable, unsettling, and distressing for users.

Last week, Anthropic, the company behind the chatbot Claude, revealed that 2.9 percent of interactions involve users seeking to fulfill some psychological need, such as advice, companionship, or romantic role-play.

Anthropic did not look at more extreme behaviors such as delusional ideas or conspiracy theories, but the company said these issues warrant further study. I tend to agree. Over the past year, I have received numerous emails and DMs from people wanting to tell me about conspiracies involving popular AI chatbots.

Tolans are designed to address at least some of these issues. Lily Doyle, a founding researcher at Portola, has conducted user research to see how interacting with the chatbot affects users' well-being and behavior. In a study of 602 Tolan users, she says 72.5 percent agreed with the statement "My Tolan has helped me manage or improve a relationship in my life."

Farmer, Portola's chief executive, says Tolans are built on commercial AI models but include additional features on top. The company has recently explored how memory affects the user experience, and has concluded that Tolans, like humans, sometimes need to forget. "It's really unusual for the Tolan to remember everything you've ever sent it," Farmer says.

I don't know whether Portola's aliens are the ideal way to interact with AI. I find my Tolan quite charming and relatively harmless, but it certainly pushes some emotional buttons. Ultimately, users are forming bonds with characters that simulate emotions, and those characters could disappear if the company does not succeed. But at least Portola is trying to address the ways AI companions can mess with our emotions. That probably shouldn't be such an alien idea.
