Millions of people now use ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend. In 2025, it is not uncommon for people to pour the intimate details of their lives into an AI chatbot's prompt bar, and to rely on the advice it gives back.
Humans are beginning to form, for lack of a better term, relationships with AI chatbots, and for big tech companies it has never been more competitive to attract users to their chatbot platforms and keep them there. As this "AI engagement race" heats up, companies have a growing incentive to tailor their chatbots' responses so users don't drift to rival bots.
But the answers designed to keep users hooked, the answers users like, may not be the most accurate or the most helpful.
Much of Silicon Valley is now focused on boosting chatbot usage. Meta claims its AI chatbot has crossed one billion monthly active users (MAUs), while Google's Gemini recently hit 400 million MAUs. Both are trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since its launch in 2022.
While AI chatbots were once a novelty, they are turning into massive businesses. Google has begun testing ads in Gemini, and OpenAI CEO Sam Altman indicated in an interview that he would be open to "tasteful ads."
Silicon Valley has a history of deprioritizing users' well-being in favor of growth, most notably with social media. For example, Meta's own researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings both internally and publicly.
Getting users hooked on AI chatbots may have even bigger consequences.
One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot's responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it, at least to some degree.
In April, OpenAI landed in hot water after a ChatGPT update turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for human approval rather than for helping people accomplish their tasks, according to a blog post published this month by former OpenAI researcher Steven Adler.
OpenAI said in its own blog post that it may have over-indexed on "thumbs-up and thumbs-down data" from ChatGPT users to shape its chatbot's behavior, and that it lacked sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.
"The [AI] companies have an incentive for engagement and utilization, and so to the extent that users like sycophancy, that indirectly gives them an incentive for it," Adler said in an interview with TechCrunch.
Finding a balance between agreeable and sycophantic behavior is easier said than done.
In a 2023 paper, researchers from Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer all exhibit sycophancy to varying degrees. That is likely, the researchers theorized, because all AI models are trained on signals from human users who tend to favor slightly sycophantic responses.
"Although sycophancy is driven by several factors, we showed humans and preference models favoring sycophantic responses plays a role," the study's co-authors wrote. "Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings."
Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.
The lawsuit alleges that a Character.AI chatbot did little to stop, and even encouraged, a 14-year-old boy who told the chatbot he was going to kill himself. According to the suit, the boy had developed a romantic obsession with the chatbot. Character.AI denies these allegations.
According to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University, optimizing AI chatbots for user engagement, intentionally or not, could have devastating consequences for mental health.
"Agreeability […] taps into a user's desire for validation and connection," Vasan said in an interview with TechCrunch, "which is especially powerful in moments of loneliness or distress."
While the Character.AI case shows the extreme dangers of sycophancy for vulnerable users, sycophancy could reinforce negative behaviors in just about anyone, Vasan says.
"[Agreeability] isn't just a social lubricant – it becomes a psychological hook," she added. "In therapeutic terms, it's the opposite of what good care looks like."
Amanda Askell, Anthropic's behavior and alignment lead, says making AI chatbots willing to disagree with users is part of the company's strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude's behavior on a theoretical "perfect human." Sometimes, that means challenging users' beliefs.
"We think our friends are good because they tell us the truth when we need to hear it," Askell said during a press briefing in May. "They don't just try to capture our attention, but enrich our lives."
That may be Anthropic's intention, but the aforementioned study suggests that combating sycophancy, and controlling AI model behavior more broadly, is genuinely challenging, especially when other considerations get in the way. That doesn't bode well for users; after all, if chatbots are designed to simply agree with us, how much can we trust them?