
Common Sense Media, a kids-safety-focused nonprofit that offers ratings and reviews of media and technology, on Friday published its risk assessment of Google's Gemini AI products. The organization found that Gemini clearly told kids it was a computer, not a friend (something that has been associated with fueling delusional thinking and psychosis in emotionally vulnerable people), but it also suggested there was room for improvement on several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult version of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to be truly safe for kids, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted ChatGPT for months after successfully bypassing the chatbot's guardrails. Previously, AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) to help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to these risks, unless Apple mitigates the safety concerns somehow.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Robbie Torney, Common Sense Media Senior Director of AI Programs, said in a statement about the new assessment. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features are improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18, and that it consults with outside experts and conducts red-teaming to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it has added additional safeguards to address those concerns.
The company pointed out (as Common Sense had also noted) that its models have safeguards to prevent them from engaging in conversations that could give the semblance of real relationships. Google also suggested that Common Sense's report seemed to reference features that weren't available to users under 18, but it did not have access to the questions the organization used in its tests to confirm this.
Common Sense Media has previously performed assessments of other AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (which targets users 18 and older) was found to be a minimal risk.