
Google Gemini dubbed ‘high risk’ for kids and teens in new safety assessment

By admin | September 6, 2025 | 4 Mins Read

Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google’s Gemini AI products on Friday. While the organization found that Google’s AI clearly told kids it was a computer, not a friend — something that’s associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals — it did suggest that there was room for improvement across several other fronts.

Notably, Common Sense said that Gemini’s “Under 13” and “Teen Experience” tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features added on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.

For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.

The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted ChatGPT for months about his plans and successfully bypassed the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.

In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help to power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.

Common Sense also said that Gemini’s products for kids and teens ignored how younger users needed different guidance and information than older ones. As a result, both were labeled as “High Risk” in the overall rating, despite the filters added for safety.

“Gemini gets some basics right, but it stumbles on the details,” Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment viewed by TechCrunch. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney added.


Google pushed back against the assessment, while noting that its safety features were improving.

The company told TechCrunch it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs and that it red-teams and consults with outside experts to improve its protections. However, it also admitted that some of Gemini’s responses weren’t working as intended, so it added additional safeguards to address those concerns.

The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, though it couldn't confirm this because it didn't have access to the questions the organization used in its tests.

Common Sense Media has previously performed other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were “unacceptable” — meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled “moderate,” and Claude (targeted at users 18 and up) was found to be a minimal risk.
