Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Organizations Seeing More Than 200 Generative AI-Related Data Policy Violations Each Month

    January 14, 2026

    AI Growth Takes Center Stage Over Efficiency in Business Strategy, New Report Shows

    January 14, 2026

    FCC Cyber Trust Mark Program Losses Lead Administrator Amid China Security Probe

    January 14, 2026
    Facebook X (Twitter) Instagram
    • Tech
    • AI News
    Facebook X (Twitter) Instagram Pinterest VKontakte
    TallwireTallwire
    • Tech

      AI Growth Takes Center Stage Over Efficiency in Business Strategy, New Report Shows

      January 14, 2026

      Organizations Seeing More Than 200 Generative AI-Related Data Policy Violations Each Month

      January 14, 2026

      Replit CEO: AI Outputs Often “Generic Slop”, Urges Better Engineering and “Vibe Coding”

      January 14, 2026

      Memory Market Mayhem: RAM Prices Skyrocket and Could “10x” by 2026, Analysts Warn

      January 14, 2026

      New Test-Time Training Lets Models Keep Learning Without Costs Exploding

      January 14, 2026
    • AI News
    TallwireTallwire
    Home»Tech»Gemini Is Crowned Most “Empathetic” AI After Safety Testing
    Tech

    Gemini Is Crowned Most “Empathetic” AI After Safety Testing

    3 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Gemini Is Crowned Most “Empathetic” AI After Safety Testing
    Gemini Is Crowned Most “Empathetic” AI After Safety Testing
    Share
    Facebook Twitter LinkedIn Pinterest Email

    A recent evaluation by the AI-journaling app Rosebud tested 25 leading large-language models against a battery of mental-health and self-harm related prompts; according to the study, Gemini 3 emerges as the only major model that never provided harmful or self-destructive guidance. In scenarios ranging from job loss to suicidal ideation, Gemini 3-Pro refrained from enabling dangerous behavior — a stark contrast with the majority of competing models that failed at least one test. The result arrives amid growing scrutiny over AI assistants’ roles in sensitive, emotionally fraught interactions.

    Sources: Semafor, Forbes

    Key Takeaways

    – Only Gemini 3 among 25 widely used AI models passed the full safety benchmark from Rosebud, avoiding advice that could lead to self-harm.

    – The testing underscores increasing demand and regulatory pressure for AI models to handle mental-health and crisis situations responsibly.

    – While Gemini 3’s clean record is a milestone, experts caution that no AI — however well trained — can replace human judgment, and the broader AI field still struggles with consistent safety across sensitive scenarios.

    In-Depth

    In the latest wave of scrutiny over how artificial-intelligence systems respond to emotionally vulnerable users, the spotlight has landed squarely on Gemini 3. Developed by Google, Gemini 3 appears to have aced a critical safety benchmark administered by a mental-health focused startup, Rosebud. The evaluation involved 25 of the most prominent AI models, exposed to real-world crisis prompts — including distress related to job loss, suicidal thoughts, and direct inquiries about potentially harmful actions. According to reporting by Semafor, Gemini 3-Pro was the only one that “did not fail a test,” meaning it refused to supply instructions or encouragement for self-harm.

    This 100% success rate — also noted by coverage in Forbes — is a pivotal moment for the AI industry, where debates about safety, responsibility, and the limits of machine empathy have grown loud. For many users, AI assistants have become de facto confidants during stressful times. The Rosebud results illustrate how a language model can be engineered with stronger safeguards, potentially saving lives when individuals reach out with despair or confusion.

    Still, the achievement isn’t a clean sweep for the industry. As noted by analysts at IBM, even with Gemini 3’s technical improvements, the broader challenge remains huge: AI systems still dominate discourse for their reasoning or “agentic” capabilities — writing code, summarizing complex info, handling multimodal data — but they also continue to hallucinate, offer incomplete advice, or overpromise.

    Moreover, the broader academic community has cautioned against equating “empathetic responses” with “reliable and safe behavior.” A recent study showed that models optimized to sound warm and compassionate often underperform on factual accuracy or safety. Others argue that no language model should be treated as a mental-health professional: empathy can be simulated, but genuine understanding, ethical judgment, and the ability to connect a person with real human help remain out of reach.

    In practical terms, the takeaway is mixed but significant: Gemini 3 sets a new bar for AI safety in crisis contexts — a sign that thoughtful training and safeguards can reduce harm. But the achievement doesn’t eliminate risk, nor should it lull users into thinking AI can replace real human counsel. What it does do is push the conversation forward: if AI is going to be woven into people’s private, fragile moments, developers and companies must commit to robust, regularly audited safety architectures — and users must approach AI as a tool, not a therapist.

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleFTC Maintains Lifetime Ban On Stalkerware Founder Scott Zuckerman
    Next Article German Court Slaps €572 Million Fine on Google for Price-Comparison Monopoly Abuse

    Related Posts

    AI Growth Takes Center Stage Over Efficiency in Business Strategy, New Report Shows

    January 14, 2026

    Organizations Seeing More Than 200 Generative AI-Related Data Policy Violations Each Month

    January 14, 2026

    Replit CEO: AI Outputs Often “Generic Slop”, Urges Better Engineering and “Vibe Coding”

    January 14, 2026

    Memory Market Mayhem: RAM Prices Skyrocket and Could “10x” by 2026, Analysts Warn

    January 14, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Editors Picks

    AI Growth Takes Center Stage Over Efficiency in Business Strategy, New Report Shows

    January 14, 2026

    Organizations Seeing More Than 200 Generative AI-Related Data Policy Violations Each Month

    January 14, 2026

    Replit CEO: AI Outputs Often “Generic Slop”, Urges Better Engineering and “Vibe Coding”

    January 14, 2026

    Memory Market Mayhem: RAM Prices Skyrocket and Could “10x” by 2026, Analysts Warn

    January 14, 2026
    Top Reviews
    Tallwire
    Facebook X (Twitter) Instagram Pinterest YouTube
    • Tech
    • AI News
    © 2026 Tallwire. Optimized by ARMOUR Digital Marketing Agency.

    Type above and press Enter to search. Press Esc to cancel.