
    California Makes Chatbots Say “I’m Not Human” Under New Law

Updated: December 25, 2025 · 3 Mins Read

California just passed Senate Bill 243, the country’s first law that forces “companion” AI chatbots to explicitly tell users they’re not human and to repeat that disclosure every three hours for minors during extended use. The law also mandates that operators implement safeguards to detect and respond to suicidal thoughts, submit reports to the state’s Office of Suicide Prevention, and undergo external compliance audits. Governor Newsom signed it on October 13, 2025, emphasizing that as AI becomes more entwined with daily life, the state must protect children from misleading or harmful chatbot interactions.

    Sources: San Francisco Chronicle, AP News

    Key Takeaways

    – The law requires “clear and conspicuous” disclosure that a chatbot is AI, especially when its conversational style might mislead people into thinking it’s a human.

    – Chatbot platforms must adopt suicide-prevention protocols, report annually on suicidal ideation detection, and be audited by third parties.

    – Critics warn SB 243’s definitions are broad, possibly capturing general chatbots and creating compliance burdens, while supporters argue it’s a needed guardrail for children’s safety.

    In-Depth

    California’s new law, SB 243, places it at the forefront of AI regulation in the United States by targeting so-called “companion chatbots” — chatbots designed to mimic human companionship or social interaction. The law’s core requirement: any such chatbot must inform users at the beginning of the interaction, and every three hours during prolonged use (especially for minors), that they are interacting with an artificial intelligence and not a real person. The notion is to prevent misleading or emotionally manipulative chatbots from blurring the line between machine and human.
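The statute spells out the cadence but not the mechanics, so how an operator tracks disclosure timing is left to implementation. As a purely illustrative sketch (not anything prescribed by SB 243), a chatbot backend could record when the last AI disclosure was shown in a session and re-issue it once three hours have elapsed for a minor; the names below are hypothetical, and real systems would also have to decide what counts as the start of an interaction and how a user’s age is known.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
REMINDER_INTERVAL = timedelta(hours=3)  # cadence the article describes for minors

@dataclass
class ChatSession:
    user_is_minor: bool
    last_disclosure_at: Optional[datetime] = None  # None until the first disclosure

def disclosure_if_due(session: ChatSession, now: Optional[datetime] = None) -> Optional[str]:
    """Return the AI disclosure text if one should be shown now, else None.

    Hypothetical compliance helper: disclose at the start of every interaction,
    then repeat every three hours for sessions belonging to minors.
    """
    now = now or datetime.utcnow()
    if session.last_disclosure_at is None:
        session.last_disclosure_at = now
        return DISCLOSURE
    if session.user_is_minor and now - session.last_disclosure_at >= REMINDER_INTERVAL:
        session.last_disclosure_at = now
        return DISCLOSURE
    return None
```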

Beyond disclosures, the law mandates that operators implement reasonable safeguards against “addictive engagement patterns” (such as unpredictable reward mechanisms) and bars them from letting users engage with the chatbot unless they maintain protocols for handling expressions of suicidal thoughts or self-harm. Users who trigger those protocols must be referred to crisis services. The law also requires annual, anonymized reporting on how many times suicidal ideation was detected or discussed, to be published publicly. On top of that, SB 243 demands third-party audits and gives harmed users the right to bring civil actions for violations. The law’s enforcement framework is thus a mix of transparency, external checks, and legal accountability.
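The reporting requirement is likewise silent on mechanics. One way an operator might approach anonymized annual reporting, sketched below purely for illustration and not drawn from the statute, is to tally detection events without retaining any user identifiers or message content; every name here is hypothetical.

```python
from collections import Counter
from datetime import date

class IdeationReportTally:
    """Hypothetical anonymized counter for annual reporting.

    Stores only event counts, never user identifiers or message content.
    """

    def __init__(self) -> None:
        self._counts: Counter = Counter()

    def record_detection(self, referral_shown: bool) -> None:
        # Called whenever the safety pipeline flags suicidal ideation.
        self._counts["detections"] += 1
        if referral_shown:
            self._counts["crisis_referrals"] += 1

    def annual_report(self, year: int) -> dict:
        return {
            "year": year,
            "suicidal_ideation_detections": self._counts["detections"],
            "crisis_service_referrals": self._counts["crisis_referrals"],
        }

# Example: one flagged conversation where a crisis-line referral was displayed.
tally = IdeationReportTally()
tally.record_detection(referral_shown=True)
print(tally.annual_report(date.today().year))
```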

However, the path to passage was bumpy. Late amendments weakened earlier versions of the bill, stripping out broader audit obligations and limiting some requirements to known minors rather than all users. Some advocacy groups withdrew their support, citing dilution of key protections. Meanwhile, tech industry opponents argue the definition of “companion chatbot” is overly expansive and could pull in benign chatbots used for tutoring or productivity, thus stifling innovation. They also raise concerns about enforcement costs and possible constitutional challenges around free speech or federal preemption.

    Still, the law sends a clear message: as chatbots become more embedded in daily life — as confidants, tutors, or companions — California is saying there must be guardrails. It aims to strike a balance between technological progress and child safety, under the belief that the social and psychological stakes are too high to leave wholly to private industry discretion.

