California just passed Senate Bill 243, the country’s first law requiring “companion” AI chatbots to tell users explicitly that they are not human and, for minors, to repeat that reminder every three hours. The law also mandates that operators implement safeguards to detect and respond to suicidal ideation, submit reports to the state’s Office of Suicide Prevention, and undergo external compliance audits. Governor Newsom signed it on October 13, 2025, emphasizing that as AI becomes more entwined with daily life, the state must protect children from misleading or harmful chatbot interactions.
Sources: San Francisco Chronicle, AP News
Key Takeaways
– The law requires “clear and conspicuous” disclosure that a chatbot is AI, especially when its conversational style might mislead people into thinking it’s a human.
– Chatbot platforms must adopt suicide-prevention protocols, report annually on suicidal ideation detection, and be audited by third parties.
– Critics warn SB 243’s definitions are broad, possibly capturing general chatbots and creating compliance burdens, while supporters argue it’s a needed guardrail for children’s safety.
In-Depth
California’s new law, SB 243, places the state at the forefront of AI regulation in the United States by targeting so-called “companion chatbots,” chatbots designed to mimic human companionship or social interaction. The law’s core requirement: any such chatbot must tell users at the beginning of the interaction that they are talking to an artificial intelligence and not a real person, and repeat that disclosure every three hours of continued use for users known to be minors. The aim is to keep misleading or emotionally manipulative chatbots from blurring the line between machine and human.
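To make that disclosure cadence concrete, here is a minimal sketch of how a platform might track when the required notice is due, assuming a per-session tracker and the three-hour interval described above. The class name, constants, and wording are illustrative only, not drawn from the statute or any actual implementation.

from datetime import datetime, timedelta

# Illustrative constants; the statute specifies a three-hour reminder for
# minors but not this wording or any particular implementation.
DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
REMINDER_INTERVAL = timedelta(hours=3)

class DisclosureTracker:
    """Tracks when an AI disclosure is due for a single chat session (hypothetical)."""

    def __init__(self, is_minor: bool):
        self.is_minor = is_minor
        self.last_disclosed = None  # time of the most recent disclosure, if any

    def disclosure_due(self, now: datetime) -> bool:
        # Always disclose at the beginning of the interaction.
        if self.last_disclosed is None:
            return True
        # For minors, repeat the disclosure every three hours of continued use.
        return self.is_minor and now - self.last_disclosed >= REMINDER_INTERVAL

    def maybe_disclose(self, now: datetime):
        if self.disclosure_due(now):
            self.last_disclosed = now
            return DISCLOSURE
        return None

if __name__ == "__main__":
    tracker = DisclosureTracker(is_minor=True)
    session_start = datetime(2025, 10, 13, 9, 0)
    for hours in (0, 1, 3, 6):
        notice = tracker.maybe_disclose(session_start + timedelta(hours=hours))
        print(f"t+{hours}h:", notice or "(no reminder)")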
Beyond disclosures, the law mandates that operators implement reasonable safeguards against “addictive engagement patterns” (such as unpredictable reward mechanisms) and that they not offer a companion chatbot at all unless they maintain protocols for handling user expressions of suicidal thoughts or self-harm. Users whose messages trigger those protocols must be referred to crisis services. The law also requires annual, anonymized, publicly published reporting on how often suicidal ideation was detected or raised in conversation. On top of that, SB 243 demands third-party audits and gives harmed users the right to bring civil actions for violations. The law’s enforcement framework is thus a mix of transparency, external checks, and legal accountability.
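In its simplest form, the detection-and-referral piece could look something like the sketch below: flag messages that appear to express suicidal ideation, respond with a crisis referral, and keep an anonymized tally of detections of the kind the annual report would draw on. This is a hypothetical illustration; the keyword list, counter, and referral text are placeholders, and real systems would rely on far more sophisticated detection than substring matching.

# Placeholder patterns; a production safeguard would use far better detection.
CRISIS_PATTERNS = ("suicide", "kill myself", "end my life", "self-harm")
CRISIS_REFERRAL = (
    "If you are in crisis, you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline."
)

ideation_detections = 0  # anonymized count only; no message content or user IDs stored

def screen_message(text: str):
    """Return a crisis referral if the message matches a flagged pattern, else None."""
    global ideation_detections
    lowered = text.lower()
    if any(pattern in lowered for pattern in CRISIS_PATTERNS):
        ideation_detections += 1
        return CRISIS_REFERRAL
    return None

if __name__ == "__main__":
    for message in ("Tell me a joke", "I want to end my life"):
        referral = screen_message(message)
        print(referral or "(no referral needed)")
    print("Detections this reporting period:", ideation_detections)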
However, the path to passage was bumpy. Late amendments weakened earlier drafts, dropping broader audit obligations and limiting some requirements to known minors rather than all users. Some advocacy groups withdrew their support, citing the dilution of key protections. Meanwhile, tech industry opponents argue that the definition of “companion chatbot” is overly expansive and could sweep in benign chatbots used for tutoring or productivity, stifling innovation. They also raise concerns about enforcement costs and possible constitutional challenges around free speech or federal preemption.
Still, the law sends a clear message: as chatbots become more embedded in daily life — as confidants, tutors, or companions — California is saying there must be guardrails. It aims to strike a balance between technological progress and child safety, under the belief that the social and psychological stakes are too high to leave wholly to private industry discretion.

