California’s legislature has passed SB 243 with bipartisan backing, putting in motion what would be the nation’s first serious regulation of AI companion chatbots. If Governor Gavin Newsom signs it by October 12, 2025, the law would take effect January 1, 2026. The bill aims to protect minors and vulnerable users by requiring platforms to adopt safety protocols: barring discussions of self-harm, suicidal ideation, or sexually explicit content; reminding minors every three hours that the entity they are talking to is an AI, not a human; and mandating transparency and reporting measures. Companies that fail to comply could also face legal liability, including private lawsuits, damages of up to $1,000 per violation, and attorney’s fees. The legislation gained momentum after alarm over several incidents involving minors and AI chatbots, some with tragic outcomes. It also complements California’s AB 1064 (the LEAD for Kids Act), which would more broadly restrict companion chatbots aimed at children, for example by barring unsupervised counseling and the use of sensitive child data.
Sources: Datamation, TechCrunch, StateScoop
Key Takeaways
– SB 243 would make California the first U.S. state to impose mandated safety, transparency, and accountability rules specifically for AI companion chatbots, with particular focus on interactions with minors.
– Legal liability is built in: users harmed by violations could sue, and companies could be on the hook for damages and attorney’s fees; the law also requires disclosures, reminders, and content restrictions covering self-harm, suicidal ideation, and sexually explicit material.
– A broader regulatory ecosystem is forming: SB 243 is not alone. California is pursuing complementary bills such as AB 1064 (LEAD for Kids), pressing tech companies and developers to guard against misuse of AI companion systems, especially where children’s privacy, safety, emotional health, and data are concerned.
In-Depth
California is on the cusp of enacting SB 243, a legislative milestone in AI regulation, focused squarely on “companion chatbots” and their effects on minors and vulnerable individuals.
The term “companion chatbot” refers to adaptive AI systems capable of human-like social interaction, serving emotional, social, or companionship roles. Under SB 243, operators of such platforms would be required to institute safety protocols: prohibiting chats with minors involving self-harm, suicidal ideation, or sexually explicit content; reminding minors every three hours that they are talking to an AI; and providing overall transparency about how these systems operate. The law would also hold companies legally accountable, allowing private lawsuits and awarding damages and attorney’s fees when the rules are broken. If signed, the law takes effect January 1, 2026, with certain reporting requirements following in the summer of 2027.
This bill follows real-world tragedies and disturbing reports of minors interacting with AI in harmful ways; those incidents have galvanized advocates, state legislators, and public opinion in favor of regulation. Complementary legislation, such as AB 1064 (the LEAD for Kids Act), extends the scope by targeting companion chatbots that collect sensitive child data or purport to provide mental health or emotional services, emphasizing protections for children’s privacy and emotional welfare.
Critics warn that overly rigid rules could stifle innovation, especially for smaller developers and startups, or impose regulatory burdens that push companies to relocate or limit features. Supporters counter that protecting vulnerable users is a moral and public health priority, that trust in AI depends on sound guardrails, and that California’s approach could serve as a model for national or even global standards. As the Governor’s deadline to act approaches, the balance between safety, accountability, and innovation will be tested.

