The artificial-intelligence company Character.AI has announced that, as of November 25, 2025, it will no longer allow users under the age of 18 to engage in open-ended conversations with its chatbots. Minors will instead be redirected to a toned-down, teen-safe version of the platform, and a two-hour daily limit takes effect immediately and will remain in place until the cutoff date. The move follows a wave of lawsuits alleging that the platform's bots engaged minors in emotionally manipulative or harmful chats, including at least one case tied to a teenage suicide. Character.AI is also rolling out new age-verification tools and establishing a dedicated AI safety arm in response to increasing regulatory and public-safety pressure.
Key Takeaways
– The decision reflects heightened corporate and regulatory scrutiny over how generative-AI chatbots interact with minors and the emotional and mental health risks those interactions can pose.
– Age-verification and usage-cap mechanisms are now being adopted as key risk-mitigation tools, but questions remain about their effectiveness, privacy implications, and enforcement.
– This move could set a precedent for how other AI platforms handle under-18 users, potentially accelerating industry-wide guardrails and prompting legislative action.
In-Depth
In a significant policy shift, Character.AI announced that it will bar users under 18 from participating in open-ended conversations with its chatbots. The change is being phased in: a two-hour daily cap for minors takes effect immediately, and the full ban on open-ended chats for under-18 users takes effect on November 25, 2025. According to reporting by the Associated Press, the change responds to mounting legal pressure and public outcry after several lawsuits alleged that the company's chatbots engaged in harmful behavior—including emotionally manipulative and sexually explicit conversations involving minors, and at least one case linked to a teenager's suicide. The Verge adds that Character.AI will redirect under-18 users to a separate "teen-safe" environment with non-chat features such as character creation and video generation, while eliminating open-ended chats for that cohort entirely.
From a conservative standpoint, this move underscores both the promise and peril of generative-AI products. On the plus side, companies are finally acknowledging the real-world implications of their technology—especially for vulnerable users—and taking responsibility for the downstream consequences. The introduction of age-verification algorithms and third-party ID checks signals a maturing of the industry's risk frameworks. On the other hand, the announcement raises critical questions about privacy, effectiveness, and the role of platform self-regulation versus legislative mandates. Age verification via behavior-based modeling and identity submission raises obvious privacy concerns and may not fully stop determined underage users from gaining access. Major safety questions also remain: what constitutes "safe" chat for minors, how should platforms monitor for emotional dependency or grooming risk, and who ultimately holds liability when things go wrong?
For parents and guardians, the change highlights the importance of active oversight and digital literacy. If a major chatbot platform is moving to lock minors out of open-ended AI conversation entirely, that signals a risk inherent in the product's design, not just an isolated flaw. Households need to talk openly about what AI companions are—and are not—and build habits around transparency, time limits, and reviewing digital interactions. On the policy side, this event may well accelerate legislative responses: California recently unveiled new laws mandating that chatbots warn users they are interacting with AI and include safeguards around self-harm content—a trend likely to go national. Conservative voices may applaud the alignment of private-sector caution with broader societal and familial interests, but will also push back against overly heavy-handed regulation that could stifle innovation or impose one-size-fits-all burdens on a diverse tech ecosystem.
In short, Character.AI's decision is a bellwether. It signals that the era of "anything goes" for AI chat companions—particularly among minors—is coming to a close. The company's move to impose hard age gates, usage caps, and dedicated safe modes may not solve every problem, but it raises the bar for industry accountability and invites parents, educators, policymakers, and tech firms to engage more seriously with the mental-health and developmental stakes of AI in the home.