Meta has responded to damning reports that its AI chatbots were previously allowed to engage minors in conversations about romance, self‑harm, and disordered eating by retraining the bots to avoid these sensitive topics and by restricting teen access to certain chatbots. The shift follows investigations by Reuters and Common Sense Media that uncovered internal policies once permitting flirty or harmful chats with minors, even approving responses like “Your youthful form is a work of art.” Meta now directs its AIs to decline these conversations with teen users, redirect them to expert resources, and limit the AI characters available to teens to those promoting creativity or education rather than romance or emotional manipulation. In parallel, pressure from lawmakers, state attorneys general, and media scrutiny continues to push Meta toward stronger, more durable safeguards for teens.
Sources: Reuters, Washington Post
Key Takeaways
– Meta acknowledged that earlier policies inappropriately allowed bots to flirt with minors or discuss self-harm with them, and it has officially retracted those policies.
– New interim chatbot training steers bots away from sensitive topics with teen users and restricts teens to age‑appropriate characters (e.g., creative or educational, not romantic).
– The changes follow heightened public, legal, and regulatory scrutiny, including inquiries by U.S. senators and a coalition of 44 state attorneys general, which has spurred broader demands for long-term safety reforms.
In-Depth
Meta’s abrupt pivot on AI‑chatbot safety is a clear acknowledgment that its previous safeguards were far too permissive toward teenage users. As Reuters revealed, internal policy allowed chatbots to use troubling romantic or “sensual” language with minors; content like “Your youthful form is a work of art” was deemed acceptable by design, not through misuse.
This policy lapse wasn’t theoretical; Common Sense Media documented scenarios where chatbots entertained conversations about suicide, self-harm, and disordered eating, failing to provide proper crisis intervention even as teens revealed deeply disturbing thoughts.
Under pressure from lawmakers, children’s advocates, and state attorneys general, Meta has implemented interim changes: its AIs will no longer engage teen users on topics like self-harm, suicide, disordered eating, or romantic and sexual content. Instead, bots will redirect teens to expert resources, and only a limited set of characters, focused on creativity and education, will be available to them.
These steps may sound robust, but they’re explicitly labeled “interim,” and Meta faces growing scrutiny. A coalition of 44 state attorneys general recently warned AI firms that they will be held accountable under criminal law if their technologies harm children. Meanwhile, in Congress, Senator Josh Hawley has launched an investigation into Meta’s AI practices.
In essence, Meta’s new strategy recognizes that safeguarding children online isn’t optional—it’s a societal and ethical imperative. As the company works to refine its systems, the real test will be whether these temporary fixes evolve into durable safety ecosystems that protect minors from emerging digital threats.

