California Attorney General Rob Bonta has issued a cease-and-desist letter to Elon Musk's AI company xAI, demanding that it immediately halt the generation and distribution of nonconsensual sexually explicit "deepfake" images created through the Grok AI model, including images depicting adults and minors in intimate situations without their consent. Bonta's office has also opened an investigation into whether the company violated state law by facilitating the widespread creation of these harmful images. The action comes amid reports that Grok's image-editing features were being misused to produce thousands of sexually explicit, nonconsensual images, some involving children, and it follows concerns from multiple jurisdictions about inadequate safeguards against AI misuse. The letter emphasizes that producing or enabling these materials may violate criminal statutes and unfair-business-practice laws, and it sets a strict compliance deadline for xAI to demonstrate what actions it has taken to prevent further violations. Authorities cite documented instances of Grok being used to "undress" people and generate sexualized content without permission, prompting legal warnings and broader regulatory scrutiny. The state's demand underscores a growing legal focus on holding generative AI platforms accountable when their tools are exploited to produce illegal or harmful content. Sources for this report include official statements and independent media coverage.
Sources:
https://www.oag.ca.gov/news/press-releases/attorney-general-bonta-sends-cease-and-desist-letter-xai-demands-it-halt-illegal
https://www.reuters.com/legal/litigation/california-ag-sends-cease-desist-letter-xai-deepfake-images-2026-01-16/
https://www.theepochtimes.com/us/california-ag-sends-cease-and-desist-letter-to-musks-xai-over-nonconsensual-explicit-images-5972475
Key Takeaways
• California AG Rob Bonta has formally demanded xAI stop generating and distributing nonconsensual sexually explicit AI images and deepfakes, particularly those involving minors.
• The cease-and-desist letter stems from widespread misuse of Grok's image capabilities to create intimate images without consent, raising legal and ethical concerns under state law.
• This move highlights increasing regulatory pressure on AI companies to implement safeguards and comply with laws protecting individuals from harmful digital content.
In-Depth
In a decisive legal move, California Attorney General Rob Bonta has stepped in to confront what state officials describe as widespread misuse of generative artificial intelligence tools by Elon Musk’s AI startup xAI, particularly its Grok chatbot, which has been used to produce nonconsensual sexually explicit images. The cease-and-desist letter, sent on January 16, 2026, directs xAI to immediately stop the creation, distribution, or facilitation of deepfake content that depicts individuals—especially minors—in intimate scenarios without their consent. This action represents a growing trend in regulatory scrutiny being applied to next-generation AI systems as governments grapple with how to enforce existing laws in the context of rapidly evolving digital technologies.
According to the official press release from the California Attorney General’s office, state investigators have documented numerous instances where ordinary images, often of women and children, were digitally altered using Grok’s image generation and editing features to portray subjects in suggestive or explicit situations. These modifications allegedly occurred without the knowledge or permission of the people depicted. The letter cites specific provisions of California’s civil and criminal statutes, including laws against child sexual abuse material and unfair business practices, arguing that xAI’s facilitation of such content not only harms individuals but also contravenes state legal protections designed to prevent exploitation and abuse.
The investigation into xAI began after an “avalanche of reports” detailing explicit AI-generated material surfaced, prompting Bonta’s office to evaluate whether the company’s operations violated California law. The legal demand underscores that the creation, distribution, or exhibition of such content—especially when depicting minors—constitutes a crime and carries significant legal penalties. Beyond the direct legal consequences, the case has broader implications for how generative AI platforms will be regulated and held accountable when their tools are adapted or misused by users.
Critics of xAI argue that the company failed to implement adequate safeguards to prevent its technology from being exploited for harm, despite ample warning signs and mounting complaints. Reports suggest the Grok tool's "spicy mode" and image-editing capabilities were marketed in ways that may have encouraged or enabled the production of such explicit content. Once the controversy intensified, xAI imposed restrictions, such as limiting certain image edits to paying subscribers and geoblocking features in jurisdictions where such content is illegal. Regulators maintain, however, that these measures are insufficient and that the company must take immediate, enforceable steps to prevent further abuse.
The crisis with Grok is part of a larger international backlash against AI-generated deepfakes and sexually explicit materials. Other countries and regulatory bodies have similarly expressed concern and initiated investigations or sanctions related to the misuse of generative AI tools. The California action reflects a broader push to ensure that technological innovation does not outpace society’s ability to protect individual rights and safety. It also signals to other AI developers that the absence of robust safeguards against harmful content could result in legal challenges and regulatory intervention.
For now, xAI faces a firm compliance deadline, with the state demanding concrete proof of steps taken to mitigate and stop the production and dissemination of nonconsensual explicit imagery. How the company responds will likely shape future legal precedents and enforcement strategies in the AI landscape. Experts and legal observers alike are watching closely, as outcomes here could influence national and global policies governing AI safety, content moderation, and accountability at a time when digital technology continues reshaping how information is produced and shared.