European regulators have stepped up pressure on major American and Chinese tech platforms by scrutinizing Elon Musk’s AI chatbot Grok and TikTok for alleged violations of EU digital laws. The European Commission says it is “very seriously” examining Grok following reports that the AI model has been used to generate sexually suggestive and explicit images of minors and adults, content the EU has called illegal and appalling under existing standards. Regulators are also reviewing AI-generated videos and posts on TikTok that allegedly influenced political sentiment in violation of Europe’s transparency and digital labeling laws. The dual probes come amid wider concerns about the reach of AI on social platforms and the strength of European regulatory frameworks, including the Digital Services Act (DSA), as governments push back against perceived gaps in content moderation and platform accountability. EU officials say enforcement of these rules could signal a tougher line on tech companies that increasingly shape information landscapes across borders.
Sources:
https://www.semafor.com/article/01/07/2026/eu-targets-musks-grok-and-tiktok-over-ai-concerns
https://www.euronews.com/my-europe/2026/01/05/eu-commission-examining-concerns-over-childlike-sexual-images-generated-by-elon-musks-grok
https://www.dailysabah.com/business/tech/global-backlash-mounts-over-groks-ai-made-sexualized-images
Key Takeaways
- European regulators are investigating Elon Musk’s Grok AI for allegedly generating illegal sexually explicit content, including childlike sexual images, raising enforcement questions under EU digital law.
- TikTok is also under EU scrutiny for allegedly failing to properly label or moderate AI-generated political content, reflecting growing concerns about platform accountability.
- The investigations underscore broader tensions between tech innovation and regulatory demands for safety, transparency, and compliance in digital platforms.
In-Depth
Europe’s technology regulators are signaling that even the most influential global platforms won’t be immune from legal scrutiny when their systems run afoul of established digital standards. In early January 2026, the European Commission confirmed it is “very seriously” looking into complaints about Grok, an AI model created by Elon Musk’s xAI and integrated into his social platform X, amid reports that the tool has been used to generate and disseminate sexually suggestive deepfake imagery of adults and minors. EU officials described such content as illegal under existing EU content rules and said it has no place in European digital spaces. The controversy was amplified by the rollout of a so-called “Spicy Mode” in the AI tool, which critics say made it easier for users to prompt the creation of explicit deepfakes. While xAI says it is working to strengthen its safeguards, regulators are preparing to enforce compliance and may pursue action under the EU’s Digital Services Act if violations are confirmed.
At the same time, TikTok, owned by Chinese parent ByteDance, remains in the crosshairs for its handling of AI-generated political content that allegedly influenced public sentiment without proper disclosure or moderation. The European Commission’s attention toward TikTok underscores a consistent theme in Brussels: platforms that shape information flows must adhere to strict transparency and content standards or face consequences. Scrutiny of both Grok and TikTok highlights the broader debate over balancing technological innovation with established legal protections, and reflects the regulatory pressure U.S. and international tech companies face in global markets today.

