Former British Prime Minister Liz Truss has publicly pushed back against recent moves by the United Kingdom, Australia, and Canada to restrict access to Elon Musk’s social media platform X over concerns that its built-in generative AI tool, Grok, is being used to create non-consensual sexually explicit images. Truss dismissed the regulatory pressure as unwarranted, framing it as part of a broader ideological clash with governments she views as censorious. Meanwhile, regulators from London to Ottawa and Canberra are weighing legal action or enhanced oversight under existing online safety laws in response to mounting reports that Grok has been misused to produce deepfake pornography and related illegal content. The scrutiny has also prompted emergency measures elsewhere: Indonesia and Malaysia have restricted the AI tool entirely, citing human rights and digital safety concerns.
Sources:
https://www.theepochtimes.com/world/liz-truss-rejects-moves-by-uk-australia-canada-to-restrict-elon-musks-x-5969414
https://apnews.com/article/c7cb320327f259c4da35908e1269c225
https://www.theguardian.com/australia-news/2026/jan/13/grok-x-anthony-albanese-australia-politicians-condemn-post-platform
Key Takeaways
• Liz Truss rejects coordinated government pressure from the UK, Australia, and Canada to restrict or regulate Elon Musk’s X platform over AI safety concerns.
• Governments worldwide are escalating actions against Grok’s misuse, with countries like Indonesia and Malaysia temporarily blocking or restricting the AI chatbot due to non-consensual explicit imagery concerns.
• Regulatory bodies in the UK and Australia are threatening fines or potential bans under online safety laws, highlighting tensions between free speech advocacy and digital safety enforcement.
In-Depth
The debate over AI regulation and online safety has reached a critical juncture as governments grapple with how to balance individual freedoms against emerging threats posed by generative technologies. At the center of the latest controversy is Grok, an AI chatbot developed by Elon Musk’s xAI and integrated into the X platform (formerly Twitter). Designed as a conversational AI, Grok’s image generation capabilities have come under fire after users exploited them to create sexually explicit deepfake images of individuals — including non-consensual depictions involving women and minors.
In response, regulators from Canberra to London have signaled intentions to hold platforms accountable under existing online safety legislation. In Australia, Prime Minister Anthony Albanese publicly condemned the misuse of AI tools as “abhorrent,” aligning with calls for stricter enforcement of digital content laws. Similar sentiments have emerged in the UK, where the communications regulator Ofcom has launched an investigation into X’s compliance with the Online Safety Act, and legislators are considering new measures to criminalize the production and dissemination of non-consensual intimate content. These actions reflect growing political will to confront the darker side of AI innovation, particularly where digital abuse intersects with child safety and privacy.
Yet, former UK Prime Minister Liz Truss has pushed back against this regulatory trend, dismissing what she frames as heavy-handed interventions by Western governments. Truss argues that attempts to restrict technological platforms like X amount to an overreach that suppresses free expression and undermines innovation. Her stance underscores a broader ideological rift within conservative circles about the appropriate role of government in policing online spaces versus defending free speech principles.
Complicating matters, other nations have already taken decisive steps: Indonesia and Malaysia temporarily blocked Grok’s services entirely after deeming them incompatible with domestic human rights and digital safety standards. These international reactions intensify pressure on Western regulators to align their approaches and signal that new technologies must be held to account in protecting citizens — particularly the most vulnerable — from exploitation and abuse.
The dispute illustrates an ongoing global struggle over how democracies should adapt legal frameworks to address the rapid evolution of AI, with leaders and policymakers balancing innovation, free speech, and public safety in an increasingly interconnected digital arena.