Australia’s eSafety Commissioner, Julie Inman Grant, has come under scrutiny over her sweeping authority to order content removed from global platforms, following notices to X and Meta demanding takedowns of footage of killings, including those of Iryna Zarutska and a Dallas motel manager. Critics argue that these removal orders, which extend beyond Australia’s borders, risk overreach into free-speech territory and lack adequate checks and balances. Senator Alex Antic and others warn that the eSafety role may now exceed what citizens would regard as reasonable regulatory power. The controversy builds on existing debates over the Online Safety Act and its enforcement, including court challenges over what “all reasonable steps” requires and whether geo-blocking counts as compliance, and has reignited concerns about accountability and transparency in online content governance.
Sources: eSafety.gov.au, Epoch Times
Key Takeaways
– The eSafety Commissioner’s current remit allows issuing removal notices not just domestically, but also demanding takedowns of content hosted overseas — a scope that critics say may amount to censorship overreach.
– Legal challenges have already tested these powers (e.g. whether geo-blocking counts as compliance under the “all reasonable steps” rule), highlighting vagueness in the law’s enforcement mechanics.
– Opponents argue that stronger oversight, clearer definitions, transparency in decision making, and legislative checks are needed to balance protecting Australians from harmful content with safeguarding freedom of expression.
In-Depth
Australia has been pushing hard to modernize regulation of online content under its Online Safety Act 2021, giving the eSafety Commissioner significant authority to police digital platforms for illegal or highly harmful material. The law enables removal notices, ISP blocks, and accountability mechanisms for platforms failing to act. But in recent weeks, Commissioner Julie Inman Grant’s efforts to order content removal by global services like X and Meta have drawn fresh fire over the boundaries of her authority.
At issue is footage depicting violent acts (for instance, killings in Dallas, videos of Iryna Zarutska’s death) that eSafety has flagged as “class 1 material” — meaning content that would receive a “Refused Classification” (RC) rating or is likely to do so under Australia’s classification regime. Under the law, once a removal notice is issued, platforms must take “all reasonable steps” to comply. One method has been geo-blocking: making the content inaccessible to Australian IP addresses, even if it remains visible elsewhere. A recent court case found that geo-blocking could satisfy the “reasonable steps” standard, yet left ambiguities about when further action (e.g. full deletion or deindexing) is warranted. In effect, the law gives the Commissioner discretion over how far to push enforcement, with limited guidelines for platform compliance.
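To make the compliance mechanics concrete, the sketch below shows how IP-based geo-blocking of the kind discussed above might work in principle. This is a minimal illustration, not any platform's actual implementation: the prefix-to-country table stands in for a real GeoIP database (e.g. MaxMind's), and the content ID and notice list are hypothetical.

```python
# Illustrative sketch of IP-based geo-blocking as a removal-notice
# compliance method. The GEOIP_TABLE is a stand-in for a real GeoIP
# database; the content IDs and withheld regions are hypothetical.
import ipaddress

# Hypothetical lookup table: network prefixes mapped to ISO country codes.
GEOIP_TABLE = {
    ipaddress.ip_network("1.128.0.0/11"): "AU",
    ipaddress.ip_network("8.8.8.0/24"): "US",
}

# Content subject to a (hypothetical) removal notice, with the
# jurisdictions in which it must be withheld.
WITHHELD = {"video-123": {"AU"}}

def country_of(ip: str):
    """Return the country code for an IP, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for net, code in GEOIP_TABLE.items():
        if addr in net:
            return code
    return None

def is_blocked(content_id: str, client_ip: str) -> bool:
    """True if this content must be withheld for this client's region."""
    return country_of(client_ip) in WITHHELD.get(content_id, set())

# A request from an Australian address is withheld; the same content
# remains visible from a US address.
print(is_blocked("video-123", "1.128.0.1"))  # True
print(is_blocked("video-123", "8.8.8.8"))    # False
```

The point of the sketch is the ambiguity the court case left open: the content still exists and is served everywhere else, so whether this alone satisfies "all reasonable steps", or whether full deletion or deindexing is required, remains a judgment call under the Act.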
The eSafety office has also issued notices to commentators and public figures whose content the Commissioner deems in breach of that standard, even though eSafety maintains it does not target opinion or political speech. These actions underscore how blurred the line can become between moderating graphic violence and suppressing speech. Critics argue that when a regulator demands removal of content worldwide, not just within Australia’s jurisdiction, it raises serious questions about sovereignty, censorship creep, and free speech beyond Australia’s borders.
Senator Alex Antic, among others, has publicly warned that the Commissioner’s powers are becoming too broad, exceeding what the public would regard as acceptable, and has joined calls in Parliament for stronger oversight, clearer legislative guardrails, and accountability mechanisms. The office has already come under pressure over a draft code that would require logins for search-engine access, over age-verification mandates, and over efforts to extend regulation to search engines, raising fears of creeping control over general internet use, not just social media.
The tension here is complex: Australia aims to protect citizens, including minors, from graphic and harmful content, but the instruments chosen concentrate considerable power in an unelected regulator, with few transparent constraints. As platforms, free speech advocates, and lawmakers push back, the coming months may prove pivotal in defining — and possibly restricting — how governments regulate speech online in a digital era.

