European Union member states have agreed on a common draft for online child-protection legislation that stops short of requiring global tech companies to automatically detect and remove child sexual abuse material. The revised stance from the European Council gives companies such as Google and Meta the option, but not the obligation, to scan for illicit content; enforcement will instead be left to national authorities. The compromise reflects pressure from privacy advocates and mirrors a broader regulatory retreat amid rising concerns over government overreach.
Key Takeaways
– EU member states have rejected mandatory scanning/removal of child sexual abuse material by Big Tech, opting instead for a risk-assessment framework enforced at the national level.
– The softened legislation is seen as a victory for U.S. tech firms and privacy-advocacy groups, reducing the specter of mass surveillance that critics warned could arise under stricter rules.
– The decision reflects a wider trend in 2025 of regulatory pullback across the West, influenced by political pressure and concerns over civil-liberties protections.
In-Depth
Late November 2025 marked a notable shift in Brussels: the European Council quietly backed away from a previously ambitious plan to force major technology firms into continuous surveillance of their platforms for child sexual abuse content. The new draft framework drops the sweeping, automated content scanning once proposed, which would have required firms such as Google and Meta to police private messages, file uploads, and shared content in real time. Instead, each company is required only to conduct periodic risk assessments, while enforcement and penalties fall to individual member states.
This development comes after heavy pressure from privacy advocates who warned that the earlier proposal would violate fundamental rights to encryption and private communication. That earlier version, backed by the European Parliament's 2023 draft, would have required immediate removal and reporting of known and newly detected images or videos of abuse, including grooming attempts. Under the new compromise, that level of oversight will not be required.
Supporters of the rollback, including anti-surveillance organizations and pro-tech industry voices, are calling it a win for civil liberties and for American tech firms already facing a complex web of global regulatory scrutiny. Critics, in contrast, warn that relegating enforcement to 27 separate national authorities operating under different legal standards could severely compromise the legislation's effectiveness, leaving loopholes wide enough for predators to exploit. They argue the patchwork approach may result in uneven enforcement and inconsistent protections for victims.
In its new form, the legislation still establishes an EU-wide mechanism: a proposed EU Centre on Child Sexual Abuse meant to coordinate efforts and assist with compliance. But even that body could prove more symbolic than effective if national governments do not commit to rigorous implementation. The voluntary nature of content scanning, particularly after the current privacy exemptions expire in April 2026, means the law may fall short of its original ambition.
Viewed in the political context of 2025, the shift seems emblematic of a broader regulatory retrenchment. As U.S. platforms push back, and with increasing public unease over mass digital surveillance, Brussels appears to be recalibrating: balancing child-protection priorities against growing demands for privacy and tech flexibility.

