Internal documents from Meta Platforms, the parent company of Facebook and Instagram, indicate that the company estimated about 10.1% of its 2024 annual revenue—roughly US$16 billion—came from advertisements tied to scams and banned goods. The documents further show that Meta’s platforms served an estimated 15 billion “higher-risk” scam ads daily in December 2024 and that Meta banned advertisers as fraudulent only when its systems were at least 95% certain of fraud.
Sources: Reuters, ABC.net.au
Key Takeaways
– Meta estimated that roughly 10% of its 2024 revenue stemmed from ads for scams and prohibited goods, signaling a massive revenue stream tied to illicit or grey-market advertising.
– The scale of exposure is vast: internal figures show 15 billion higher-risk scam ads served daily (December 2024) across Meta’s platforms, and Meta at one point estimated that a third of all successful U.S. online scams involved its network.
– Although Meta claims to be fighting fraud more aggressively, the internal documents reveal revenue-driven constraints on enforcement—for example, a directive that anti-fraud actions should not cost the company more than roughly 0.15% of total revenue—and suggest systemic weakness in preventing scammers from using its ad auctions.
In-Depth
In a revealing disclosure of internal company data, Meta Platforms finds itself under intensified scrutiny following reports that it projected around 10% of its 2024 revenue would come from ads tied to scams, banned goods and other illicit content. With estimated total ad revenue for that year exceeding US$160 billion, the figure translates into roughly US$16 billion in revenue derived from what the company labels “higher-risk” advertising. The documents also indicate that, at one point, Meta’s own assessment was that its platforms were involved in about a third of all successful online scams in the U.S. This raises serious questions about the balance between growth-driven ad monetization and the integrity of the advertising ecosystem.
From a conservative viewpoint, the optics are troubling: a major advertising platform appears to have allowed, or at least tolerated, large-scale distribution of scam ads because they helped drive revenue. Meta’s internal enforcement threshold meant that advertisers whose likelihood of fraud fell below the 95% certainty bar could continue to buy ads, with the only “punishment” being higher ad rates rather than outright suspension. In effect, the company monetised suspected fraudulent advertising. This approach, while profitable in the short term, calls into question the ethical stance Meta has publicly taken on protecting users and advertisers.
Moreover, the documents show enforcement decisions were constrained by revenue-impact guardrails. The vetting team was reportedly directed not to take actions that could cost the firm more than 0.15% of revenue—a figure that suggests how financial incentives may have blunted stricter enforcement. While Meta’s spokesperson maintains that the numbers presented offer a selective view and that the final figures were lower, the fact remains that the internal estimate was formally generated and discussed at senior levels.
For users, legitimate advertisers and regulators alike, this signals a need for tighter oversight and clearer accountability. If a platform repeatedly allows scam ads to reach users, the trust users place in its ad ecosystem erodes. From a regulatory perspective, the fact that the fines Meta anticipated were dwarfed by the revenue extracted from scam ads suggests a misalignment between risk and consequence. Through a conservative lens, this underscores the need for regulatory frameworks that impose stronger deterrents, and for companies to treat user protection not as a compliance checkbox but as a core business priority.
Meta’s acknowledgment that it removed over 134 million pieces of scam-ad content in 2025 is a step, but the sheer volume underscores the systemic scale of the problem. The pressing question now: will Meta pivot its business model and enforcement posture in a way that genuinely prioritises ad integrity over ad volume, or will profit imperatives continue to blunt the strength of its user-protection commitments? For investors, regulators and policymakers, the moment demands transparency, stronger guardrails and a realignment of platform incentives with public trust and safety.