Meta Platforms, Inc. has filed a motion to dismiss a lawsuit brought by Strike 3 Holdings LLC, a Miami-based adult-film distributor, which alleges that Meta illegally downloaded at least 2,396 adult videos from 2018 onward via corporate and “hidden” IP addresses to train its AI systems. Meta argues the allegations lack factual support, noting that the downloads average only about 22 per year across 47 addresses and suggesting the activity reflects disparate individuals’ “personal use” rather than coordinated data harvesting for AI. The company further contends that its research into multimodal and video-generation AI did not begin until 2022, which would render the earlier downloads irrelevant, and that Strike 3 has provided no evidence linking the downloaded files to any Meta AI model. Meta dismisses the plaintiff’s theory as built on “guesswork and innuendo,” asserts that its employee network policy prohibits the use of adult content for AI generation, and questions the plaintiff’s attribution of more than 2,500 “hidden” IP addresses to Meta.
Sources: Decrypt, Ars Technica
Key Takeaways
– Meta’s defense hinges on scale and timing: the low volume of alleged downloads (approximately 22 per year) and the fact that much of the activity predates its AI video-generation research undercut the plaintiff’s claim of systematic data harvesting.
– The company frames the lawsuit as speculative and driven by a plaintiff it characterizes as a “copyright troll,” suggesting the suit is motivated by financial leverage rather than grounded in evidence that the adult content was used to train any AI model.
– The case spotlights broader legal and reputational risks for large tech firms, underscoring the need for clear audit trails of data used in AI training, defensive readiness against allegations of infringing content use, and corporate policies and network monitoring covering employee and contractor activity.
In-Depth
In a high-stakes legal confrontation that reflects the unsettled terrain of AI training and copyright liability, Meta Platforms, Inc. has moved to dismiss a lawsuit brought by Strike 3 Holdings LLC, which alleges that Meta secretly downloaded thousands of adult-film titles to train its AI models. According to Strike 3’s complaint, filed in July 2025, Meta used both corporate and concealed IP addresses to torrent at least 2,396 copyrighted adult-film works beginning in 2018, an effort allegedly tied to the development of its video-generation system (“Movie Gen”) and broader multimodal AI.
Meta’s legal response, filed in the U.S. District Court for the Northern District of California, argues that the evidence offered by Strike 3 fails to establish that Meta directed or controlled the downloads, let alone incorporated the material into any AI model. Among its arguments: the alleged downloads average only about 22 titles per year across dozens of IP addresses, far too few to supply the massive datasets typically required to train generative AI. Moreover, Meta contends that its work on video-generation AI did not begin until around 2022, meaning downloads alleged to have begun in 2018 predate the research they supposedly supported.
The company further emphasizes the complications inherent in network attribution: tens of thousands of employees, contractors, vendors, and visitors access Meta’s global network daily, making it implausible to attribute ambiguous IP-address activity to Meta’s AI-training operations. Meta also says its policies prohibit employees from using adult content for AI generation, and notes that the complaint does not identify any specific individual at Meta who engaged in or approved the torrenting.
Strike 3, for its part, argues that the alleged pattern of torrents and the existence of a “stealth network” of more than 2,500 hidden IP addresses point to coordinated, systematic infringement, and it seeks more than $350 million in damages. Meta counters by describing Strike 3’s litigation strategy as typical of so-called “copyright-troll” behavior: pursuing large claims on limited factual grounding.
Beyond the parties, this case underscores a broader shift in how courts and companies will handle AI-training data: firms must now contend not only with the legitimacy of using copyrighted content but also with the traceability of data sources, internal oversight of employee activity, and the implications of network-based evidence (such as torrent logs) for corporate liability. Even if Meta succeeds in dismissing this suit, the dispute sets a marker for how generative-AI developers document, monitor, and, when necessary, defend the provenance of their training datasets.

