Developers who trust AI-based coding assistants now face a fresh and subtle software-supply-chain risk called “slopsquatting”: large language models (LLMs) hallucinate plausible but non-existent package names, and malicious actors pre-register those names in public repositories, so unwitting developers end up installing malware-laden dependencies. According to research, roughly 20% of AI-generated code samples contain such phantom packages. Firms like Chainguard note that the shift toward “vibe coding”, in which developers quickly accept AI-crafted code without thorough review, magnifies the danger: fewer humans eyeball every dependency and traditional vetting steps get bypassed. To mitigate the threat, experts advise layering security controls: verifying package provenance, employing Software Bill of Materials (SBOM) tracking, performing sandboxed installations, adjusting AI-assistant prompts, and retaining human oversight in development workflows.
Sources: Trend Micro, IT Pro
Key Takeaways
– Slopsquatting arises when AI assistants hallucinate library names that attackers then register in public repositories with malicious payloads.
– The shift to rapid, AI-driven “vibe” coding workflows diminishes human review of dependencies, increasing vulnerability to supply-chain compromise.
– Strong mitigation demands a dual approach of AI-tool tuning plus robust pipeline controls (dependency audits, SBOMs, sandboxing) rather than relying solely on legacy practices.
In-Depth
With software development increasingly relying on AI-powered coding assistants, a new threat vector has quietly emerged: slopsquatting. The term describes a sequence in which an AI model suggests a library or package name that does not, in fact, exist; an attacker, anticipating the hallucination, has already registered that name in a public repository (for example, PyPI or npm) and embedded malicious code; and the developer, trusting the suggestion, installs it. The attacker thereby subverts the developer’s dependency chain, allowing malware, backdoors or data-exfiltration tools to slip into production code under the guise of a legitimate dependency.
Why is this happening now? AI coding assistants have changed the equation. Instead of writing every line, developers increasingly rely on natural-language prompts and generative tools to scaffold entire blocks of code. Known as “vibe coding,” this workflow emphasises speed and creativity, sometimes at the expense of deeper validation. The problem is that AI models, while astonishingly capable, still hallucinate, generating output that appears valid but is not grounded in reality. When an assistant suggests “import superfastjson” (for example) and no such package exists, a developer who installs it anyway may be pulling down something an attacker has pre-emptively published with malicious intent.
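Before any of the defences discussed later can kick in, the new names an assistant introduces have to be surfaced. As a minimal, illustrative sketch (assuming Python 3.10+ for sys.stdlib_module_names; the code fragment and the superfastjson name are hypothetical, carrying on the example above), the snippet below uses the standard ast module to list the top-level, non-standard-library modules an AI-generated fragment would import, so they can be vetted before anyone types pip install.

```python
import ast
import sys

def third_party_imports(source: str) -> set[str]:
    """Return top-level module names a piece of Python source would import."""
    tree = ast.parse(source)
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    # Anything shipped with the interpreter is fine; the rest needs vetting.
    return {n for n in names if n not in sys.stdlib_module_names}

# Hypothetical AI-generated fragment containing a hallucinated package name.
ai_snippet = """
import json
import superfastjson  # plausible-looking, but assumed not to exist

data = superfastjson.loads('{"ok": true}')
"""

print(third_party_imports(ai_snippet))  # -> {'superfastjson'}
```

Surfacing the names is only the first step; deciding whether each one is real and trustworthy is where the controls below come in.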
Research bears this out: one study found that of more than 700,000 AI-generated code snippets, roughly 19.7% referenced packages that did not exist. Even more concerning, nearly half of those hallucinated names occurred repeatedly. That means attackers can predict which package names to register and weaponise.
Traditional supply-chain defences were designed around typosquatting or dependency-confusion — where human error or ambiguous naming lets attackers slip in. Slopsquatting is different: it originates in AI’s mistaken creativity and exploits the sheer trust developers place in AI-assisted output.
An article published by IT Pro highlights how Chainguard’s SVP of Engineering described slopsquatting as “a modern twist on typosquatting,” noting that as AI enables massive code generation, the human review element shrinks, elevating risk. Defensive strategies must evolve accordingly: it is no longer sufficient to rely on a familiar lock file and a known-vulnerabilities database. Instead, organisations should adopt a layered approach:
– Mandate human review of every AI-suggested dependency.
– Integrate real-time verification of whether a package exists in trusted registries (a minimal check of this kind is sketched after this list).
– Employ SBOM-generation in build pipelines so that every dependency’s provenance is traceable.
– Sandbox installations of newly referenced libraries and monitor runtime behaviour for anomalies (see the disposable-container sketch after this list).
– Tune AI assistants: use stricter prompting, lower creativity (temperature), and where possible have the AI cross-check its own suggestions against known package lists.
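On the verification point, a pre-install existence check can be as simple as querying the registry’s metadata endpoint. The sketch below is a minimal illustration rather than a complete vetting tool: it calls PyPI’s public JSON API (https://pypi.org/pypi/&lt;name&gt;/json) using only the standard library and treats a 404 as a likely hallucination, leaving deeper provenance checks (package age, maintainer history, download counts) to other tooling. The superfastjson name is the hypothetical example carried over from above.

```python
import json
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)          # valid metadata came back
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:          # unregistered name: likely hallucinated
            return False
        raise                        # anything else: fail closed and investigate

for name in ["requests", "superfastjson"]:
    status = "registered" if exists_on_pypi(name) else "not on PyPI; do not install blindly"
    print(f"{name}: {status}")
```

Note that a hit is necessary but not sufficient: an attacker may already have registered the hallucinated name, which is exactly the slopsquatting scenario, so existence checks belong alongside provenance tracking and sandboxing rather than in place of them.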
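For the sandboxing step, one common pattern is to trial-install a new dependency somewhere disposable before it touches a developer machine or CI runner. The sketch below assumes Docker and the public python:3.12-slim image are available, and that the import name matches the distribution name (not always true); it is a rough isolation measure, not a substitute for the runtime monitoring the list above calls for.

```python
import shlex
import subprocess

def trial_install(package: str) -> int:
    """Install and import a candidate package inside a throwaway container."""
    probe = (
        f"pip install --no-cache-dir {shlex.quote(package)} && "
        f"python -c 'import {package}; print({package}.__name__)'"
    )
    cmd = [
        "docker", "run", "--rm",      # container is deleted when it exits
        "--pids-limit", "128",        # modest guard rails for the trial run
        "--memory", "512m",
        "python:3.12-slim",
        "sh", "-c", probe,
    ]
    result = subprocess.run(cmd)
    return result.returncode          # non-zero exit: treat the package as suspect

# Example: vet a candidate dependency before adding it to the real project.
if trial_install("requests") != 0:
    print("Trial install failed; do not promote this dependency.")
```

The resource limits narrow the blast radius of a hostile install script, but a hardened sandbox or an internal mirror of approved packages remains the stronger long-term control.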
From a conservative planning perspective, the message is clear: progress is good, but risk remains. The trend toward using AI in development isn’t going away — nor should it. But we cannot accept that speed should override security. As the stakes rise (with software underpinning critical systems, financial processes and enterprise operations), letting unverified dependencies into your build is simply irresponsible. For organisations that pride themselves on reliability and resilience, slopsquatting represents both a new frontier of threat and a call-to-action: maintain discipline in your tech stack, retain human judgment alongside AI, and treat every dependency as if it could be an attack vector until proven otherwise.
In summary: slopsquatting is not science fiction; it is real, and it is manageable, but only if you assume the worst, ask the tough questions, and don’t let the buzz of AI lull you into a false sense of security.

