Generative AI attacks are accelerating at an alarming rate, according to a recent ITPro article citing two Gartner reports that warn 29% of organizations have experienced an attack on their AI application infrastructure over the past year. Threat actors are increasingly leveraging LLMs to automate phishing, code generation for malware, deepfake impersonation, and social engineering at scale. At the same time, many organizations admit their security posture isn’t keeping up: outdated defenses, lack of robust access controls, and insufficient oversight of AI models are leaving critical vulnerabilities exposed.
Sources: ClaytonRice.com, IBM
Key Takeaways
– Generative AI is being weaponized in multiple dimensions—code generation, phishing, impersonation, and prompt attacks—raising both speed and scale of cyberthreats.
– Security gaps are not merely technical: organizational readiness, model governance, access control, and monitoring are lagging behind emerging AI-threat vectors.
– Defenses are evolving, but the balance is shifting: without more aggressive adoption of AI-based detection, better oversight, and updated risk frameworks, many enterprises are exposed to growing risk.
In-Depth
We’re in a moment where generative AI has gone from being a promising tool to a prominent adversary in the cybersecurity landscape. The ITPro piece makes clear that nearly a third of organizations (29 %) have seen attacks targeting their AI infrastructure in the last 12 months, per new Gartner data. But that’s just scratching the surface of what’s happening.
Attackers are increasingly abusing large language models (LLMs) and related generative tools to automate tasks that used to require skilled human hackers: writing malicious code, crafting highly personalized phishing lures, conducting voice impersonations and deepfake-style attacks. Google’s “Adversarial Misuse of Generative AI” blog describes how most of the observed misuse involves accelerating malicious campaigns—e.g. LLMs being asked to generate malware snippets or phishing content. Meanwhile, social engineering has become a particularly reliable vector: in recent reports, a large share of incidents begin via phishing or impersonation, now made more convincing and scalable thanks to AI.
From the defensive side, the problems are mounting. Many organizations are still using cybersecurity tools designed for yesterday’s threats—signature-based malware detection, rigid perimeter defenses, and limited model oversight. IBM’s mapping of AI-attack vectors highlights issues like prompt injection (where attackers manipulate input to an AI to force it to bypass its safeguards), model extraction/inversion (revealing or copying sensitive internal model behavior), data poisoning (corrupting training data), and supply chain risks. Also, access control and governance around models often lag policy, leaving systems vulnerable. Moreover, budget, talent shortages, and legacy infrastructure make tightening these gaps difficult.
On the upside, awareness is rising—and so are more advanced defensive responses. Enterprises and security vendors are starting to build defensive AI tools to monitor, detect, and respond to attacks in real time. Frameworks for securing models, monitoring prompt inputs, and tightening identity management are being proposed. But the consensus in the field is: we need to move beyond reactive patchwork. Without proactive governance, tighter operational controls, and AI-aware architecture, organizations risk being outpaced by attackers who are already weaponizing AI.
In short: generative AI has dramatically changed the risk profile in cybersecurity. Attackers are faster, more adaptive, and their tools are scaling. Defenders are scrambling. The urgent question for organizations now is not whether they’ll face AI-enabled attacks—but how prepared they are to detect, respond to, and contain them before the damage becomes irreversible.

