A recent article in The Epoch Times warns that artificial intelligence (AI) is becoming a liability in cybersecurity rather than a blessing. The piece cites a May McKinsey & Company report noting that AI systems are vulnerable to adversarial attacks such as data poisoning and privacy breaches, meaning the AI tools once seen as bolstering defenses can themselves become weak links. Complementary reporting underscores how quickly threat actors are weaponizing AI, using generative models for phishing, deepfakes, and polymorphic malware that bypass traditional defenses. Separate research points out that even the browser-based "agent" AI tools used by employees pose elevated risk, since they often lack oversight, training, or built-in safeguards. Together, these sources raise growing alarm about how AI's dual-use nature, as both protection and threat, is shifting cybersecurity dynamics and leaving enterprises and governments exposed unless they adapt quickly.
Sources: Epoch Times, Security.com
Key Takeaways
– AI systems, initially adopted as defensive tools, are increasingly exposed to adversarial manipulation and can therefore become the weakest link in a cybersecurity chain.
– Threat actors are using AI to scale attacks and make them more sophisticated, including deepfakes, hyper-personalized phishing, and polymorphic malware that traditional defenses struggle to address.
– Even internal AI tools (e.g., employee browser agents) and organizational reliance on AI without proper oversight or training introduce novel vulnerabilities; securing AI infrastructure and human-AI interaction is critical.
In-Depth
It’s odd to admit, but AI, once the marquee hope for ramping up cybersecurity defenses, is now being flagged as the very weak link organizations must watch. The report from The Epoch Times highlights how vulnerabilities like data poisoning, model inversion, and privacy inference are not just academic threats but real-world risks to systems thought to be secure. This flips the narrative: instead of AI merely helping fend off bad actors, AI is now being weaponized, subverted, or repurposed by those same actors to launch more creative, larger-scale attacks. And the conservative side of me sees this as validation of the old adage: human judgment, process discipline, and layered defense still matter, perhaps more than ever.
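To make the data-poisoning risk concrete, here is a minimal sketch (my own illustration, using scikit-learn and synthetic data, not anything drawn from the cited reporting) of how flipping even a small fraction of training labels can quietly degrade a classifier that downstream systems then trust.

```python
# Hypothetical illustration of training-data poisoning via label flipping.
# Synthetic data and model choice are illustrative, not from the source.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "malicious vs. benign" dataset standing in for security telemetry.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

baseline = train_and_score(y_train)

# An attacker flips 10% of the training labels, leaving features untouched.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"clean-label accuracy:    {baseline:.3f}")
print(f"poisoned-label accuracy: {train_and_score(poisoned):.3f}")
```

The point is not the specific numbers but the mechanism: the model still trains, still produces confident scores, and nothing in the pipeline flags that its ground truth was tampered with.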
What makes this particularly concerning is the speed and scale at which adversaries are adapting. The security commentary shows generative AI being used for social engineering (highly convincing phishing messages), deepfake voices and videos for impersonation, and increasingly polymorphic malware that mutates to evade detection. Traditional signature-based defenses simply cannot keep up when the attack surface evolves dynamically. The internal dynamics are shifting as well: it is not just attackers who are using AI; organizations themselves are deploying AI tools (browser agents, task-automation bots) with little thought given to their security posture. Employees, once the weakest link, may be displaced by AI agents that introduce a different kind of weakness: an agent carries out tasks but lacks training, context awareness, and critical judgment. That is a vulnerability.
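To illustrate what a basic oversight layer around such an agent might look like, here is a hypothetical sketch; the action names and the default-deny policy are my own assumptions, not anything described in the source reporting.

```python
# Hypothetical guardrail around an AI agent's proposed actions.
# Action names and policy sets are illustrative, not from the source.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"read_page", "summarize", "draft_email"}
NEEDS_HUMAN_APPROVAL = {"send_email", "submit_form", "download_file"}

@dataclass
class ProposedAction:
    name: str
    target: str

def execute(action: ProposedAction) -> str:
    if action.name in ALLOWED_ACTIONS:
        return f"executed {action.name} on {action.target}"
    if action.name in NEEDS_HUMAN_APPROVAL:
        # Escalate instead of acting autonomously: the agent lacks the
        # context and judgment to make this call on its own.
        return f"held for human review: {action.name} on {action.target}"
    # Default-deny anything outside the policy, e.g. an action injected
    # by a malicious page the agent happened to browse.
    return f"blocked: {action.name} is not in policy"

print(execute(ProposedAction("summarize", "intranet.example/wiki")))
print(execute(ProposedAction("send_email", "cfo@example.com")))
print(execute(ProposedAction("exfiltrate_data", "unknown-host.example")))
```

The design choice here is the important part: the agent proposes, the policy disposes, and anything unrecognized is refused rather than attempted.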
From a conservative viewpoint, this suggests we cannot place blind faith in "deploy AI and we're secure." Instead, we must emphasize robust governance, human oversight, accountability, and resilience. Training staff, establishing audit trails, compartmentalizing AI functions, and maintaining fallback manual processes become more vital. And while innovation matters, priorities such as national defense, critical infrastructure, and business continuity demand that we not rush AI deployment simply for novelty. The weaponization of AI by foreign adversaries and criminals means defensive strategies must get ahead of the curve. Organizations must ask: do we fully understand the model's training data, its adversarial vulnerabilities, its supply-chain dependencies, and what happens if our AI tools themselves are compromised?
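As one small example of what an audit trail for AI tool use could look like in practice, here is a hypothetical sketch; the logging fields and the decorator pattern are illustrative choices on my part, not a standard taken from the reporting.

```python
# Hypothetical audit-trail wrapper for AI tool calls.
# Field names and the stand-in "summarize" tool are illustrative only.
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def audited(tool_name):
    """Record who called which AI tool, with what input, and when."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, prompt, *args, **kwargs):
            record = {
                "ts": time.time(),
                "tool": tool_name,
                "user": user,
                "prompt": prompt,
            }
            audit_log.info(json.dumps(record))
            return fn(user, prompt, *args, **kwargs)
        return wrapper
    return decorator

@audited("summarizer")
def summarize(user, prompt):
    # Stand-in for a real model call.
    return prompt[:50] + "..."

print(summarize("analyst-42", "Quarterly incident report: three phishing attempts..."))
```

Even a record this simple gives an investigator something to reconstruct after the fact, which is precisely what most ad hoc AI tool deployments lack today.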
In short, AI's promise remains real, but so do its risks. And the message now is not only to use AI for defense, but to defend the AI itself. Because if your AI can be flipped, your strongest defender can instantly become your weakest link.

