    Tech

    Agentic AI Takes Centre Stage in Cyber Defence – But Governance Gaps Raise Alarms

    4 Mins Read

    Enterprises are accelerating the adoption of “agentic” AI—systems capable of autonomous decision-making and action—into their cybersecurity arsenals, signalling a major shift in digital defence strategy. According to a recent e-book from Snyk and Enterprise Strategy Group, cited by ITPro, securing these agentic systems is now just as critical as securing traditional code. Meanwhile, TechRadar reports that by 2028 roughly one-third of enterprise applications may include agentic AI—yet misuse risks such as prompt-injection attacks and logic-layer corruption are already surfacing. At the same time, research highlighted by TechRadar finds that 98% of organisations plan to expand AI-agent use in the next year, while 96% view agents as a rising security threat—a worrying readiness gap in governance and visibility.

    Sources: TechRadar, IT Pro

    Key Takeaways

    – The shift to agentic AI represents a transformational change in cybersecurity from human-triggered responses to autonomous actions—offering efficiency gains but also significantly increasing risk exposure.

    – The governance, visibility, and control frameworks for agentic AI are lagging enterprise deployments, leaving many organizations vulnerable to misuse, privilege escalation, and “objective drift.”

    – While the business case for agentic AI in defence is strong (boosting speed, scale and autonomy), failure to integrate identity management, clear oversight, and auditing mechanisms could lead to more high-profile security breaches.

    In-Depth

    The era of agentic AI is here, and it’s reshaping cybersecurity much more rapidly than many organisations realise. At its core, agentic AI refers to systems not merely executing defined prompts, but thinking, planning and acting on their own terms towards specific goals. This is a departure from traditional generative AI where a human prompt triggers a response; agentic AI blends autonomy with intent and decision-making at machine speed.

    According to a white paper referenced by ITPro, enterprises are being urged to treat agentic AI with the same level of rigour used for their code and software assets—visibility, testing, runtime control and security all matter. The argument is straightforward: if an AI agent is making decisions on your behalf, about your data, your networks or your workflows, then it becomes a crown-jewel asset in need of crown-jewel protection.

    On the upside, there’s plenty of drive behind this change. TechRadar reports that adoption is accelerating, and projections suggest by 2028 about one in three enterprise apps could embed agentic AI. For defenders, that means the perennial burden of alert overload, understaffed SOC teams and silent breaches could begin to give way to faster, machine-enabled reactions and fewer missed threats.

    Yet with great speed comes great danger. One recent survey indicated that while 98% of organisations plan to expand AI-agent usage soon, 96% already consider these agents a growing security threat. Alarmingly, just over half of those firms have full visibility into what their agents can access or do. These gaps in oversight, auditability and identity control create precisely the kind of environment adversaries love.

    Misuse scenarios are already emerging: prompt injection attacks, logic-layer manipulation, and “objective drift” where an agent’s goals subtly shift beyond its initial remit. In fact, one security-industry executive recently warned that agentic AI projects may fail at rates far higher than traditional AI initiatives, due largely to the weak governance around them.

    From a conservative-leaning viewpoint, the message is clear: embracing technology leaps is smart—but doing so without robust frameworks is reckless. Businesses must not just experiment with agentic AI for defence's sake, but do so with clear guardrails. Identity and access controls should treat agents as first-class actors in the enterprise security model. Audit trails, runtime monitoring, red-teaming of agentic workflows and strict boundaries on agent capabilities are non-negotiable.
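    To make the guardrail pattern concrete, here is a minimal, purely illustrative sketch of the approach described above: an agent modelled as a first-class identity with an explicit capability allowlist and an audit trail, denying anything outside its remit by default. All class and action names here are hypothetical, not drawn from any real framework or the sources cited.

    ```python
    # Hypothetical sketch: an AI agent as a first-class identity with a
    # strict capability allowlist and an append-only audit trail.
    # Names ("AgentIdentity", "triage-bot", action strings) are illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AgentIdentity:
        name: str
        allowed_actions: set          # hard boundary on what the agent may do
        audit_log: list = field(default_factory=list)

        def request(self, action: str, target: str) -> bool:
            """Check the allowlist, record every attempt, deny by default."""
            permitted = action in self.allowed_actions
            self.audit_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "agent": self.name,
                "action": action,
                "target": target,
                "permitted": permitted,
            })
            return permitted

    # An agent scoped to alert triage only:
    triage_bot = AgentIdentity("triage-bot", {"read_alert", "open_ticket"})
    triage_bot.request("read_alert", "alert-123")       # permitted and logged
    triage_bot.request("disable_firewall", "fw-01")     # denied, but still logged
    ```

    The point of the sketch is the shape, not the code: every action is checked against an explicit remit and every attempt, allowed or denied, lands in the audit trail—exactly the kind of visibility the surveys above suggest most firms currently lack.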

    In short: Agentic AI offers a powerful opportunity for cybersecurity teams to leap ahead of adversaries—but only if the governance, analytics and operational oversight catch up. Failing to deal with the “autonomous defender” shift could create new vulnerabilities that adversaries will exploit faster than organisations realise. The direction is set: machine-enabled defence is arriving. The question now is whether businesses will be ready for the autonomy that comes with it.

