Enterprises face a growing danger from employees using unsanctioned artificial-intelligence tools (“shadow AI”), which can expose sensitive data and internal workflows without the knowledge of IT or security teams. According to a report by IT Pro published November 17, 2025, more than 90% of companies now have workers deploying chatbots or AI assistants, while only about 40% formally track those tools, leaving behind a trail of chat logs, prompt histories and metadata that can be weaponized by attackers. The article outlines real-world incidents, including a database leak of AI conversation history from a third-party service and prompt-injection attacks in enterprise Slack environments. Meanwhile, additional research from security-focused blogs and industry reports echoes the same themes: AI-native systems are evolving faster than traditional governance frameworks and creating new attack surfaces that many companies do not yet understand, monitor or secure. These developments are an urgent wake-up call for businesses to update their AI governance, train personnel on acceptable usage, and implement continuous visibility into AI workflows and tools.
Sources: IT Pro, Material Security
Key Takeaways
– Employees are increasingly using consumer-grade AI tools in the workplace without IT oversight, creating unmanaged entry points for data leakage and intellectual-property exposure.
– Traditional security controls and audits—designed for static applications—are insufficient for AI-native ecosystems, where models, prompts, plugins and workflows evolve dynamically and invisibly.
– Organizations that delay establishing formal policies, AI inventory tracking and continuous monitoring may face elevated breach costs, regulatory exposure and a widening competitive gap.
In-Depth
In today’s corporate world, the emergence of generative AI tools such as chatbots, internal copilots and large-language-model assistants has introduced a new dimension to enterprise productivity. Many companies enthusiastically embrace the convenience: staff can rapidly draft reports, automate summaries or generate code snippets. But beneath that convenience lies a significant security hazard, one that many firms are only now beginning to grasp. The term “shadow AI” describes this phenomenon: the use of AI tools within organizations that bypasses formal IT approval, oversight or governance. While reminiscent of the older “shadow IT” paradigm, shadow AI carries far greater risk because it often involves sensitive data flows, unmonitored prompts and unseen model-driven decision-making.
According to the IT Pro analysis published November 17, 2025, enterprises face worryingly high levels of uncontrolled AI usage. The article notes that more than 90% of companies surveyed reported employees using chatbots or AI assistants for work tasks, whereas only about 40% of firms acknowledged tracking or subscribing to those tools in an approved capacity. Among the most alarming details is a court order in June 2025 requiring a major AI vendor to retain all chat logs, even deleted ones, highlighting the depth of data-retention and audit gaps. In another case, a prompt-injection attack forced a corporate Slack AI tool to leak sensitive internal data, showing how even approved platforms can be manipulated.
But these examples are just the tip of the iceberg. Complementary industry articles paint a broader landscape of AI-native risk: unvetted developer installs of model-agent frameworks, databases of prompt history left exposed, plugin or tool misuse granting access to system credentials, and AI workflows running for months outside approved channels. For example, a blog by Lasso Security defines shadow AI as unsanctioned generative-AI tools used by employees without oversight and warns of data leaks, compliance failures and untraceable decisions. Meanwhile, research cited in LeadDev indicates that 62% of organizations have no visibility into where large language models (LLMs) are used, and that shadow AI may surpass shadow IT in terms of risk.
What makes shadow AI such a formidable threat is the combination of invisibility and velocity. Traditional audits and security tools are built around static codebases, defined APIs and known workflows. AI-driven workflows, by contrast, are dynamic: they may involve invisible prompts, models with unknown lineage, retrieval APIs that pull from vector embeddings, plugins that execute outside the normal software lifecycle, or browser extensions that bypass firewall controls. As one article notes, the problem is not just hidden components; it is living, shifting ones. The “AI Bill of Materials” (AI-BOM) concept has been proposed as a way to catalog every model, prompt, dataset, tool and flow in an AI stack, but very few organizations have robust systems in place to implement it.
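To make the AI-BOM idea more concrete, here is a minimal, hypothetical sketch of what one inventory record could look like. The field names, classifications and the ai_bom.json output path are illustrative assumptions, not part of any published AI-BOM standard.

```python
# Illustrative sketch of an AI Bill of Materials (AI-BOM) record.
# Field names and values are hypothetical; adapt them to your own
# governance taxonomy before relying on anything like this.
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class AIAsset:
    name: str                      # e.g. an internal copilot or chatbot integration
    asset_type: str                # "model", "prompt", "dataset", "plugin", "workflow"
    owner: str                     # accountable team or individual
    data_classification: str       # "public", "internal", "confidential", ...
    approved: bool                 # has this asset passed formal review?
    last_reviewed: date | None = None
    data_flows: list[str] = field(default_factory=list)  # systems it reads from or writes to


def export_aibom(assets: list[AIAsset], path: str = "ai_bom.json") -> None:
    """Write the current AI asset inventory to a JSON file for audit purposes."""
    with open(path, "w", encoding="utf-8") as fh:
        json.dump([asdict(a) for a in assets], fh, default=str, indent=2)


if __name__ == "__main__":
    inventory = [
        AIAsset(
            name="sales-summary-chatbot",
            asset_type="workflow",
            owner="revenue-ops",
            data_classification="confidential",
            approved=False,                      # a shadow-AI candidate flagged for review
            data_flows=["crm", "shared-drive"],
        ),
    ]
    export_aibom(inventory)
```

Even a simple record like this gives security teams something to audit: which AI assets exist, who owns them, what data they touch and whether they have been reviewed.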
From a conservative governance perspective, the path forward involves several essential steps. First, companies must accept that unsanctioned AI usage is not a hypothetical; it is already prevalent. The conversation should shift from “blocking AI” to “bringing AI into the fold where it can be managed, logged and audited.” Second, boards and C-suite executives must ensure that AI-usage policies are established: who may use which tools, for what data, with what controls and retention policies. Third, security teams need to extend their visibility: asset inventories must include AI models, prompts and data flows just as they do servers and endpoints, and audit trails, logging, role-based access and change control must cover these assets too. Fourth, employee training must emphasize the risks of pasting confidential spreadsheets or internal documents into AI chat windows. Finally, incident-response programs must evolve to include AI-native threats: prompt injection, model inversion, unauthorized plugin access and drift in model behavior over time.
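As an illustration of what extended visibility and logging might involve in practice, the sketch below shows a small, hypothetical screening wrapper that an organization could place in front of an approved AI assistant. The regex patterns, the logger name and the commented-out call_ai_backend() hook are assumptions made for the example, not references to any real product.

```python
# Illustrative sketch only: a thin logging/screening wrapper in front of an
# approved AI assistant. Patterns and hooks are hypothetical placeholders.
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_usage_audit")

# Crude examples of markers suggesting confidential content is being pasted.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like pattern
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\binternal use only\b"),
]


def screen_and_log_prompt(user: str, tool: str, prompt: str) -> bool:
    """Log the request and return False if the prompt should be held for review."""
    flagged = any(p.search(prompt) for p in SENSITIVE_PATTERNS)
    audit_log.info(
        "user=%s tool=%s time=%s flagged=%s chars=%d",
        user, tool, datetime.now(timezone.utc).isoformat(), flagged, len(prompt),
    )
    return not flagged


# Example: only forward prompts that pass screening to the (hypothetical) backend.
if screen_and_log_prompt("a.chen", "internal-copilot", "Summarize this CONFIDENTIAL deck"):
    pass  # call_ai_backend(prompt) would go here
else:
    audit_log.warning("Prompt withheld pending security review.")
```

The point is not the specific patterns, which are deliberately crude, but the principle: every prompt leaves an audit trail tied to a user and a tool, so AI usage becomes visible to the same processes that already govern servers and endpoints.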
In a climate where cyber adversaries increasingly leverage AI-driven methods, organizations that fail to secure their AI surface now may find themselves with exploitable “time-bomb” data stores—chat logs, prompt histories and AI workflows that reveal patterns, strategies and proprietary information. From a conservative risk-management standpoint, governing and controlling AI tools before they become entrenched is far preferable to cleaning up after a breach occurs. The transparency and traceability that enterprises demand for fiscal audits, regulatory compliance and corporate governance must now extend into the AI domain. It’s not just about controlling new tools—it’s about protecting trust, protecting data, and preserving competitive advantage in a world where “shadow AI” may turn out to be one of the largest blind spots in modern security.

