A recent SiliconANGLE feature, “Shadow AI: Companies struggle to control unsanctioned use of new tools,” reports that enterprises across the board are grappling with the rampant use of unsanctioned artificial intelligence tools, a practice now known as shadow AI. Employees often operate outside IT oversight: a survey by Prompt Security Inc. found an average of 67 AI tools in use per organization, 90% of them lacking IT approval. The informal use of tools such as ChatGPT, Grammarly, and Canva has introduced pressing risks of data exposure, regulatory breaches, and intellectual property leakage, especially since generative AI models may retain and propagate proprietary inputs. Governance remains underdeveloped: only 14% of employees say their companies’ AI-use policies are “very clear,” and 39% say it is easy to use unapproved tools. Together, these trends highlight how rapidly shadow AI is proliferating, and how unprepared many organizations are to manage it.
Sources: SiliconANGLE
Key Takeaways
– Shadow AI is ubiquitous. Employees across roles are folding AI tools into their workflows without formal validation or oversight, and adoption is only accelerating.
– Risk magnification. Unsanctioned AI usage exposes enterprises to data breaches, IP leaks, regulatory non-compliance, and the propagation of hallucinated or incorrect outputs.
– Governance gaps are glaring. Few organizations have clear AI usage policies or the means to enforce them, and banning tools outright has proven ineffective—suggesting a better path lies in visibility, education, and strategic enablement.
In-Depth
Shadow AI has quietly become one of the thorniest challenges for modern organizations, not because employees are trying to undermine security, but because they’re striving to stay productive.
In a world where AI tools are just a browser tab away, workers routinely tap ChatGPT, Grammarly, and Canva to polish emails, draft presentations, or craft content, often without IT knowing. A survey from Prompt Security Inc. found that organizations run an average of 67 different AI tools, 90% of them without formal IT approval. That’s not just sloppy; it’s a potential regulatory and security minefield. Generative AI’s unpredictability adds to the danger: LLMs can hallucinate, leak intellectual property, or absorb sensitive inputs into training data that later models may reproduce, exposing enterprises to governance and compliance breaches.
Worse, governance hasn’t caught up. Only 14% of employees say their workplace has crystal-clear AI-use policies, and 39% say they can easily use unsanctioned tools anyway. Outright bans aren’t working either: clamping down just pushes the practice underground, further eroding visibility. Instead, organizations should aim to illuminate and integrate shadow AI, not extinguish it. That means rolling out approved, secure AI platforms; deploying visibility and detection tools such as AI governance dashboards; educating employees on safe AI usage; and adopting a zero-trust mindset around data handling in AI contexts. The visibility piece, at its simplest, can start from telemetry a security team already has, as the sketch below illustrates.
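By way of illustration (this is not from the article, and not a reference to any vendor’s product), here is a minimal Python sketch of that first visibility step: scanning an egress proxy log for traffic to well-known generative-AI services. The log path, the assumption that each log line contains a full URL, and the domain watchlist are all hypothetical choices made for the example.

```python
"""Hypothetical sketch: surface possible shadow-AI usage from proxy logs.

Assumptions (not from the article): logs are plain text with one request
per line containing a destination URL; the domain watchlist is illustrative.
"""
import re
from collections import Counter
from pathlib import Path

# Illustrative watchlist of well-known generative-AI endpoints.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "api.openai.com",
    "grammarly.com",
    "canva.com",
    "claude.ai",
    "gemini.google.com",
}

# Pull the hostname out of a URL embedded in a log line.
HOST_RE = re.compile(r"https?://([^/\s:]+)")

def scan_proxy_log(path: Path) -> Counter:
    """Count requests per watched AI domain found in a proxy log file."""
    hits: Counter = Counter()
    for line in path.read_text(errors="ignore").splitlines():
        match = HOST_RE.search(line)
        if not match:
            continue
        host = match.group(1).lower()
        # Match the domain itself or any subdomain of it.
        for domain in AI_DOMAINS:
            if host == domain or host.endswith("." + domain):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    counts = scan_proxy_log(Path("proxy.log"))  # hypothetical log path
    for domain, n in counts.most_common():
        print(f"{domain}: {n} requests")
```

In practice a security team would feed this from proxy or DNS telemetry and reconcile the hits against its sanctioned-tool list; the point is simply that basic visibility is attainable before any dedicated governance platform is in place.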
Firms like Vorlon are already offering platforms to surface and map AI usage across SaaS applications and data flows, giving security teams real-time awareness of hidden tools. By reframing shadow AI not as defiance but as a sign of employee ingenuity, companies can turn ungoverned sprawl into a structured strategic advantage, empowering innovation while keeping risk in check.

