Organizations worldwide are reporting a dramatic increase in data policy violations tied to generative artificial intelligence (GenAI). Recent research from Netskope Threat Labs shows that the average organization now logs roughly 223 such incidents each month, more than double the rate of a year earlier, as AI tools spread through business workflows without adequate oversight. The surge is driven in large part by "shadow AI," in which employees use unsanctioned personal AI tools that bypass corporate governance and expose sensitive information. Despite growing efforts to block high-risk applications and tighten controls, nearly half of users still rely on unmanaged AI accounts, amplifying risks to regulated data, source code, and proprietary information across sectors.
Sources:
- https://www.techradar.com/pro/security/average-organization-now-reporting-over-200-genai-related-data-policy-violations-each-month
- https://securitybrief.asia/story/generative-ai-drives-surge-in-workplace-data-breaches
- https://betanews.com/2026/01/06/genai-data-policy-violations-more-than-doubled-in-2025/
Key Takeaways
- GenAI misuse is widespread: The typical organization now records more than 200 data policy violations per month tied to generative AI use, more than double the rate of a year earlier, reflecting both sanctioned and unsanctioned tool usage.
- Shadow AI fuels risk: A large share of violations stems from employees using personal or unmanaged AI tools that fall outside corporate security and compliance frameworks, leaving sensitive data vulnerable.
- Security controls are tightening but lagging: Most companies now block multiple GenAI applications and are expanding governance, yet enforcement and employee education often trail adoption, leaving gaps that policy and IT teams must prioritize closing.
In-Depth
In the past year, the corporate landscape has witnessed an unprecedented rise in the use of generative artificial intelligence—not just as a productivity booster but, increasingly, as a source of data governance headaches. According to a recent report from Netskope Threat Labs, organizations today are logging an average of roughly 223 GenAI-related data policy violations every month. That’s more than double the rate from just a year earlier, underscoring how deeply these tools have penetrated business processes and how poorly many companies are prepared to govern them.
A key driver of this surge is “shadow AI”—the deployment and use of AI tools by employees without formal approval or oversight from IT and security teams. In these scenarios, well-intentioned workers seeking efficiency gains upload internal documents, code, and regulated information to external AI platforms via personal accounts, often unaware of the compliance and security implications. The result? Sensitive data ends up outside the protective boundaries of corporate controls, creating exposures that range from intellectual property leakage to regulatory noncompliance.
At the same time, companies are reacting. According to industry reporting, nine in ten organizations now actively block at least one GenAI application, with an average of ten tools blocked per firm. Despite this tightening posture, nearly half of all AI tool usage still occurs through unmanaged channels, highlighting a persistent gap between policy and practice.
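Blocking at this scale is typically enforced at the network edge, for instance by a secure web gateway that matches outbound requests against a curated list of GenAI domains. As a rough, hypothetical illustration of that matching logic (the domain names below are placeholders, not any vendor's actual blocklist or API):

```python
# Hypothetical sketch of a GenAI egress blocklist check.
# The domains are placeholders; a real deployment would pull a curated,
# regularly updated list from a secure web gateway or proxy policy.

BLOCKED_GENAI_DOMAINS = {
    "example-genai-app.com",    # placeholder for an unsanctioned GenAI tool
    "another-ai-assistant.io",  # placeholder
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname matches a blocked GenAI domain,
    including any of its subdomains (e.g. api.example-genai-app.com)."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in BLOCKED_GENAI_DOMAINS
    )

if __name__ == "__main__":
    for host in ("api.example-genai-app.com", "docs.internal.corp"):
        verdict = "BLOCK" if is_blocked(host) else "ALLOW"
        print(f"{verdict:5} {host}")
```

Matching subdomains as well as apex domains matters in practice, since many AI services expose separate API and web endpoints under the same parent domain.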
Addressing this challenge requires more than just blocking rogue apps; it demands comprehensive governance frameworks that combine clear policies, employee education, and real-time monitoring of AI activity. Without such measures, the tension between rapid innovation and responsible data stewardship is likely to deepen, leaving organizations grappling with both the promise and peril of generative AI.
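To make the monitoring piece concrete, the sketch below shows, under broad assumptions, what a DLP-style inline check on text headed to a GenAI tool might look like. The pattern names and regular expressions are deliberately simplified illustrations, not a real product's detection rules; production DLP engines layer on validation, context analysis, and ML classifiers.

```python
# Minimal sketch of DLP-style scanning of text bound for a GenAI tool.
# Patterns are simplified, illustrative examples only.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this: customer jane@corp.com, SSN 123-45-6789."
    findings = scan_prompt(prompt)
    if findings:
        # A real control point might block the request, redact matches,
        # or alert the security team; here we just report the finding.
        print(f"Policy violation: found {', '.join(findings)}")
```

In practice, such a check would sit in a forward proxy, browser extension, or endpoint agent so it fires before data leaves the corporate boundary, which is precisely where shadow AI usage otherwise escapes visibility.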

