A new report from Concentric AI reveals that Microsoft Copilot accessed nearly three million sensitive data records per organization during the first half of 2025, raising alarms about data governance and oversight. According to the report, about 57 percent of files shared organization-wide contained privileged or confidential information, and in sectors like healthcare and financial services that figure approaches 70 percent. On average, two million sensitive business records per organization were shared without restrictions, and over 400,000 were shared externally to personal accounts, many of them containing confidential data. The report also highlights persistent “data sprawl”: companies maintain millions of duplicate, stale, orphaned, or inactive records, which makes governance and oversight harder. Meanwhile, businesses deploying Copilot broadly often struggle to contain oversharing, limit permissions, and ensure that AI outputs respect classification labels. In short: as organizations lean more heavily on generative AI tools like Copilot, the risks of unintentional exposure, intellectual property leakage, and compliance failures grow.
Sources: TechRadar, Help Net Security
Key Takeaways
– Copilot is interacting with far more confidential and privileged data than many organizations anticipate, often inheriting permissions broader than any task requires.
– Weak data hygiene—duplicate, stale, orphaned records and lax sharing policies—compounds the exposure risk when AI tools are layered on top of legacy systems.
– Effective governance, granular access controls, and output classification strategies must evolve alongside AI adoption, or the consequences for compliance, security, and reputation could be severe.
In-Depth
As enterprises adopt more AI tools to boost productivity, they often underestimate how much trust they are placing in those systems to manage sensitive information responsibly. The Concentric AI Data Risk Report for 2025 spotlights this blind spot, showing that Microsoft’s Copilot, deeply integrated into Microsoft 365 environments, accessed nearly three million sensitive records per organization in just six months. This isn’t just a theoretical risk: in sectors like healthcare and finance, roughly 70 percent of shared files already contain privileged or confidential content.
What makes Copilot especially concerning is that it inherits the access rights of the signed-in user and the Microsoft 365 tenant configuration. If employees or systems already have overly permissive access, Copilot can “see” and act on data those users don’t even realize they can reach. Even more worrying, the tool does not consistently propagate classification labels or enforce the security posture of the original files, so its output can surface sensitive data without the protections or warnings attached to the source.
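Because Copilot’s reach is a direct function of what a user can already touch, a practical starting point is auditing the sharing grants on high-value files. The sketch below is a minimal illustration against the Microsoft Graph permissions endpoint; the access token, drive ID, and item ID are placeholders (in practice you would acquire a token via MSAL), and the scope values checked follow Graph’s documented sharing-link model, where "anonymous" corresponds to an “Anyone” link.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"   # placeholder: acquire via MSAL in a real deployment
DRIVE_ID = "<drive-id>"    # placeholder IDs for illustration only
ITEM_ID = "<item-id>"

def audit_item_permissions(drive_id: str, item_id: str) -> list[dict]:
    """Return sharing grants on a file that are broader than a direct user grant."""
    resp = requests.get(
        f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    findings = []
    for perm in resp.json().get("value", []):
        link = perm.get("link") or {}
        scope = link.get("scope")  # 'anonymous', 'organization', or 'users'
        if scope in ("anonymous", "organization"):
            findings.append({
                "permission_id": perm.get("id"),
                "scope": scope,            # 'anonymous' == an "Anyone" link
                "roles": perm.get("roles", []),
            })
    return findings

if __name__ == "__main__":
    for f in audit_item_permissions(DRIVE_ID, ITEM_ID):
        print(f"Broad grant {f['permission_id']}: scope={f['scope']} roles={f['roles']}")
```

Running a loop like this across a tenant’s document libraries surfaces exactly the inherited access Copilot would silently exploit.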
Then there’s the backdrop of poor data hygiene: organizations surveyed averaged tens of millions of duplicate records, millions of stale files, and large pools of orphaned or inactive data. This clutter makes it harder to track which data matters and who owns it, and when Copilot is layered atop that mess, the potential for accidental leaks or misuse grows sharply.
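Flagging that clutter doesn’t require anything exotic. As a rough sketch (not a description of how Concentric’s product works), the script below walks a local export of a file share, hashes file contents to find exact duplicates, and marks anything untouched beyond a chosen threshold as stale; the root path and the 365-day cutoff are arbitrary assumptions for illustration.

```python
import hashlib
import os
import time
from collections import defaultdict

STALE_DAYS = 365          # arbitrary threshold for "stale"
ROOT = "/mnt/file-share"  # placeholder path to a local export of the share

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash file contents in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

duplicates = defaultdict(list)  # content hash -> list of paths
stale = []                      # files not modified within STALE_DAYS
cutoff = time.time() - STALE_DAYS * 86400

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
            duplicates[sha256_of(path)].append(path)
        except OSError:
            continue  # unreadable entries are themselves a hygiene signal

dupe_groups = {h: paths for h, paths in duplicates.items() if len(paths) > 1}
print(f"{len(dupe_groups)} duplicate groups, {len(stale)} stale files")
```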
The pressure to deploy AI quickly exacerbates the risk. Companies surveyed by Gartner admit that governance and deployment costs often run higher than anticipated, forcing many to limit Copilot to “low-risk” groups or delay full rollout. Oversharing and content sprawl are already major pain points in many Microsoft 365 environments, and AI only accelerates their impact.
To manage this safely, organizations need to rethink governance in three dimensions: permissions, prevention, and post-processing. Permissions must be as restrictive as possible (least privilege) and monitored and audited regularly. Prevention must include automated detection of overshared files, “Anyone” link misuse, and suspicious access activity. Finally, post-processing governance should ensure that any AI output is reclassified, checked, and governed before wider sharing. Without these safeguards, enterprises risk undermining their own security in the name of productivity.
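For the post-processing dimension, even a simple pattern-based gate between “AI produced this” and “this gets shared” catches the obvious failures. The sketch below is deliberately minimal, using regular expressions for a few well-known identifier formats; a real deployment would call a proper DLP or classification service, and both the patterns and the sample draft are illustrative assumptions, not exhaustive rules.

```python
import re

# Illustrative patterns only; production DLP rules are far more precise.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_output(text: str) -> dict[str, int]:
    """Count pattern matches in AI-generated text before it is shared."""
    counts = {name: len(p.findall(text)) for name, p in SENSITIVE_PATTERNS.items()}
    return {name: n for name, n in counts.items() if n > 0}

# Hypothetical Copilot draft, used purely for demonstration.
draft = "Ping Jane at jane.doe@example.com; her SSN is 123-45-6789."
hits = screen_output(draft)
if hits:
    print(f"Hold for review, matches found: {hits}")
else:
    print("No obvious sensitive patterns; proceed to label and share.")
```

A gate like this doesn’t replace label propagation, but it gives the “reclassify and check before wider sharing” step a concrete enforcement point.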

