A recently discovered bug in Microsoft's Copilot AI allowed the system to read and summarize emails marked as confidential in users' Drafts and Sent Items folders, even though policies were in place to stop it from doing so, sparking serious privacy concerns among enterprises and cybersecurity experts. Microsoft has acknowledged the issue, tracked as CW1226324, and begun rolling out a fix while asserting that no access occurred outside authorized user rights; even so, the incident highlights broader risks in rapid AI deployments.
Sources
https://www.itpro.com/technology/artificial-intelligence/microsoft-copilot-bug-saw-ai-snoop-on-confidential-emails-after-it-was-told-not-to
https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails
https://www.thenews.com.pk/latest/1393111-microsoft-copilot-bug-exposes-confidential-emails-to-ai
Key Takeaways
• A software flaw in Microsoft 365 Copilot Chat enabled access to emails labeled “confidential” in Sent and Draft folders, bypassing Data Loss Prevention (DLP) safeguards.
• Microsoft has deployed a global configuration update to fix the problem but has not disclosed comprehensive data on the number of affected organizations.
• The bug underscores ongoing security and privacy challenges as AI capabilities are rapidly integrated into enterprise environments.
In-Depth
Microsoft has publicly confirmed that a bug in its Copilot AI software, specifically in the Microsoft 365 Copilot Chat feature, allowed the generative AI assistant to access and summarize emails that organizations had explicitly labeled as confidential, content that existing data loss prevention (DLP) and sensitivity policies were supposed to block. The problem, first detected in late January this year and cataloged internally as CW1226324, affected how Copilot handled email in users' Sent Items and Drafts folders, causing the AI to process material it was not meant to include in summarization tasks. Microsoft acknowledged the issue to multiple outlets, saying that while access controls and protection policies remained intact, the behavior was inconsistent with the intended experience and with internal expectations for content filtering. In response, the company says it has begun rolling out a configuration update globally to prevent recurrences and is actively monitoring the effectiveness of the fix.
The bug has raised red flags among enterprise IT administrators and cybersecurity observers because it illustrates how quickly AI features can be deployed into core business workflows without adequate safeguards for sensitive information. The system in question is integrated deeply into Microsoft’s productivity suite, allowing AI-assisted summaries of email content across Outlook and other Microsoft 365 apps — a feature that, under normal policy configurations, should skip emails marked with confidentiality labels. Reports from affected parties and independent monitoring sites indicate that the issue persisted for weeks before being acknowledged, prompting organizations to review and tighten their data protection strategies. While Microsoft has emphasized that no unauthorized access outside authorized user rights occurred, the incident nonetheless feeds into broader concerns about the pace at which AI tools are being introduced into enterprise systems and the potential for unexpected behavior to undermine corporate privacy commitments. Critics argue that this situation reinforces the importance of extensive testing and cautious rollout of generative AI capabilities, especially when they intersect with highly sensitive business communication.
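To illustrate the kind of check that was reportedly bypassed, here is a minimal, purely hypothetical sketch of label-based filtering before AI summarization. Every name in it (`Email`, `filter_for_summarization`, `BLOCKED_LABELS`) is an illustrative assumption, not Microsoft's actual implementation; the reported bug behaved as if a check like this were skipped for items in the Sent Items and Drafts folders.

```python
# Hypothetical sketch of sensitivity-label filtering before summarization.
# All identifiers here are illustrative assumptions, not Microsoft's code.
from dataclasses import dataclass
from typing import Optional, List

# Labels an admin policy might designate as off-limits to AI processing
BLOCKED_LABELS = {"confidential", "highly confidential"}

@dataclass
class Email:
    subject: str
    body: str
    sensitivity_label: Optional[str] = None  # e.g. applied by a DLP/sensitivity policy

def filter_for_summarization(emails: List[Email]) -> List[Email]:
    """Return only emails whose sensitivity label is not blocked.

    A correct pipeline would apply this to every folder, including
    Sent Items and Drafts, before handing content to a summarizer.
    """
    return [
        e for e in emails
        if (e.sensitivity_label or "").lower() not in BLOCKED_LABELS
    ]

if __name__ == "__main__":
    mailbox = [
        Email("Q3 roadmap", "internal plans", sensitivity_label="Confidential"),
        Email("Lunch on Friday?", "noodles?"),
    ]
    allowed = filter_for_summarization(mailbox)
    print([e.subject for e in allowed])  # the labeled email is excluded
```

The point of the sketch is simply that label enforcement has to run uniformly across all mail folders; applying it to some folders but not others produces exactly the inconsistency described in the reports.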

