In a recently disclosed incident, OpenAI confirmed that a security breach at its third-party analytics partner Mixpanel exposed limited user data for some customers of OpenAI’s API platform. According to OpenAI, the breach did not impact its own systems or affect users of services such as ChatGPT. For API users, however, the exposed data may include names, email addresses, approximate locations, browser and operating-system metadata, referring websites, and account or organization IDs. No sensitive details such as passwords, API keys, payment information, or chat logs were compromised. OpenAI has terminated its use of Mixpanel, notified those impacted, and warned affected accounts of potential phishing risks.
Key Takeaways
– The breach originated at Mixpanel, not within OpenAI’s internal infrastructure, and thus core systems like ChatGPT remained secure.
– Exposed data was limited to non-sensitive metadata tied to some API account profiles (names, emails, coarse location, OS/browser info, referring site, account IDs).
– OpenAI responded promptly by cutting ties with Mixpanel, notifying affected users, and urging everyone, especially API customers, to enable multi-factor authentication and remain alert for phishing.
In-Depth
On November 8, 2025, Mixpanel detected and reported that an attacker had carried out a “smishing” attack (an SMS-based phishing campaign) against its systems, gaining unauthorized access to and exporting analytics data belonging to several of its customers, one of which was OpenAI’s API platform. After reviewing the exposed dataset, Mixpanel informed OpenAI on November 25, and OpenAI then began notifying affected developers and organizations. While the incident triggered alarm due to the scale of analytics-sharing across tech firms, the scope of the exposed data appears limited. According to OpenAI’s disclosure, the compromised fields included: user-provided names on API accounts, associated email addresses, coarse browser-derived location (city, state, country), the browser and operating system used, referring-website metadata, and organization or user IDs linked to API accounts. Critically, the breach did not expose passwords, payment details, authentication tokens, API keys, API usage data, or chat logs.
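To make the disclosed scope concrete, the categories of exposed fields listed above can be sketched as a simple record type. The field names below are illustrative only; they are not Mixpanel’s actual schema.

```python
from dataclasses import dataclass, asdict


@dataclass
class ExposedProfile:
    """Illustrative shape of the metadata categories OpenAI said were
    exposed. Field names are hypothetical, not Mixpanel's real schema."""
    name: str              # user-provided name on the API account
    email: str             # associated email address
    coarse_location: str   # browser-derived city/state/country
    browser: str           # browser reported by analytics
    operating_system: str  # OS reported by analytics
    referrer: str          # referring-website metadata
    org_or_user_id: str    # organization or user ID linked to the account


# Notably absent from the record: passwords, API keys, payment details,
# authentication tokens, API usage data, and chat logs.
record = ExposedProfile(
    name="Jane Developer",
    email="jane@example.com",
    coarse_location="Austin, TX, US",
    browser="Chrome",
    operating_system="macOS",
    referrer="https://example.com/docs",
    org_or_user_id="org_abc123",
)
print(sorted(asdict(record).keys()))
```

The point of the sketch is what is *not* there: nothing in this record authenticates anyone, but the name/email pairing is exactly what makes targeted phishing plausible.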
Because the breach occurred entirely within Mixpanel’s environment, OpenAI stressed that its own infrastructure was not penetrated, so users of ChatGPT, DALL-E, and OpenAI’s other mainstream services remain unaffected. Still, the exposed metadata could be exploited by bad actors for phishing or social-engineering attacks, especially since names and email addresses were involved. Recognizing the risk, OpenAI immediately disabled Mixpanel in its production services and commenced a broader vendor-security review to prevent similar incidents in the future.
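One practical defense against the phishing risk described above is checking whether a link in an unexpected email actually points to the vendor’s domain rather than a lookalike. A minimal sketch, assuming `openai.com` is the only apex domain treated as official (the allowlist is this example’s assumption, not an OpenAI-published list):

```python
from urllib.parse import urlparse

# Assumption for illustration: only this apex domain counts as official.
OFFICIAL_DOMAINS = {"openai.com"}


def looks_official(url: str) -> bool:
    """Return True only if the URL's host is an official domain or a
    subdomain of one; lookalike hosts that merely *contain* the name fail."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)


print(looks_official("https://platform.openai.com/account"))           # subdomain of apex
print(looks_official("https://openai.com.alert.example.net/login"))    # lookalike host
```

The key design choice is matching on the full hostname suffix (`.openai.com`), not a substring: `openai.com.alert.example.net` contains the brand name but is served from `example.net`, which is precisely the trick phishing links rely on.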
In terms of damage control, OpenAI has reached out to impacted customers directly and set up channels for questions. The company is also urging all API users, even those unaffected, to enable multi-factor authentication as a precaution. For developers and organizations that integrate OpenAI’s API into their applications, the breach represents a wake-up call about the risks of third-party analytics: even when the core service remains uncompromised, metadata sharing can open doors to potential exploitation.
The incident highlights a fundamental challenge in today’s AI ecosystem: as companies outsource analytics and telemetry to third parties for performance monitoring, they broaden their attack surface. Users and organizations should treat even “low-sensitivity” metadata as potentially valuable to cybercriminals, especially when linked to names or contact information. While OpenAI’s swift response and transparency deserve credit, this serves as a reminder that trust in vendor security should never be assumed.

