Instagram head Adam Mosseri publicly denied that Meta ever uses users’ phone microphones to “listen in” on conversations for ad targeting, calling the claim a “myth” and stressing that doing so would violate privacy and drain battery life. Meanwhile, Meta is preparing a significant shift: starting December 16, 2025, it plans to use data from users’ interactions with its AI tools, both voice and text, to personalize content and ads across Facebook and Instagram. Mosseri’s statement was first reported by TechCrunch; The Verge covered his denial and its timing as Meta moves deeper into AI-driven ad systems; and Reuters reported on the new ad targeting plan and the fact that users will not have an opt-out, though some regions (like the EU and UK) are initially exempt.
Key Takeaways
– Meta (via Instagram) maintains it does not use phone microphones to eavesdrop for ad targeting, calling the idea both a privacy violation and technically impractical.
– The company is shifting toward using user interactions with its AI features (chat, voice commands) as new signals for tailoring content and ad delivery.
– Users will not have a full opt-out from this AI-based personalization (though some regions are initially exempt), raising fresh privacy concerns.
In-Depth
There’s a long-standing suspicion that tech platforms are secretly listening through your phone’s microphone to target ads. Meta has always denied this, and Instagram’s head is revisiting the claim at a moment when the company is poised to rely more heavily on AI interactions. Let’s walk through the arguments, the context, and what this means going forward.
Adam Mosseri’s recent statement is straightforward: Instagram does not tap your microphone to tune your ads. He argues that if Meta did, people would see mic indicators (on iPhones or Android), and battery life would suffer — things that users would notice right away. He calls the microphone-listening theory a myth and says Meta’s ad systems are highly effective without audio data. This assertion is consistent with Meta’s long history of rejecting such claims.
Still, the timing of Mosseri’s comment is notable. On the same day, Meta revealed plans to integrate data from users’ interactions with its AI tools starting December 16, 2025. That means your voice or text conversations with Meta’s AI assistants will feed into how the company figures out what content and ads to show you. This isn’t exactly the same as always-on mic surveillance — but it does mark a significant pivot toward monetizing deeper behavioral signals.
Here’s the tricky part: many users have felt uneasy after talking about something and then seeing related ads, and they take that as proof their devices are listening. But many analysts argue the effect can be explained by other mechanisms: data shared with advertisers (e.g., someone visited a product site earlier), social graph overlap (friends or like-minded users searching for similar things), ads seen earlier but not consciously registered, algorithmic prediction, or plain coincidence. In fact, experiments designed to catch devices “listening in” have struggled to find a clear signal. Constant audio capture would also produce detectable battery drain and data traffic, and no such pattern has been reliably observed on major platforms.
That said, there’s also a darker corner of the discussion. In 2023 and 2024, a marketing firm (Cox Media Group) released pitch materials that claimed a technology called “Active Listening” could use device microphones to collect ambient audio and match it to ad targeting. Cox later removed those claims and its partners denied involvement, but the episode shows how tempting such capabilities are for ad networks. There’s no solid evidence that those promises were real, but they’ve kept suspicion alive.
Going forward, the line between “listening via mic” and “learning from your AI chats” may be clear in principle, but for many users the privacy implications feel similar. If you discuss your vacation plans with an AI assistant and then see related travel ads, it raises the question: did the system genuinely infer that from behavioral patterns, or did it “listen” (in a permitted context) and act on what it heard? Meta says it won’t use data from sensitive topics such as health, religion, politics, or sexual orientation for targeting, which is a partial safeguard.
The lack of an opt-out is another tension point. Meta says users won’t be able to fully opt out of this new signal sourcing, though regions with stricter data privacy rules (like the EU and UK) are initially exempt. That creates a patchwork approach and invites regulatory scrutiny.
In short: Meta is doubling down on its denial of microphone eavesdropping, but is simultaneously leaning into collecting richer behavioral signals via AI interactions. The ethical and privacy questions this raises are real, and whether users feel comfortable with the shift may depend on how transparently Meta executes it—and how much control (or regulation) users retain in practice.