      New Side-Channel Threat: “Whisper Leak” Can Hear Conversation Topics Despite Encryption


Security researchers at Microsoft have disclosed a new vulnerability, dubbed “Whisper Leak,” that allows a passive adversary to infer the topic of encrypted conversations with AI language models simply by monitoring packet sizes and timing in streaming responses, even though the content itself remains encrypted under TLS. In experiments across 28 major models from providers including OpenAI, Mistral AI, and xAI, the researchers trained classifiers (LightGBM, Bi-LSTM, and BERT-based) and found that for many models the classifiers achieved over 98% AUPRC (area under the precision-recall curve) in distinguishing sensitive-topic prompts (e.g., “money laundering”) from general traffic. The finding underscores that encryption alone, while protecting message contents, is insufficient to hide metadata in streaming AI services; privacy risks are especially elevated for users on untrusted networks, in authoritarian jurisdictions, or on shared Wi-Fi. Microsoft and partners have begun rolling out mitigations such as random padding, token batching, and fake packet injection, but these reduce rather than eliminate the risk.

      Sources: Hacker News, Microsoft

      Key Takeaways

      – Adversaries who can observe encrypted AI-model traffic (e.g., network ISPs, Wi-Fi eavesdroppers, local adversaries) can train models to infer conversation topics purely from packet-size and timing metadata—even though message content is encrypted.

      – The Whisper Leak threat has been validated across a wide span of commercial LLM services: many classifiers achieved very high accuracy (>98% AUPRC) under controlled conditions, and even under a realistic 10,000:1 benign-to-target noise ratio, 100% precision at modest recall (5–20%) was demonstrated.

      – Mitigation strategies such as random response padding, batching of tokens, and packet-injection help reduce the risk, but none fully eliminate it—so users and enterprises should assume topic-leakage remains a live threat when using streaming AI models, especially on untrusted networks.

      In-Depth

      In the evolving world of generative AI and large-language-model (LLM) services, confidentiality has largely focused on encrypting the data in transit and at rest. But the new research from Microsoft flips that assumption: even when the content of the conversation is protected via TLS or HTTPS, the metadata associated with streaming LLM responses—specifically packet sizes and inter-packet timing—can betray the topic of the user’s prompt. This attack class, labeled “Whisper Leak,” arises because many LLM services stream their output token by token (or in small batches) as soon as each token is generated, and that streaming behavior reveals consistent and learnable patterns in encrypted traffic. 

      The threat model is quite practical: a passive on-path observer (for example, someone controlling the local Wi-Fi network, a government monitoring ISP traffic, or an insider monitoring corporate VPN egress) records encrypted sessions between a user and an AI service. They cannot decrypt the traffic, yet by extracting sequences of packet-length and timing features and feeding them into a trained classifier, they can determine whether the user’s prompt is about a sensitive subject (e.g., finance, political dissent, or health issues). Microsoft’s proof of concept used binary classification on a “target topic vs. background” dataset and found that many leading models yielded near-perfect results: classifiers often achieved >98% AUPRC, and for 17 of the 28 models tested, 100% precision at 5–20% recall in a 10,000:1 noise scenario.
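The observer's pipeline described above (record encrypted packets, extract size and timing features, classify) can be sketched in outline. The features and the threshold rule below are illustrative stand-ins, not Microsoft's actual LightGBM/Bi-LSTM/BERT classifiers, and the example session data is invented:

```python
import statistics

def extract_features(packets):
    """Turn a recorded session into a small feature vector.

    `packets` is a list of (size_bytes, inter_arrival_seconds) pairs,
    which is all an on-path observer can see of a TLS-encrypted stream.
    """
    sizes = [s for s, _ in packets]
    gaps = [t for _, t in packets]
    return {
        "mean_size": statistics.mean(sizes),
        "stdev_size": statistics.pstdev(sizes),
        "mean_gap": statistics.mean(gaps),
        "n_packets": len(packets),
    }

def classify(features, size_threshold=120.0):
    """Toy stand-in for a trained classifier: flag sessions whose
    average ciphertext record size exceeds a learned threshold."""
    return "target-topic" if features["mean_size"] > size_threshold else "background"

# A hypothetical recorded session: larger records than typical background chat.
session = [(150, 0.04), (170, 0.05), (140, 0.03), (160, 0.04)]
print(classify(extract_features(session)))  # "target-topic"
```

A real attacker would train on thousands of labeled sessions; the point of the sketch is only that the inputs are metadata, never plaintext.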

      What makes the result especially significant for real-world risk is the fact that streaming is a default feature in many AI-chat platforms and APIs: users want immediate responses, so the service emits tokens as they are generated. That behavior creates regular, repeated patterns in data length and timing that are exposed even after encryption, because TLS does not hide packet size or timing—it only hides the payload contents. As the Microsoft blog explains: “While TLS encrypts content, metadata such as packet sizes and timings remain observable” and hence exploitable. 
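To see why token-level streaming leaks, consider a toy model of the wire: TLS hides the bytes but not the length of each record, so the ciphertext length of each streamed chunk tracks the plaintext token's length plus a roughly fixed overhead. The 29-byte per-record overhead below is an assumption for illustration, not a measured constant, and the sample replies are invented:

```python
TLS_RECORD_OVERHEAD = 29  # assumed fixed per-record overhead (illustrative)

def observed_record_sizes(tokens):
    """What a passive observer sees: one ciphertext length per streamed token."""
    return [len(tok.encode("utf-8")) + TLS_RECORD_OVERHEAD for tok in tokens]

# Same encryption, different topics: different observable size sequences.
reply_a = ["Money", " laundering", " typically", " involves", " layering"]
reply_b = ["Hi", "!", " How", " can", " I", " help", "?"]
print(observed_record_sizes(reply_a))
print(observed_record_sizes(reply_b))
```

Even though both replies are encrypted identically, the sequences of record lengths differ in a topic-correlated way, and that is the signal the classifiers learn.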

      It’s also important to note that the problem is not purely academic: many of the models tested were from commercial vendors and were found vulnerable. The researchers evaluated 28 models from major providers (including Alibaba Qwen3, DeepSeek, Meta Llama 3.3, Microsoft Phi-4, Mistral Large-2, OpenAI GPT-OSS-20b, and Zhipu AI GLM 4.5) and found substantial side-channel vulnerability across the board.

      In terms of mitigation, the researchers evaluated three main techniques: (1) random padding of responses (adding variable-length dummy tokens); (2) token batching (sending multiple tokens in a single network packet, reducing granularity); and (3) packet injection (adding spurious packets to mask true patterns). Their experiments show these reduce, but do not eliminate, the attack’s effectiveness: even with random padding, classifier AUPRC might drop only from ~97.5% to ~92.9%, depending on the model. For some use cases (e.g., high-sensitivity prompts on untrusted networks) the residual risk may still be unacceptable.
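The two simplest countermeasures can be sketched on the same kind of toy data: batching merges several tokens into one coarser record, and bucketed padding (a deterministic cousin of the random padding Microsoft describes) rounds every record length up so many plaintext lengths map to one observable size. The batch size and the 64-byte bucket are assumptions for illustration:

```python
def batch_tokens(tokens, batch_size=4):
    """Token batching: emit one record per group of tokens,
    hiding per-token granularity."""
    return ["".join(tokens[i:i + batch_size]) for i in range(0, len(tokens), batch_size)]

def pad_to_bucket(size, bucket=64):
    """Bucketed padding: round a record length up to a fixed bucket,
    so many different plaintext lengths become indistinguishable."""
    return ((size + bucket - 1) // bucket) * bucket

tokens = ["Money", " laundering", " typically", " involves", " layering"]
raw = [len(t.encode("utf-8")) for t in tokens]
mitigated = [pad_to_bucket(len(chunk.encode("utf-8"))) for chunk in batch_tokens(tokens)]
print(raw)        # per-token lengths: a distinctive, learnable pattern
print(mitigated)  # fewer, uniform lengths: far less signal
```

Note that even here the record count and total timing survive, which is why the researchers report reduced rather than eliminated classifier accuracy.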

      From a conservative or risk-aware vantage point, there are several implications that users, enterprises, and policy makers should consider:

      – Encryption of content is necessary but not sufficient for protecting privacy when using streaming AI services: adversaries still glean information from “how you speak” (i.e., the traffic pattern).

      – Deploying AI chat or query services into sensitive domains (healthcare, legal advice, corporate secrets, activism) on untrusted networks (public Wi-Fi, state-monitored ISPs, shared infrastructure) introduces a significant metadata-leakage risk.

      – Enterprises that integrate LLMs into internal workflows (e.g., for content creation, research, compliance) should insist on providers offering traffic-obfuscation features, batch-streaming options, or better yet non-streaming modes, and consider executing AI inference within fully controlled network environments.

      – User education remains critical: tell users to avoid discussing highly sensitive topics via streaming models while on insecure networks, use VPNs where feasible, and prefer AI services that explicitly document mitigation of streaming side-channels. Microsoft emphasizes exactly this: if you must ask about sensitive topics on an untrusted network, consider using non-streaming models or switching to a provider that has implemented the countermeasures. 

      – Policymakers and standards-bodies should arguably broaden the threat model for AI systems to include metadata side-channels as a recognized privacy risk—not just payload encryption.

      In short, Whisper Leak highlights that as AI technologies grow ever more integrated into sensitive workflows, the adversary model must evolve. It’s no longer enough to secure “what is said”; how it’s said (and how you receive responses) matters too. Streaming APIs offer responsiveness and low latency—but that speed comes at a cost. For those concerned about confidentiality, the tradeoffs warrant reconsideration: opting for non-streaming modes, requiring providers to obscure traffic patterns, or limiting AI usage to trusted network contexts. The adversary that cannot break the encryption still may get the story by observing timing and packet size alone—and for conservative organizations and individuals, that’s a privacy hazard that can’t be ignored.

      For content creators and media producers who handle proprietary scripts, unpublished interview material, or early promotional concepts through AI-powered tools, acknowledging these risks is prudent. If you integrate an LLM service for research, scripting, or social-media draft generation, ask yourself: Are you on an open network? Does the service stream responses? Does the provider batch tokens or pad responses to protect against topic inference? If not, consider mitigating steps, including local inference (if viable) or encrypted tunnels with padding.

      Ultimately, the whisper you never spoke may still get heard.
